Engineering Cellular Memory

By: Una Choi

NATURAL CELLULAR MEMORY – PHAGE Λ

We often think of memory as tied innately to the brain. Humans perceive, encode, and consolidate an event through the activation of brain structures like the hippocampus (1). Memory allows us to record fleeting events as lasting impressions, and past memories can affect our future perceptions and reactions.

Just as organs like the brain permit the formation of memory, individual cells also possess the capacity to remember. The phage lambda (λ) system is one striking example of naturally occurring cellular memory (2). Phage λ is a bacteriophage, a virus that targets bacteria. It injects its genetic material into the target bacterium and, in a stage known as lysogeny, integrates this material into the host genome.

Lysogeny is often referred to as the dormant stage, as the viral genetic material is passively replicated along with the host genome as the bacterium divides. The shift from the lysogenic stage to the next stage of viral infection, lytic growth, is governed by a genetic switch composed of two genes, cI and cro, located in the viral genome (3). cI encodes a repressor protein, which can halt transcription, and thus expression, of its target viral genes, while cro encodes Cro, which can inhibit expression of the repressor. During the lysogenic stage, repressor concentration is high, which inhibits transcription of cro and thus keeps Cro levels low. During the lytic stage, Cro concentration is high, which in turn represses transcription of cI.

This shift to increased Cro and decreased repressor is triggered by cellular stimuli; the viral genome can register an external event and mount a stable response. These stimuli often involve cellular stress, as phage λ can only ensure the continued replication of its genome if its host bacterium is healthy. In response to starvation or DNA damage, the bacterial enzyme RecA cleaves the viral repressors, rendering them inactive. Without repressors to inhibit transcription of the cro gene, Cro is produced, which further inhibits the viral repressors. High Cro levels, in turn, trigger the lytic stage.

Phage λ exemplifies only one of many naturally occurring instances of cellular memory, as its genetic switch induces the lytic stage in response to external stimuli. Cellular memory also plays a pivotal role in cell differentiation and cell division (4).

SYNTHETIC MEMORY DEVICES

Using the same principles of stimulus response, synthetic biologists have begun engineering DNA memory circuits. These circuits achieve either volatile memory, which requires activated processes to function in a sustained fashion, or non-volatile memory, which does not require the continued activation of these cellular processes. While both volatile and non-volatile memory systems can switch between their states, volatile memory circuits are typically bistable: because their states are maintained by continuously active processes, spontaneous switching between states is rare. As volatile memory circuits encompass the same “switch” function as the natural phage λ system described above, this article focuses primarily on recent developments in volatile memory circuits.

MEMORY DEVICES AND CRISPR

Scientists can use the gene-editing tool CRISPR/Cas9 to manipulate DNA. Because DNA sequencing is becoming easier and cheaper, more and more host genome sequences are known. The elucidation of these genomes permits the creation of guide RNA strands, which are complementary to a target sequence in the host genome. The specific localization of the guide RNA to the target sequence directs Cas9, a DNA-cutting enzyme (nuclease), to cut the desired sequence (5). After Cas9 generates these double-strand breaks, the host cell can repair them using nonhomologous end joining, which crudely re-ligates the broken DNA strands. This inaccurate mechanism often leads to the insertion or deletion of several random nucleotides, resulting in high incidences of mutation at the cut sites.

Scientists at MIT capitalized on these mutations in creating a gene circuit that expressed Cas9 only in response to TNF-alpha, a tumor necrosis factor involved in systemic inflammation. Because the increase in Cas9-mediated mutations in the guide sequence was positively correlated with the concentration of TNF-alpha, researchers could infer both the concentration of TNF-alpha and the length of exposure to it from the number of mutations accumulated in the mammalian DNA sequence.
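The logic of such an analog recorder can be illustrated with a short simulation. The sketch below is not the MIT group’s actual analysis; it assumes a hypothetical per-hour mutation rate proportional to signal level and shows how the fraction of mutated loci encodes both the strength and the duration of exposure.

```python
import numpy as np

rng = np.random.default_rng(0)

def mutated_fraction(signal, hours, n_cells=10_000, rate_per_unit=0.002):
    """Fraction of cells whose guide locus carries an indel after exposure.

    Assumes each cell's chance of a Cas9 cut-and-misrepair event per hour
    is proportional to the signal level (hypothetical rate constant).
    """
    p_per_hour = rate_per_unit * signal
    p_never_mutated = (1.0 - p_per_hour) ** hours
    return rng.binomial(n_cells, 1.0 - p_never_mutated) / n_cells

# Higher signal or longer exposure -> more mutated loci.
for signal, hours in [(1, 12), (1, 48), (5, 12), (5, 48)]:
    frac = mutated_fraction(signal, hours)
    print(f"signal={signal}, {hours:2d} h -> mutated fraction ~ {frac:.3f}")
```

Sequencing the locus and counting mutated cells would then, in principle, let a researcher work backward to an estimate of total exposure.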

PROKARYOTIC MEMORY DEVICES

Gardner et al. (2000) constructed a double-negative feedback system in the prokaryote Escherichia coli (6). In response to chemical or thermal stimuli, the genetic system flips between two stable states. The device is bistable: once in one of the two steady states, the cell remains in that state even without the continued presence of the original signal. This sustained state is due to cooperativity; the binding of repressors to the prokaryotic DNA circuit reinforces further binding of these repressors. This stability in the absence of continuous stimulus holds broad implications for further research, as cells can be examined without the constraint of continual stimulation.
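The essence of this toggle switch can be captured in two differential equations in the spirit of the dimensionless model analyzed by Gardner et al.: each repressor is synthesized at a rate that falls off with the other’s concentration through a Hill function, whose exponent encodes cooperativity, and decays at a constant rate. A minimal sketch, with illustrative rather than fitted parameters:

```python
import numpy as np
from scipy.integrate import solve_ivp

A, N = 10.0, 2.0  # synthesis rate and Hill coefficient (illustrative values)

def toggle(t, y):
    u, v = y
    du = A / (1.0 + v**N) - u  # repressor U, transcription repressed by V
    dv = A / (1.0 + u**N) - v  # repressor V, transcription repressed by U
    return [du, dv]

# Two different initial biases settle into two different stable states,
# and each state then persists with no further input.
for y0 in ([5.0, 0.1], [0.1, 5.0]):
    sol = solve_ivp(toggle, (0.0, 50.0), y0, rtol=1e-8)
    u, v = sol.y[:, -1]
    print(f"start u,v = {y0} -> steady state u = {u:.2f}, v = {v:.2f}")
```

With a Hill coefficient of 1, i.e., no cooperativity, the symmetric system has only one steady state; cooperativity is what carves it into two basins of attraction, and a transient stimulus that degrades one repressor can push the cell from one basin into the other.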

USE IN MAMMALIAN DIAGNOSTICS

The ability to synthesize stable memory circuits in vitro and in vivo holds broad implications for the field of health diagnostics. The successful construction of memory devices in vivo can allow for the accurate categorization of cells; in other words, one can identify cells that respond differently to stimuli by sorting cell populations on the basis of the expected response to a stimulus. Burrill et al. (2012) constructed three synthetic circuits to track cellular responses to three stimuli: doxycycline, an antibiotic used to treat bacterial infections; hypoxia (oxygen deficiency); and DNA-damaging agents (7). After receiving the stimulus, the memory device activated, ultimately causing altered patterns of gene expression, growth rates, and cell viability. These altered patterns suggest memory devices are heritable, further demonstrating the potential benefits of using memory devices for diagnostics: the “switch” can be preserved through future generations of cells.

Burrill et al. also coupled the memory device with the sequence encoding red fluorescent protein (RFP), placing this tag downstream of the targeted gene. Doxycycline inhibits TetR, which normally represses the CMVtetO2 reporter. Hence, in the absence of doxycycline, no RFP is expressed because the repressor is bound to the CMVtetO2 operator. Adding the stimulus (doxycycline) removes the repressor and thereby activates transcription of the target gene and the downstream RFP tag. The transcription of ZF, another downstream element, can then activate transcription of a second circuit, which includes a downstream element for yellow fluorescent protein (YFP). The ZF protein can then bind to that same promoter, sustaining its own expression, and that of YFP, in a positive feedback loop.
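A self-activating loop of this kind behaves like a latch: a transient trigger pushes the transcription factor above a threshold, after which self-activation sustains it indefinitely. The sketch below is a generic positive-feedback latch with invented parameters, not the fitted model from Burrill et al.; the trigger term stands in for a doxycycline pulse and z for the level of the self-activating factor.

```python
from scipy.integrate import solve_ivp

BETA, K, N, GAMMA = 4.0, 1.0, 2.0, 1.0  # invented parameters for illustration

def latch(t, y, with_pulse):
    z = y[0]  # level of the self-activating transcription factor (ZF)
    # A transient trigger stands in for a doxycycline pulse de-repressing the circuit.
    trigger = 2.0 if (with_pulse and 5.0 <= t <= 10.0) else 0.0
    dz = trigger + BETA * z**N / (K**N + z**N) - GAMMA * z
    return [dz]

for with_pulse in (False, True):
    sol = solve_ivp(latch, (0.0, 60.0), [0.0], args=(with_pulse,), max_step=0.1)
    print(f"pulse={with_pulse}: ZF level long after the pulse = {sol.y[0, -1]:.2f}")
```

Without the pulse the loop stays off; with it, expression latches on and remains high long after the stimulus is gone, which is exactly the memory behavior read out by the fluorescent reporters.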

By measuring the time elapsed between the addition of stimulus and cell fluorescence stemming from the continued production of YFP, Burrill et al. identified those cells most susceptible to doxycycline, hypoxia, and DNA-damaging agents; the cells most susceptible to the added stimuli exhibited more rapid onset of continued promoter activation. This also permitted the mapping of cell progeny to study more clearly the temporal stability of these memory-activated changes.

FUTURE DIRECTIONS

In addition to the diagnostic benefits of stably engineered memory devices, the ability to generate sustained cellular responses to a controllable stimulus is invaluable in the production of antibiotics and other cellular products (4). A common setback in industry is the high cost of continually inducing large cell cultures to express a certain gene or series of genes, which often requires multiple stimuli. Customizable memory circuits could thus decrease production costs. Sustained expression of a target gene can also increase protein yields.

Further study, however, is needed to elucidate the effects of sustained expression and the inheritance patterns of these memory-activated changes in future progeny.

Una Choi ’19 is a sophomore concentrating in Molecular and Cellular Biology.

WORKS CITED

[1] Nadel, L., et al. Neuroscience and Biobehavioral Reviews. 2012, 36, 1640-1645.

[2] Ptashne, M. Nature Chemical Biology [Online] 2011, 7. http://www.nature.com/nchembio/journal/v7/n8/abs/nchembio.611.html (accessed Oct. 1, 2016).

[3] Ptashne, M. A Genetic Switch; Cold Spring Harbor Laboratory Press, 2004.

[4] Inniss, M. C.; Silver, P. A. Curr. Biol. 2013, 23, 1-10.

[5] Trafton, A. Recording analog memories in human cells. MIT News. 2016.

[6] Burrill, D. R.; Silver, P. A. Leading Edge. 2009.

[7] Burrill, D. R., et al. Genes & Dev. 2012, 26, 1486-1497.

The Climate Of Zika

By: Michael Xie

Though this year’s Olympic Games were filled with record-breaking athletes, it seems as if another name took the spotlight in Rio: Zika. The Zika virus caused health and safety concerns around the world as spectators and athletes prepared to head to Brazil in the midst of an epidemic. But was the Zika virus an uninvited guest in Brazil, or one let in by recent trends in the world’s climate?

History of Zika

The Zika virus was discovered unintentionally in monkeys of Uganda’s Zika Forest in 1947, while scientists were researching yellow fever, but the virus was not observed in humans until 1952 (1). A flavivirus related to the West Nile, dengue, and yellow fever viruses, Zika is transmitted to humans through infected mosquitoes of the Aedes genus, such as Aedes africanus, Aedes aegypti, and Aedes luteocephalus. The viral genome was not sequenced until 2006, at which point there had been no documented outbreaks of the virus, with only 14 human cases of Zika isolated within Africa and Asia. The first outbreak occurred in the following year on Yap Island, a small Pacific island within the Federated States of Micronesia (2).

Given the area’s abundance of flavivirus-carrying mosquitoes, Yap was most likely exposed to the virus through migrating mosquito vectors. However, it is also possible that a human with an undetected infection brought the virus to the island, as there was evidence of Zika in the nearby Philippines. In the spring of 2007, doctors observed the disease through symptoms such as rashes, conjunctivitis, fever, and arthritis and joint pain. At first, patient samples falsely tested positive for dengue, but further testing by the Centers for Disease Control and Prevention (CDC) found that the samples contained Zika virus RNA. Because Yap residents had not developed sufficient immunity, three out of four were infected in this first Zika outbreak (2).

Recently, another outbreak of Zika has sprung up in the Americas, beginning with Brazil. Patient records from early 2015 show “dengue-like symptoms” such as rashes and pain reported in the city of Natal (3). The introduction of the disease can be attributed to international visitors who came for the FIFA World Cup in 2014. Rapid travel easily carried both infected vector mosquitoes and infected humans around the country. During the epidemic, the incidence of microcephaly, the condition of having an abnormally small head, seemed to rise. People speculated that the Zika virus could be associated with the condition in fetuses and directed warnings about the disease toward pregnant mothers. Attempts to control the virus through its vector mosquito were complicated by the Aedes mosquito’s efficient adaptation to urban environments, although certain high-elevation areas of Brazil have climates unfit for the mosquito and thus remained safe from the virus (4).

Zika and Climate Change

Both epidemics show that the spread of the Zika virus is determined by the nature and location of its vector mosquitoes. Therefore, climate patterns affecting the distribution of Aedes mosquitoes can also affect the distribution of Zika. The Aedes aegypti mosquito, the most common vector of the virus, survives best in tropical and subtropical regions around the equator, between the 10 °C January isotherms (5). This temperature region roughly covers the area between 45 degrees south and 35 degrees north latitude. Areas of higher humidity and rainfall are also more favorable to the mosquitoes, as these conditions assist mosquito breeding and survival by preventing adult mosquito desiccation (6).

Since climate has such a large impact on Zika distribution, it follows that the changing climate around the world will also change the scope of the virus. The global average land temperature is projected to rise roughly 3.1-4.8 °C by 2061-2080, with the largest increases at middle to high elevations (5). Land precipitation is expected to increase significantly in most regions.

First, consider regions currently at risk for Zika. These areas of the world have climates suitable for sustaining Aedes mosquitoes. Analyzing future climate trends, researchers have found that several of these regions will see an increase in the abundance of Aedes mosquitoes over the next 40-60 years. Type 1 occurrence patterns, in which the vector is highly abundant year-round, are expected to increase by 44-54% around the world. Type 2 patterns, in which the vector is present year-round but only seasonally abundant, are expected to increase by 15-33% (5).

In regions currently unsuited for the Aedes vector mosquito, seasonal suitability is expected to increase in the next 40-60 years. Type 4 patterns, in which the vector is only seasonally present, will expand into regions that are now mosquito-free. Type 4 patterns are expected to expand by 8-18%, with growth concentrated in mid-latitude regions such as Europe. This expansion, combined with population growth, will increase human exposure to the Aedes mosquito in previously unexposed regions (5). The problem is compounded because people living in these regions would likely lack the immunity and resistance built by those more regularly exposed to the mosquito.

The intertwining of climate change and global health threats extends far beyond the Zika virus. Models of other diseases, such as malaria and dengue fever, also predict climate-induced changes in transmission. For malaria, it has been shown that a global temperature rise of 2-3 °C would increase both the length of the malaria season and the risk of malaria by 3-5% (6). Beyond infectious disease, phenomena such as extreme heat waves, cyclones, and flooding may cause direct mortality. But there are subtler dangers as well. Asthma incidence in the United States has increased more than fourfold in the past three decades, a rise that can be partially attributed to climate-related factors. For instance, plants such as ragweed can produce roughly 60% more pollen when grown under an abundance of carbon dioxide. Accelerated trade winds over the Atlantic, driven by pressure gradients over warming waters, even carry air pollutants from expanding African deserts to the Americas. While climate change and human health may once have been thought unrelated, these trends show otherwise (7).

Looking Forward

Looking toward the future, changes in our behavior seem necessary to combat the change in the Earth’s environment. For instance, resources can be allocated to improve city conditions so that diseases like Zika do not have a chance to spread as rapidly as they did in Brazil. In the past, the impacts of climate change have largely been dismissed as purely environmental concerns. Considering the bigger-picture effects of climate change, clean energy and green thinking may be among the ways not only toward saving the Earth for posterity but also toward controlling and solving many of today’s global health issues.

Michael Xie ‘20 is a freshman in Thayer Hall.

WORKS CITED

[1] Fauci, A. S.; Morens, D. M. N. Engl. J. Med. 2016, 374, 601-604.

[2] Duffy, M. R. et al. N. Engl. J. Med. 2009, 360, 2536-2543.

[3] Zanluca, C. et al. Mem. Inst. Oswaldo Cruz. 2015, 110, 569-572.

[4] Marcondes, C. B.; Ximenes, M. Rev. Soc. Bras. Med. Trop. 2015, 49, 4-10.

[5] Monaghan, A. J. et al. Climatic Change. 2016, 1-14.

[6] Patz, J. A. et al. In Climate change and human health: risks and responses, McMichael, A. J. et al., Eds.; World Health Organization: Geneva, CH, 2003; pp. 103-132.

[7] Epstein, P. R. N. Engl. J. Med. 2005, 353, 1433-1436.

Our Neighbor, Earth

By: Ian Santana Moore

Last August, a team of astronomers at the European Southern Observatory announced a discovery that forever changed how we view our place in the universe. Around nearby Proxima Centauri, a red dwarf star in a triple star system whose other two members are much larger, Sun-like stars, scientists discovered an exoplanet in the habitable zone: the range of distances from a star at which liquid water, and perhaps an Earth-like atmosphere, could plausibly exist on a planet’s surface. What differentiates this discovery from past ones, however, is the Centauri system’s proximity to our own. These three stars are about 4.3 light-years from our Sun, so by making use of innovations in observation and propulsion techniques, we will likely have information about this exoplanet, Proxima Centauri b, within our lifetimes. However, the viability of red dwarfs as hosts of life-bearing, or even liquid-water-bearing, planets has been called into question, and scientists across many disciplines have chosen sides on the issue. Observations of Proxima b could lead Earth-like planets around red dwarfs to be accepted or rejected as potentially habitable in general; rejection would decrease the estimated probability of finding biotic planets in the universe by up to a factor of one thousand [1]. To understand the significance of the discovery of Proxima b in the context of extrasolar astronomy, it is important to appreciate how incredibly difficult it is for astronomers to find exoplanets in the first place.

How to Detect an Exoplanet

Humanity’s search for Earth-like characteristics on other celestial bodies is as old as Galileo, who took the deep, dark formations he observed on the Moon’s surface to be seas (“maria”). Since April 2014, when the Kepler team announced the discovery of the first rocky, Earth-sized exoplanet in the habitable zone of another star [2], the census has grown rapidly; over three thousand exoplanets, as well as countless unconfirmed candidates, have now been observed [3]. Because the light that Earth-like exoplanets reflect is so much fainter than the light of their parent stars, astronomers have been forced to devise various indirect methods for observing them.

One of the most productive methods for detecting exoplanets, and the one used to find Proxima b [4], is what we will call the Doppler method. An orbiting planet’s gravitational pull acts on its parent star, causing the star to wobble around an equilibrium position. A Doppler shift occurs when an object emitting waves (e.g., light or sound) moves toward or away from an observer, in this case astronomers on Earth, causing the wavelength of those waves to shorten or lengthen accordingly. By taking advantage of the minuscule, yet detectable, Doppler shift caused by this wobbling, astronomers can estimate the mass and orbital velocity of a potential exoplanet.
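To get a sense of how small these shifts are, the snippet below evaluates the non-relativistic relation Δλ = λv/c for wobble velocities of roughly the magnitudes reported in the literature; both velocity figures are approximate.

```python
C = 299_792_458.0  # speed of light, m/s

def doppler_shift_nm(rest_wavelength_nm, radial_velocity_ms):
    """Non-relativistic Doppler shift of a spectral line."""
    return rest_wavelength_nm * radial_velocity_ms / C

# ~1.4 m/s is roughly the wobble Proxima b induces on its star;
# ~13 m/s is roughly Jupiter's tug on the Sun (both figures approximate).
for v in (1.4, 13.0):
    shift = doppler_shift_nm(550.0, v)
    print(f"v = {v:5.1f} m/s -> shift of a 550 nm line = {shift:.2e} nm")
```

A shift of a few millionths of a nanometer is what state-of-the-art spectrographs must resolve, which is why the method took decades to mature.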

Another method, called transit photometry, is currently the most prolific method overall and can be used to detect an exoplanet and measure its radius. With this method, one measures the degree to which a star’s brightness drops when a planet passes between the star and the Earth, or transits the star. The depth of this dip depends on how much of the stellar disk the planet covers, so a large planet produces a larger change in brightness than a small one.
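Because the dip is set by the fraction of the stellar disk the planet blocks, the fractional dimming is approximately (R_planet/R_star)². A quick calculation with round-number radii:

```python
R_SUN, R_EARTH, R_JUPITER = 696_000.0, 6_371.0, 69_911.0  # radii in km (approximate)

def transit_depth(r_planet_km, r_star_km):
    """Fractional dimming while the planet crosses the stellar disk."""
    return (r_planet_km / r_star_km) ** 2

print(f"Jupiter transiting the Sun: {transit_depth(R_JUPITER, R_SUN):.4%}")  # about 1%
print(f"Earth transiting the Sun:   {transit_depth(R_EARTH, R_SUN):.4%}")    # about 0.008%
```

An Earth-sized planet dims a Sun-like star by less than one part in ten thousand, which is why space-based photometers like Kepler were needed to find such worlds.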

A final, much less effective but still used method is gravitational microlensing. Gravitational lensing occurs when the gravity of a massive body lying in front of a light source bends and magnifies the light, making the source appear brighter to an observer. In the case of exoplanet searches, both bodies are stars. Gravitational microlensing as a detection method makes use of the tiny additional contribution that a planet orbiting the foreground star makes to the lensing effect. These three indirect methods are all still more effective than direct imaging [4].

Once astronomers identify a planet and measure its orbital period, they can perform a simple calculation to find the distance at which it orbits its parent star and thus determine whether it falls into that star’s habitable zone. The best candidate for an exoplanet that could support life is one of around the Earth’s mass in the habitable zone of a star of around the Sun’s mass. Sun-like stars, or G-type stars, however, are only around one-tenth as common as red dwarf stars like Proxima Centauri [5], which are by far the most common type of star in the universe.
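The calculation is Kepler’s third law combined, for the habitable zone, with a crude flux-scaling estimate: a planet receives Earth-like stellar flux at a distance of √(L/L_sun) astronomical units. Plugging in approximate published values for Proxima Centauri and its planet:

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30  # solar mass, kg
AU = 1.496e11     # astronomical unit, m

def orbital_distance_au(period_days, star_mass_solar):
    """Kepler's third law: a^3 = G * M * P^2 / (4 * pi^2)."""
    P = period_days * 86_400.0
    a = (G * star_mass_solar * M_SUN * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
    return a / AU

def earth_flux_distance_au(luminosity_solar):
    """Distance receiving Earth-like stellar flux, by scaling the Earth-Sun case."""
    return math.sqrt(luminosity_solar)

# Approximate values: ~11.2-day period, ~0.12 M_sun star, ~0.0017 L_sun luminosity.
print(f"Proxima b orbital distance ~ {orbital_distance_au(11.2, 0.12):.3f} AU")
print(f"Earth-equivalent flux distance ~ {earth_flux_distance_au(0.0017):.3f} AU")
```

The planet comes out near 0.05 AU, close to the roughly 0.04 AU Earth-equivalent-flux distance, which is why it is considered to lie in the habitable zone.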

Life around a Red Dwarf Star

Due to their prevalence in the night sky and their small size (less than half that of our Sun), which makes their Doppler wobbles much more pronounced than those of larger stars [6], the number of exoplanets found in the habitable zones of red dwarfs far exceeds the number found around any other type of star. Various arguments against red dwarfs’ ability to maintain the conditions for life on their planets’ surfaces do, however, persist.

Some general criteria for a star’s ability to maintain an Earth-like environment on a planet in its habitable zone are: (1) the speed with which the star goes through its lifetime; (2) the size of the star’s habitable zone; (3) the frequency and strength of electromagnetic emissions caused by changes in the star’s magnetic field; (4) the frequency and strength of solar flares; and (5) the possibility for oxygenic photosynthesis [7].

Under the first criterion, red dwarfs are a great environment for habitable planets. To our current knowledge, they burn consistently over extremely long lifetimes, not exhibiting the dramatic changes we see in larger stars. Sun-like G-type stars, for example, eventually exhaust their hydrogen fuel and begin to burn helium instead, turning red and swelling enormously, moving their habitable zones outward in the process. For blue giant stars, this process occurs much more quickly, and even on planets in the habitable zone, life might not have enough time to appear before the planet becomes uninhabitable.

Due to the latter four criteria, however, some scientists believe that life around a red dwarf is relatively unlikely. Red dwarfs’ habitable zones are significantly smaller and closer to the star itself than those of larger stars [8]. Planets in the habitable zone are so close to their parent star that they could become tidally locked (always facing the star with the same side, as the Moon faces the Earth), which could produce an atmosphere incapable of keeping surface water in the liquid state. The fact that most of the electromagnetic radiation produced by red dwarfs is in the red and infrared is enough for many to rule out the possibility of photosynthesis [1]. These stars are also known to have high levels of magnetic activity and to frequently produce solar flares. When it was around 100 million years old, our own Sun exhibited similar characteristics, which contributed to Venus losing all of its water and Mars developing freezing surface temperatures. The fact that the Earth survived this period intact provides support for proponents of biotic red dwarf exoplanets.

Many astronomers are not convinced that these latter four criteria are enough to rule out the possibility of life. Some have used mathematical atmospheric models to show that, under certain conditions, a tidally locked planet could harbor liquid water on its surface. Others have pointed out that proximity to the parent star does not always result in tidal locking, as in the case of Mercury, which rotates three times for every two orbits around the Sun [8]. Furthermore, on a tidally locked planet, the effects of solar flares and electromagnetic radiation would be minimal at the terminator (the border between the day side and the night side of the planet), which could allow life to thrive there if not elsewhere on the surface. The issue of photosynthesis could be resolved if plants evolved to utilize infrared radiation rather than visible light, or if the amount of visible light produced by a red dwarf turns out to be enough for photosynthesis [1]. For these reasons, red dwarfs may be viable hosts for life after all.

The scientific community remains divided on red dwarfs, which is all the more reason to be excited about the discovery of Proxima b. What better way to settle the debate than by directly observing an Earth-sized planet in the habitable zone of a red dwarf? The exoplanet is as close to us as an exoplanet can possibly be, yet astronomers’ current telescopic arsenal is not sufficient to image the planet’s surface, so the question becomes not if, but when, we will visit it. Recent initiatives in interstellar travel, such as the Breakthrough Starshot project, which would accelerate a small probe to relativistic speeds using laser beams, mean that we could see images of the planet’s surface within our lifetimes. The significance of this discovery will not be completely understood until we have more data, but it has already begun to change how and where we hunt for life outside our solar system.

Ian Santana Moore ‘19 is a sophomore in Eliot House.

WORKS CITED

[1] Gale, J.; Wandel, A. Astrobio. 2016, 1-9

[2] Quintana, E. V. et al. Science 2014, 344, 277-280.

[3] NASA Exoplanet Archive. http://exoplanetarchive.ipac.caltech.edu/docs/counts_detail.html (accessed Sep. 26, 2016).

[4] Hatzes, A. Nature 2016, 536, 408-409.

[5] Glenn, L. J. of Roy. Astro. Soc. of Can. 2001, 95, 32.

[6] Baraffe, I. et al. Astron. and Astroph. 2003, 402, 701-712.

[7] Cuntz, M.; Guinan, E. F. Astroph. J. 2016, 827, 1-26.

[8] Tarter, J. C. et al. Astrobio. 2007, 7, 30-65.

Climate Change Skeptics: Their Arguments, Their Motivations, and How to Critically Evaluate the Knowledge at Hand

By: Jacqueline Epstein

Climate change: it’s happening, regardless of how inconvenient it may be to any personal or political agenda. It is not only happening; it is progressively getting worse. To rehash just a few of the many statistics that support these realities: nine of our planet’s ten warmest years on record have occurred since 2000, atmospheric carbon dioxide levels are at their highest in the past 650,000 years, and Arctic sea ice surface area has been steadily decreasing since satellite observations began in 1979, at a frightening rate of 13.4% per decade (1).

In light of these scientifically demonstrated realities, why are there still global warming naysayers? Climate change skeptics vary in their logic and motivations, and in the ways in which we must respond to them.

Scientific Outliers

While there exists a general consensus among the scientific community that the threats associated with climate change are legitimate, certain accredited scientists continue to argue that the global warming phenomenon is simply a natural fluctuation in weather patterns, as opposed to a man-made event. These individuals may cite the Paleocene-Eocene Thermal Maximum (PETM), a climate event estimated to have occurred around 55 million years ago. Over a period of roughly 100,000 years, global temperatures rose by an average of over 5 °C, Arctic sea surface temperatures rose considerably above this average, and massive amounts of carbon dioxide were released into the atmosphere. Over time, global temperature and carbon dioxide levels stabilized (2). This large-scale fluctuation occurred millions of years before human beings populated the planet, which leads naysayers to absolve our species of blame for provoking climate change.

Another cited cause of natural fluctuations in temperature is Milankovitch cycles: the collective effects on climate of cyclical variations in the Earth’s orbit around the Sun, responsible for the advance and retreat of the planet’s glaciers (3). Since the end of the last ice age, roughly 14,000-10,000 years ago, globally averaged surface temperatures have fluctuated over a range of up to 2 °C on time scales of centuries or more. Other cited natural causes include systematic variations in the amount and distribution of solar radiation, and the El Niño-Southern Oscillation phenomenon, a periodic fluctuation in wind patterns and sea surface temperatures (4).

Responding to these Arguments

Fortunately, scientists who claim that climate change is simply the latest shift in the cyclical patterns of a planet’s life are few and far between. Providing a counterargument to these claims is fairly straightforward. The climate patterns observed in recent decades deviate significantly from the outcomes predicted by these cycles. Carbon dioxide concentrations in the atmosphere have spiked alarmingly, and unlike the gradual increase and decrease seen in the PETM, a quarter of the carbon dioxide now in the atmosphere is the result of human activity: chemists can distinguish between the carbon dioxide produced by burning fossil fuels and that produced naturally by plants and animals (5). Further, a recent study at McGill University applied statistical methodology to determine the probability that global temperature fluctuations since 1880 are due to natural variability, using multi-proxy climate reconstruction techniques to separate the impacts of natural and man-made effects. Its conclusions ruled out the natural-warming hypothesis with more than 99% certainty (6). This study directly addressed claims that climate change is merely a perceived threat attributable to natural fluctuations, providing a huge push toward universal scientific consensus.
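The logic of such a significance test can be illustrated with a toy Monte Carlo experiment. To be clear, this is not the McGill study’s method, which relied on multi-proxy reconstructions and fluctuation analysis; the sketch below simply posits an invented “natural variability only” model (an AR(1) process with made-up persistence and noise parameters) and asks how often it reproduces a warming as large as the one observed:

```python
import numpy as np

rng = np.random.default_rng(1)

OBSERVED_WARMING = 1.0              # approximate warming since 1880, deg C
YEARS, PHI, SIGMA = 135, 0.9, 0.08  # invented AR(1) persistence and noise

def natural_run():
    """One synthetic temperature history with no human forcing."""
    t = np.zeros(YEARS)
    for i in range(1, YEARS):
        t[i] = PHI * t[i - 1] + rng.normal(0.0, SIGMA)
    return t

trials, exceedances = 20_000, 0
for _ in range(trials):
    run = natural_run()
    if run[-1] - run[0] >= OBSERVED_WARMING:
        exceedances += 1

print(f"{exceedances} of {trials} natural-only runs matched the observed warming")
```

Under this toy null model, century-scale warming of a full degree is vanishingly rare; that is the shape of the argument, and the real study’s strength lies in calibrating the null model against paleoclimate data rather than invented parameters.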

Fudging the Facts

However, universal scientific consensus does not eradicate public uncertainty. Over the past few decades, immense pressure has been placed on scientists to downplay the menace presented by global warming. In the late 1980s, when the rapid rise in global temperatures and atmospheric carbon dioxide began to alarm scientists, the energy industry started to feel threatened by the potential impacts these discoveries could have on its profits. These companies adopted a clever strategy, which relied on the media’s inclination to portray both sides of a debate and indulge in false equivalence. By funding non-profit research organizations, energy companies were able to closely monitor the information put forth into public consciousness. A 2003 study questioning the reality of climate change, published in a British academic journal, was co-authored by scientists from various non-profit organizations and was underwritten by the American Petroleum Institute, which, along with ExxonMobil Corp., had helped to fund the research (7). After a “thorough” reanalysis of data from more than 200 studies of the Earth’s climate over the past millennium, the scientists concluded that there exists significant evidence of global temperature shifts more drastic than the late-20th-century warming patterns. They specifically refer to a “medieval warm period” between 900 and 1300 A.D. that their analysis claims to have been warmer than recent times. While conceding as an aside that “it is clear that human activity has significantly impacted some local environment,” the study’s ultimate conclusion is that global warming in recent decades is merely an instance of the Earth’s natural climate fluctuations, and it encourages an “objective and bias-free approach” moving forward in climate change research (8).

This was not an isolated incident: industrial agendas have often guided and shaped the information on climate change conveyed to the public. In a congressional hearing in 2007, scientists from seven government agencies reported having been subjected to such pressure. Evidence came from a survey conducted by the Union of Concerned Scientists, a private advocacy group. A questionnaire sent to 279 climate scientists revealed that two in five had complained that their scientific papers had been edited in a way that changed their meaning, while nearly half of the scientists indicated that they had been told to delete references to “global warming” or “climate change” from a report (9).

In evaluating any scientific analysis on climate change, it thus becomes critical to consider any industrial ties the authors may have, and assess underlying motivations behind the information put forth. Companies involved in the extraction and distribution of traditional fossil fuels may place their own financial incentives far above any concerns for the harmful impacts their industries are causing the environment. If these companies are funding the research on climate change, data will inevitably be skewed or presented in a way to best reflect their interests.

Political Agendas

Climate change is also a topic of immense political and legislative debate. While 97% of climate scientists agree that global warming is both occurring and driven by human activity, over 56% of Republicans currently in Congress deny or question the science behind human-caused climate change (10). In a Republican primary debate in 2014, when the moderator asked “Is climate change a fact?”, the audience responded with laughter, while the four candidates snickered and unanimously agreed: no, climate change is not a fact. Of significant concern is the fact that one of the loudest congressional climate change deniers, Senator James Inhofe (R-OK), is the chairman of the Environment and Public Works (EPW) committee (10). In a 2014 congressional hearing, Inhofe infamously declared that the earth “had experienced no warming for the last 15 years.” To make matters worse, 91% of Republicans on the EPW committee have either stated that global warming is not a legitimate issue or that humans are not the source of recently observed climate changes (10).

What are the motivations behind this large-scale partisan denial of scientific facts? Once again, financial incentives play a central role. Analysis by the Center for American Progress reveals that the 38 climate change deniers in the current Senate have amassed a total of $27,845,946 in donations from the coal, oil, and gas industries, while the 62 senators who haven’t denied climate change science have taken only $11,339,967 in career contributions; that works out to roughly $732,788 versus $182,903 per senator, an average of $549,886 more per congressional naysayer. Similar trends were observed among climate science deniers and supporters in the current House of Representatives (10).

Further, environmental concerns have often come into conflict with legislative matters. The most infamous example is the Keystone XL oil pipeline, a proposed 1,179-mile pipeline that would have carried 800,000 barrels a day of carbon-heavy petroleum from the Canadian oil sands to the Gulf Coast. Republicans and some Democrats, particularly those with ties to the energy industry, supported the project, arguing that it would stimulate trade and economic growth and create many new jobs. The two parties coalesced in February 2015 to send President Obama a bill to speed approval of the project. However, Obama vetoed the bill, citing concerns over the detrimental effects the pipeline could have on the environment. This denial brought the seven-year affair to a conclusion and helped to solidify the United States’ position as an aggressive player in the fight against climate change (11).

Future Directions

While this decision marks a major legislative milestone, much progress remains to be made in bringing global warming to the forefront of public consciousness. Why do so many Americans continue to deny the reality of the situation? In 2014, 23 percent of Americans reported that they do not believe in global warming, while 53 percent said they do not believe climate change is caused by man (12). Denial of global warming is speculated to be largely due to a select few high-profile climate skeptics, and the endorsement they have received from major corporations and political figures. To rectify this large-scale misconception, each individual must be held accountable for critically evaluating information received through the media. Who is responsible for conveying this data, and is it supported by legitimate scientific research? What hidden motivations may they be concealing from the public? Do they have any political or industrial ties that may not be immediately evident?

A psychological analysis of climate change skeptics can also provide some insight into their views. Individuals have an innate tendency to seek out information that supports their pre-established beliefs, thereby avoiding cognitive dissonance, the uncomfortable state in which one must grapple with contradictory or competing beliefs. This helps to explain why industrial and political leaders may favor the research of high-profile global warming skeptics over widespread scientific consensus. For example, the CEO of a company invested in fossil fuels is unlikely to admit the legitimacy of the climate change threat, as this would force him or her to address the challenging issue of how the company may be harming the environment. Each individual must therefore critically evaluate his or her own knowledge and be willing to abandon pre-established beliefs in favor of proven scientific fact.

Global warming is a major threat to our planet and our civilization, and must be perceived as such in order to be alleviated.

Jacqueline Epstein ‘18 is a junior in Leverett House.

WORKS CITED

[1] Global Climate Change: Vital Signs of the Planet. http://climate.nasa.gov (accessed Sept. 22, 2016).

[2] Paleocene-Eocene Thermal Maximum (PETM). Encyclopedia Britannica [Online]. https://www.britannica.com/science/Paleocene-Eocene-Thermal-Maximum (accessed Sept. 22, 2016).

[3] Milankovitch Cycles and Glaciation. http://www.indiana.edu/~geol105/images/gaia_chapter_4/milankovitch.htm (accessed Sept. 22, 2016).

[4] Wratt, D.; Mullan, B. Natural Variations in Climate. https://www.niwa.co.nz/our-science/climate/information-and-resources/clivar/variations (accessed Sept. 22, 2016).

[5] How do we know global warming is not a natural cycle? http://www.climatecentral.org/library/faqs/how_do_we_know_it_is_not_a_natural_cycle (accessed Sept. 22, 2016).

[6] Lovejoy, S. Climate Dynamics 2012, 42, 2339–2351.

[7] Nesmith, J. Foes of global warming theory have energy ties. Seattle Post-Intelligencer, June 1, 2003. http://www.seattlepi.com/national/article/Foes-of-global-warming-theory-have-energy-ties-1116097.php (accessed Sept. 24, 2016).

[8] Soon, W. et al. Energy and Environment. 2003, 14, 233-296.

[9] Roberts, J. Groups Say Scientists Pressured On Warming. http://www.cbsnews.com/news/groups-say-scientists-pressured-on-warming/ (accessed Sept. 22, 2016).

[10] Kroh, K. et al. The Anti-Science Climate Denier Caucus: 114th Congress Edition. January 8, 2015. https://thinkprogress.org/the-anti-science-climate-denier-caucus-114th-congress-edition-c76c3f8bfedd#.pp1k3m4te (accessed Sept. 25, 2016).

[11] Davenport, C. Citing Climate Change, Obama Rejects Construction of Keystone XL Oil Pipeline. The New York Times (Online), November 6, 2015. http://www.nytimes.com/2015/11/07/us/obama-expected-to-reject-construction-of-keystone-xl-oil-pipeline.html?_r=1 (accessed Sept. 25, 2016).

[12] Gregoire, C. Why Some Conservatives Can’t Accept That Climate Change Is Real. The Huffington Post (Online), December 4, 2015. http://www.huffingtonpost.com/entry/climate-change-denial-psychology_us_56438664e4b045bf3ded5ca5 (accessed Sept. 25, 2016).

Channeling Out The Heat

By: Hanson Tam

Stand under the sun on a sweltering summer day, and your skin becomes sticky with sweat. We take perspiration for granted, often dismissing it as an annoying bodily function. Yet it is profoundly important for mammalian thermoregulation. Sweating allows you to shed excess heat through evaporation and maintain a steady temperature within the narrow range required for survival. If a person gets too hot, he or she suffers from hyperthermia, which manifests as muscle cramps, nausea, and delirium. On the other hand, if a person’s core temperature falls too low, hypothermia sets in, leading to heart and lung abnormalities (1).

Although scientists have long understood the macro-level phenomena of thermoregulation, the molecular basis of how we sense and respond to temperature is still being discovered. Broadly speaking, heat and cold sensors send signals to the brain, which then instructs parts of the body to respond appropriately (2). These sensor proteins constitute an exciting area of current research. A group of scientists in Germany recently demonstrated that they could significantly lower a mouse’s body temperature by manipulating neurons that express the protein TRPM2 (3). Published in the journal Science, this groundbreaking discovery not only advances our understanding of thermoregulation but also suggests possible treatments for a variety of diseases.

THE BIG PICTURE

Before we dive into the fascinating but microscopic world of molecular biology, let’s take a moment to explore some of the human body’s large-scale responses to hot and cold. After all, there is no point in sensing temperature if we cannot do anything to change it!

When overheated, humans not only sweat but also effect changes in the circulatory system. The cutaneous blood vessels in our skin expand. Meanwhile, vessels associated with our inner organs contract. These two responses position more warm blood near the skin’s surface, allowing heat to easily radiate out into the environment. Complementing these phenomena is an increase in cardiac output—the volume of blood pumped per minute—in order to maintain proper blood pressure after vessel dilation (2).

As expected, our response to cold is the exact opposite. Cutaneous blood vessels constrict to minimize the amount of heat lost to radiation. But conservation is often not enough, leading us to generate heat through metabolism. One mechanism is brown adipose tissue (BAT), a type of fat whose primary purpose is thermoregulation. When BAT receives nerve signals, its mitochondria digress from their usual task of generating energy and instead allow energy leaks to warm up the body. A second mechanism is shivering, which uses the contraction of muscles to burn chemical energy and release heat (2).

The current model for thermoregulation involves sensors in the skin and internal organs. When stimulated by heat or cold, they signal through neurons to the central nervous system (CNS), specifically the preoptic area (POA) of the hypothalamus in the brain. The hypothalamus combines signals from the body with signals from its own temperature sensors. Upon processing all these electrochemical stimuli, the brain decides whether the body needs to generate or lose heat (2,3). But what are the magical sensors that allow us to measure temperature in the first place?

GATEKEEPERS OF SENSATION

The surprising answer is transient receptor potential (TRP) channels. The TRP channel family consists of ion channels involved in many sensations, including sight, smell, taste, hearing, and touch. They are embedded in cell membranes and, when open, allow positively charged ions such as calcium to flow through. In humans, there are 27 such proteins, divided into seven subfamilies based on structure. One of their most interesting properties is that a single channel can be activated by many different stimuli. For example, TRPV1 responds to heat, chemicals, and immune signaling molecules known as cytokines (4).

Although the exact mechanisms of TRP channel activation are mostly unknown, recent studies have provided clues. The basic idea behind chemical activation is that a molecule, called a ligand, binds to a crevice on the complicated structure of the closed channel. The electrostatic attractions between the ligand and its binding site cause a shape change that propagates through the entire protein, resulting in an open conformation. A similar paradigm governs TRP channels responsive to voltage and temperature. At some threshold, electrical potential or thermal energy triggers select protein domains to modify their structure. These changes combine to open the channel (5).

Another way of thinking about temperature-sensitive TRP channels is in terms of the energies of the closed and open conformations. It is a fundamental tenet of thermodynamics that a system prefers the state of lowest free energy. Consider the case of TRPV1, which is activated by heat. At room temperature, the closed conformation has much lower free energy than the open one, so the channel stays shut. But at 50 ºC, the energies are reversed, and the open state is more favorable. Ions are now allowed to flow through the channel (5).
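This two-state picture can be made quantitative. Writing the open-minus-closed free-energy difference as ΔG = ΔH − TΔS, the equilibrium open probability is 1/(1 + e^(ΔG/RT)), and a large ΔH (with a correspondingly large ΔS) makes the switch sharp around the midpoint temperature ΔH/ΔS. The sketch below uses illustrative numbers chosen to put the midpoint at 50 ºC, not measured TRPV1 values:

```python
import math

R = 8.314                  # gas constant, J mol^-1 K^-1
DELTA_H = 200_000.0        # open-minus-closed enthalpy, J/mol (illustrative)
T_MID = 323.15             # midpoint temperature (50 C) where both states are equal
DELTA_S = DELTA_H / T_MID  # entropy difference implied by that midpoint

def p_open(t_celsius):
    T = t_celsius + 273.15
    delta_G = DELTA_H - T * DELTA_S  # free-energy difference, open minus closed
    return 1.0 / (1.0 + math.exp(delta_G / (R * T)))

for t in (25, 40, 45, 50, 55):
    print(f"{t:2d} C -> open probability {p_open(t):.3f}")
```

Near room temperature the channel is almost always shut, but within a span of ten degrees around the midpoint the open probability swings from a few percent to a strong majority: a molecular switch with a built-in threshold.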

How does the opening and closing of TRP channels lead to neural signaling to the brain? It turns out that TRP channels mainly allow Ca2+ ions through and that calcium signaling plays a role in many cellular functions (4). A large sudden change in the balance of positively and negatively charged ions across a cell membrane creates an electrochemical current that propagates through a cell (6). For channels situated on neurons, this pulse would start a chain of signal relays that would eventually reach the brain.

THE MURINE CHILL FACTOR

Research in this area has long been focused on TRP channels expressed in the periphery, or anatomical locations away from the brain. But intuitively, a temperature measurement from the skin should be less important than a temperature measurement in the brain itself (3). Such was the motivation for the Science paper Song and colleagues published on TRPM2.

As its acronym implies, TRPM2 is a TRP channel. The structures of TRPM2 and TRPM8 are closely related, and since the latter is known to be activated by cold, scientists were interested in the possibility that the former is also a temperature-sensing protein. In 2006, Togashi et al. showed that TRPM2 can be activated by warmth, in addition to previously reported stimuli such as metabolic molecules and chemicals indicative of cell stress (6). And in 2009, Csanady and Torocsik performed a detailed analysis of TRPM2’s mechanism of action (7).

Previous research had shown that there are warm-sensitive neurons (WSNs) in the POA of the hypothalamus, the thermostat of the body. When the POA is heated, WSNs fire electrical pulses more rapidly. When the POA is cooled, WSNs slow down and stop (3). The new research by Song et al. elucidates the cause of this phenomenon. Through a process of elimination, the authors homed in on TRPM2, which is highly expressed on certain WSNs. They found that only neurons with normal TRPM2 experienced calcium influxes when shocked with heat. When the protein was knocked out, or rendered nonfunctional, calcium signaling did not happen (3).

After establishing the threshold of TRPM2 activation at 38 ºC, slightly above normal body temperature, the German research group wanted to test the channel’s functionality in living mice. Since TRPM2 is a heat sensor, the scientists expected it to carry out cooling functions. They genetically engineered mice with an on/off switch specifically for the neurons expressing TRPM2. When the switch was turned on by administration of a drug, the TRPM2+ neurons fired, and body temperature dropped to a stunning 27 ºC (3). Using infrared imaging, Song and colleagues took an amazing video that shows heat dissipating from the mouse and warming up its environment (8). Conversely, when the switch was turned off, the TRPM2+ neurons were inhibited, and body temperature actually rose to 39 ºC, suggesting that TRPM2 normally cools our bodies continuously (3).

FIGHTING DISEASE WITH A THERMOMETER

Understanding the function of TRPM2 has opened up new possibilities for treating disease. In the Science paper, the authors directly tested the role of the channel in fever response. They injected PGE2, a fever-inducing mediator, into normal mice and into knockout mice lacking TRPM2. Those without the heat-sensitive protein had fevers that were on average almost one full degree higher (3). While there are significant differences between mice and humans, these data suggest that manipulating TRPM2 activity could be a way to treat temperature-related conditions. In addition, artificial activation of the ion channel could aid recovery from trauma. Lowering body temperature reduces tissue damage from heart attacks and stroke (2). If doctors were able to directly stimulate TRPM2 channels in the hypothalamus with a new drug, they would no longer need to use ice baths to fight the body’s generation of harmful heat (9). Meanwhile, controlled inhibition of the channel might increase metabolism and counter obesity. With the brain less sensitive to high temperatures, it is plausible that the body could burn off more fat and not mind generating heat (2).

TRPM2 is also implicated in the immune system. When Song et al. injected inflammatory cytokines into mice, those without the channel had higher fevers (3). TRPM2’s expression in the bone marrow, where many immune cells develop, lends further credence to the idea that the heat sensitive protein modulates our response to infection (4). A study from 2013 found that macrophages—cells that consume pathogens in the blood and tissue—lacking TRPM2 produced a weakened inflammatory response. Thus, a drug that blocks TRPM2 might help treat certain autoinflammatory conditions such as gout, atherosclerosis, and Alzheimer’s disease (10).

Diabetes is yet another area where thermoregulation, disease, and TRPM2 cross paths. Scientists have shown that the channel is expressed in rat pancreatic β-cells, from which insulin is secreted. β-cells that were exposed to heat released extra insulin and underwent Ca2+ signaling. And when TRPM2 was blocked, the heat responses faded away (6). While the exact implications are unclear, these data suggest that a TRPM2-targeted treatment might be useful in diabetes.

THE BIOLOGICAL THERMOSTAT

Thermoregulation is remarkably logical. It is basically an input-output system. The thermostat in the hypothalamus integrates temperature measurements from throughout the body. Through complex electrical circuits, the computer that is our brain directs our sweat glands, blood vessels, and skeletal muscles to perform the necessary work. The most fascinating part of it all may be the input mechanism. Translating temperature into biological activity seems like a daunting task. But nature created TRP channels—switches that open and close at predefined thresholds of activation. As these molecular gates direct ion traffic in and out of our cells, temperature becomes encoded in the electrical firing of neurons.

Thanks to recent discoveries, we are beginning to grasp the secrets of how we constantly adapt to changing temperatures. It may not be long until we gain the ability to set our own thermostats to treat diseases and save lives.

Hanson Tam ’19 is a sophomore in Lowell House, concentrating in Molecular and Cellular Biology.

WORKS CITED

[1] Cheshire, W., Jr. Auton. Neurosci. 2016, 196, 91-104.

[2] Morrison, S. F1000Research 2016, 5, 880.

[3] Song, K. et al. Science 2016, 353, 1393-1398.

[4] Venkatachalam, K.; Montell, C. Annu. Rev. Biochem. 2007, 76, 387-417.

[5] Baez, D. et al. Curr. Top. Membr. 2014, 74, 51-87.

[6] Togashi, K. et al. EMBO J. 2006, 25, 1804-1815.

[7] Csanady, L.; Torocsik, B. J. Gen. Physiol. 2009, 133, 189-203.

[8] Science News. Thermostat Neurons Cool Off a Mouse. YouTube [Online], Aug. 25, 2016. https://www.youtube.com/watch?v=ixD-Ej95fFU (accessed Sep. 9, 2016).

[9] Chodosh, S. Resetting the Body’s Thermostat with a Molecular On/Off Switch. Scientific American [Online], Aug. 26, 2016. http://www.scientificamerican.com/article/resetting-the-body-s-thermostat-with-a-molecular-on-off-switch/ (accessed Sep. 9, 2016).

[10] Zhong, Z. et al. Nat. Commun. 2013, 4, 1611.

Fighting Smarter Against Cancer

By: Jimmy Thai

In 1912, Scientific American stated, “The beginning of the end of the cancer problem is in sight.” This bold claim was based on the work of Nobel laureate Paul Ehrlich, who reasoned in the early 1900s that the synthesis of compounds toxic only to diseased cells would yield the creation of new pharmaceutical drugs. Ehrlich applied this principle in the creation of Salvarsan, a compound selectively harmful to the bacterium that causes syphilis. He then tried treating cancer through selective toxicity, but declared in 1915 that he had “wasted 15 years of [his] life in experimental cancer” (1). More than a century later, however, Ehrlich’s dream of selectively targeting cancer cells may finally be realized due to recent developments in targeted chemotherapy treatments.

Generally speaking, cancer is a broad class of diseases marked by the uncontrollable proliferation of dysfunctional cells. According to the Centers for Disease Control and Prevention (CDC), cancer is the second leading cause of death in the United States (2). In 2016, an estimated 1.7 million people will be diagnosed with some form of the disease and about 700,000 people will die from it in the U.S. alone (3). Cancer is a major threat to human health, and continued research into effective treatments is important to society. Among the recent developments in the field of targeted chemotherapy are the refinement of the role of tamoxifen and the creation of a new class of drugs called antibody-drug conjugates (ADCs). Both of these advances must be evaluated in the context of the Human Genome Project and personalized medicine.

THE HUMAN GENOME PROJECT AND THE OUTGROWTH OF PERSONALIZED MEDICINE

In 2001, Francis S. Collins, now the director of the National Institutes of Health (NIH), asserted that “cancer treatment will precisely target the molecular fingerprints of particular tumors” by the year 2020 (4). Such a claim raises a variety of questions. What did Dr. Collins mean by this statement? What enabled him to make such a bold claim? And how well is the claim holding up against the test of time?

The answers to the first two questions reside in Dr. Collins’ monumental role in one of the most groundbreaking biological projects of our time. Before he became the director of the NIH, Dr. Collins was the director of the Human Genome Project (HGP), and his bold prediction about cancer was based on the HGP’s remarkable findings. The completion of the HGP in April 2003 meant that the DNA sequence of the human species had been decoded and recorded. From this information, scientists were able to conclude that most diseases, including cancer, have a hereditary component. Because diseases can ultimately be traced back to sequences of DNA called genes, the findings of the HGP allowed for the development and refinement of gene-specific designer drugs to fight illness. The majority of these drugs work by targeting the protein products encoded by a gene. This concept forms the basis of personalized medicine, in which patients are assigned different therapeutic regimens based on their genetic profiles (5).

One major example of the application of personalized medicine to cancer treatment is the field of targeted chemotherapy treatments, many of which work by interfering with cell division and/or DNA replication. Standard chemotherapy treatments affect both healthy and cancerous tissue indiscriminately. In contrast, targeted therapies are specifically designed to kill cancerous cells, hence minimizing the collateral damage done to the rest of the body. This precision in drug targeting is likely what Dr. Collins envisioned with his 2001 remarks.

TEACHING AN OLD DRUG NEW TRICKS: TAMOXIFEN

In the typical course of cancer treatment, tumors are surgically removed and then the patient is given adjuvant therapy drugs to prevent the tumor from coming back. As such, both finding new and refining current adjuvant therapies represent major areas of chemotherapy research.

The first successful targeted cancer chemotherapy was tamoxifen, which was approved for patient use by the Food and Drug Administration (FDA) in 1977. Tamoxifen is mainly used as an adjuvant therapy drug for breast cancer, and it belongs to a class of medications known as selective estrogen receptor modulators (SERMs), compounds that have either agonistic (promotive) or antagonistic (inhibitory) effects on estrogen receptors. In a certain subset of breast cancers that are estrogen receptor positive, estrogen fuels tumor growth, and tamoxifen works by inhibiting estrogen receptor activity (6).

Although estrogen seems to play a harmful role in breast cancer, recent scientific data suggest that the truth is much more nuanced. Experiments within the past decade support the idea that after breast cancer cells are treated long-term with tamoxifen, the return of estrogen may actually induce apoptosis (cell death) (7). Thus, estrogen can paradoxically both stimulate and inhibit the growth of breast cancer cells under different circumstances. In the words of Dr. Virgil Craig Jordan, a leading researcher in the field, “the dramatic cell kill I get with estrogen is better than anything I saw with tamoxifen” (8). The phenomenon of estrogen-induced apoptosis is a relatively new idea that is important for two reasons. First, this discovery suggests that the most effective therapy may not be a single treatment but rather a combination of treatments, such as tamoxifen followed by estrogen. Second, it illustrates that even well-established targeted chemotherapies can be made more effective in light of current scientific data. For women with tamoxifen-resistant tumors, the combination of tamoxifen followed by estrogen offers real therapeutic promise, and recent tamoxifen research may lead to better treatments for estrogen receptor-positive breast cancers.

MERGING IMMUNOTHERAPY WITH CHEMOTHERAPY: ANTIBODY-DRUG CONJUGATES

Returning to the story of Paul Ehrlich in the early 1900s: Ehrlich failed to create targeted cancer drugs because he did not account for the fact that proteins expressed by cancerous cells, unlike those of foreign bacteria, are innate to one’s body. It is therefore much more difficult to find a drug that is specific to tumors. In addition, Ehrlich and his contemporaries had no way of intentionally designing compounds with a desired biological activity; they had to rely on luck. Nearly a century later, the constraints that limited Ehrlich are no more, and the lifting of these limitations has led to the development of new chemotherapeutic drugs, including antibody-drug conjugates (ADCs).

ADCs are created by linking chemotherapeutic agents with proteins called antibodies. All antibodies have a specific shape and thus only bind to and interfere with specific cell receptors. Because certain types of cancer overexpress certain receptors, the isolation of antibodies specific to these receptors provides a promising way of singling out cancerous cells for the internalization of toxic drugs. Given the enormous potential of ADCs, it is important to make clear their benefits and limitations.

A major limitation of ADCs is that they are only effective in the subset of cancers that overexpress protein receptors relative to normal cells: the receptors must be expressed at least twofold more on cancerous cells than on normal cells. Even if this main criterion is met, a number of other constraints limit the potential of ADCs. For instance, ideal cell surface receptors are quickly internalized after an ADC binds, which allows for quicker delivery of the chemotherapeutic agent into the cell. In addition, there is an absolute minimum level of receptor expression required, as a sufficient amount of the toxic chemical must be internalized for the ADC to be effective (9).
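To make these selection criteria concrete, here is a toy sketch in Python. The antigen names, receptor counts, and the absolute-expression threshold are invented for illustration (the article specifies only the twofold rule); real ADC target validation involves far more than two numbers.

```python
# Hypothetical ADC target screen: a candidate receptor must be both
# selective (tumor expression >= 2x normal tissue) and abundant enough
# to internalize a lethal dose of payload. All values are made up.

FOLD_CHANGE_MIN = 2.0    # the twofold overexpression rule from the text
ABSOLUTE_MIN = 10_000    # assumed receptors/cell needed for enough payload

candidate_targets = {
    # name: (receptors per tumor cell, receptors per normal cell)
    "antigen_A": (250_000, 40_000),
    "antigen_B": (8_000, 500),       # very selective, but too few copies
    "antigen_C": (90_000, 70_000),   # abundant, but not selective
}

for name, (tumor, normal) in candidate_targets.items():
    selective = tumor / normal >= FOLD_CHANGE_MIN
    abundant = tumor >= ABSOLUTE_MIN
    verdict = "viable" if (selective and abundant) else "ruled out"
    print(f"{name}: {verdict} (fold change {tumor / normal:.1f})")
```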

Despite these limitations, ADCs are truly revolutionary in the field of targeted chemotherapy. Unlike earlier drugs, which were identified by accident or through brute-force trial and error, ADCs can be engineered to bind specifically to particular cell surface proteins. This process of antibody engineering is made possible by monoclonal antibody (mAb) technology, which lets scientists produce massive quantities of any specific antibody.

Two of the most important ADCs are brentuximab vedotin and ado-trastuzumab emtansine, approved by the FDA in 2011 and 2013, respectively. Brentuximab vedotin consists of the monoclonal antibody brentuximab linked to the chemotherapeutic agent monomethyl auristatin E (MMAE). Because brentuximab is specific to the CD30 protein, cancers that overexpress CD30, such as classical Hodgkin’s lymphoma (cHL) and anaplastic large-cell lymphoma (ALCL), can be targeted specifically with the ADC (10). Likewise, ado-trastuzumab emtansine consists of the monoclonal antibody trastuzumab linked to the chemotherapeutic agent DM1. Because ado-trastuzumab emtansine targets the HER2 receptor, which is overexpressed in HER2-positive breast cancer, HER2-positive cells can be specifically targeted (11). The development of ADCs is a relatively new advance in targeted chemotherapy, but it holds much promise for particular types of cancer.

CONCLUSION

We are now four years away from the target date set by Dr. Collins’ bold prediction about cancer treatment. So how well has his assertion held up over time? Within the span of a decade and a half, the field of targeted chemotherapy has evolved rapidly, specifically through the refinement of tamoxifen therapy and the development of antibody-drug conjugates. Both of these advances give credence to Dr. Collins’ claims. As evidenced by recent progress in targeted chemotherapies, society is getting warmer in the fight against cancer.

Jimmy Thai ’20 is a freshman in Matthews Hall.

WORKS CITED

[1] Jordan, V. C. Cancer Research, 2001. 61(15), 5683-5687.

[2] Centers for Disease Control and Prevention. http://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm (accessed Sep. 25, 2016).

[3] National Cancer Institute. http://www.cancer.gov/about-cancer/understanding/statistics (accessed Sep. 25, 2016).

[4] Testimony of Francis S. Collins. http://www.hhs.gov/asl/testify/t010725b.html (accessed Sep. 25, 2016).

[5] Collins, F. S.; McKusick, V. A. Journal of the American Medical Association. 2001, 285(5), 540-544.

[6] Jordan, V. C. European Journal of Cancer. 2008, 44, 30-38.

[7] Lewis-Wambi, J. S.; Jordan, V. C. Breast Cancer Research [Online], 11. https://breast-cancer-research.biomedcentral.com/articles/10.1186/bcr2255 (accessed Sep. 26, 2016).

[8] Gupta, S. Proceedings of the National Academy of Sciences of the United States of America. 2011, 108(47), 18876-18878.

[9] Srinivasarao, M. et al. Nature Reviews: Drug Discovery. 2015, 14, 203-219.

[10] Perini, G. F.; Pro, B. Biologics in Therapy [Online]. 2013, 3(1), 15-23. https://www-ncbi-nlm-nih-gov.ezp-prod1.hul.harvard.edu/pmc/articles/PMC3873074/ (accessed Sep. 30, 2016).

[11] Haddley, K. Drugs of Today [Online]. 2013, 49(11), 701-715. https://journals-prous-com.ezp-prod1.hul.harvard.edu/journals/servlet/xmlxsl/dot/20134911/pdf/dt490701.pdf?p_JournalId=4&p_refId=2020937&p_IsPs=N (accessed Sep. 30, 2016).

A Plan to Eradicate the Zika Virus

By: Jeongmin Lee

Over this past summer, the Zika virus infected not only unborn children but also the news, directing the public’s attention towards the medical community. Health segments were filled with descriptions of the Zika virus, research updates, and the quickly rising number of cases. In a World Health Organization (WHO) consultation, one of the most popular methods discussed to defeat the Zika virus employed a “live, attenuated target organism” (1). In other words, many wished to use weakened viruses as vaccines. This common practice, used in vaccines ranging from the very first vaccine, created by Edward Jenner to combat smallpox, to today’s chickenpox vaccine, exposes the body’s immune system to a less dangerous form of the target virus (2). The immune system remembers specific tags from the virus, so once the same virus (or even a more potent form) enters the body again, the immune system is prepared to quickly track down and eliminate the disease before it can spread. Many researchers want to combat Zika with this proven method, but some want to think “outside of the pox.”

The Zika virus comes mainly from one non-human vector: mosquitoes. Eradicating all mosquitoes would theoretically leave the disease transmissible only between humans. Using genetics, researchers can engineer ways to reduce the number of mosquitoes. For example, mosquitoes can be given an engineered gene that represses the growth of a mosquito embryo. The altered gene only affects fertilized eggs that carry two copies, while a mosquito that inherits a single copy serves as an unaffected carrier. Under ordinary Mendelian inheritance, carriers would not propagate the gene effectively, so researchers have devised a way to make the engineered gene copy itself onto the partner chromosome when a carrier reproduces. While a carrier itself is never affected, mating between two carriers would then almost always yield offspring with both copies of the gene, which die immediately. A company called Oxitec pursues yet another strategy for reducing mosquito numbers (3): they breed male mosquitoes (which, unlike females, do not bite) carrying a “self-limiting gene” that stays inactive until it is passed down to offspring. The offspring cannot produce essential proteins during development and die.
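To see why ordinary inheritance dilutes such a gene while a self-copying version spreads, here is a minimal, hypothetical simulation. The population size, starting carrier fraction, and 95% drive-transmission rate are illustrative assumptions, not figures from Oxitec or the article.

```python
import random

# Toy simulation contrasting Mendelian inheritance with a homing "gene
# drive" for an engineered allele 'D' that is lethal in two copies.

POP_SIZE = 1000
GENERATIONS = 20

def gamete(parent, drive):
    """Pick the allele a parent transmits; the drive biases the choice."""
    if drive and "D" in parent and "+" in parent:
        # A homing drive copies itself over the wild-type allele in the
        # germ line, so a carrier transmits 'D' almost every time.
        return "D" if random.random() < 0.95 else "+"
    return random.choice(parent)

def simulate(drive):
    # Start with 10% heterozygous carriers ('D+'), the rest wild type.
    pop = ["D+"] * (POP_SIZE // 10) + ["++"] * (POP_SIZE - POP_SIZE // 10)
    for _ in range(GENERATIONS):
        next_gen = []
        while len(next_gen) < POP_SIZE:
            mom, dad = random.sample(pop, 2)
            child = gamete(mom, drive) + gamete(dad, drive)
            if set(child) != {"D"}:      # 'DD' offspring die immediately
                next_gen.append(child)
        pop = next_gen
    return sum("D" in g for g in pop) / len(pop)

print(f"carrier frequency without drive: {simulate(False):.1%}")
print(f"carrier frequency with drive:    {simulate(True):.1%}")
```

The point of the sketch is only that a drive keeps the engineered allele at high frequency instead of letting selection purge it; in a real release, the lethal payload then suppresses the population.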

This revolutionary idea has been met with criticism. Genetically modified organisms are controversial because the effects of introducing an artificial element into the natural world are often underestimated. Consider the infamous pesticide dichloro-diphenyl-trichloroethane (better known as DDT), first widely used in the 1940s (4). Its deleterious effects did not become apparent until decades later, when birds that ate insects killed by the pesticide were found to suffer reproductive harm, and humans exposed through livestock or drinking water were harmed as well. Could the reduction of mosquitoes cause an imbalance in the ecosystem? The effects of Oxitec’s technology have only been tested on a small scale with respect to both time and space, but controlling populations of species is nothing new for humans; consider open hunting season. In fact, the US Food and Drug Administration (FDA) recently approved a field trial of the modified mosquitoes on the grounds that their release would not significantly unbalance the ecosystem (5). The decision was made this past August, and we have yet to see if there are any downsides. So far, it appears that this new method of targeting disease vectors to prevent humans from contracting the disease may be writing a new page in medical history.

Jeongmin Lee ‘19 is a sophomore in Lowell House.

WORKS CITED

[1] “Zika Virus Vaccine Product Development.” World Health Organization. World Health Organization, n.d. Web. 11 Sept. 2016.

[2] Riedel, Stefan. “Edward Jenner and the History of Smallpox and Vaccination.” Proceedings (Baylor University. Medical Center) 18.1 (2005): 21–25. Print.

[3] “Our Solution | Oxitec.” Oxitec. N.p., n.d. Web. 30 Sept. 2016.

[4] “DDT – A Brief History and Status.” EPA. Environmental Protection Agency, n.d. Web. 01 Oct. 2016.

[5] Brown, Kristen V. “The FDA Just Greenlit Releasing Mutant Zika-killing Mosquitoes in Florida.” Fusion. N.p., n.d. Web. 30 Sept. 2016.

[6] “Zika Virus | NIH: National Institute of Allergy and Infectious Diseases.” U.S National Library of Medicine. U.S. National Library of Medicine, n.d. Web. Sept.-Oct. 2016.

[7] Lipsitch, Marc, and Benjamin J. Cowling. “Zika Vaccine Trials.” Science. N.p., 09 Sept. 2016. Web. 12 Sept. 2016.

Why Should We Care About Climate Change?

By: Arjun Mirani

On February 14th, 1990, the spacecraft Voyager 1 took an iconic photograph of the Earth from over 4 billion miles away, as it zoomed towards the edge of our Solar System. From this humbling vantage point, our planet appears to be no more than a speck – 0.12 pixels in size – in an enormous arena of darkness. Here is Carl Sagan’s oft-quoted response (1):

“Look again at that dot. That’s here. That’s home. That’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization… every ‘superstar’, every ‘supreme leader’, every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.”

The Earth has always seemed powerful, solid, and reliable to us humans. We are awed by the tempestuous rise and fall of our oceans. Fresh air seems infinite in supply. The sun seems a tranquil yellow circle in the sky, bathing our homes and fields in warmth. As nature wends its cyclical way, we have constructed the grand edifice of human culture – literature, art, music, architecture, philosophies of politics and society, economic systems and governmental regimes. We take it for granted that these things will exist, in all their splendor, for generations to come.

The unfortunate truth is that our human viewpoint is severely limited by our size. Let us take a step back and view humanity for what it is – an incredibly young species, adrift in an unimaginably vast universe. The piece of rock on which we stand is the only thing tethering us to life. Our atmosphere is barely a membrane separating us from the vacuum of space (where we would instantly suffocate and explode), and preventing us from being killed by deadly ultraviolet radiation. Our existence is far more precarious than the solid ground beneath our feet makes it seem.

Despite the billions of galaxies out there, we have not seen any trace of other intelligent civilizations, or even basic life forms. The fact that life exists on our little blue bubble – something we tend to take for granted – is truly remarkable. The universe is an incredibly hostile place. An asteroid impact of sufficient strength could render Earth completely devoid of life. If our planet were twice as close to the center of our galaxy as it happens to be, gamma ray bursts would have prevented the formation or long-term development of life. Notwithstanding external threats, a supervolcanic eruption on Earth could cause mass extinction (which happened 250 million years ago). These are very real possibilities that routinely occur elsewhere in the universe.

Tragically, we humans are currently pushing the boundaries of our luck towards breaking point. We are eroding the conditions of our home planet that have nurtured the beautiful miracle of life, unaware of how tenuous and precious they are.

Since the Industrial Revolution, human activity has been significantly altering a climate that has just the right calibration to support life. Remarkably, Earthly life consists of not just a few strains of microbes, but millions of stunningly diverse species that give this planet color and flair. Climate change has the capacity to be an existential threat. At the very least, it can consign humanity to a terribly frugal existence, while other more helpless forms of life are destroyed.

To climate change skeptics: Species are already going extinct at an alarming rate. The concentration of carbon dioxide and other heat-trapping gases has risen dramatically in the past 50 years, at a completely unnatural rate directly attributable to human activity. Several independent studies show that the evidence is overwhelming and scary. Natural causes, like changes in solar energy output, simply cannot account for what we see today. Climate change is real – according to the vast majority of the world’s highly capable scientific community.

To those who acknowledge the existence of climate change, but do not consider it a priority: Climate change demands urgent attention. This is not to say that other issues like poverty and violence do not – they certainly do. But climate change ranks as high as any of them. It has far-reaching implications that go beyond one or two generations. It impacts the future of humanity as a whole, and all other forms of life that are equally entitled to the food, water and air we share.

First, let us consider the human repercussions. Stable civilizations began to form only about 11,500 years ago, when naturally induced climatic fluctuations settled down into predictable patterns. Civilizations cannot take root in protean natural conditions. Even during the course of human history, the rise and fall of civilizations has often been linked to climatic changes. According to Harvard anthropologist Dr. Jason Ur, “When we excavate the remains of past civilizations, we very rarely find any evidence that they as a whole society made any attempts to change in the face of a drying climate, a warming atmosphere or other changes. I view this inflexibility as the real reason for collapse” (4).

It is no surprise that under a Department of Defense Directive, the United States’ military has made responding to climate change a national security priority (5). History has shown, all too often, that humans can be helpless in the face of nature’s wrath. For instance, Hurricane Katrina (2005) destroyed numerous innocent lives, and cost $108 billion in damages (6). Droughts in countries such as India have not only parched crop fields, but also spurred large numbers of farmers to suicide – leaving their children without sustenance or hope. With the climate in human-induced flux, hurricanes like Katrina, droughts, and floods will become much more frequent and intense (7). The human toll is undeniable. For countries with agriculture-based economies, changes in the timeframe and quality of crop growth can be debilitating.

Furthermore, consider the fate of something as fundamental as food. Much of the developed world currently takes consumable food and water for granted; we are headed towards a shortage of both these essential resources, due to human profligacy. Our descendants will neither care about, nor be able to contribute to, the development of human culture if they must struggle to eat, drink and breathe every day. At the same time, we are ravaging our atmospheric shield, directly exposing ourselves to ultraviolet radiation that causes cancer. Imagine how much suffering this would cause to people of all ages, especially those who cannot afford healthcare. We are literally destroying the very things that keep us alive.

At present, the Earth is all we have. Some people believe that if we ruin the climate beyond repair, we will be able to move to Mars and live on. This is simply not feasible in the near future. It is indeed likely that we will establish a self-sustaining Martian colony at some point, but that is decades away at the very least. While companies like SpaceX are actively working towards this goal, current technology is simply not advanced enough to take even five people to Mars at an affordable rate. Even SpaceX’s audaciously ambitious CEO, Elon Musk, dreams not of moving the entire human population to Mars, but of setting up a self-sustaining colony of 1 million people – this could ensure the survival of the human race even if all links to Earth were severed (8).

There are two points to note here. First, this is a very small group in comparison to Earth’s total human population; it in no way ensures the survival of most people (who could not afford the trip or be accommodated in the nascent colony) and their descendants. Climatic trouble on Earth would still bode ill for the vast majority of mankind. Second, establishing a human colony on another planet involves overhauling its environment and atmosphere to make it suitable for life. This ‘terraforming’ process is highly challenging and could well have adverse consequences. The difficulties of re-engineering a planet aside, the human body has not evolved to live permanently in low-gravity conditions (Mars’ surface gravity is roughly a third of Earth’s), and the health impacts of long-term space habitation are still an area of research (9). Moving all of humanity to Mars is a highly distant dream. Given the current pace of climate change, the climate will turn lethal well before the ultimate Mars dream becomes viable.

The eventual fate of our planet is sealed – 7 billion years from now, the Sun will engulf the Earth. Our oceans will evaporate much sooner, in about a billion years (2). By then, if humanity is to survive, we will have left our planet and journeyed into the stars. However, we have a moral obligation to preserve the Earth for as long as possible, rather than prematurely sabotaging it.

Here’s why: humanity is not alone on this planet. We are extremely young – if the Earth’s lifetime were compressed into 24 hours, humans have existed for just the last second of the day (10). In this short timespan we have managed to forcefully impose our dominance, continually encroaching upon Earth’s other inhabitants and causing their extinction. Extinction, when dwelled on, is deeply tragic – a life form that may have been utterly unique in the entire cosmos fades forever into dust.

The sheer biodiversity on Earth is mind-boggling. There are around 8.7 million species of life, of which we are just one (11). Land-based life alone has dazzled humanity with its beauty for centuries. The intricate patterns of butterfly wings; the luminescence of fireflies punctuating the night; the lilting call of a songbird yearning for courtship – these bring such richness to a silent and impersonal universe. Even more fantastic is the world within the oceans – coral reefs of myriad hues transforming the seafloor into a work of art; enigmatic creatures hidden in depths we can never reach.

We deprive ourselves when we fail to notice these things, lost in a world of cellphones, stock markets and traffic. We deprive the universe when we snuff out their existence in our blindness. And that is exactly what we are doing with climate change, more rapidly than ever before. Without immediate action, a third of all land plant and animal species on Earth will be extinct by 2050 – a mere 34 years from now (12). The artistry of life is one of the most ineffable features of the cosmos, and it is surely our duty to preserve it for as long as we possibly can.

The natural environment is not just central to other life forms – human happiness hinges deeply on it too. We take for granted the fact that we can go for a walk, perhaps by a river, enjoying a cool summer breeze. We can wake up to a shining sun that subconsciously lifts our spirits. We can step outside and smell the enchanting aroma that follows a new rainfall. Gravity doesn’t bother us; breathing is free and satisfying. But there are worlds where none of this is possible. One of them is a dying Earth, where we must venture outdoors wearing oxygen masks, faced with tumultuous weather, surrounded by parched plants and barren ground. Another is Mars, where we must venture outdoors in bulky spacesuits, beneath a black sky, a new gravitational field making each step feel strange.

This is not a prediction of an imminent doomsday. Both these worlds await us in the long run, but with sufficient effort, the need to live permanently in either can be postponed for thousands, if not millions, of years. Even if humans survive climatic catastrophe, as we probably will, it will be a half-existence devoid of the things that make life worth living – things no human can artificially replicate. The laws of nature have manifested themselves in singular ways on planet Earth. As Carl Sagan so eloquently noted, this “underscores our responsibility to preserve and cherish the pale blue dot, the only home we’ve ever known” (1).

Arjun Mirani ‘20 is a freshman in Thayer.

WORKS CITED

[1] http://www.planetary.org/explore/space-topics/earth/pale-blue-dot.html (The Planetary Society). Picture is a NASA/JPL public domain image.

[2] Barras, Colin. ‘How Long Will Life Survive On Planet Earth?’, BBC Earth, 23 March 2015. Web. http://www.bbc.com/earth/story/20150323-how-long-will-life-on-earth-last

[3] ‘Climate Change – How Do We Know?’, NASA – Global Climate Change: Vital Signs of the Planet. Web. http://climate.nasa.gov/evidence/

[4] Sohn, Emily. ‘Climate Change and the Rise and Fall of Civilizations’, NASA – Global Climate Change: Vital Signs of the Planet. Web. http://climate.nasa.gov/news/1010/climate-change-and-the-rise-and-fall-of-civilizations/

[5] Dept. of Defense Directive 4715.21. http://www.dtic.mil/whs/directives/corres/pdf/471521p.pdf

[6] National Hurricane Center. http://www.nhc.noaa.gov/pdf/nwsnhc-6.pdf

[7] Climate Change Effects, NASA – Global Climate Change: Vital Signs of the Planet. Web.

[8] Urban, Tim. ‘SpaceX’s Big Rocket – The Full Story’ – based on interview with Elon Musk, Wait But Why, 28 Sept. 2016. Web. http://waitbutwhy.com/2016/09/spacexs-big-fking-rocket-the-full-story.html

[9] Scharf, Caleb A. ‘So You Want To Terraform Mars?’, Scientific American blog ‘Life, Unbounded’, 29 Sept. 2016. Web. http://blogs.scientificamerican.com/life-unbounded/so-you-want-to-terraform-mars/?WT.mc_id=SA_FB_SPC_BLOG

[10] Urban, Tim. ‘Putting Time Into Perspective’, Wait But Why. Web. http://waitbutwhy.com/2013/08/putting-time-in-perspective.html

[11] Mora, C., Tittensor, D., Adl, S., Simpson, A., Worm, B. ‘How Many Species Are There on Earth and in the Ocean?’, Public Library of Science: Biology Journal, 23 Aug. 2011. Web. http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1001127

[12] UN TEEB, US Geological Survey, BP, London Market Exchange (2012), and Worm et al. (2006)


The Ethics of Self-Driving Cars

By: Caroline Wechsler

Nearly anyone who has taken an intro philosophy class, and indeed most people with some degree of mainstream intellectual knowledge, will recognize the opening of the infamous trolley problem: you are the driver of a speeding trolley, and ahead of you on the track are five people. You try to stop the trolley by pushing the brakes, but they do not work – the situation looks hopeless. However, there is a branch off to the right of the track where only one unsuspecting person stands. Turning onto this track would allow you to spare the five people, but would kill one person by an active choice. Should you turn the trolley?

Up until very recently, this problem has presented a classic but laughably implausible thought experiment. However, a new technology may bring a version of it to much greater urgency: self-driving cars.

The trolley problem, first conceived by philosopher Philippa Foot in 1967, presents an interesting conundrum because it forces an individual to make a choice that will inevitably result in the death of at least one person (1). A utilitarian, concerned with creating the greatest good for the greatest number, typically advocates turning the trolley: this kills one person as opposed to five, and five deaths are worse than one. Other approaches suggest that turning the trolley makes you morally responsible for the death while inaction does not; therefore, you should not turn the trolley. A deontologist, for instance, operating under the premise that killing is always wrong, would advocate leaving the trolley on its path. Popular consensus holds that it is better to kill the one rather than the five, as most people subscribe to utilitarianism.

Self-driving cars turn the trolley problem from a thought experiment into a frighteningly real scenario. Imagine you are driving and five individuals walk into the road in front of you; you can either hit them or swerve and hit a cyclist, killing him or her – which is the right choice? Admittedly, this is the sort of decision that human drivers make all the time; however, programming a car to follow a certain decision-making pathway means subscribing definitively to one ideology or another.
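As a purely hypothetical illustration of what committing to one ideology in code might look like, consider the sketch below. The Outcome type, the two policies, and the numbers are all invented for this article; no production autonomous-vehicle stack reduces ethics to a dozen lines.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Outcome:
    action: str                   # e.g., "stay_course" or "swerve"
    expected_deaths: int          # total deaths expected from this action
    deaths_caused_by_action: int  # deaths caused by actively intervening

def utilitarian_choice(outcomes: List[Outcome]) -> Outcome:
    # Minimize total expected deaths, regardless of who causes them.
    return min(outcomes, key=lambda o: o.expected_deaths)

def non_interventionist_choice(outcomes: List[Outcome]) -> Outcome:
    # Prefer options that cause no deaths by active choice; fall back
    # to the utilitarian answer only if every option causes some.
    passive = [o for o in outcomes if o.deaths_caused_by_action == 0]
    return utilitarian_choice(passive or outcomes)

scenario = [
    Outcome("stay_course", expected_deaths=5, deaths_caused_by_action=0),
    Outcome("swerve", expected_deaths=1, deaths_caused_by_action=1),
]

print(utilitarian_choice(scenario).action)          # swerve
print(non_interventionist_choice(scenario).action)  # stay_course
```

Choosing between these two functions, or writing a third that weights passengers above pedestrians, is exactly the kind of definitive commitment described above.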

A further and more troubling complication is whether a self-driving car should prioritize its passengers over other individuals. For instance, say you’re driving on a curvy mountain road, and around the bend comes a school bus full of children, heading directly for you. Should the autonomous car swerve so as to avoid the bus and save the many children, even though this will send the driver off the cliff?

This case complicates the clear utilitarian decision-making of the original trolley problem. Surveys show that more than 75% of respondents agree that sacrificing the passenger is the morally correct choice (2). They approve of self-driving cars programmed to act in a utilitarian manner – but the vast majority of study participants said they would not themselves want to ride in such vehicles (2). Attaining the greater good is a less straightforward choice when one is not guaranteed to be among the beneficiaries. Not only do individuals wish to protect themselves, but many are also wary of handing moral decisions over to a machine. Washington Post columnist Matt McFarland sums the situation up thusly: “Humans are freaking out about the trolley problem because we’re terrified of the idea of machines killing us. But if we were totally rational, we’d realize 1 in 1 million people getting killed by a machine beats 1 in 100,000 getting killed by a human” (3).

This puts manufacturers in a tricky spot: protect the passengers who own the car, or promote the greater good?

Survey results suggest that for cars to be marketable, and therefore financially viable, they should be programmed against what is better for the common good in these instances. It’s possible, though, that the general safety benefits of transitioning to self-driving cars might outweigh these harms, as McFarland points out (3) – and these benefits might go unrealized unless people are persuaded to purchase the cars. One more option for car manufacturers would be to program each car according to its owner’s wishes. But this introduces a whole new set of legal and moral issues: if an owner knowingly chooses an algorithm which may result in the traffic deaths of others, is she responsible for those deaths even though she is not in control of the vehicle? Such questions have no clear answers in law, or indeed in philosophical debate. So how do we program cars to act in these ambiguous cases?

Self-driving cars are far from perfect in their current forms: they have difficulty driving in congested areas, for example (3). In fact, one persistent problem with self-driving cars is that they follow the law all the time, to a fault. For instance, they always follow the speed limit, which seems obviously correct until one tries to merge onto a highway and cannot exceed the limit to do so safely (4). A study by the University of Michigan Transportation Research Institute found that self-driving cars currently have accident rates twice as high as regular cars, generally because aggressive or inattentive humans hit them from behind, unaccustomed to vehicles that follow the law so precisely (5). But giving cars some element of “judgment” to break the law and simulate human drivers is a proposition fraught with further practical, legal, and ethical stumbling blocks.

Some experts, like Daniela Rus, the head of the Artificial Intelligence lab at MIT, believe this problem could be solved by developing technology so accurate at planning and perceiving risks that cars would be able to avoid hitting anyone, sidestepping the trolley problem altogether (3). Yet this solution seems incredibly far off – such technology is many years in the making. And this vision would most likely require the vast majority of cars on the road to be similarly self-driving so that their programming could function in harmony. Addressing widespread fears about issues like the trolley problem would have to come first.

Still, many experts argue that self-driving cars will eventually make driving substantially safer, despite the additional risks they may pose now. While these cars may not always respond to a traffic conundrum in the way we think is correct, it is worth remembering that many human drivers frequently fail to respond the way we would want as well. Industry leaders hope that improving technology will bring more of these cars onto the roads and make them much safer. Perhaps one day self-driving cars will dominate the market. But for today, this may be one scenario in which technology is pulling ahead of our readiness to instruct it. As Harvard psychologist Joshua Greene writes, “before we can put our values into machines, we have to figure out how to make our values clear and consistent” (6).

Caroline Wechsler ‘19 is a sophomore in Currier.

WORKS CITED

[1] Thomson, Judith Jarvis. “The Trolley Problem.” The Yale Law Journal, 94(6): May 1985.

[2] Bonnefon, Jean-Francois, Shariff, Azim, and Rahwan, Iyad. “The social dilemma of autonomous vehicles.” Science, 352(6293): June 2016.

[3] Achenbach, Joel. “Driverless cars are colliding with the creepy trolley problem.” The Washington Post. 29 December 2015. Web.

[4] Naughton, Keith. “Humans are slamming into driverless cars and exposing a key flaw.” Bloomberg BusinessWeek. 17 December 2015. Web.

[5] Schoettle, Brandon, and Sivak, Michael. “A Preliminary Analysis of Real-World Crashes Involving Self-Driving Vehicles.” University of Michigan Transportation Research Institute. October 2015.

[6] Markoff, John. “Should Your Driverless Car Hit a Pedestrian to Save Your Life?” The New York Times. 23 June 2016. Web.

Tackling the Replication Crisis

By: Felipe Flores ‘19

We are in the midst of what has been dubbed the “replication crisis” of science. Recent retrospective analyses reveal that the results of several important experiments cannot be reproduced. We expect research results to be consistent: they must be unbiased and unaffected by conflicts of interest, and they must hold up over time, because others will build upon them in their own pursuit of truth. Why, then, do so many experiments fail the test of replication? Scientometrics, the quantitative study of science itself, reveals an exponential growth in the amount of data we produce, along with outside pressures and internal methodological flaws that have built up over time to culminate in this crisis [1]. This doesn’t imply that science is wrong; rather, the problem lies in the way experiments are performed and evaluated.

Since science is collective and cooperative, building on progress made by past scientists, individuals are not to blame. Instead of blaming those who have published irreplicable studies, or, even worse, those who have committed academic fraud, we should look at the deeper flaws within our system. Most researchers are simply unaware of the methodological flaws that lead them to these mistakes.

The problem begins when scientists face systemic biases. Most published research is performed in the competitive environment of an academic setting, where publishing often in high-impact journals seems fundamental to advancing one’s career; this pressure can lead to less rigorous, less reliable research. Scientists are often trapped in a dilemma: perform extra experiments to improve the statistical reliability of their results, or rush to publish. In academic settings, pressure to publish is a real, tangible phenomenon encountered by undergraduates and tenured professors alike, and the financial stakes of grants and fellowships add further outside pressure. While scientists are working hard to increase scientific knowledge and maintain their careers, it is the system that is doing the most harm, and it is the system that should change [2].

Even undergraduate students encounter reproducibility issues, such as statistics courses teaching a “cutoff” for statistical significance of p ≤ 0.05. The reason for this number is merely historical; nothing specific about 0.05 makes it the standard cutoff. P-values only became highly popular after Ronald Fisher endorsed them in a work published in 1925. Nevertheless, Fisher is not to blame; he was trying to find a criterion that was simple, useful, and mathematically well grounded as a way of improving research in general, and he succeeded. The framework of hypothesis testing and statistical significance remains fundamental for drawing conclusions from experiments, but Fisher never anticipated the current crisis. “We teach it because it’s what we do; we do it because it’s what we teach” captures the present issue [3]. Since incomplete approaches to statistical significance are still taught in college and graduate school, this issue is not going away soon.
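To see what such cutoffs actually measure, here is a small Python sketch (assuming SciPy is available) that converts significance thresholds, including the stricter 5-sigma convention discussed below, into the tail probabilities they correspond to.

```python
from scipy.stats import norm

# Significance cutoffs are thresholds on the same quantity: the
# probability of a result at least this extreme arising by chance
# under the null hypothesis.

for sigma in (1.96, 3.0, 5.0):
    p_one_sided = norm.sf(sigma)  # upper-tail probability beyond `sigma`
    print(f"{sigma:4} sigma -> one-sided p = {p_one_sided:.2e}")

# Approximate output:
# 1.96 sigma -> one-sided p = 2.50e-02   (doubled, the familiar 0.05)
#  3.0 sigma -> one-sided p = 1.35e-03
#  5.0 sigma -> one-sided p = 2.87e-07   (particle physics' bar)
```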

We have to restructure the way we analyze data. A change of mentality and a change in education will, in due time, correct the misuse of hypothesis testing. The threshold for statistical significance should be specific to the field, to the experiment’s methods, and, more importantly, to the discretion of the scientific community in the context of the study. Physicists, for example, use a 5-sigma threshold (a minuscule p-value of around 3 × 10^-7) because claims about the building blocks of nature must leave essentially no room for random flukes. Other fields, like medicine, can still provide useful insight at a lower level of statistical significance, such as whether a new drug is effective against cancer or infectious disease. If these standards are applied, scientific journals will become more selective, and scientists will take the time to improve their methods and repeat their experiments. By holding research to a higher standard, science will escape the current replicability crisis and prevent a future one. This is not the time to criticize science, but the time to learn from past mistakes. This is not the time to give up on research, cut funding, or act in disbelief of the scientific community, but rather the time to recognize the community’s effort to make better, more rigorous, more truthful discoveries.

Felipe Flores ‘19 is a freshman in Hollis Hall.

WORKS CITED

[1] Begley & Ioannidis. Reproducibility in Science: Improving the Standard for Basic and Preclinical Research. Circ. Res., 2015; p. 116-126.

[2] Chin, Jason M. Psychological Science’s Replicability Crisis and What It Means for Science in the Courtroom. Psychology, Public Policy, and Law, 2014; Vol. 20; p. 225-238.

[3] Wasserstein & Lazar. The ASA’s Statement on p-Values: Context, Process, and Purpose. The American Statistician, 2016; Vol. 70; p. 129-133.

Fall 2016: Hot and Cold

Behold our Fall 2016 issue: Hot and Cold! Articles are posted individually as blog posts (we have linked them below). We also have a PDF version available on our Archives page. Print issues will be available around Harvard’s campus starting early Spring 2017!

A big thank you to our fantastic staff—and Happy Holidays!

NEWS BRIEFS

GENERAL ARTICLES

FEATURE ARTICLES

COMMENTARY

Spring 2016: In and Out of Focus

We are happy to release our Spring 2016 issue: In and Out of Focus! Articles are posted individually as blog posts (the links are listed below). We also have a PDF version available on our Archives page. Print issues will be available around Harvard’s campus starting early Fall 2016!


Table of Contents:

NEWS BRIEFS

LIGO’s Discovery: Understanding the Gravity of the Situation by Alex Zapien ‘19

Survival of the Fittest: Darwin’s Evolutionary Theory Applied to Programming Languages by Eleni Apostolatos ‘18

Redefining Home?: The Discovery of “Planet X” by Alex Zapien ‘19

Space Tourism and You by Priya Amin ‘19

GENERAL ARTICLES

Sleeping on Your Desk by Jeongmin Lee ‘19

Human Genome Editing: A Slippery Slope by Alissa Zhang ‘16

Eye in the Sky by Grace Chen ‘19

FEATURE ARTICLES

Prions and Small Particles: Micro Solutions to a Macro Problem by Caroline Wechsler ‘19

Critical Periods: Envisioning New Treatments for Lazy Eye by Audrey Effenberger ‘19

Cellular Senescence: Age and Cancer by Una Choi ‘19

Getting a Feel for Cosmic Events by William Bryk ‘19

Lower Extremity Exoskeletons by Patrick Anderson ‘17

The Battlefield is the Lab: Curing Type I Diabetes by Felipe Flores ‘19

Solving the Brain Puzzle by Kristina Madjoska ‘19

Immunotherapy Against Cancer by Elliot Eton ‘19


The Battlefield is the Lab: Curing Type I Diabetes

By: Felipe Flores

Diabetes has aroused great interest among public health experts, physicians, patients, and researchers because of its many accompanying conditions and complications, including stroke, blindness, and limb loss (1). However, ongoing research promises better forms of treatment, and perhaps even a cure. To understand this research, one should first understand the disease’s mechanism and its current treatment.

Mechanism

Diabetes is characterized by prolonged periods of excessively high blood glucose associated with a deficiency of the hormone insulin, whose major role is to promote glucose absorption from the bloodstream into cells to power their function. Since the 1930s, diabetes has been categorized into two main types (2). Type 1 diabetes, often called ‘insulin-dependent’ diabetes, is an autoimmune disease: T cells of the immune system, which would normally defend the body from infection, start attacking and destroying the insulin-producing beta cells of the pancreas (1). Without cells to produce the hormone, the patient becomes dependent on insulin injections or a pump. The causes of this autoimmune attack are unclear. As researcher Dr. Anna Moore from the MGH Martinos Center for Biomedical Imaging explains, “Research has found genes responsible for predisposition to the disease, and some environmental factors that could be related. I don’t think anybody could point out one factor that triggers the disease.”

On the other hand, a systematic resistance to insulin is known as type 2 diabetes. As Dr. Bruce Fischl, also a researcher from the MGH Martinos Center, clarifies, “It’s a whole different disease. With type 2 diabetes the body still produces essentially the same amount of insulin, but it’s less effective.” This is a direct consequence of malfunctioning insulin receptors on cells.

Symptoms

The body’s inability to regulate blood glucose concentration causes an array of complications. At excessive concentrations, glucose becomes toxic, and it is particularly harmful to small blood vessels. Accumulated damage can lead to limb loss, blindness, kidney disease, and impairment of other organs in which small blood vessels play a major role. In fact, “In 2010, about 60% of nontraumatic lower-limb amputations among people aged 20 years or older occurred in people with diagnosed diabetes, […] and diabetes was listed as the primary cause of kidney failure in 44% of all new cases in 2011” (1).

While amputations can be dramatic, diabetes is often subtle and insidious. Dr. Fischl explains: “From the day to day perspective, the disease is a burden for the patient. Often they cannot go longer than 30 minutes without thinking about their blood glucose, what food they’ve eaten, how much exercise they’ve done, when was their last insulin injection. It’s a disease that takes no vacation and that requires constant monitoring.”

Current Treatment Methods

Treatment varies depending on which type of diabetes the patient has. Type 2 diabetes generally calls for simpler treatment: changing to a healthier diet, exercising more, losing excess weight, and taking oral medication are usually enough to control blood glucose, though physicians may also prescribe insulin injections. This type of diabetes represents the bigger public health challenge because it accounts for around 90% of diabetes cases, but it has the more tractable treatment (1). Conversely, type 1 diabetes involves a mysterious underlying autoimmune attack and will always require an insulin supply, in addition to the aforementioned self-care practices. Treating type 2 diabetes may be a matter of work and education for the patient, but curing type 1 diabetes is up to scientists.

Recalibration of Treatment: Modeling the Disease

Dr. Bruce Fischl has focused his research on “reducing average blood glucose, essentially reducing the complications of uncontrolled peaks of glucose.” As he explains, the normal human pancreas releases insulin on two time-frames. Basal insulin is the ‘background’ insulin produced to maintain normal body function and let cells take in glucose throughout the day; it acts like a long-acting insulin because of its slow, continuous release into the bloodstream. Bolus insulin, on the other hand, is fast-acting insulin released after meals to dispose of excess food-borne glucose. The treatment of insulin-dependent patients tries to mimic the pancreas’ natural timing with injections or a pump. However, the time-scales of glucose absorption from food and of insulin action are different. “Insulin is much slower,” Fischl says, “so even though the patient did everything right with their injections, there will be peaks of excess glucose in the blood for a couple hours after every meal, and that is what we are trying to reduce.” Through computational simulations, Dr. Fischl is looking to create better scheduling and programming for insulin pumps, so that the average glucose level is lower throughout the day and glucose peaks after meals are not as high and harmful as they currently are. His team has received NIH grants and approval for clinical trials starting in March 2016 with the Joslin Diabetes Center.
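As a toy illustration of the timing mismatch Dr. Fischl describes, the sketch below models meal glucose arriving faster than injected insulin acts. The curves and constants are invented for illustration; this is not his simulation, nor a physiological model.

```python
import numpy as np

# Meal glucose enters the blood quickly; injected insulin acts more
# slowly. Even when the total insulin effect matches the meal, the
# delay leaves a transient glucose peak. All constants are made up.

t = np.arange(0, 240.0)                  # minutes after a meal
meal_in = np.exp(-((t - 40) / 25) ** 2)  # absorption, peaks ~40 min
insulin = (25 / 45) * np.exp(-((t - 90) / 45) ** 2)  # same area, ~90 min

excess = np.cumsum(meal_in - insulin)    # net glucose above baseline
peak_minute = int(t[excess.argmax()])
print(f"glucose excursion peaks about {peak_minute} minutes after eating")
```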

Unveiling the Disease: Visual Observation of Diabetes

Dr. Anna Moore researches imaging techniques that can detect the autoimmune attack while beta cells still remain. In one of her projects, Dr. Moore studies the accumulation of T cells in mouse pancreases, a clear sign of an autoimmune attack, before blood glucose levels rise. In humans, there can be a span of up to 2-3 years between visible accumulation of T cells and the onset of diabetes. As she explains, “When there is an autoimmune attack, gaps appear on the network of blood vessels that irrigate beta cells in the pancreas. We can then pass a contrast agent through the blood, and the gaps would cause the contrast to remain around damaged tissue instead of circulating through the bloodstream. You would see the agents accumulating in the pancreas on the MRI.” The potential of this research lies in its ability to observe diabetes before any diabetic imbalance appears. Type 1 diabetes generally appears during childhood, and by the time symptoms show up, there are too few cells to be salvaged. A patient with a family history of diabetes, however, could start immunosuppression early.

The most difficult task in Dr. Moore’s project is imaging living beta cells: it is extremely hard to find a beta cell-specific contrast agent. This method, however, overlaps substantially with approaches currently being applied toward a cure.

The Edmonton protocol is a surgical procedure that consists of implanting purified donor beta cells into the patient’s liver (an organ rich in nutrients and a friendly new home for the cells) (3). The operation has shown promising results, achieving an astonishing rate of insulin independence of 80% of patients three years after the procedure (4). Five years after the operation, however, the number of insulin-independent patients drops to 10% (5). The cells die from a lack of nutrients, from the trauma of the procedure itself, and/or from the difficulty of adapting to a different environment. Because the beta cells are foreign to the body, the immune system attacks them; patients undergo an immunosuppression regimen, but this is often insufficient. Dr. Moore’s contrast agents can chemically label the transplanted beta cells and track them. The results in mice were so promising that her team is now moving forward to trials in baboons. “Our imaging techniques are a tool for other researchers to conduct experiments that would lead us closer to a cure,” says Dr. Moore. She envisions her imaging protocols being used for research on drugs that would prevent immune destruction of the cells, or on new surgical procedures that would better allocate transplanted cells. The future of diabetes research, it seems, will rely heavily on imaging techniques such as Dr. Moore’s; in fact, stem cell transplantation research uses them all the time.

Stem Cell Research

Scientists are now able to make insulin-producing cells from pluripotent stem cells, unique cells that can differentiate into multiple tissue types and have thus captured great interest from researchers. A study led by Dr. Doug Melton, published in Cell, showed that under the right conditions, the function of stem-cell-derived beta cells (sc-beta cells) is comparable to that of normal adult cells, including their response to changes in blood glucose levels (6). Unfortunately, there remained the difficulty of protecting the new cells from immune attack, for which Dr. Melton’s laboratory started working with Dr. Daniel Anderson and Dr. Robert Langer’s team at MIT. Several media outlets regarded their invention as a possible cure for type 1 diabetes (7): they created and implanted in mice polymer capsules that effectively shield newly transplanted sc-beta cells from autoimmune attack (8). When the team removed the capsules after six months, the capsules still contained living, working cells. With no inflammatory response or other ill effects, the device has overcome immune attack and is approaching clinical trials.

Conclusion

The battle is far from over, but scientists are making important headway into treating diabetes. There are many ways to tackle a disease, from mechanistic cures to treating symptoms and increasing diagnosis rates. As exemplified by the studies in this article, researchers are working to attack diabetes from all possible angles in the hopes of one day finding a cure.

Felipe Flores ‘19 is a freshman in Hollis Hall.

WORKS CITED

[1] Centers for Disease Control and Prevention. National Diabetes Statistics Report. 2014.

[2] Mandal, Ananya. History of Diabetes. News Medical [Online], December 24, 2012. http://www.news-medical.net/health/History-of-Diabetes.aspx (accessed Feb. 3, 2016).

[3] Shapiro, J. et al. N. Engl. J. Med. 2006, 355, 1318-1330.

[4] Robertson, R.P. et al. Diabetes. 2010, 59, 1285-1291.

[5] Ryan, E. et al. Diabetes, 2005, 54, 2060-2069.

[6] Melton, D. et al. Cell, 2014, 159, 428–439.

[7] Colen, B.D. Potential Diabetes Treatment Advances Device Shields Beta Cells From Immune System Attack. Harvard Gazette [Online], January 25, 2016. http://news.harvard.edu/gazette/story/2016/01/potential-diabetes-treatment-advances/ (accessed Mar. 10, 2016).

[8] Melton, D. et al. Nat. Med. 2016, 22(3), 306-311.

Lower Extremity Exoskeletons

By: Patrick Anderson

In the late 1960s, researchers began to address the prospect of wearable robotic technologies for humans. The majority of these first attempts were intended to enhance the physical capabilities of able-bodied people, particularly those serving in the military (3). Although these initial powered prototypes failed to attain widespread use, whether for safety reasons or for sheer size and cost inefficiencies, exoskeletons have remained an active area of research to date. Over the past decade, researchers and biotech companies have continuously improved upon existing human exoskeleton technologies at an astounding rate, aiming to make them lighter and more cost-efficient. The targeted buyers of robotic exoskeletons have also grown to include workers in physically demanding professions and individuals suffering from pathologies affecting mobility. This article seeks to illuminate advances in lower-extremity robotics as they pertain to rehabilitative treatment for individuals with impaired mobility.

Conditions and Applications

According to a 2014 CDC survey, 14% of Americans above the age of 18 suffer from conditions seriously impeding their mobility (2). Neurological lesions, such as spinal cord injury (SCI), motor cortex damage following stroke, or degeneration caused by diseases like multiple sclerosis, result in losses of synaptic connections between higher processing centers and efferent nerve fibers, ultimately reducing motor output. In addition, a multitude of deleterious secondary health complications are associated with impaired mobility, including high blood pressure, bowel problems, and autonomic dysreflexia, health issues that are especially severe for aging individuals (4). In the case of SCI, the yearly expenses directly attributable to these conditions, depending on the degree of impairment (ranging from incomplete paraplegia to complete tetraplegia), range from $300,000 to $1,000,000 in the first year post-injury and $40,000 to $180,000 in each subsequent year (8). Individuals with these and similar pathologies suffer an overwhelming physical, emotional, and financial burden.

Fortunately, therapy involving lower-extremity training exercises has been shown to strengthen muscle organization and improve overall coordination in both healthy and impaired individuals (11). While implementing such programs used to require formidable manual assistance, the integration of mechatronic exoskeletons into current therapeutic practice has gradually freed physical therapists of this labor-intensive work (11).

Components of Gait

Before delving into how exoskeletons simulate normal bipedal locomotion, it is necessary to briefly discuss human gait, the basic paradigm these technologies seek to mimic. The gait cycle is measured from heel strike to heel strike of one leg in motion. Following heel strike, the foot flattens, the heel rises, the toe lifts, and the leg is propelled forward during the so-called “swing phase” (6). Gait involves various transfers of energy derived from ground forces, gravitational forces, spring storage, and muscle torque. To stay stable during walking, the body employs numerous control mechanisms to minimize displacement of the center of gravity: various muscles and tendons act in tandem to rotate, tilt, and flex the joints of the lower extremities at precisely timed intervals (6). In sum, human gait is a complex model with a total of seven degrees of freedom per leg (three at the hip, one at the knee, and three at the ankle) for which powered exoskeletons must account (3).

Basic Mechanical Function

While there are many differences in overall design, most powered orthoses share similar features that actively work to replicate gait. These devices involve a DC power supply that is either worn in a backpack by the user or located nearby (in the case of treadmill-training orthoses) (3). The DC motor supplies power to actuators located at the three joints of the lower extremities, providing flexion, extension, abduction, and adduction of these joints as needed (6). In addition, sensors located at pivot points measure angular displacement and angular acceleration, while sensors on the flat portion of the foot measure load and force distribution (3). Almost all devices also include some form of body-weight support that allows users to leverage their torso and arms to stabilize themselves more efficiently (11). To prevent overdependence on machine power, many exoskeletons maximize human motor learning by providing varying degrees of assistance. In some newer models, the foot trajectory travels through a virtual tunnel: when the foot strays from the tunnel, normal forces act on it to return it to its place, but while the foot is traveling inside the tunnel, no machine forces act on it, permitting the user to exercise greater autonomy (1).
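Here is a minimal sketch of that virtual-tunnel idea, assuming a simple spring-like controller. The gain, tunnel radius, and positions are invented for illustration and are not parameters of any device described here.

```python
import numpy as np

# Assist-as-needed "virtual tunnel": apply a corrective force only when
# the foot strays outside a tolerance tube around the reference gait
# trajectory. All constants below are illustrative assumptions.

TUNNEL_RADIUS = 0.03   # meters of allowed deviation
STIFFNESS = 400.0      # N/m restoring gain outside the tunnel

def assist_force(foot_pos, desired_pos):
    """Return the corrective force (N) applied to the foot."""
    error = desired_pos - foot_pos
    dist = np.linalg.norm(error)
    if dist <= TUNNEL_RADIUS:
        return np.zeros_like(error)   # inside the tunnel: no assistance
    # Outside: spring-like force toward the tunnel wall, proportional to
    # how far the foot has strayed beyond the allowed radius.
    return STIFFNESS * (dist - TUNNEL_RADIUS) * (error / dist)

# Example: the foot has drifted 5 cm from the reference trajectory,
# so the controller pushes it back with a force of about 8 N.
print(assist_force(np.array([0.05, 0.0]), np.array([0.0, 0.0])))
```

The controller is deliberately passive inside the tunnel, which is what lets the wearer contribute their own effort rather than ride the machine.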

Types of Lower Extremity Exoskeletons and Active Orthoses

One of the lower-extremity exoskeletons currently in use is the LOKOMAT, a treadmill gait trainer consisting of body-weight support and a powered leg orthosis (7). The LOKOMAT contains position, adaptability, and impedance controllers that allow the user to walk on a treadmill under more dynamic settings. This exoskeleton can also activate leg muscles left unused due to paralysis through functional electrical stimulation (7). To provide the physiotherapist with feedback on the effectiveness of therapy, the LOKOMAT also has a graphical user interface that provides biofeedback during training sessions (5). Another powered exoskeleton that has proved highly effective is the ReoAmbulator, a treadmill gait trainer that uses an “assist as needed” control algorithm to let the user increase the level of voluntary control throughout training (7). Finally, the ReWalk is an exoskeleton in high demand among individuals with thoracic-level spinal cord injury. This wearable robotic technology is an over-ground walker that allows users to stand, walk, and even ascend and descend steps over a limited distance. The ReWalk is currently the only FDA-approved technology available for personal use outside of a clinical setting (9).

Areas of Improvement for Future Research

While current exoskeleton technologies are far more advanced than their older counterparts, further research is still needed to reduce costs, increase locomotive capacity, and produce lighter devices.

Recent research has focused on greater use of pneumatic muscle-type actuators: air-pressurized systems that act as artificial muscles, producing extension and contraction similar to the muscles of the body. Researchers in this field are seeking to refine the control algorithms applied to these actuators so that the resulting locomotion more closely mirrors natural gait (7). In conjunction with further robotic research, it is also highly important that the exact functions of muscles and tendons during gait be more fully understood. Understanding these functions could lead to assistive exoskeletons better suited to leg architecture, with unnecessary weight-adding design elements removed (3). With this improved understanding, exoskeletons could also be made more accommodating of different leg shapes and sizes (3).

Patrick Anderson ‘17 is a junior in Cabot House.

WORKS CITED

[1] Agrawal, S. K., Banala, S. K., Mankala, K., Sangwan, V., Scholz, J. P., Krishnamoorthy, V., & Hsu, W. L. (2007, June). Exoskeletons for gait assistance and training of the motor-impaired. In Proceedings of the IEEE 10th International Conference on Rehabilitation Robotics (ICORR 2007) (pp. 1108-1113). IEEE.

[2] Blackwell DL, Lucas JW, Clarke TC. (2014). Summary health statistics for U.S. adults: National Health Interview Survey, 2012. National Center for Health Statistics. Vital Health Stat 10(260).

[3] Dollar, A. M., & Herr, H. (2008). Lower extremity exoskeletons and active orthoses: challenges and state-of-the-art. IEEE Transactions on Robotics, 24(1), 144-158.

[4] Hitzig, S. L., Tonack, M., Campbell, K. A., McGillivray, C. F., Boschen, K. A., Richards, K., & Craven, B. C. (2008). Secondary health complications in an aging Canadian spinal cord injury sample. American Journal of Physical Medicine & Rehabilitation, 87(7), 545-555.

[5] Jezernik, S., Colombo, G., Keller, T., Frueh, H., & Morari, M. (2003). Robotic orthosis lokomat: a rehabilitation and research tool. Neuromodulation: Technology at the neural interface, 6(2), 108-115.

[6] Lieberman, D. (2015, November 11). Human Gaits. Lecture presented at Life Sciences 2, Cambridge, MA.

[7] Dzahir, M. A. M., & Yamamoto, S. I. (2014). Recent trends in lower-limb robotic rehabilitation orthosis: control scheme and strategy for pneumatic muscle actuated gait trainers. Robotics, 3(2), 120-148.

[8] National Spinal Cord Injury Statistical Center. (2013). Facts and Figures at a Glance. Birmingham, AL: University of Alabama at Birmingham.

[9] ReWalk Robotics. (2014, June 26). ReWalk™ Personal Exoskeleton System Cleared by FDA for Home Use [Press release].

[10] Sale, P., Franceschini, M., Waldner, A., & Hesse, S. (2012). Use of the robot assisted gait therapy in rehabilitation of patients with stroke and spinal cord injury. European Journal of Physical and Rehabilitation Medicine, 48(1), 111-121.

[11] Tkach, D., Reimer, J., & Hatsopoulos, N. G. (2007). Congruent Activity during Action and Action Observation in Motor Cortex. Journal of Neuroscience, 27, 13241-13250.

Survival of the Fittest: Darwin’s Evolutionary Theory Applied to Programming Languages

By: Eleni Apostolatos

Living organisms are physical manifestations of genetic data. Formally known as deoxyribonucleic acid (DNA), the genetic code of living creatures is composed of two strands with varying configurations of four bases—adenine, cytosine, guanine, and thymine. DNA is interpreted and expressed by native molecular machinery within cells. In transcribing and translating the bases, the molecular processes essentially create us.

Darwin’s theory of evolution, popularly summarized as survival of the fittest, describes a natural pattern of selection in biological systems: sequences essential for the survival of a species persist through time. Interestingly, research in recent years has shown that the same logic can be applied to computer code.

The similarities between DNA in bacterial genomes and programming code in large-scale computer software have been the subject of a research project conducted by computational biologist Sergei Maslov of Brookhaven National Laboratory and graduate student Tin Yau Pang of Stony Brook University.

The project’s aim was to elucidate why some genes or computer programs are more common than others, and to understand why certain sequences cannot be eliminated over time. In an interview, Maslov explained: “If a bacteria genome doesn’t have a particular gene, it will be dead on arrival. . . The same goes for large software systems. They have multiple components that work together and the systems require just the right components working together to thrive” (1).

Programming languages, designed to communicate instructions to a computer, are generally compiled to binary code, sequences of 0s and 1s. These instructions are then executed by the host computer’s hardware to control the machine’s behavior and carry out the specified commands.

Maslov and Pang used data from the DOE Systems Biology Knowledgebase (KBase), which contains sequenced bacterial genomes, and focused on the frequency of important sequences in the metabolic processes of 500 bacterial species. They then compared their analysis to the installation frequency of 200,000 Linux packages on more than 2 million computers (2). Linux is an open-source software collaboration that grants programmers the ability to edit and modify source code to construct or enhance programs for public use.

The results indicate that the most frequently detected sequences in both the biological and computer systems are those with the largest number of dependents. In short, the more heavily an element is relied upon by others, the more likely it is to be required for the proper functioning of the whole system. Conceptually, this has been understood in biological systems since Darwin: certain traits are more useful for survival, in light of the survival-of-the-fittest phenomenon.

Maslov and Pang produced a formula that accurately predicts the number of essential components in bacterial and computing systems alike. In simplified form, taking the square root of the number of interdependent components yields the number of crucial components that the whole system requires. The calculation holds for both complex systems. Their results are published in the Proceedings of the National Academy of Sciences.
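In rough form, and as a paraphrase of the simplified rule described above rather than the paper’s exact statement, the relationship reads:

```latex
N_{\text{essential}} \approx \sqrt{N_{\text{components}}}
```

Applied to the roughly 200,000 Linux packages in the study, this predicts on the order of \(\sqrt{2 \times 10^{5}} \approx 450\) packages that the ecosystem cannot do without.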

One hypothesized reason for the similarity between the biological and computer systems is that both are open-access systems whose components are installed independently. Bacteria share genetic data freely through a common pool of genes that can be exchanged via horizontal gene transfer; Linux similarly permits free installation of components that are built and shared by many independent programmers. Maslov and Pang’s result may not hold for other operating systems, such as Mac OS, which do not follow Linux’s open-access approach.

Future studies that examine the similarities between genetic and computer codes will help unveil features of both, shedding light on the potential applications and utility of the two in current research and technology. Parallels like these can have an impact in growing technological fields, such as genetic engineering and robotics.

Eleni Apostolatos ’18 is a sophomore in Leverett House.

WORKS CITED

[1] “Researchers Find Surprising Similarities Between Genetic and Computer Codes.” Brookhaven National Laboratory Newsroom. U.S. Department of Energy, 28 Mar. 2013. Web. 30 Dec. 2015.

[2] Pang, T. Y.; Maslov, S. “Universal Distribution of Component Frequencies in Biological and Technological Systems.” Proceedings of the National Academy of Sciences USA 110.15 (2013). Web.

LIGO’s Discovery: Understanding the Gravity of the Situation

By: Alex Zapien

History was made on February 11, 2016, when the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific and Virgo collaboration teams confirmed the existence of gravitational waves, ripples in the fabric of spacetime generated by gravitational interactions and predicted by Albert Einstein a century ago. The announcement sparked widespread excitement in the scientific community, with many calling it one of the most important discoveries in physics in the past several decades: as LIGO Executive Director David Reitze remarked, “this was truly a scientific moonshot, and we did it, we landed on the moon” (1-2). LIGO’s revolutionary project has pushed society to a greater understanding of the universe.

LIGO, which is funded by the National Science Foundation, is a pair of ground-based observatories in Livingston, Louisiana, and Hanford, Washington, built and operated by the California and Massachusetts Institutes of Technology (3-4). Each detector, shaped like an “L,” senses the space distortions that occur when a gravitational wave passes through the Earth (3). Laser interferometry measures, with astonishing precision, how long light takes to bounce back and forth between mirrors at the ends of the detector’s two arms. Usually, the arms are exactly the same length, so light takes exactly the same amount of time to traverse each; when a gravitational wave passes through, however, the detector and the ground beneath it expand and contract a minuscule amount in a particular direction. As a result, the two arms are momentarily no longer the same length, and the light travel time changes (3). The waves were first detected on September 14, 2015, but the detection was not confirmed until February 2016. The waves were generated 1.3 billion years ago by a collision between two black holes weighing about 29 and 36 solar masses (5.8 × 10³¹ kg and 7.2 × 10³¹ kg, respectively). Traveling at the speed of light, they did not reach Earth until this past year. Remarkably, the collision itself lasted only a fraction of a second, releasing tremendous amounts of energy while producing massive ripples in the fabric of spacetime (5).
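The precision involved is easier to appreciate with a quick estimate. Using the widely reported peak strain of roughly 10⁻²¹ from the first detection and LIGO’s 4-kilometer arms, the change in arm length works out to:

```latex
\Delta L = h \, L \approx 10^{-21} \times 4\,\mathrm{km} = 4 \times 10^{-18}\ \mathrm{m}
```

That is a few thousandths of a proton’s diameter, which is why the instrument must isolate its mirrors from essentially every terrestrial source of vibration.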

The original LIGO proposers, Professors Kip Thorne, Rainer Weiss, and Ronald Drever, have provided evidence for and strengthened modern physical theories and have ushered in a new era in which the field of gravitational astronomy is a reality. Black holes and gravitational waves were first predicted by Albert Einstein in his General Theory of Relativity in 1915, more than 100 years ago (4). Several decades later, in 1974, physicists Joseph Taylor Jr. and Russell Hulse demonstrated the existence of the waves when they discovered a binary system composed of a pulsar, a rapidly rotating neutron star that emits radiation in pulses, in orbit around another neutron star. They found that the pulsar’s orbit was gradually shrinking over time because energy was being released in the form of gravitational waves. For the discovery of the pulsar and for showing that it was possible to make measurements of gravitational waves, Drs. Taylor and Hulse were awarded the Nobel Prize in Physics in 1993 (4). With the success of the LIGO project, Professors Thorne, Weiss, and Drever are expected to win a Nobel Prize in the near future.

The future is bright for gravitational astronomy: a third advanced LIGO detector is scheduled to be built in India within the next decade (2). Ultimately, the LIGO discovery is a triumph for humanity, one that will further enlighten future generations about the vast, dark reaches of the endless universe.

Alex Zapien ‘19 is a freshman in Canaday Hall.

WORKS CITED

[1] Graham, Peter. Newsweek: Tech and Science. http://www.newsweek.com/quora-question-how-important-discovery-gravitational-waves-427437 (accessed Feb. 21, 2016). “Quora Question: How Important Is the Discovery of Gravitational Waves?”

[2] Moskowitz, Clara. Scientific American. http://www.scientificamerican.com/article/gravitational-waves-discovered-from-colliding-black-holes1/ (accessed Feb. 21 2016). “Gravitational Waves Discovered From Colliding Black Holes.”

[3] NASA: Solar System and Beyond. http://www.nasa.gov/feature/goddard/2016/nsf-s-ligo-has-detected-gravitational-waves (accessed Feb. 21, 2016). “NSF’s LIGO has Detected Gravitational Waves.”

[4] LIGO: Caltech. https://www.ligo.caltech.edu/news/ligo20160211 (accessed Feb. 21, 2016). “Gravitational Waves Detected 100 Years After Einstein’s Prediction.”

[5] O’Neil, Ian. Discovery: News: Space. http://news.discovery.com/space/weve-detected-gravitational-waves-so-what-160213.htm (accessed Feb. 21, 2016). “We’ve Detected Gravitational Waves, So What?”

Immunotherapy Against Cancer

By: Eliot Eton

The statistics are alarming: in 2012, there were about 14 million new cases of cancer and 8.2 million cancer-related deaths (1). Over the next twenty years, the number of new cases is likely to increase by over 70% (1). Approximately 42% of men and 38% of women are expected to develop cancer over the course of their lifetimes (2). In 2016, 1.7 million individuals will be newly diagnosed with cancer, and 600,000 individuals will die from the disease (1). Immunotherapy, which augments the power of the immune system to directly attack and eliminate tumors, has reinvigorated the search for safe and effective cancer treatment.

Making Cancer a Priority

The first known description of cancer is found on a papyrus on which an Egyptian physician, four thousand years ago, classified all known diseases and their known treatments (3). Case number 45 characterizes breast cancer, yet the description of treatment is pithy: there is none (3). Not until 1948 did Sidney Farber, a pathologist at Harvard Medical School, prove, in a study of the effects of aminopterin on childhood acute lymphoblastic leukemia, that clinically induced tumor remission was achievable (3).

This discovery revitalized cancer treatment research and would soon ignite international efforts to eradicate the disease. With lobbying from Mary Lasker and the Lasker Foundation, President Richard Nixon signed the National Cancer Act of 1971, which officially elevated cancer research to a top national priority, creating, as Dr. Jerome Groopman said, “an entity based on a promise and that was the cure of cancer” (4). With greater understanding of the causes and mechanisms of tumor development, innovation has skyrocketed. Between 1949 and 1971, about 30 drugs were approved and entered treatment clinics; today, more than 300 drugs are in use, and as many as 700 may be in the pipeline (5).

Fundamentals of Immunotherapy

The concept that the immune system is capable of suppressing tumor growth is not new. In 1891, William Coley injected live or inactivated bacteria in order to stimulate antibacterial immune cells to kill nearby tumor cells (6). In the early 1900s, immunologist and future Nobel-Prize-recipient Paul Ehrlich went a step further and proposed the “immune surveillance” hypothesis, which suggested that the immune system’s cells can both naturally identify and destroy emerging tumors (6). Not until 2001, however, was this much-debated theory universally accepted, when Drs. Robert Schreiber and Lloyd Old showed that mice that could not synthesize interferon-gamma, an important protein involved in immune signaling, were more likely to develop B-cell lymphomas (6). One can hypothesize that cancers exist because they have already eluded the host immune surveillance system. In the past few decades, the challenge was to find how this could occur and reverse it.

CTLA-4

In the late 1980s, seeking to understand the interaction between the immune system and cancer cells, Jean-François Brunet et al. set out to characterize the surface of cytotoxic T-lymphocytes (CTLs), also known as killer T cells, the soldiers of the immune system. If the environment and signaling are appropriate, CTLs will lyse and destroy infected, cancerous, or damaged cells. CTLs were already known to require two signals for activation: according to the Cancer Research Institute, the first signal, coming from the T cell receptor, is like an “ignition switch,” priming the cells for action, while the second signal, from co-receptor CD28, is like a “gas pedal,” driving the attack (7). When analyzing the different co-receptors, Brunet et al. discovered a surface protein, cytotoxic T cell antigen-4 (CTLA-4), that seemed to play a critical role in the immune response.

Debate soon arose regarding the function of CTLA-4. While it appeared remarkably similar to CD28 in sequence, work by Dr. James Allison et al. showed that CTLA-4 actually had the opposing effect: it acted as a brake on the immune system (7). CTLA-4 blocked the second signal, without which CTLs could neither produce the cytokine interleukin-2 to initiate cytotoxicity nor stimulate the proliferation of CTLs to attack the cancerous target. Allison hypothesized that releasing the immune system from this brake could allow it to mount a robust attack against cancer (7).

Soon afterward, a small biotechnology company, Medarex, later purchased by Bristol-Myers Squibb, began developing ipilimumab, a monoclonal antibody that effectively blocks CTLA-4 (7). First tested in a large trial in metastatic melanoma, ipilimumab showed tremendous potential: while metastatic melanoma had a two-year survival rate of only 12%, investigators discovered that 20-25% of patients were alive two-plus years after just four doses given three weeks apart (8). Now, a decade after the trial, patients whose disease had not progressed at two years have yet to relapse: as Dr. Allison says, in some, “their cancers are not growing” and “in others, tumors just pop up and then go away” (4). For these individuals, cancer does not mean certain death; instead, Allison adds, it is “something of a chronic condition” (4).

Given the heterogeneity of cancer, the challenge moving forward is determining how these particular individuals achieved such successful responses. Some patients, as expected, had serious side effects such as severe diarrhea, attributed to the unleashed immune response attacking unrelated targets; appropriate management of patients could reduce the risk of such toxicity. Still, ipilimumab was a success: in a 2011 Nature review, Ira Mellman et al. write that ipilimumab “provides realistic hope for melanoma patients, particularly those with late stage disease who otherwise had little chance of survival” (9). Furthermore, they note that ipilimumab “provides clear clinical validation for cancer immunotherapy in general” (9).

PD-L1

In 2000, Dr. Gordon Freeman et al. at the Dana-Farber Cancer Institute were investigating T-cell surface proteins. They discovered a protein on the surface of normal cells called programmed cell death 1 ligand 1 (PD-L1). This protein interacts with a T cell co-receptor, PD-1, preventing the T cell from driving its attack (10). Normally, PD-1 and PD-L1 help protect healthy cells from destruction: only cells lacking these proteins, i.e. foreign cells, will be recognized and lysed, while others will be ignored.

Yet in 2001, Freeman et al. discovered that PD-L1 is found not only on normal cells but also on cancerous cells (11). The PD-1/PD-L1 interaction effectively encourages tolerance, as cancerous cells expressing these proteins are protected from destruction: as Freeman says, “Cancer cells have essentially stolen the PD-1/PD-L1 mechanism from normal cells in order to evade attack by the immune system” (12).

Drugs targeting the PD-1 receptor on T cells, including nivolumab (Bristol-Myers Squibb), are outperforming standard chemotherapy and inducing major responses, even more so than ipilimumab. In the most recent (2013-14) randomized phase III study of metastatic, untreated melanoma, nivolumab alone (N group) produced a ~44% objective response rate (ORR, including both partial and complete responses), ipilimumab alone (I group) produced a ~19% ORR, and the combination of nivolumab and ipilimumab (NI group) produced a remarkable ~58% ORR (14). Additionally, the median change from baseline tumor size (given as the sum of the longest diameters of target tumors) was about -34.5% in the N group, +5.9% in the I group, and about -52% in the NI group (13). Notably, while 80-90% of patients in each group suffered treatment-related adverse events, only ~8% of the N group, ~15% of the I group, and 36% of the NI group discontinued treatment because of these events (13).

Work is now being done to augment immune responses so that they eradicate melanoma in a greater proportion of patients, and at earlier stages of the disease. Nor is the level of efficacy seen in the trial above limited to melanoma: similar results for anti-PD-1 agents have been observed in some of the hardest-to-treat cancers, including lung, head and neck, kidney, gastric, esophageal, and liver cancers.

Conclusion

Immunotherapy aims to override immune tolerance of “altered self,” or cancer, and is yielding unprecedentedly durable responses in cancers heretofore deemed incurable. Scientists are racing to target more checkpoints and their ligands and to test new combinations, with the ultimate goal of harnessing our own natural defenses to eradicate even the stubborn cancers that have yet to respond to immunotherapy. These agents are rapidly moving up the priority lists in cancer treatment and may cure more patients in earlier disease stages. Additionally, returning to William Coley and the origins of immunotherapy, trials are now being conducted on viral constructs, which the immune system readily attacks. These constructs could augment the antigenicity of tumors and thereby drive increased CTL migration to metastases, where infused checkpoint antibodies could then mount an even more robust immune response against the cancer. After decades of visionary thinking and perseverance by scientists, there is now a bright future ahead for cancer immunotherapy.

Eliot Eton ‘19 is a freshman in Apley Court.

WORKS CITED

[1] World Health Organization. Cancer: Fact Sheet N297. http://www.who.int/mediacentre/factsheets/fs297/en/ (accessed Feb. 28, 2016).

[2] American Cancer Society. Lifetime Risk of Developing or Dying from Cancer. http://www. cancer.org/cancer/cancerbasics/lifetime-probability-of-developing-or-dying-from-cancer (accessed Feb. 28, 2016).

[3] “Episode One: Magic Bullets.” Cancer: The Emperor of All Maladies, directed by Barak Goodman. http://www.pbs.org/show/story-cancer-emperor-all-maladies/ (accessed Jan. 13, 2016).

[4] Groopman, J. The T-Cell Army. The New Yorker, Apr. 23, 2012. http://www.newyorker.com/ magazine/2012/04/23/the-t-cell-army (accessed Feb. 28, 2016).

[5] Vanchieri, C. National Cancer Act: A Look Back and Forward. J. Natl. Cancer Inst, 2007, 5, 342-345.

[6] Cancer Research Institute. Timeline of Progress. http://www.cancerresearch.org/our-strategy-impact/timeline-of-progress/timeline-detail (accessed Mar. 3, 2016).

[7] CRI’s James Allison to Receive Prestigious Lasker Award. http://www.cancerresearch.org/ news-publications/our-blog/september-2015/ cri-james-allison-to-receive-prestigious-lasker-award (accessed Mar. 3, 2016).

[8] James Allison: The Texas T Cell Mechanic. http://www.cancerresearch.org/our-strategy-impact/people-behind-the-progress/scientists/james-allison (accessed Mar. 3, 2016).

[9] Mellman, I. et al. Nature, 2011, 480, 480-89.

[10] Freeman, G. et al. J Exp Med., 2000, 7, 1027- 34.

[11] Nature Immunol., 2001, 2, 261-68.

[12] Levy, R. Unleashing the Potential. Dana-Farber Cancer Inst. Paths of Progress, Spring/ Summer 2014. http://www.dana-farber.org/ Newsroom/Publications/Unleashing-the-Potential.aspx (accessed Mar. 3, 2016).

[13] Larkin, J. et al. NEJM, 2016, 373, 23-34.

[14] Buchbinder, E., et al. Amer. J. Clin. Oncol, 2016, 1, 98-106.

Solving the Brain Puzzle

By: Kristina Madjoska

Picture the human brain as a puzzle of billions of nanoscale pieces. Each one contributes to a story, forms a unique context with the pieces around it, and finally emerges as part of a collection of stories that make up one human brain. Scientists from around the world have endeavored to make sense of the way these nanoscale puzzle pieces, or neurons, communicate with each other to produce all of our amazing and distinct identities. The BRAIN (Brain Research through Advancing Innovative Neurotechnologies) Project, initiated and funded by the Obama administration in 2013 alongside the European Union’s Human Brain Project, aims to support researchers in developing new technologies that would enable us to map the brain at the level of individual neurons (1). The BRAIN project could help us understand the contribution of each neuron to the formation of human perception, emotion, memory, cognition, and everything in between.

Given that an average neuron is about 50 microns in size (one micron is one-thousandth of a millimeter), and that neurotransmitters, the chemical messengers with which neurons communicate, are a hundred times smaller still, the BRAIN Project’s main goal is to develop novel nanotechnologies tailored to the structure of the brain. Nanotechnology is the engineering and use of small, nanoscale materials with specific functional properties to study a system in detail. In the study of the brain, nanoparticles such as ultra-thin electrodes and small molecules are used to track the electrical activity of individual neurons, manipulate and follow the presence of neurotransmitters in specific neural synapses, and form detailed images of the brain as it performs different functions (2). Unlike older technologies such as bulk electrodes and magnetic scans, which measure the activity of whole brain areas, these new tools can be targeted to specific cell types, allowing researchers to pinpoint precisely what happens where, and how. The ultimate goal of the project is a functional, three-dimensional map of the brain to which scientists and physicians can refer when making diagnoses and predictions about brain abnormalities, neurodegenerative diseases, and especially under-researched psychiatric illnesses (3). This model is similar to the one used for the Human Genome Project, in which a standardized sequence of human DNA was produced as a reference to help identify harmful genetic mutations. So far, ideas about how to reach this level of precision in tracking the brain’s electrical activity range from quantum dots, tiny semiconducting particles that emit light in response to voltage changes, to electrode caps with thousands of nanoscale electrodes.

However, the project has yet to yield significant discoveries, and its challenges should not be underestimated. The brain is composed of roughly a hundred billion neurons, each of which can make up to ten thousand connections with other neurons, and the task of tracking each one will undoubtedly be long and cumbersome. A more modest approach under consideration is to first map the brain of a mouse model organism, which has about 75 million neurons, a complicated yet comparatively far more feasible undertaking. What is more, unlike the Human Genome Project, where scientists knew exactly what they were looking for, a base-pair sequence, the BRAIN project is much more open-ended, and researchers must stay vigilant for unexpected neuronal behavior.

Though complicated and certainly very ambitious, mapping the human brain in the smallest detail would be a landmark step toward decoding the enigmatic function of this center of human identity. In these vibrant times for neuroscience, the task of scientists is to assemble our brain puzzle piece by piece, helping us grasp the elegance and intricacy with which it operates.

Kristina Madjoska ‘19 is a rising sophomore in Lowell House, concentrating in Neurobiology.

WORKS CITED

[1] “Mapping the Mind with Nanotechnology.” The Guardian. Guardian News and Media, 29 May 2013. Web.

[2] Silva, Gabriel A. “Neuroscience Nanotechnology: Progress, Opportunities and Challenges.” Nature Reviews Neuroscience 7.1 (2006): 65-74. Web.

[3] Marx, Vivien. “Neurobiology: Brain Mapping in High Resolution.” Nature 503.7474 (2013): 147-52. Web.

Getting a Feel for Cosmic Events

By: William Bryk

Monday morning, October 30, 1961, began quietly on an abandoned patch of tundra in an archipelago located in the extreme north of Russia. A grain tumbled in the mild wind, floating here and there until finally smacking the ground, sending several bacteria to their unfortunate end. Moments later, quite unexpectedly, a 50 megaton nuclear blast ripped through the region, completely obliterating everything within a 20-mile radius (1).

What was just demonstrated is a difference in power of 24 orders of magnitude. The grain hitting the ground might have had a peak power output on the order of one watt (depending on its mass and impact velocity), while the peak power output of mankind’s most powerful bomb in history, nicknamed “Tsar Bomba,” was around 5 yottawatts, or 5 × 10²⁴ watts (1 watt of power equals 1 joule of energy per second) (2).

The above comparison combines into one picture energies at two extremes of the brain’s conceptual limits. Now imagine, instead, that the grain-ground collision actually represented the power output of the Tsar Bomba itself. By zooming out to this new power scale, we have left the realm of the everyday and have ventured into a way of thinking that our brains cannot genuinely comprehend. On this scale, the power output of a 100-kilometer-wide asteroid impact at tens of kilometers per second would be represented by a firecracker, and the power output of all the explosives used in World War II detonating simultaneously would not even be detectable to a human (3). So what power output would a Tsar Bomba explosion on this scale represent? Multiplying the old bomb’s yottawatts by 24 orders of magnitude again, we get approximately 10⁴⁹ watts. 10⁴⁹ watts is the power of a Tsar Bomba explosion on an imagined scale in which a tiny ripple in the dirt represents the power of an actual Tsar Bomba. It seems no event could possibly cause such an outrageous explosion. Yet, on September 14, 2015, scientists at the Laser Interferometer Gravitational-Wave Observatory (LIGO) detected one, thankfully in a galaxy far far away.
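The scaling in this thought experiment can be checked directly. A one-watt grain impact and the bomb’s 5 × 10²⁴ watts differ by about 24 orders of magnitude, so repeating the same jump gives:

```latex
5 \times 10^{24}\,\mathrm{W} \times 10^{24} = 5 \times 10^{48}\,\mathrm{W} \approx 10^{49}\,\mathrm{W}
```

This is the rough arithmetic behind the figure of 10⁴⁹ watts quoted above.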

Too Many Zeroes

Yes, in a foreign region of the cosmos 1.3 billion light years away, two black holes dozens of times the mass of our sun collided into one in less than a second, generating a peak power output of 3.6 × 10⁴⁹ watts (4, 5). The black hole collision caused ripples in spacetime so disruptive that they were detected on Earth even after having spread out for 1.3 billion years, making headlines around the globe as the first direct evidence of gravitational waves. But how many people can say they really grasp the sheer magnitude of the quantities involved? Among the many impressive characteristics, one stands out: 3.6 × 10⁴⁹ watts. How can we possibly make sense of such a colossal quantity, equivalent to 360,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000 hundred-watt light bulbs?

In such helpless situations, it seems natural to turn to someone who spends their whole career dealing with these figures – an astrophysicist! Enter Harvard’s very own Professor Irwin Shapiro. Professor Shapiro studies gravitational phenomena, and has made historic discoveries including Shapiro Time Delay, which is one of the four classic solar system tests of general relativity. If anyone can help, Professor Shapiro is the one. When asked how, as an astrophysicist, he makes sense of the physical scales involved in the events that he studies, he responded, “They are so immense that I don’t really try, but usually just stick to the relevant mathematical expressions, ten to some high power – whatever the units!” Clearly, then, our cognitive goal is not necessarily achievable.

But why even take on the challenge of grasping such large quantities? Well, one might argue that there is a difference between knowing that Earth is one of eight planets revolving around a star and knowing that we are standing on one of eight massive spheres, half of us upside down, drifting at tens of kilometers per second around a gigantic ball of superheated gas, all drenched in absolute darkness, a cosmic scene comparable to dust particles spiraling around a light bulb in an otherwise pitch-dark gymnasium. We can continue describing the astronomical speeds, energies, masses, and time-scales involved until what was previously a simple fact about our planet becomes a revelation about our place in this cosmic dance of rock and gas. So, with that sort of mission in mind, we set out to understand 3.6 × 10⁴⁹ watts.

Conceptual Building Blocks

We begin with the universe’s basic commodity: stars. Stars, much like living organisms, can be said to go through life cycles of their own. For much of its life, a star exists because a large enough agglomerate of gas in outer space is compressed so heavily under its own gravitational attraction that nuclear fusion is enabled. Nuclear fusion, the same atomic merging process that humans managed to create down here on Earth in the form of fusion bombs, is responsible for the immense energy output of stars. Take our sun, for example. With a diameter about a hundred times that of Earth, the sun is basically a gigantic hydrogen-fusing factory. Each second, it converts one hundred Great Pyramids of Giza worth of hydrogen into helium, releasing the energy equivalent of 2 billion Tsar Bombas (6). So if you were to fill all the streets of Manhattan with Tsar Bombas (each 8 meters long, 2 meters in diameter) to a height of two Empire State Buildings and press a big red button each second, you would get the sun. We only survive these cataclysmic eruptions because Earth is located 150 million kilometers away. As Professor Shapiro pointed out, if the sun were the size of a basketball, the Earth would be a tiny peppercorn located 25 yards away! This makes it all the more surprising that the sun provides the base energy for almost all physical and organic processes on Earth’s surface: storms, precipitation, life. The sun will be our reference point, so it’s important to pause and internalize the power of just one ordinary star.
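The two-billion figure checks out against well-known values: the sun’s luminosity is about 3.8 × 10²⁶ watts, and a 50-megaton blast releases roughly 2.1 × 10¹⁷ joules:

```latex
\frac{3.8 \times 10^{26}\ \mathrm{J/s}}{2.1 \times 10^{17}\ \mathrm{J}} \approx 1.8 \times 10^{9}\ \text{Tsar Bombas per second}
```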

The Beast Within

At some point during their multi-million or multi-billion year factory lifestyles, stars run out of fuel to fuse. But some of these cosmic monsters don’t leave without a bang. If a star is massive enough, it can end its life in a high-energy explosion known as a supernova, and sometimes, for even larger stars, a hypernova. Before attempting to fathom the energy of a hypernova, it helps to understand why they happen in the first place. Several models explain hypernovas. A simplified account of one, the collapsar model, is that when a large star stops fusing, it loses its constant outward pressure and collapses under its own weight. That is, the gravitational pressure compressing the atoms into one another is so strong that it overcomes the quantum-mechanical resistance that prevents particles from occupying the same state, creating a claustrophobic situation of cosmic proportions. What can emerge from this exotic physical collapse is an unbelievable amount of emitted energy, and a black hole.

Astronomers have been detecting hints of such hypernovas ever since the Cold War. These blasts of energy, initially believed to be Soviet nuclear tests in space, remained a mystery for several decades until it gradually became clear that the universe has its own nuclear arsenal on scales previously unimaginable (7). In 2015, scientists detected a hypernova unlike any that had ever been observed. This killer hypernova, appropriately named ASASSN-15lh, erupted with as much power as 600 billion suns (8). To get a sense of how much power that really is, let’s refer back to the scale in which the sun is a basketball and Earth is a small peppercorn. Place our basketball-sun on the 1-yard line of Gillette Stadium, MA (with Earth at the 25-yard line). Now, how far away would be the next closest star, Proxima Centauri, which is four light years away? The answer is that Proxima Centauri would be another basketball, located in Beijing! What would happen if this star were to explode in a hypernova like ASASSN-15lh? This basketball in Beijing going berserk in hypernoval fashion would be brighter in peppercorn-Earth’s skies than our basketball-sun located in the same football stadium! If the sun’s power output were represented by a 100-watt light bulb, then ASASSN-15lh would be the equivalent of one Hiroshima bomb exploding each second.

Yet, for all this power, ASASSN-15lh only squeezed out a meager 2 × 10³⁸ watts at peak output. An insane figure, for sure, but nothing close to the 3.6 × 10⁴⁹ watts that we seek. To get closer to fathoming that value, we need to delve into a new kind of cosmic creature.

Zooming Out

If stars are the cosmic citizens, then a galaxy is the planet on which these stars dwell. Instead of distributing themselves uniformly throughout the universe, stars have a gravitational tendency to group themselves into large, majestic, rotating agglomerates we call galaxies, the true giants of the cosmos. Our own galaxy, the Milky Way, contains roughly 100 billion of these glowing balls of fusion and spans 100,000 light years in diameter (9). How can we conceive of such a behemoth?

Imagine that the Milky Way were shrunk down to the size of the Continental United States (a factor of 250 trillion). Our sun is roughly 1.4 billion meters in diameter, so dividing by 250 trillion, you get that each sun-like star would be 6 micrometers wide on this scale, about the size of the nucleus in mammalian cells. Thus this U.S. version of the Milky Way would be a huge disk of complete darkness scattered here and there with microscopic pinpoints of light every 150 meters or so. It’s not the most exciting picture of a galaxy, but it does give a somewhat accessible scale to work with. Imagine all the cities, the towns, the neighborhoods, the vast fields, each swarmed with innumerable tiny points of light. If you were to take a census of the 100 billion stars in the Milky Way, counting one each second, it would take well over 3,000 years to complete. If each star were instead, for some reason, a grape tomato, the 100 billion tomatoes would fill Gillette Stadium to the brim.
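For readers who want to play with the numbers, a few lines of arithmetic reproduce this scale model. The inputs are the round figures used in the article, not precise measurements:

```python
# Back-of-the-envelope check of the scaled-down Milky Way, using the
# article's round numbers: ~10^11 stars, ~100,000 light-years across,
# sun ~1.4e9 m in diameter, Continental U.S. ~4.5e6 m wide (rough value).

LIGHT_YEAR_M = 9.46e15                     # meters per light-year
galaxy_diameter_m = 1e5 * LIGHT_YEAR_M     # ~9.5e20 m
us_width_m = 4.5e6

scale = galaxy_diameter_m / us_width_m     # ~2e14, i.e. ~200-250 trillion
sun_scaled_m = 1.4e9 / scale               # size of a star on this map

SECONDS_PER_YEAR = 3.15e7
census_years = 1e11 / SECONDS_PER_YEAR     # counting one star per second

print(f"scale factor: {scale:.1e}")                        # ~2.1e14
print(f"scaled sun: {sun_scaled_m * 1e6:.1f} micrometers") # ~6-7 um
print(f"census time: {census_years:,.0f} years")           # ~3,200 years
```

Running it confirms the figures in the paragraph above: a roughly 250-trillion-fold shrink, stars a few micrometers across, and a star-by-star census lasting over three millennia.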

Putting It All Together

We now finally have the conceptual tools to take on our initial quest. What does 3.6 × 10⁴⁹ watts feel like? Well, we have seen that the ASASSN-15lh hypernova had a peak power output of approximately 2 × 10³⁸ watts. We have also noted that there are around 10¹¹ stars in our galaxy. Here we go. First, take one Milky Way galaxy full of stars. Next, compress this vast galaxy of billions of stars into the volume of a small moon (10). Then, take out a very large red button. Press the red button. At that moment, as if Hades himself had commenced a simultaneous uprising of all the dead within the underworld, each and every star goes hypernova on the level of ASASSN-15lh, collectively releasing the energy equivalent of 200 million million million million million Tsar Bombas per second. This is 3.6 × 10⁴⁹ watts. This is the power of the two black holes that collided 1.3 billion years ago.
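As a rough order-of-magnitude check on this picture, multiply one ASASSN-15lh-level blast by the number of stars in the galaxy, and compare the total to the bomb count:

```latex
2 \times 10^{38}\,\mathrm{W} \times 10^{11} = 2 \times 10^{49}\,\mathrm{W} \sim 3.6 \times 10^{49}\,\mathrm{W},
\qquad
\frac{3.6 \times 10^{49}\ \mathrm{J/s}}{2.1 \times 10^{17}\ \mathrm{J}} \approx 2 \times 10^{32}\ \text{bombs per second}
```

Both lines land on the same order of magnitude as the figures quoted above.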

After dealing with such astronomical figures in search of a conceptual understanding that may or may not have been satisfied, it is easy to come away feeling very small in relation to a universe whose bodies and events are best expressed in scientific notation. But this logic doesn’t quite hold up, since it really is just a matter of perspective. There is a whole other side to the spectrum of physical quantities. For instance, if every atom in our bodies were replaced with a medium-sized grain of sand, a human would have the volume of a planet (11). So when you ask a random person on the street for directions, you are talking to a being with complexity matching that of Earth, hosting an intricate network of septillions upon septillions of molecules working together in harmony. From the perspective of an atom or a bacterium, we humans are the cosmic bodies, unimaginably large and wielding unimaginable power. As the Professor Shapiro of the bacterial world might put it, humans operate at levels of “ten to some high power – whatever the units!”

If that is not enough to quench our thirst for significance, consider the role of fans at a football stadium. Aren’t they just as much part of the game? In a certain sense, we carry a significant role in the cosmic arena merely by being conscious spectators within it. And our increasingly advanced scientific instruments definitely make us qualified spectators. The detectors at LIGO, for example, discovered the black hole collision by detecting a gravitational wave disturbance of less than an atomic diameter in length! We might not make the most noise in this universe, but we surely are the best at listening.

William Bryk ‘19 is a freshman in Canaday Hall.

WORKS CITED

[1] CTBTO Preparatory Commission. https://www.ctbto.org/specials/testing-times/30-october-1961-the-tsar-bomba (accessed Mar. 9, 2016).

[2] Abbott, D. Proceedings of the IEEE. 2010, 98, 42-66.

[3] Kolecki, J. NASA GRC Mathematical Thinking in Physics. https://www.grc.nasa.gov/www/k-12/Numbers/Math/Mathematical_Thinking/asteroid_hit.htm (accessed Mar. 21, 2016).

[4] Cofield, C. In Historic First, Einstein’s Gravitational Waves Detected Directly. Space.com [Online], Feb. 11, 2016. http://www.space.com/31900-gravitational-waves-discovery-ligo.html (accessed Mar. 9, 2016).

[5] Crockett, C. Black Hole Smashup Generated Yottawatts of Power. ScienceNews [Online], Mar. 10, 2016. https://www.sciencenews.org/article/black-hole-smashup-generated-yottawatts-power (accessed Mar. 15, 2016).

[6] Christian, E. NASA Cosmicopia. http://helios.gsfc.nasa.gov/qa_sun.html (accessed Mar. 11, 2016).

[7] Woosley, S. The Death Star. BBC Science & Nature [Online], Oct. 18, 2001. http://www.bbc.co.uk/science/horizon/2001/deathstar.shtml (accessed Mar. 15, 2016).

[8] Billings, L. Found: The Most Powerful Supernova Ever Seen. Scientific American, Jan. 14, 2016. http://www.scientificamerican.com/article/found-the-most-powerful-supernova-ever-seen/ (accessed Mar. 15, 2016).

[9] Howell, E. How Big is the Milky Way? Universe Today [Online], Jan. 20, 2015. http://www.universetoday.com/75691/how-big-is-the-milky-way/ (accessed Mar. 17, 2016).

[10] Abbott, B. et al. Phys. Rev. Lett. 2016, 116, 1-16.

[11] Kross, B. Jefferson Lab. http://educa

Cellular Senescence: Age and Cancer

By: Una Choi

With age comes a bevy of age-related diseases and tissue deterioration. Cellular senescence heavily influences this process: it is the final, irreversible period during which cells — most often fibroblasts, or connective tissue cells — flatten and cease to undergo mitosis after around fifty rounds of replication (1). Senescent cells are at the heart of cataracts, wrinkles, and other forms of tissue aging. After ceasing proliferation, senescent cells emit cytokines, which act in concert with the other growth factors of the senescence-associated secretory phenotype (SASP) to help maintain the senescent cells’ static state (2). Cytokines and other SASP factors help initiate age-related tissue degeneration via the immune system. Furthermore, senescent cells themselves exhibit increased expression of genes encoding peptides that can later be presented by major histocompatibility complex molecules as markers for degradation (3).

Cellular senescence, however, is not solely linked to advanced age; senescent cells have been identified in embryos and are thought to hold a pivotal role in regulating organogenesis (2). Senescence also has implications beyond aging: cells often become senescent after accumulating extensive DNA damage or telomere decay, suggesting that senescence helps prevent the replication of cells carrying potentially harmful mutations (2). Oncogene-induced senescence (OIS) can thereby contribute to tumor suppression, as the retinoblastoma (RB) tumor-suppressor networks active in senescent cells suggest. Although cellular senescence involves changes on the cellular scale, its effects extend both to tiny, developing embryos and to the fully formed, aging tissues of adults.

Cellular Senescence: Causes 

Unrepaired DNA single-strand breaks (SSBs) have been linked to cellular senescence (1). Diminished poly(adenosine diphosphate-ribose) polymerase-1 (PARP1) expression permits the accumulation of SSBs and can eventually result in the creation of atypical XRCC1 foci. Consequently, p16 expression increases and triggers the transition to cellular senescence.

Oxidative stress and telomere erosion can also activate the p16/RB pathway. Oxidative stress, or an imbalance of reactive oxygen species, can trigger an accumulation of SSBs, thus promoting cellular senescence. Telomeres, the DNA-protein complexes present at the ends of chromosomes, shorten after each round of cell division. Once telomeres are shortened, they are subject to incorrect DNA repair machineries that can cause chromosome breakage (4).

Cellular Senescence and Aging

Although senescent cells release cytokines to trigger their own destruction, this process can be slow and can result in painful inflammation as the senescent cells accumulate in aging tissues. This inflammation contributes to age-related frailty syndrome, symptoms of which include increased vulnerability to stresses, fat tissue loss, and muscle deterioration (5). Reactive products released from inflammatory cytokines can also lead to tissue degradation. These cytokines can also trigger angiogenesis, which contributes to chronic inflammation, and can disrupt cell-cell communication in surrounding tissue (6).

Consequently, scientists are focusing on eliminating senescent cells as a means of preventing multiple age-related diseases. In 2011, scientists at the Mayo Clinic demonstrated the beneficial effects of eliminating senescent cells (5). Dr. Darren J. Baker and colleagues constructed the INK-ATTAC transgene, which enables the targeted destruction of cells expressing p16Ink4a, a biomarker of senescence. They induced the activation of INK-ATTAC using AP20187, a synthetic drug. Destruction of the p16Ink4a-positive cells delayed age-related degradation of the eye, skeletal muscle, and adipose tissue. When given the drug from weaning age, mice carrying the INK-ATTAC transgene were larger, exhibited greater muscle retention, and were able to travel farther during treadmill exercise tests.

The same group then activated the transgenic INK-ATTAC in 5-month-old mice. Although the irreversible cataracts remained, the older mice still exhibited increased muscle fiber diameters and improved ability to complete treadmill exercise tests.

The overall survival of the INK-ATTAC mice, however, was not extended; this may be due to the prevalence of cardiac failure in both the control and experimental groups. Because INK-ATTAC is not significantly expressed in the heart and aorta, senescent cells in those organs could persist. A later study, however, achieved greater success in eliminating senescent cells and reported a 20-30% extension of the rodents’ lifespans (7).

Cellular Senescence and Embryogenesis

Although cellular senescence is most often associated with age, a group at the Weizmann Institute of Science found senescent cells in the syncytiotrophoblast (the epithelial covering of the placental villi) (8). Another group, at the Center for Genomic Regulation in Spain, found senescent cells in mouse and chick embryos, further emphasizing the role of cellular senescence in early development (2). Biologist Mekayla Storer and colleagues took advantage of the characteristic activation of the senescence-associated beta-galactosidase (SAβ-gal) enzyme in senescent cells to detect cellular senescence in embryos. Staining embryos with a substrate that the SAβ-gal enzyme cleaves into a colored product revealed senescent cells along the neural roof plate and the apical ectodermal ridge (AER), two regions closely associated with signaling during embryogenesis. The neural roof plate dictates development of the central nervous system, while the AER, a region of the ectoderm, secretes growth factors regulating the growth of limbs.

Dr. William Keyes, a member of the group that discovered senescent cells in the murine embryo, posits that cellular senescence plays a critical role in dictating the development of the embryo, as the non-proliferative behavior of senescent cells guarantees a short-lived signal (9). Further supporting the potential role of senescent cells in regulating embryogenesis, staining was not constant throughout the development of the embryo. At embryonic day 9.5 (E9.5), Dr. Storer’s group found staining at the far end of the limb bud (2). By E13.5, staining was visible in the growing area between the toes. By E16.5, the staining had vanished, suggesting that the senescent cells, having most likely completed their role in directing development, had been destroyed by the immune system.

Interestingly, many common birth defects are concentrated in the areas exhibiting the most staining for senescent cells, suggesting that these are critical areas for ensuring proper development of the embryo. When murine embryos were deficient in p21, a gene whose expression is essential for inhibiting the cell cycle and for protecting against apoptosis, the impaired cellular senescence increased cell death (2). In the AER, p21-deficient mice had impaired expression of FGF4 and FGF8, vital signals which trigger the proliferation of mesenchymal and limb cells.

Although the senescent cells found in the mouse and chick embryos shared many characteristics with the previously mentioned oncogene-induced senescence, the embryonic senescent cells lacked p16 and DNA damage. The group at the Center for Genomic Regulation posits that this phenomenon may be due to the simpler nature of embryonic senescence; oncogene-induced senescence requires more elaborate control.

Cellular Senescence and Cancer

Because cellular senescence most often halts the proliferation of damaged fibroblasts, it is thought to contribute to the rarity of sarcomas, or fibroblast-originating tumors, in humans (1). Indeed, sarcomas represent less than 1% of all human cancers. The activation of the p16INK4a/pRB tumor-suppressive pathway involved in triggering cellular senescence negatively regulates cell cycle progression (4).

Carcinomas, in contrast, originate in epithelial cells and are the most common human cancers (1). Interestingly, carcinoma occurrence increases with age. While cellular senescence is often tied to tumor suppression, Dr. Joe Nassour and colleagues found that, unlike fibroblasts, epithelial cells are unresponsive to DNA damage and can spontaneously leave senescence. Once these damaged cells resume replication, they are more likely to become cancerous. Such cells exhibit diminished PARP1 expression, which in fibroblasts would cause the cell to enter senescence; in these epithelial cells, however, diminished PARP1 expression does not reinforce the senescent state. Indeed, another group found that PARP1-deficient mice exhibited accelerated aging and more prevalent carcinomas (1).

Senescent cells also increase production of matrix metalloproteinase-3 (MMP3), an enzyme that stimulates tumor cell invasion, and secrete factors like TIMP-1 that protect surrounding cancer cells (4).

Implications of Cellular Senescence

Because senescent cells cause chronic inflammation and other phenotypes tied to aging, the targeted destruction of these cells can, as Dr. Baker and colleagues demonstrated, delay and even reverse the consequences of aging. This may provide a new method of alleviating arthritis and other conditions exacerbated by senescent cells.

Further research is needed on the mechanisms behind the initiation of cellular senescence. While diminished PARP1 expression is linked to senescence in fibroblasts, the same characteristic has no effect in epithelial cells. This may also aid in clarifying the role of senescent cells in embryogenesis.

In addition, the significance of transient versus chronic senescence requires further investigation. Transient senescence may be resolved when the immune system destroys senescent cells (4); chronic senescence, however, can aggravate inflammation and other phenotypes tied to aging.

Una Choi ‘19 is a freshman in Greenough Hall.

WORKS CITED

[1] Nassour, J., et al. Nature. 2016, 7, 1-16.

[2] Storer, M., et al. Cell. 2013, 155, 1119-1130.

[3] Burton, D. G. A.; Faragher, R. G. A. Age. [Online] 2015, 37. http://link.springer.com/article/10.1007%2Fs11357-015-9764-2#page-1 (accessed Feb. 17, 2016).

[4] Campisi, Judith. Physiol. 2013, 75, 685-705.

[5] Baker, D., et al. Nature. 2011, 479, 232-237.

[6] Tchkonia, T., et al. JCI. 2013, 123, 966-972.

[7] Callaway, Ewen. Destroying worn-out cells makes mice live longer. Nature, Feb. 3, 2016. http://www.nature.com/news/destroying-worn-out-cells-makes-mice-live-longer-1.19287 (accessed Feb. 18, 2016).

[8] Chuprin, A., et al. Genes & Dev. 2013, 27, 2356-2366.

[9] Zimmer, C. Signs of Aging, Even in the Embryo. The New York Times, Nov. 21, 2013. http://www.nytimes.com/2013/11/21/science/signs-of-aging-even-in-the-embryo.html (accessed Feb. 17, 2016).

[10] Azad, A., et al. Int. J. Radiat. Oncol. Biol. Phys. 2014, 88, 385-294.

Critical Periods: Envisioning New Treatments for Lazy Eye

By: Audrey Effenberger

“How many circles do you see?”

“Two red.”

“Two red,” the physician echoes.

“I really thought that there was nothing that could be done for my condition beyond childhood,” says adult patient Zach Fuchs in a recent TIME interview (1). He has amblyopia, or lazy eye, a developmental disorder that occurs when neural connections don’t properly form between the brain and one eye, causing the brain to favor the other. Zach’s sentiment is a prevailing one; in neuroscience, brain and central nervous system (CNS) development has long been regarded as a mystery untouchable by current medical methods.

There remains a wealth of open questions regarding neuron development, maturation, activity, and function. Why do spinal cord injuries often fail to heal? How does learning occur? What happens to memory in old age? Currently, diseases and injuries that affect cognitive function leave patients with limited medical recourse beyond therapy and adaptation to newfound limitations. However, new techniques and understanding of critical periods may lead researchers to newfound solutions for patients like Zach.

Critical periods are well studied in behavioral psychology and linguistics. For example, ducklings infer that the first adult birds they see are their parents, a phenomenon known as imprinting. In humans, it’s thought that infants must experience language at a young age in order to develop the ability to decode and produce words themselves (2). These important times between an organism’s birth and the “closing” of such cognitive windows are dubbed critical periods. Scientists are now discovering critical periods from the systemic level to the cellular. One day, the molecular bases of such critical periods may be used to manipulate the nervous system for medical good.

Cellular Networking

How do neurons encode information during critical periods? More broadly, how do neurons encode information at all? The answer lies in how they are connected. One of the most important features of a neuron is the axon, a long, thin projection that conducts electrical signals from the neuron’s cell body to the axon terminal. The neuron transforms a stimulus or signal into an electric current. Once the current reaches the axon terminal, neurotransmitter molecules are released; they diffuse across the gap, or synapse, between the first neuron and the next. The neurotransmitters bind to receptors on the postsynaptic cell and cause a specific reaction, such as triggering that cell to fire a subsequent signal. By transferring signals from presynaptic to postsynaptic neurons, the nervous system propagates them throughout the body like batons in a relay race.

So what do these basic phenomena have to do with higher-level functions? We can think of neurons like components in an electrical circuit, and their functions as a result of the circuit’s properties. Just as a circuit made of metal and plastic can be carefully arranged to turn a light bulb on, a neural circuit can grow and connect to carry out all kinds of activities. This intricate and carefully weighted network of axons and synapses can be tuned to encode different information. For example, some neurons excite those downstream, while others inhibit neural activity, and the precise balance of these effects can be used to store memories (3). Creating new synapses, changing the proportions of excitatory to inhibitory neurotransmitters, and modulating the responses of postsynaptic cells can all affect the overall function of a neural network (4).
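
To make the circuit analogy concrete, here is a minimal Python sketch of a toy neuron; the inputs, weights, and threshold are invented for illustration rather than drawn from any study. Excitatory synapses carry positive weights, inhibitory synapses carry negative ones, and retuning a single weight changes what the same inputs make the cell do.

def fires(inputs, weights, threshold=1.0):
    """Return True if the weighted input sum crosses the firing threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return total >= threshold

# Three presynaptic neurons: two excitatory (+), one inhibitory (-).
activity = [1.0, 1.0, 1.0]       # all three presynaptic neurons are active
weights = [0.8, 0.7, -0.4]       # invented synaptic strengths

print(fires(activity, weights))  # True: excitation outweighs inhibition

# Strengthening the inhibitory synapse retunes the circuit: the same
# inputs no longer push the cell past threshold.
weights[2] = -0.9
print(fires(activity, weights))  # False

Real neurons integrate signals over time and space, but this weighted-sum caricature captures why adjusting synaptic strengths can rewire what a network computes.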

For this reason, the retina and visual cortex—the main processing center in the CNS for vision—must develop many connections very quickly in early development in order for a newborn brain to make sense of the world. Faulty connections or improper weighting, albeit rare, result in disorders like amblyopia. And until recently, it was believed that patients with lazy eye who were past the critical period of visual development, ending around age 8 or 9, could not be treated (5).

Old Dogs…

There’s some truth to the saying that you can’t teach an old dog new tricks. Old brains react to new information very differently from young ones, because most neurons naturally become less flexible and less capable of forming new connections with age. The property they lose is plasticity, the ability of the nervous system to reorganize itself. While the brain is never completely static at any time, it does become less receptive to change, and plasticity has important implications for the neurons’ ability to grow and react to stimuli (6).

Why don’t brains stay plastic forever? While it would allow them to heal, rewire, and increase their storage capacity, a perpetually plastic brain might be more vulnerable to detrimental environmental stimuli like emotional trauma or drugs, which could contribute to higher rates of mental illness or degenerative diseases (7). Furthermore, by reducing plasticity, brains can become more computationally efficient; they require fewer total neurons and less time to carry out useful functions (7). The trade-off between plasticity and stability, dictated by evolution, has resulted in the unique capabilities of the modern Homo sapiens brain.

Brains lose their plasticity in a controlled and timed way—by having critical periods. One way of timing critical periods relates back to the balance of excitatory and inhibitory circuits. Once neuronal networks attain a certain state, they become mature and lose most of their plasticity. Another important mechanism of closing the critical period is active suppression of neuronal growth and regeneration after a certain age (6, 8). While new information is always entering the nervous system, the brain reduces its plasticity by producing molecular “brakes” that dampen the effect of neurotransmitters that correspond to new stimuli. Experimental evidence shows that plasticity is never fully lost, merely suppressed (7).

For patients with amblyopia, the brain’s ruthless optimization and closing of the developmental critical period ultimately facilitate the progression of the condition. When the brain takes the most immediately efficient route of sensory integration and favors the “good” eye, overall visual acuity—clarity, sharpness, and depth perception—suffers (9).

However, with the advancement of new molecular techniques, biologists have begun to bend the rules of critical periods. By carefully manipulating molecular factors in model organisms, biologists have created organisms of identical chronological age that are in vastly different locations on the timeline of critical periods (7). This has already shown great potential in addressing developmental disorders or loss of sensory function.

Recoding the Critical Periods

Promising research has already demonstrated the potential of various treatments to lift the “brakes” on neuronal plasticity. Surveys of existing research in model organisms, such as cats, mice, and rats, have demonstrated the remarkable effects of tiny manipulations in cell chemistry (8). By infusing growth factors, transplanting cells that modulate learning and memory, and genetically removing suspected inhibitory factors, plasticity can be restored. Noninvasive medications and drugs have also been demonstrated to disrupt behaviors and induce plasticity (8).

Medical applications of these newfound tools can be extended to amblyopia. In amblyopia, one eye lacks proper connections to the brain, and vision is gradually lost as the brain favors one over the other. When the critical period is still open, young patients have great success with training games or virtual reality simulations that encourage “rewiring” of the visual cortex. The potential to chemically open critical periods for older patients would give physicians more freedom to correct amblyopia and could revolutionize the prognosis of this disorder.

In 2010, the protein Lynx1 was identified as inhibiting plasticity in the adult visual cortex (10). Disabling this protein increases signaling in the visual cortex, which correlates with heightened sensitivity to environmental fluctuations. When wild-type (normal) mice and mutant (lacking Lynx1) mice were subjected to temporary blinding in one eye, the mutant mice were much more likely to develop amblyopia because their brains adapted much more quickly to the lack of visual input by ignoring the eye entirely (10).

The most intriguing result of this experiment points to potential treatment for lazy eye. Once allowed to use both of their eyes, mutant mice were much more likely to recover from amblyopia, because their brains remained much more responsive to their environment (10). Though the technology does not yet exist for safe treatment of human patients, the concept of briefly sensitizing the brain to redefine itself is powerful, exciting, and the subject of more cutting-edge research today.

The Outlook for the Lazy Eye

“Without depth perception, walking in a forest is more like walking through a hallway than it is walking through this big open place… People who have vision impairment are always wondering what it is they’re missing” (1).

Though the path from basic biological research to medical treatments is a long one, the potential of molecular intervention to extend and reopen critical periods is immense. Amblyopic patients may soon be able to undergo treatment that, over the course of several days or weeks, resensitizes their brains to visual input from their “lazy” eyes, ultimately regaining visual acuity. By combining training games with direct critical period mediation, amblyopia could become a condition of the past. One day, people like Zach won’t have to miss anything.

Audrey Effenberger ‘19 is a freshman in Greenough Hall.

WORKS CITED

[1] Tsai, D. This Virtual Reality Game Could Help Treat Lazy Eye. TIME, Jan. 5, 2016. http://time.com/4154830/virtual-reality-lazy-eye/ (accessed Feb. 29, 2016).

[2] Neuroscience, 2nd ed.; Purves, D. et al., Eds.; Sinauer Associates: Sunderland, MA, 2001; Ch. 24.

[3] Chklovskii, D. B. et al. Nature 2004, 431(7010), 782-788.

[4] Brunel, N. J. Comput. Neurosci. 2000, 8(3), 183-208.

[5] Park, K. H. et al. Eye. 2004, 18, 571-574.

[6] Rakic, P. Nat. Rev. Neurosci. 2002, 3, 65-71.

[7] Takesian, A. E.; Hensch, T. K. Prog. Brain Res. 2013, 207, 3-34.

[8] Bavelier, D. et al. J. Neurosci. 2010, 30(45), 14964-14971.

[9] National Eye Institute. https://nei.nih.gov/health/amblyopia (accessed Mar. 23, 2016).

[10] Morishita, H. et al. Science 2010, 330(6008), 1238-1240.

Prions and Small Particles: Micro Solutions for a Macro Problem

By: Caroline Wechsler 

Since their formal discovery in 1982, prions have been a mysterious scourge. Very small and poorly defined, these disease-causing agents have been the source of great confusion and grief in the scientific world. But some recent discoveries are shedding new light on how to conceptualize and potentially treat the diseases they cause.

What are Prions?

The term prion stands for “proteinaceous infectious particle” (1). Prions are small, misfolded proteins that spread by inducing other proteins to misfold in similar ways (2). Prion protein exists in all humans in a normal form called the cellular prion protein, or PrPC (1). However, when this protein misfolds, it becomes the version of prions that we typically think of – disease-causing and harmful (1).

The first observation of a prion-caused disease was in the 1730s in Scotland: scrapie, a neurodegenerative disease found in sheep and goats (3). The disease was originally thought to be viral; it was not until in-depth study of kuru beginning in the 1950s that this hypothesis was seriously disputed and misfolded proteins emerged as the suspected cause (3). In the 1980s, prions became the focus of much media and scientific attention when several cases of Creutzfeldt-Jakob disease, a prion-caused condition, were associated with contamination of surgical instruments and growth hormone injections (4). The full prion hypothesis was put forth by Stanley Prusiner in 1982; though the idea was not completely new at that point, it had never been fully outlined and formalized (3). He went on to win the Nobel Prize in 1997 for this discovery (3).

Prions have been a source of controversy since Prusiner first coined the term in 1982. The notion that a protein could reproduce itself without a DNA or RNA intermediate was previously unheard of, and it goes against the central dogma of molecular biology (DNA to RNA to protein) (4). Because prions seem not to contain any genetic material, which essentially all other infectious agents (including viruses) do contain, the idea was radical (4). However, increasing evidence over the past few decades has solidified the idea that prions do exist, and are in fact the cause of several previously unexplainable diseases.

Prions and Disease

Prions are known to cause disease in many organisms, including humans. These diseases affect the nervous system, and are associated with impairment in brain function and changes in personality (5). Prion diseases have several primary causes: there are a few genes which have been associated with hereditary transmission of prion diseases; prions have been known to be transmitted through contamination; and some prion conditions occur spontaneously in humans (5).

These diseases are very rare, with only about 300 cases reported every year in the US (6). Perhaps the most well-known example is Creutzfeldt-Jakob disease (CJD); its variant form is linked to bovine spongiform encephalopathy, better known as “mad cow” disease. This link caused a scare in 2003 when several cases discovered worldwide were traced to cattle infected through contaminated feed (7). The disease is characterized by progressive dementia, muscle spasms, and akinetic mutism (a loss of will to move) (2). Another example is kuru, a rare neurodegenerative disease found only in the Fore tribe in New Guinea, transmitted through the cannibalistic practice of eating human brains (4).

Recently, though, other neurodegenerative diseases have been shown to be potentially associated with prions. Scientists now believe that many neurodegenerative disorders, such as Alzheimer’s, Huntington’s, and Parkinson’s diseases, may involve prion-like spread of misfolded proteins (8). A study published in September of last year, based on autopsies of eight CJD patients, found that these patients’ brains showed diagnostic signs of Alzheimer’s disease, even though all the patients would have been too young to develop such pathology (8). This suggests that Alzheimer’s disease is potentially transmissible and caused by small, misfolded proteins such as prions. Additionally, just this past year a different neurodegenerative disease called multiple system atrophy, or MSA, was also found to be associated with prions (9). Though transmission represents a small percentage of the cases of prion diseases, such a discovery is promising because it offers the potential for new diagnostic and therapeutic techniques and advancements.

New Treatment Hopes

Currently, there are no specific treatments that have been proven to reliably cure prion diseases (2). The only therapy available for patients with CJD, kuru, and other prion-caused diseases is medical management to reduce discomfort as the disease progresses (6). In this light, developing new technologies for treating this family of diseases is a priority.

Part of the problem with prion diseases is that they advance remarkably quickly; most CJD patients succumb to the disease within 4-5 months, and very few live longer than a year (2). What scientists are primarily searching for is something that can at least slow, if not completely stop, the buildup of the toxic, misfolded proteins.

In the past decade, there has been some advancement: a few research groups have reported finding compounds that slow the propagation of prions in mouse models. Small molecules such as Compound B (discovered in 2007), Anle138b (discovered in 2013), and LIN5044 (discovered in 2015) have been shown to at least double mouse life span (10).

The idea behind using small molecules as therapeutic tools is that they fill a pocket in a protein receptor, preventing it from being active. This approach was originally thought to be useless against prions, because prion propagation involves broad protein-protein surface interactions rather than small, pocket-sized binding sites (10). What researchers are now looking for are molecules that bind to and stabilize the non-pathogenic form of the protein in question, preventing it from changing to a toxic conformation (11).

None of these compounds will be a magic bullet for prion diseases. The small molecules identified so far display a common limitation: each works well against one animal-specific strain of prion, but none has yet been found to be effective against the human forms of the disease (10). However, the real advancement from the discovery of these compounds is the principle and basic structure of compounds that seem to work, which scientists believe they can use to search for compounds that will be effective in humans.

Looking forward, it seems clear that exact knowledge of prion diseases and how to cure them is still out of reach. But armed with new information about what sorts of diseases can be classified as prion-caused, and some potential new small molecule solutions, we seem to be edging towards a better understanding of how prions cause disease and how to stop them.

Caroline Wechsler ‘19 is a freshman in Weld Hall.

WORKS CITED

[1] “What are prions?” Prion Alliance. 26 Nov 2013. Web.

[2] “Creutzfeldt-Jakob Disease, Classic (CJD).” Centers for Disease Control. 6 Feb 2015. Web.

[3] Liberski, Pawel. “Historical overview of prion diseases: a view from afar.” Folia Neuropathologica 2012, 50(1), 1-12.

[4] “Prions: On the Trail of Killer Proteins.” Genetic Science Learning Center. 2016. Web.

[5] “Prion Disease.” Genetics Home Reference. Jan 2014. Web.

[6] “Prion Diseases.” Health Library. Johns Hopkins Medicine. 2016. Web.

[7] “The spread of mad cow disease.” CNN. Cable News Network. 23 Dec 2003. Web.

[8] Kwon, Diana. “Are prions behind all neurodegenerative diseases?” Scientific American. 1 Nov 2015. Web.

[9] Rettner, Rachel. “Another Fatal Brain Disease May Come from the Spread of ‘Prion’ Proteins.” Live Science. 31 Aug 2015. Web.

[10] Torrice, Michael. “Slowing Prions With Small Molecules.” Chemical & Engineering News, 7 Sept 2015, 93(35), 37-39.

[11] “Small molecule therapeutics.” MRC Prion Unit. Medical Research Council. 2016. Web.

Eye in the Sky

By: Grace Chen

Look up at the night sky. The starry heavens inspire a primitive, instinctive fascination in the human mind. In the 21st century, however, the skies are populated with more than just celestial bodies. Many of the glimmering lights we watch in our night sky are satellites – and they are watching you, too.

The earliest satellites were military, emerging in the late 1950s and 1960s as the Soviet Union and the United States competed for dominance in orbit as well as on the planet’s surface. The US’s early successes were fueled by the then-classified Corona program, which launched a number of satellites into low orbit for reconnaissance and gathering intelligence. A major development was the Landsat program, which the government used to track and archive images of the Earth’s surface for geological and ecological surveys. Running and continuously updated since 1972, Landsat is today the longest continuous record of remote sensing images.

The basic principle of remote sensing is that electromagnetic radiation reflecting off the surface of the planet is captured by sensors on the satellite as it passes overhead. This remote sensing can be passive, relying on the sun as the source of electromagnetic radiation, as in the case of simple photography à la Google Earth. More sophisticated satellites can also participate in active sensing by emitting laser (lidar) or radar pulses, which bounce back off the earth’s surface differentially based on the surface’s composition and topography. With the rise of multispectral and hyperspectral imaging, the types of images produced have become increasingly sophisticated. Multispectral sensors can detect 3 to 10 different bands of electromagnetic radiation with a different sensor each, while hyperspectral sensors have even finer spectral resolution and a greater number of bands they can pick up (1).

It is important to remember, however, that the capturing of light is only half the story. The captured signal must then be processed in a variety of ways – colorized, enhanced, compressed, and so forth – to generate a meaningful image or dataset. The power of big data in satellites is growing as technology improves and operational costs drop, with resolution on these images now frequently finer than 1 meter. Yet interpreting the information seen in the image is an art as well as a science.
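
To make one such processing step concrete, here is a minimal Python sketch computing the normalized difference vegetation index (NDVI), a standard product derived from multispectral red and near-infrared bands; the reflectance values below are invented for illustration.

import numpy as np

# Toy 2x2 "scene": surface reflectance in the red and near-infrared (NIR) bands.
red = np.array([[0.10, 0.45],
                [0.12, 0.40]])
nir = np.array([[0.60, 0.50],
                [0.55, 0.48]])

# Healthy vegetation absorbs red light and strongly reflects NIR, so
# NDVI = (NIR - red) / (NIR + red) is high over plants, low over bare ground.
ndvi = (nir - red) / (nir + red)
print(np.round(ndvi, 2))  # ~0.6-0.7 flags vegetation; near zero flags soil or pavement

Indices like this one are what turn raw spectral bands into the environmental signals described below.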

The ability to see from above with such clarity produces knowledge in unexpected ways. One key application is in environmental protection. The nonprofit SkyTruth, for instance, has used consolidated images from satellites to monitor environmental threats such as the BP oil spill and the expansion of fracking. One of their more recent ventures is a partnership with Google and Oceana to launch a program called Global Fishing Watch, which hopes to help track and capture illegal fishing activity (2). Pew Charitable Trusts and Satellite Applications Catapult launched a similar project called Project Eyes on Seas in 2015 to spot anomalous ship behavior that might reflect illegal fishing.

Overfishing is a serious threat to the balance of the ocean ecosystem. Stocks of predatory fishes, such as tuna, have fallen precipitously since the 1950s. A recent article in Science has gone so far as to warn that if current fishing rates continue, all of the world’s fisheries will have collapsed by the year 2048. The financial incentives for illegal fishing, however, mean that the number of “pirates” is troublingly high. Many governments, recognizing the impending crisis, have responded by designating protected marine reserves, or limiting the number of fishing licenses issued. The sheer size of the sea, however, makes it difficult for these laws to be effectively enforced. In the US, for instance, the Pacific Remote Islands Marine National Monument was expanded in 2015 to 490,000 square miles of ocean, an unfathomably large area to patrol by ship or even aircraft. The lofty view of an orbiting satellite therefore presents a unique vantage point from which erratic ship behavior and the boundaries of marine reserves can be monitored.

Satellites can also prevent wrongdoing in a variety of other settings (3). New agencies are popping up under the title “space law,” dedicated to using satellite images in trials. Such images can provide valuable evidence in disputes ranging from property boundaries to vehicle theft to waste disposal (4). Satellite imaging even has the potential to fight human rights abuses on a global scale. The International Criminal Tribunal for the former Yugoslavia first admitted satellite images as legal evidence in the trials over the 1995 Srebrenica massacre, where they provided evidence of genocide and contributed to eventual convictions. Initiatives like the Signal Program at the Harvard Humanitarian Initiative continue to use satellite images to monitor alleged humanitarian crises.

The use of satellite imagery in legal cases faces some difficulty, however. Currently, lawyers can only trawl through archived images from commercial providers in search of relevant data; there is little economic incentive or capacity for third-party providers to store images before a lawsuit begins, so the amount of available data is constrained. Moreover, there are no standards for provenance and auditing to ensure that images are authentic, objective, or accurate. Even if images are produced, their novelty means that judges and juries may not yet be sure how to interpret this evidence in a lawsuit.

Despite its many uses, the recent expansion of satellite technology runs up against a number of difficult challenges. First, laws and markets have not yet fully evolved to accommodate “big data,” including the massive amounts of new information produced by satellites (5). As the number of satellites grows and quantity of images proliferates, there is no clear legal framework defining who should get control of those images or restrictions on how they can be monetized. Questions of data ownership and rights remain largely unresolved. Second, the fields of ecology and criminal justice are still adapting to the new tools provided by this technology. The development of data repositories and new training courses might equip professionals to properly utilize the growing amount of raw data captured by satellite imaging. This data cannot be optimally used until there are more scientists trained with this expertise.

Perhaps the biggest challenge facing the expansion of satellite technology is the set of ethical and privacy concerns it raises. The resolution of many satellites is now reaching a point where individual houses, cars, and even faces can be recognized. This inevitably generates concerns about the possibility of constant monitoring from the skies. Even crime-fighting and environmental protection present ethical problems; traditionally protected rights against unwarranted searches may not hold up in the world of satellite surveillance. The interlinked concerns of ethics, laws, and public opinion must inform the continued development of satellite imaging technology (6).

Grace Chen ‘19 is a freshman in Holworthy Hall.

WORKS CITED

[1] Durst, S. K. Michigan Technological University. http://www.geo.mtu.edu/rs4hazards/ksdurst/website/lectures/RemoteSensing.pdf (accessed March 2, 2016).

[2] Gunther, M. To catch a fishing thief, SkyTruth uses data from the air, land and sea. The Guardian, Nov. 24, 2015. http://www.theguardian.com/sustainable-business/2015/nov/24/fishing-thief-skytruth-data-software-maps-illegal (accessed March 2, 2016).

[3] Monks, K. Spy satellites fighting crime from space. CNN, Aug. 12, 2014. http://www.cnn.com/2014/08/11/tech/innovation/spy-satellites-fighting-crime-from-space/ (accessed March 2, 2016).

[4] Marks, P. World’s first space detective agency launched. New Scientist, Oct 8, 2014. https://www.newscientist.com/article/mg22429902-900-worlds-first-space-detective-agency-launched/ (accessed March 2, 2016).

[5] Pettorelli, N.; Safi, K.; Turner, W. Philosophical Transactions of the Royal Society B: Biological Sciences 2014, 369.

[6] Chun, S. A.; Atluri, V. Data and Application Security 2002, 233-244.

Human Genome Editing: A Slippery Slope

By: Alissa Zhang

On January 14, 2016, the Human Fertilisation and Embryology Authority (HFEA) approved a research license renewal for research project R0162. The application, submitted by Dr. Kathy Niakan of the Francis Crick Institute in London, proposed to study the roles of certain genes in the early development of human embryos, with promising potential implications for the treatment of infertility and genetic diseases (1). On the surface, this seems like a routine approval for the HFEA, which regulates all research on human embryos in the UK. However, the scientific community and the public have been strongly divided over the proposed research plan. Nine months before the request was approved, a group of scientists who had predicted such proposals called for a moratorium on experiments of this kind (2). Many fear this type of research could lead to the rise of so-called “designer babies” and other ethically questionable uses.

Dr. Niakan’s research focuses on the five-day process by which a fertilized egg matures into a blastocyst, which subsequently implants in the uterus. Dr. Niakan uses surplus human embryos from in vitro fertilization (IVF) procedures, which have been donated for research and are destroyed after seven days when the experiment is complete (3). IVF is a common source of human embryonic stem cells for research, and the seven-day limit provides a guarantee that the embryos will not develop past the blastocyst stage. So why has Dr. Niakan’s proposal caused so much controversy? One reason is that Dr. Niakan, in her license renewal, requested permission to use the CRISPR/Cas9 system to modify genes in these human embryos. When the HFEA approved her request, it was the first time that CRISPR had been approved for use in human embryos by a major scientific regulatory body.

According to the research proposal, Dr. Niakan plans to knock out OCT4, a gene activated in human embryonic stem cells that may play a role in pluripotency, the ability of cells to differentiate into any kind of tissue. She will test this theory by knocking out the gene in day-old single-cell embryos and analyzing the effect on the number of pluripotent cells that develop. If this pilot project is successful, Dr. Niakan will extend her study to other lesser-known genes that may be involved in early development, depending on embryo availability. CRISPR is crucial for this project because it allows researchers to selectively target and knock out one gene, and it is more efficient than other techniques, thus requiring fewer embryos (4).
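
The selectivity CRISPR offers comes down to sequence matching, which can be sketched in a few lines of Python. Cas9 from Streptococcus pyogenes requires an “NGG” motif (the PAM) immediately after its target, so candidate guide sequences are the 20 nucleotides preceding each PAM; the DNA string below is invented for illustration and is not the OCT4 locus.

import re

def find_guide_sites(seq, guide_len=20):
    """Scan one DNA strand for Cas9 target sites: a 20-nt protospacer
    immediately followed by an NGG PAM (N = any base)."""
    sites = []
    for m in re.finditer(r"(?=([ACGT]GG))", seq):  # lookahead catches overlapping PAMs
        pam_start = m.start(1)
        if pam_start >= guide_len:
            sites.append((seq[pam_start - guide_len:pam_start], pam_start))
    return sites

dna = "ATGCAGGTCCTGGACTTCAAGCTGGAGAACCTGCGGAAGTTCCTGG"  # invented sequence
for guide, pos in find_guide_sites(dna):
    print(f"guide {guide} -> PAM at position {pos}")

Real guide design also checks the reverse strand and screens candidates against the rest of the genome to minimize off-target cuts.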

The immediate goal of Dr. Niakan’s project is simply to gain a deeper understanding of the role of certain genes in blastocysts. Supporters of this research point out that the use of CRISPR in human embryos could lead to improved treatments for infertility and many congenital disorders (3). The HFEA strictly regulates experiments involving human embryos in the UK to ensure that any ethical concerns are satisfied. Since the embryos Dr. Niakan uses will be destroyed after seven days, there is no chance that any of them will reach viability.

Nevertheless, scientists as well as the public have raised legitimate ethical concerns about this kind of research. They argue that approving Dr. Niakan’s project may lead to dangerous and unpredictable consequences. There is simply not enough research on the topic of human genome editing to unravel all the potential scientific and ethical implications. Lanphier et al. warned that “genome editing in human embryos using current technologies could have unpredictable effects on future generations” (2). While the current motivation for such research is scientific and medical, gene-editing technology could be used for controversial non-therapeutic applications, such as allowing parents to select certain traits to create “designer babies,” instead of therapeutic applications, such as treating genetic diseases. Some scientists argue that this research is unnecessary, as alternative genome-editing techniques exist that could be applied to somatic cells rather than germline cells. While these methods could be less effective, they also carry less risk.

It is clear that the scientific community, regulatory bodies, and the public must carefully consider the potential implications before genome editing in human embryos can be approved – if it should be at all. However, there may not be time to do so. Although Dr. Niakan waited for approval from the HFEA, other researchers may not wait for regulatory approval. In April 2015, researchers at Sun Yat-sen University in Guangzhou, China attempted to use CRISPR to edit the hemoglobin-B gene in human embryos, with the goal of treating beta thalassemia. The embryos used in this case were nonviable, and the project was ultimately unsuccessful (5). Yet this experiment shows that scientists may not be willing to wait for approval from the scientific community or the public, and, in countries where research is less regulated, they may not have to (6). Following the publication of this study, “CRISPR/Cas9-mediated gene editing in human tripronuclear zygotes”, scientists around the globe began to advocate for a moratorium on human genome editing (2).

The HFEA approved Dr. Niakan’s project because it met the standards for research in human embryos – and rightly so. Other experiments which meet the same standards, and which ensure that genetically altered embryos will not reach viability, should be given the same level of scrutiny and approval. As long as the embryos are disposed of, CRISPR edits cannot accidentally pass on to any living human being. However, if scientists consider using genome editing for therapeutic purposes, then higher standards and more research will become necessary. The Sun Yat-sen experiment demonstrates that stricter regulations on human embryo experimentation must be upheld. Before genome editing can be used on human embryos that will be implanted, scientists, bioethicists, regulators, and legislators must collaborate on an international scale to determine the scientific, ethical, and legal limits of this technology.

Alissa Zhang ‘16 is a senior in Currier House, concentrating in Chemical and Physical Biology.

WORKS CITED

[1] Human Fertilisation and Embryology Authority. January 14, 2016. License Committee – minutes. http://guide.hfea.gov.uk/guide/ShowPDF.aspx?ID=5966.

[2] Lanphier, E., et al. Nature Comment [online], March 12, 2015. Don’t edit the human germ line. http://www.nature.com/news/don-t-edit-the-human-germ-line-1.17111#/ (accessed February 20, 2016).

[3] Stokstad, E. Science News [online], January 13, 2016. U.K. researcher details proposal for CRISPR editing of human embryos. http://www.sciencemag.org/news/2016/01/uk-researcher-details-proposal-crispr-editing-human-embryos (accessed February 20, 2016).

[4] Callaway, E. Nature News [online], February 1, 2016. UK scientists gain licence to edit genes in human embryos. http://www.nature.com/news/uk-scientists-gain-licence-to-edit-genes-in-human-embryos-1.19270 (accessed February 6, 2016).

[5] Liang, P., et al. Protein & Cell 2015, 6(5), 363-72.

[6] Wade, N. The New York Times [online], February 1, 2016. British researcher gets permission to edit genes of human embryos. http://www.nytimes.com/2016/02/02/health/crispr-gene-editing-human-embryos-kathy-niakan-britain.html (accessed February 6, 2016).

Sleeping On Your Desk

By: Jeongmin Lee

Maybe it’s 10 in the morning, or maybe it’s 2 in the afternoon, but as soon as that professor starts his or her lecture: blackout. The natural circadian rhythm is severely altered throughout life, and for many students, sleepiness hits hardest during class. While some people stay attentive all day working nine to five, others dip their heads five minutes after sitting down. With a scientific approach, we can ask what makes students fall asleep in class. Conversely, what factors keep their peers from nodding off? And, most importantly, are there any ways to stay awake without replacing our blood with coffee?

Human beings run on a sleep cycle, the circadian rhythm, which is managed by concentrations of hormones in our body. The endocrine system is responsible for a majority of the production of these hormones, and the hypothalamic pacemaker controls our circadian rhythm (1). In the pacemaker, structures such as the suprachiasmatic nuclei (SCN) contain the period and cryptochrome proteins; the latter makes the system sensitive to light. Transcription of these clock genes is driven by the CLOCK complex, and the proteins gradually rise in concentration throughout the day. When cryptochrome stops receiving light, transcription of the genes decreases, eventually causing the concentrations of the proteins to decrease as well. Through these different mechanisms, the body is able to signal a waking phase and a sleeping one. Over time, the proteins degrade and create their own rise and fall without cryptochrome’s light dependency. This creates a rhythm independent of the day and night cycle, with a period of a little more than 24 hours, which means that a person whose circadian rhythm is insensitive to light will get up a little later every day.

However, other factors come into play, because the human body has multiple systems to control the sleep cycle. The MRC Laboratory of Molecular Biology has discovered that rhythmic glucocorticoid signals can significantly affect the SCN and internal clock. With this knowledge, the lab was able to connect irregular sleep with many other hormone-related issues such as abnormal tissue growth and blood pressure. That is because the genes that code for the period, cryptochrome, and clock proteins are all “E-box genes,” one of several gene sets that work in tandem with hormones such as glucocorticoids to help regulate one’s biorhythm (1). One sleep-related hormone in particular has been popularized by the news and the pharmaceutical industry: melatonin.
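
The essence of this loop – synthesis followed by delayed self-repression – can be captured in a toy simulation. The Python sketch below implements a Goodwin-style oscillator; the rate constants and Hill coefficient are arbitrary values chosen to produce sustained oscillation, not measured biological parameters.

def simulate(hours=72, dt=0.01):
    """Crude Euler integration of a three-stage negative feedback loop:
    mRNA -> protein -> repressor, with the repressor shutting off the mRNA."""
    m, p, r = 1.0, 1.0, 1.0
    trace = []
    for step in range(int(hours / dt)):
        dm = 1.0 / (1.0 + r**9) - 0.2 * m   # repressible transcription, then decay
        dp = 0.2 * m - 0.2 * p              # translation and decay
        dr = 0.2 * p - 0.2 * r              # repressor activation and decay
        m, p, r = m + dm * dt, p + dp * dt, r + dr * dt
        if step % int(1 / dt) == 0:         # sample once per simulated "hour"
            trace.append(p)
    return trace

levels = simulate()
print(" ".join(f"{x:.2f}" for x in levels[:24]))  # protein rises and falls on its own

Because the repression is delayed through several stages, the protein level overshoots and undershoots instead of settling, which is the same logic that lets clock proteins free-run with a period near 24 hours even without light cues.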

Melatonin is a powerful regulatory hormone that can alter the body’s circadian rhythm without depending on any other system. According to the Centre for Chronobiology in Switzerland, melatonin can actually cause sleepiness (2). This makes melatonin a type of “chronobiotic,” because it can shape the internal clock. Increasing melatonin levels correlate with decreasing sleep interruption. The circadian pacemaker containing the SCN, mentioned above, is linked to the production of melatonin. Melatonin secretion is also sensitive to light, which causes the concentration of the hormone to decrease. While all light can inhibit melatonin secretion, studies show that blue light inhibits the hormone – and therefore sleep – most effectively (3). Thus, exposure to blue light when melatonin levels are supposed to be high can delay the circadian rhythm, keeping people awake. That is one reason why so many social media sites are blue and why they are so effective at passing time without the reader feeling tired. A shift in the circadian rhythm not only compels one to wake up later but also keeps him or her awake until late at night. Although the harm of low melatonin levels has not been fully analyzed, a displaced sleep cycle has been seen to “[increase] risk for depression, as well as diabetes and cardiovascular problems” (3). This science applies to people of all ages, but especially those who are experiencing hormonal changes.

Hormonal activity rapidly changes in the infamous period of puberty. Generally, teenagers have a later sleep cycle: their rhythm shifts an hour or two later than before (4). This change moves the timeframe of the rhythm rather than shortening it. Thus, the eight hours of sleep enforced on children are still required as teenagers grow up; they simply become inclined to sleep later in the day. Missing this rhythm leaves an adolescent fatigued. In the case of a student, focusing and studying throughout the day may become difficult. A common myth promises that he or she can “make up” missing sleep by sleeping in on the weekends. Although the average may even out to eight hours per day, sleep does not work like a savings account. As the body is programmed to run on a particular cycle, any deviation from the rhythm will prove detrimental, which means that people who sleep in “[throw] their body clocks off even more” (4).

Just as a lack of sleep can cause other health issues, other health issues can interrupt a normal sleep cycle. For example, obstructive sleep apnea causes the back of the throat to block the airway (4), while the more common shift work disorder causes insomnia while draining energy throughout the day (5). Narcolepsy, while varying in severity, can cause sudden sleep spells even when a person is actively moving. Finally, emotional changes and depression, which many adolescents struggle with, significantly disrupt sleep cycles. Such psychological effects on the circadian rhythm can be explained by the nervous system’s role in the body’s internal sleep cycle.

The brainstem is responsible for controlling sleep, just as the hypothalamus is responsible for releasing hormones to maintain the circadian rhythm. Problems in the brainstem have been connected to the inability to sleep (6). This makes sense, because the brain controls the majority of sleeping and dreaming behavior. Sleep requires a special pattern of brain activity, creating a dichotomy between two different types of sleep: “rapid-eye-movement (REM) sleep and non-rapid-eye-movement (NREM) sleep” (7). The former is believed to be the state in which a person dreams, accompanied by temporary paralysis of the body; the latter is a deeper, more restorative sleep. A healthy sleep oscillates between these two states, usually beginning with NREM and cycling with a period of approximately 90 minutes for adults (7). This cycle can also be shifted, which is unhealthy, by the environment, stress, the circadian rhythm, and even alcohol and drug consumption. There are also other parts of the nervous system that influence sleep. Many people feel tired after a meal, likely because of the autonomic nervous system, which commands “sweating, blood pressure, digestion, temperature” (6). Digestion has been correlated with activation of “sympathetic modulation of the heart” (8), which is interestingly also activated when the body is sleep deprived (9). The sympathetic nervous system is the part of the autonomic nervous system that generally ramps up involuntary functions such as heart rate. The two studies show that the heart is similarly activated by the sympathetic system after a meal and after a lack of sleep; we describe that effect with the phrase “food coma.” Despite all of these possible issues, not everybody falls asleep in class or in the library. How do teenagers ever stay awake despite all these hormones and systems disrupting their sleep patterns, and are they staying healthy?

There are students who take the toughest classes, participate in late-night ensembles, and practice sports at 4 in the morning without a single snore in their classes. Some of their methods appear more appealing than others. Some students who have morning rehearsals pack as much of their schedules into the morning as they can; after lunch, they can take a nap before moving on to the rest of the day. Napping has been shown to be a reasonable strategy for catching some sleep, but constant wake-ups between five-minute naps are not a satisfying or effective way to “account for” lost sleep. Many students drink coffee to wake up, or even take tablets of concentrated caffeine in the morning. An early dependence on caffeine cannot be beneficial, especially in an addictive dose. Finally, there are people who just go to bed early. They might go to bed around 8 o’clock while waking up before the sun comes up. This strategy not only allows for the required eight hours of sleep but also lets the student do all of his or her work in the morning, starting the day refreshed. As it turns out, the best way to sleep is not fighting the natural cycle with complicated workarounds, but rather adhering to what the body needs.

How do you follow your healthy sleep cycle? Matching day and night cycles can help in retaining a constant rhythm, because most people have proteins such as cryptochrome that regulate the circadian rhythm by responding to light. Melatonin levels work similarly. Therefore, being in the light can keep one alert throughout the day. Conversely, exposure to as little light as possible near bedtime will increase the hormone’s concentration and help with falling asleep. There are apps and programs for cell phones, tablets, and computers that apply a red filter to the screen late at night. The red filter reduces the blue light that decreases melatonin levels the most; however, any light can decrease melatonin secretion, so it is best not to stream television shows throughout the night no matter how red the screen is.

If your sleep is interrupted by a condition such as obstructive sleep apnea, sleeping on your side can prevent airway blockage. Light therapy and small doses of melatonin or caffeine can alleviate disorders such as shift work disorder. For everyone else, caffeine can be a way to stay awake; however, researchers recommend limiting consumption to about 200 mg, about one cup of regular coffee. Any more can lead to severe shifts of the sleep cycle and addiction. Since eating late at night may interrupt sleep patterns, it is generally best not to eat past 10 pm or for a few hours before going to bed. Napping can often be a good idea, but people with insomnia are discouraged from doing so, and an afternoon nap might slide into a deep sleep that disrupts the night’s rest.

A student’s life makes it difficult to adhere to all these guidelines, so there are policy arguments calling for schools to operate at later times. To accommodate adolescents’ sleep cycles shifting later into the night, the American Academy of Pediatrics (AAP) “recommends middle and high schools delay the start of class to 8:30 a.m. or later” (10), reflecting research suggesting that students function better later in the day. It is especially hard for young adults to maintain a healthy sleeping schedule. That said, many adults struggle too: they still cannot fully shift their sleep cycle and maintain a healthy lifestyle. Hopefully, these tips can help students of all ages to hit the stacks without hitting the sack.

Jeongmin Lee ‘19 is a freshman in Hollis Hall.

WORKS CITED

[1] Michael Hastings, John S O’Neill, and Elizabeth S Maywood. Circadian clocks: regulators of endocrine and metabolic rhythms. J Endocrinol 195 (2) 187-198, 2007, doi: 10.1677/JOE-07-0378

[2] Cajochen, C.; Kräuchi, K.; Wirz-Justice, A. Role of Melatonin in the Regulation of Human Circadian Rhythms and Sleep. Journal of Neuroendocrinology, 2003, Vol. 15, 432–437.

[3] Blue light has a dark side. Harvard Health Letter. Harvard Health Publications: Harvard Medical School, 2 Sept. 2015. Web. 26 Feb. 2016. <http://www.health.harvard.edu/staying-healthy/blue-light-has-a-dark-side>.

[4] Sleep and Teens. UCLA Sleep Disorders Center. Web. 26 Feb. 2016. <http://sleepcenter.ucla.edu/body.cfm?id=63>.

[5] “What to Ask Your Doctor About Shift Work Disorder.” National Sleep Foundation. Web. 26 Feb. 2016. <https://sleepfoundation.org/shift-work/content/what-ask-your-doctor-about-shift-work-sleep-disorder>.

[6] “Cognitive Skills of the Brain.” Brain Injury Alliance of Utah. Web. 26 Feb. 2016. <http://biau.org/about-brain-injuries/cognitive-skills-of-the-brain/>.

[7] “Natural Patterns of Sleep.” Division of Sleep Medicine at Harvard Medical School. 18 Dec. 2007. Web. 26 Feb. 2016.

[8] Kuwahara K, Okita Y, Kouda K, Nakamura H. Effects of modern eating patterns on the cardiac autonomic nervous system in young Japanese males. J Physiol Anthropol. 2011;30(6):223-31.

[9] Zhong X, Hilton HJ, Gates GJ, Jelic S, Stern Y, Bartels MN, Demeersman RE, Basner RC. Increased sympathetic and decreased parasympathetic cardiovascular modulation in normal humans with acute sleep deprivation. J Appl Physiol (1985). 2005 Jun;98(6):2024-32. Epub 2005 Feb 17.

[10] “Let Them Sleep: AAP Recommends Delaying Start Times of Middle and High Schools to Combat Teen Sleep Deprivation.” 25 Aug. 2014. Web. 07 Mar. 2016.

Space Tourism and You

By: Priya Amin

Apollo, Gemini, and Mercury: these missions achieved several goals, including the first trip to the moon, and helped the United States gain the upper hand in the Cold War. NASA, or the National Aeronautics and Space Administration, has been at the forefront of space exploration since the era of Chuck Yeager’s X-1 flight, which successfully broke the sound barrier and paved the way for some of the first flights into orbit. However, while NASA has undoubtedly allowed aerospace knowledge to burgeon within the past few decades, it has also created an unintentional effect: a growing space tourism industry, the privatized pursuit of space exploration.

On November 4, 2015, NASA announced a job opening for astronauts for its new generation of asteroid exploration. The qualifications were as follows (1):

1. A bachelor’s degree in engineering, biological science, physical science, computer science or mathematics.

2. At least 3 years of professional experience or at least 1,000 hours of pilot-in-command time in jet aircraft.

3. Ability to pass the NASA long-duration Astronaut physical.

4. Distant and near visual acuity must be correctable to 20/20, each eye. The use of glasses is acceptable.

5. The refractive surgical procedures of the eye, PRK and LASIK, are allowed.

6. Applicants must meet the anthropometric requirements for both the specific vehicle and the extravehicular activity mobility unit (space suit).

NASA anticipates the decommissioning of the International Space Station (ISS) within the next 20 years. The ISS is a large spacecraft in low Earth orbit. It uniquely houses astronauts and holds a science laboratory to research the effects and possibilities of working in space (2). Given the cost of maintaining the station in microgravity, NASA’s diminishing budget can no longer support the ISS in low orbit. The shrinking budget has also ended the possibility of manned missions to the moon, along with the space shuttle program. As a result, NASA has become increasingly dependent on space tourism companies, such as SpaceX and Virgin Galactic.

SpaceX designs, manufactures, and launches advanced rockets and spacecraft (3). Founded in 2002 by Elon Musk, SpaceX has the ultimate goal of enabling people to live on other planets. Under NASA’s Commercial Resupply Services contract (4), SpaceX has participated in eight missions by providing the spacecraft needed to launch into orbit. Virgin Galactic, meanwhile, aims to create something new and lasting: the world’s first commercial spaceline (5). Virgin Galactic is currently seeking astronauts of any age to work with tourists, and it has partnered with NASA to make space more accessible for more purposes than ever before. Overall, the market in space tourism is expanding to include several new companies.

Ideas for privatized flights in space have the opportunity to grow and make human spaceflight possible for more than just a handful of chosen individuals. The consideration of funding has been broadened to the private sector to fulfill the long-term goals of spaceflight travel. Companies such as SpaceX and Virgin Galactic are already attempting to bring the experience of space to the consumer and stretch the limits of human potential. The creative needs of both the private and governmental sectors may prove invaluable, because the technological findings will be used to serve two different ends: one for the consumer, and the other for scientific advancement. Therefore, these costly endeavors justify themselves as experiments in human potential, where both failure and success bring valuable knowledge to the forefront.

Priya Amin ’19 is a freshman in Wigglesworth Hall.

WORKS CITED

[1] “Astronaut Selection.” Astronaut Candidate Program. NASA, n.d. Web. 21 Mar. 2016.

[2] Dunbar, Brian. “What Is the International Space Station?” NASA. NASA, 30 Nov. 2011. Web. 22 Mar. 2016.

[3] “SpaceX.” SpaceX. SPACE EXPLORATION TECHNOLOGIES CORP., n.d. Web. 20 Mar. 2016.

[4] “Commercial Resupply Services Overview.” NASA. NASA, 21 Dec. 2015. Web. 23 Mar. 2016.

[5] “Virgin Galactic, the World’s First Commercial Spaceline.” Virgin Galactic. Virgingalactic.com, n.d. Web. 22 Mar. 2016.

Redefining Home?: The Discovery of “Planet X”

By: Alex Zapien

How should we define the solar system? Most people would point to and agree with the Merriam-Webster definition: “the Sun together with the groups of celestial bodies that are held by its attraction and revolve around it” (1). For decades, people have been accustomed to the familiar names of the Sun and the nine planets that revolve around it. Thus, when the International Astronomical Union (IAU) announced Pluto’s demotion to a dwarf planet in 2006, there was “plenty of wistful nostalgia” among the general public (2). Pluto’s demotion surprisingly revealed how the supposedly accepted definitions of a “planet” and our home solar system were not quite fixed. Once again, astronomers may need to reconsider the structure and definition of our home system: a possible ninth planet, far beyond Pluto, has been discovered.

Researchers at the California Institute of Technology have found evidence of a giant mass in the Kuiper Belt, a region of the solar system beyond the orbit of Neptune that contains many asteroids and bodies of ice more than 30 astronomical units (4.5 billion kilometers/2.8 billion miles) away from the Sun. The giant mass is 20 times farther from the Sun than Neptune and has a mass 10 times that of Earth—enough to classify it as a planet (3). The supposed planet has been nicknamed “Planet Nine” by its leading discoverers, Drs. Konstantin Batygin and Mike Brown, but it is also commonly referred to as “Planet X” (3). Planet X was theorized through mathematical modeling and computer simulation but has not been observed directly. Research first began in 2015, and the news was announced in January 2016 (4). The announcement does not mean that there is officially a new planet in the solar system; its actual existence is still being debated. However, there is serious evidence being considered.
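
To get a feel for these numbers, one can apply Kepler’s third law, which for bodies orbiting the Sun relates orbital period to distance. The quick Python sketch below is a back-of-the-envelope estimate from the figures quoted above, not the researchers’ own calculation.

# Kepler's third law for solar orbits: T^2 = a^3, with T in years and a in AU.
neptune_au = 30.0        # Neptune's approximate distance from the Sun
a = 20 * neptune_au      # "20 times farther from the Sun than Neptune"

period_years = a ** 1.5  # T = a^(3/2)
print(f"distance ~{a:.0f} AU, orbital period ~{period_years:,.0f} years")

At roughly 600 AU the implied period is on the order of 15,000 years, which helps explain how such a planet could so far have escaped direct observation.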

Before Planet X’s discovery, and before research officially began, one of Dr. Brown’s former postdoctoral fellows had published a paper in late 2014 suggesting that a small planet was the cause of obscure orbital features in several distant objects in the Kuiper Belt (5). Dr. Brown took this idea and, with the help of Dr. Batygin and a few other researchers, led a 1.5-year-long collaboration. Using orbital geometry, the researchers theorized the existence of an enormous planet whose gravitational pull was greatly affecting the elliptical orbits of other objects, like icy asteroids in the Kuiper Belt. The planet’s theorized existence could help explain interesting astronomical events. Indeed, in one simulation, the planet forced a set of Kuiper Belt objects (KBOs) into aligned orbits (5).

Drs. Batygin and Brown continue to “refine their simulations and learn more about the planet’s orbit” in hopes that research teams will be able to directly pinpoint and observe it (5). While there is currently no estimate as to when or if the planet will be found, Dr. Brown, notorious for helping cause Pluto’s demotion, remains optimistic: those upset by Pluto’s demotion can now be “thrilled to know there is a real planet to be found” such that we can “make the solar system have nine planets once again” (5). Excitement for the future has now replaced nostalgia. Despite another redefinition of our solar system, we will certainly be able to rest easy, knowing that we have a better idea of what our “home” is.

WORKS CITED

[1] Merriam-Webster. http://www.merriam-webster.com/dictionary/solar%20system (accessed Feb. 19, 2016).

[2] Grace, Francie. CBS News. http://www.cbsnews.com/news/pluto-demoted-no-longer-a-planet/ (accessed Feb. 19, 2016).

[3] NASA. “Hypothetical ‘Planet X’: In Depth.” http://solarsystem.nasa.gov/planets/planetx/indepth (accessed Feb. 19, 2016).

[4] NASA. “Caltech Researchers Find Evidence of a Real Ninth Planet.” http://solarsystem.nasa.gov/news/2016/01/21/caltech-researchers-find-evidence-of-a-real-ninth-planet (accessed Feb. 19, 2016).

[5] Fesenmaier, Kimm. “Caltech Researchers Find Evidence of a Real Ninth Planet.” Caltech. https://www.caltech.edu/news/caltech-researchers-find-evidence-real-ninth-planet-49523 (accessed Feb. 19, 2016).

Fall 2015: Invaders & Defenders

Check out our Fall 2015 issue on Invaders & Defenders! Articles are posted individually as blog posts (table of contents below). The full issue is also available in ISSUU and PDF format on our Archives page. Print issues are also available around Harvard’s campus!


Table of Contents:

NEWS BRIEFS AND GENERAL ARTICLES

A New Horizon in Astronomy by Alex Zapien ‘19

Invasion of the Brain Eaters by Julia Canick ‘17

The Virus that Came in from the Cold by Alissa Zhang ‘16

G(ut)enetics: The Genetic Influence on our Internal Symbionts by Austin Valido ‘18

Skin Regeneration in Wound Repair by Madeline Bradley ‘18

Citizen Science and Sudden Oak Death by Sophia Emmons-Bell ‘18

Treatment as Prevention: Updates on Efforts to Combat the HIV/AIDS Pandemic by Elliot Eton ‘19

Tuberculosis Declines in the US but Remains a Global Health Threat by Jacqueline Epstein ‘18

Bio-Inspired Slippery Surface Technology Repels Fouling Agents by Serena Blacklow ‘17

You vs. Your Grocery by Jeongmin Lee ‘19

Invading the Human Heart by Hanson Tam ‘19

The Simple Science of a Grandiose Mind by Kristina Madjoska ‘19

Kinesics: What Are You Really Saying? by Priya Amin ‘19


FEATURE ARTICLES

Fetal Microchimerism by Grace Chen ‘19

Microchimerism: The More, the Merrier by Una Choi ‘19

Parasitic Cancer: Paradox and Perspective by Audrey Effenberger ‘19

Genetically Engineered Viruses Combat Invasive Cancer by Caroline Wechsler ‘19

To the Rescue: Insects in Sustainable Agriculture by Ada Bielawski ‘18

Genetically Modified Crops as Invaders and Allies by Sophie Westbrook ‘19

Earth’s Missiles, Ready to Go? by Eesha Khare ‘17

“Invaders from Earth!”: Exploring the Possibilities of Extraterrestrial Colonization by J. Rodrigo Leal ‘16

Laws of Nature Defending Our Information: Quantum Cryptography by Felipe Flores ‘19

Artificial Superintelligence: The Coming Revolution by William Bryk ‘19

Fight or Flight: When Stress Becomes our own Worst Enemy by Anjali Chandra ‘19


To the Rescue: Insects in Sustainable Agriculture

by Ada Bielawski

In 1798, Thomas Malthus published his Essay on the Principle of Population and described the limits of human population growth: the population will continue to grow exponentially while the Earth’s resources are able to sustain the increasing food production needed to feed this population. He concluded that, as the population approaches 8 billion, the poorest will suffer the most from limited resources (1). Currently, over 14% of the world’s population is underfed, and the growing population is expected to reach 9 billion less than 50 years from now (2). Thus, there is a dire need to increase crop yields to feed the growing population. This must be done while also mitigating the effects of agricultural production on the Earth’s limited resources. Therefore, instead of relying on destructive tools—such as deforestation to create more farmland—increasing crop yields through sustainable agriculture is the key to a better future (2).

We can increase crop yields and decrease environmental stress through Integrated Pest Management (IPM), an ecological approach to pest defense that aims to minimize the use of chemical pesticides and maximize environmental and consumer safety. Farmers utilizing IPM use their knowledge of pests and how they interact with their habitat to eradicate them most efficiently (3). IPM is more sustainable than chemical pesticides but can be less effective, which makes farmers reluctant to implement even IPM measures that do work.

ANT IPM

Oecophylla smaragdina—commonly known as weaver ants—have been used as an anti-pest crop protector since 304 AD, when Chinese markets sold ants to protect citrus fruit (5, 6). Today, after decades of chemical pesticide use, ant IPM has reemerged as a sustainable option for crop defense (4, 5, 6). Ants are a great tool for many reasons: (1) they are a natural, accessible resource, responsible for 33% of insect biomass on Earth; (2) they can quickly and efficiently build populations at a nest site due to behavioral habits such as path-making, worker recruitment, and pheromone attraction; and (3) they encompass a range of roles and behaviors that make them capable of attacking a variety of pests at many stages of their life cycle (4, 5). With these characteristics, ants form a mutualistic relationship with their host plant: the plant attracts their food source and provides a home, while the ants attack the pests that would cause the plant harm (7).

Ants do the work of chemical pesticides with increased safety and damage control.4,6,8 Seventeen studies have evaluated the success of ant pest management, covering nine crops across a total of eight countries. Of these studies, 94.1% showed a decrease in pest numbers and in the damage pests caused. One of these studies, conducted on cashew trees in Tanzania, found that trees hosting ants suffered 81% less tree damage and 82% less nut damage than control trees. Furthermore, 92.3% of reports studying crop yields favored ant IPM over chemical pesticides, and of the studies that directly compared ant pest control to chemical-based pest control, 77.8% favored ants.4

Moreover, ants as pest control can cost less than their chemical counterparts. In Northern Australia, researchers compared the costs and crop yields of cashew plots treated with chemical pesticides against plots treated with ant IPM. Over a period of four years, the weaver ant treatment cost 57% less and produced a crop yield 1.5 times that of the chemical pesticides. This translated into savings of over $1500/hectare/year and a 71% increase in revenue for farmers.4,8 These results suggest that ants have the potential to be not only a more sustainable tool for agriculture, but also a more cost-effective method of pest management.
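To see how a cost cut and a yield gain compound into higher profit, consider the rough sketch below. Only the 57% cost reduction and the 1.5× yield factor come from the study above; the baseline cost, yield, and price are invented for illustration, so the resulting profit figures depend entirely on those assumptions.

```python
# Hypothetical comparison of chemical pest control vs. ant IPM.
# Only the 57% cost cut and the 1.5x yield come from the cited study;
# every baseline figure below is invented for illustration.
baseline_cost = 2700.0    # $/hectare/year for chemical pest control (assumed)
baseline_yield = 1000.0   # kg of cashew per hectare/year (assumed)
price = 3.5               # $/kg farm-gate price (assumed)

ipm_cost = baseline_cost * (1 - 0.57)   # ant IPM costs 57% less
ipm_yield = baseline_yield * 1.5        # and yields 1.5x the crop

profit_chemical = baseline_yield * price - baseline_cost
profit_ipm = ipm_yield * price - ipm_cost

print(f"savings: ${baseline_cost - ipm_cost:,.0f}/hectare/year")
print(f"profit:  ${profit_chemical:,.0f} -> ${profit_ipm:,.0f} per hectare/year")
```

With these assumed baselines, the savings land near the study’s $1500/hectare/year figure, but the percentage gain in profit is an artifact of the invented numbers.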

Ant IPM has demonstrated promise for the future of sustainable agriculture. Future research should: (1) focus on identifying all the crops that could benefit from ant IPM; and (2) study more of the 13,000 ant species, whose unique attributes could target a wider variety of crops.4,6

MOTH IPM

Plutella xylostella, the diamondback moth, is a pest that targets cruciferous crops such as cabbage.9,10,14 The larvae feed on the green sprouts and reduce not only crop yield but also crop quality.10 To protect against these moths, scientists in the past created genetically modified (GM) crops that produce a toxin from the bacterium Bacillus thuringiensis (Bt), which is lethal to the pests it targets9,11,12 but safe for other insects, animals, and humans to consume.13 This was an effective method for controlling diamondback moth populations until the pests developed resistance to the toxin.9

Scientists from Oxitec set out to solve this perpetual resistance problem by inserting a transgene into the diamondback moth genome.9,14 The transgene has three main components: (1) a tetracycline-repressible, dominant, female-specific lethal gene: larvae are fed tetracycline while they mature, and once released, female GM moths die from insufficient tetracycline in the wild, whereas males survive (the same is true of all female progeny of the GM moths); (2) a susceptibility gene, which keeps GM moths susceptible to Bt; and (3) a fluorescent tag, which allows scientists in the field to distinguish which moths carry the transgene.9

In the Oxitec study, GM moths were released in high numbers every week into a caged wild-type population. Researchers recorded the number of eggs collected, the number of dead females, and the proportion of transgenic to wild-type progeny. Wild-type females mated with GM males, and all of the resulting female offspring died before reaching reproductive age. Because the number of fertile females fell in each subsequent generation, the population became 100% transgenic in roughly 8 weeks and went extinct roughly 10 weeks after the initial release. Thus, GM moths have the potential not only to reverse Bt resistance in their species, but also to eliminate the use of Bt crops.9
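The logic of the female-lethal release can be captured in a toy simulation. The sketch below is not Oxitec’s model: the population size, release rate, and brood size are invented, and inheritance is simplified by assuming every offspring of a transgenic father carries the dominant transgene. Even so, it reproduces the characteristic collapse in fertile females.

```python
import random

# Toy model of a female-lethal transgene release. All parameters are
# invented; inheritance is simplified (every offspring of a transgenic
# father is assumed to carry the dominant transgene).
random.seed(1)
wild_males, carrier_males, females = 100, 0, 100
WEEKLY_RELEASE = 500        # GM males released each generation
BROOD_SIZE = 4              # offspring per female

for generation in range(1, 20):
    males = wild_males + carrier_males + WEEKLY_RELEASE
    p_gm_father = (carrier_males + WEEKLY_RELEASE) / males
    next_wild, next_carrier, next_females = 0, 0, 0
    for _ in range(females):
        gm_father = random.random() < p_gm_father
        for _ in range(BROOD_SIZE):
            is_male = random.random() < 0.5
            if gm_father:
                if is_male:
                    next_carrier += 1   # sons survive as carriers
                # daughters of transgenic fathers die (no tetracycline)
            elif is_male:
                next_wild += 1
            else:
                next_females += 1
    wild_males, carrier_males, females = next_wild, next_carrier, next_females
    print(f"generation {generation}: {females} fertile females")
    if females == 0:
        break
```

Because released and carrier males dominate the mating pool, the fraction of surviving daughters shrinks each generation until the caged population crashes, mirroring the ~10-week extinction reported above.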

IPM offers a path to more sustainable food production as the Earth’s population grows beyond the bounds of available resources. Weaver ants have proved to be efficient and cost-effective crop defenders, while new research applying GM technology to diamondback moths has shown major promise in reducing targeted pest populations and reversing their resistance to Bt. These two examples clearly illustrate the potential of IPM for pest management in the near future.

Ada Bielawski ‘18 is a sophomore in Mather House, concentrating in Integrative Biology.

Works Cited

  1. Malthus, T.R. An Essay on the Principle of Population; J. Johnson: London, 1798, 6-11.
  2. Godfray, H.C.J. et al. Food Security: The Challenge of Feeding 9 Billion People. Science 2010, 327, 812-818.
  3. U.S. Environmental Protection Agency. Integrated Pest Management (IPM) Principles. http://www.epa.gov/pesticides/factsheets/ipm.htm (accessed Oct. 4, 2015).
  4. Offenberg, J. Review: Ants as tools in sustainable agriculture. J. Appl. Ecol. 2015, 52, 1197-1205.
  5. Van Mele, P. A historical review of research on the weaver ant Oecophylla in biological control. Agric. For. Entomol. 2008, 10, 13-22.
  6. Pennisi, E. Tiny ant takes on pesticide industry. Science [Online], Aug. 30, 2015. http://news.sciencemag.org/plants-animals/2015/08/tiny-ant-takes-pesticide-industry (accessed Oct. 9, 2015).
  7. Offenberg, J. et al. Observations on the Ecology of Weaver Ants (Oecophylla smaragdina Fabricius) in a Thai Mangrove Ecosystem and Their Effect on Herbivory of Rhizophora mucronata Lam. Biotropica. 2004, 3, 344-351.
  8. Peng, R.K., et al. Implementing ant technology in commercial cashew plantations. Australian Government Rural Industries Research and Development Corporation. 2004, 1-72.
  9. Harvey-Samuel, T. et al. Pest control and resistance management through release of insects carrying a male-selecting transgene. BMC Biol. 2015, 13, 49.
  10. The Asian Vegetable Research and Development Center. Diamondback Moth Management. 1986, x. http://pdf.usaid.gov/pdf_docs/pnaav729.pdf (accessed Oct. 7, 2015).
  11. University of California San Diego. What is Bt. http://www.bt.ucsd.edu/what_is_bt.html (accessed Oct. 4, 2015).
  12. University of California San Diego. How does Bt Work.  http://www.bt.ucsd.edu/how_bt_work.html (accessed Oct. 4, 2015).
  13. University of California San Diego. Bt Safety. http://www.bt.ucsd.edu/bt_safety.html (accessed Oct. 4, 2015).
  14. Oxitec. Press Release- Oxitec announces breakthrough in GM insect technology for agricultural pest control. http://www.oxitec.com/press-release-oxitec-announces-breakthrough-in-gm-insect-technology-for-agricultural-pest-control/ (accessed Oct. 4, 2015).

Earth’s Missiles, Ready to Go?

by Eesha Khare

In 1991, an unusual phenomenon was observed following the volcanic eruption of Mount Pinatubo in the Philippines. After nearly 20 million tons of sulfur dioxide were launched into the stratosphere1—the second-largest eruption of the twentieth century—global temperatures dropped temporarily by 1°F. Amid the large-scale destruction, it seemed the Earth was fighting back.

The Pinatubo eruption was a learning opportunity for scientists worldwide: they realized that by manipulating various factors in the Earth’s environment, they could counteract the climate change slowly overtaking the planet. Scientists have been working on ways to modify weather conditions since the 1940s, when, in the context of the Cold War, both the US and the Soviet Union developed techniques such as seeding clouds with substances that force more rain, creating advantageous conditions for battle and helping agriculture in dry regions.2

This was the birth of geoengineering, or climate engineering, in which artificial modifications of the Earth’s climate systems are made in response to changing climate conditions.3 Geoengineering is focused on two main areas: carbon capture and solar radiation management. Since its advent, geoengineering has become a hot, controversial topic, as the risks and rewards of geoengineering solutions are slowly being detailed. On one hand, geoengineering solutions offer a promising approach to artificially reversing recent climate trends, especially in light of the Pinatubo eruption. Yet on the other hand, these same solutions present a number of risks regarding the side effects and controllability of geoengineering. As we move into the future, the need to counteract increasing climate disturbances is becoming even more pressing, making our search for a solution all the more important.

TECHNOLOGY IN BRIEF

As previously stated, geoengineering solutions can be broken into two main areas: carbon capture and solar radiation management. Within each area, the stages of research are broken down into theory and modeling, subscale field-testing, and low-level climatic intervention. Of these, the latter two stages are seldom reached.3

Carbon capture techniques remove carbon dioxide from the atmosphere, thereby counteracting the carbon dioxide emissions that drive the greenhouse effect. At the simplest level, there is a movement to encourage increased planting of trees, termed afforestation, so that trees consume carbon dioxide through photosynthesis. While initially economical and practical, afforestation would not produce very large reductions in temperature. In a comprehensive 1996 study, researchers found that the average maximum carbon sequestration rate would be between 1.1 and 1.6 gigatons per year, a mere 11 to 16% of the 9.9 gigatons per year currently released into the atmosphere.4 On top of that, annual sequestration rates would vary from year to year, as they depend heavily on weather conditions. Furthermore, the location of tree planting is critical, because forests absorb incoming solar radiation: when planted at high latitudes, trees can actually lead to net climate warming.5

Some other techniques have focused on re-engineering plant life to capture carbon. These include biochar, charring biomass and burying it so that its carbon content is kept in the soil, and bio-energy carbon capture, or growing biomass and then burning it to capture energy while storing the carbon dioxide. Many proposals have also focused on ocean life, particularly the populations of phytoplankton that are responsible for nearly half of the carbon fixation in the world.6 Ocean fertilization, or adding iron to parts of the ocean to promote phytoplankton growth and subsequent carbon dioxide uptake, and ocean alkalinity enhancement, or adding limestone and other alkaline rocks to enhance carbon capture and counteract increasing ocean acidification, have also been proposed. However, a key limitation is that results from small-scale ocean fertilization trials have not translated predictably to larger scales.7

Solar radiation management is another broad category that has gained prominence over the past few years. Here, various measures are used to reflect some of the Sun’s energy back into space and thereby prevent the Earth’s temperature from rising. Albedo engineering, the main subset of this category, focuses on enhancing the albedo, or the fraction of short-wave solar energy reflected back into space. Harvard Professor David Keith is a strong advocate of achieving albedo engineering by launching sulfate particles above the ozone layer, mimicking the eruption and effects of Pinatubo. Nearly one million tons of SO2 would have to be delivered every year using balloons and rockets in order to see an effect. While the sulfur does not reduce the amount of carbon dioxide in the atmosphere, it helps offset its effects by reflecting solar radiation away from the Earth. The cost is also quoted as relatively inexpensive, at only $25-50 billion a year.8 Another solar radiation management technique is cloud whitening, in which turbines spray a fine mist of salt particulates into low-lying clouds above the oceans, making the clouds whiter and increasing the scattering of light. While this technique would change precipitation patterns, it localizes the intervention to the oceans,9 unlike the sulfate launch, which targets the whole stratosphere.

Former Harvard physicist Russell Seitz proposes whitening the water itself by trapping microscopic air bubbles in the ocean, increasing the amount of sunlight the water scatters back out. The resulting “undershine” is similar in spirit to earlier proposals to brighten roofs, offsetting warming by increasing reflectivity. His solution poses a series of technical challenges but still highlights the core principles of the geoengineering movement.

One of the main challenges in such work is that testing climate engineering solutions at a small scale is very difficult, if not nearly impossible. This raises the question: can we really test geoengineering? Cambridge University Professor Hugh Hunt works to answer that exact question; in 2011, as part of the SPICE (Stratospheric Particle Injection for Climate Engineering) consortium in the UK, he attempted to launch a small-scale aerosol-dispersal experiment over Norfolk.10 Even this experiment was met with high levels of resistance and was ultimately stopped before it could be carried out. Since then, a number of other interesting projects have been developed, such as an ice-protector textile laid over a small glacier in the Swiss Alps to reflect light and slow ice melt.11

LEVELS OF RISK

The radical and far-reaching geoengineering solutions presented in this article have raised a number of technical and political issues among researchers and the public. For example, while sulfate aerosols would last for only a couple of years, concerns have been raised about side effects such as acid rain and ozone damage. Beyond the technical problems, geoengineering has also become extremely controversial in the political sphere, as leaders are forced to confront the debate over whether these techniques should be used at all. From a policy standpoint, opponents of geoengineering fear that introducing such solutions would disincentivize governments and corporations from reducing anthropogenic carbon dioxide emissions, the root of the problem.11 They argue that “technofixes,” or technical solutions to social problems, are falsely appealing, as they do not address the social and political conditions causing those problems.12 Further, they worry that the low cost and speed of implementation may lead some countries to adopt solar radiation management without consulting their neighbors, thereby indirectly setting international policy through national measures. It is clear that, with the increasing likelihood that geoengineering solutions will need to be implemented, developing a new national and international policy framework is necessary before further action.

LOOKING FORWARD

It is clear that, with increasing fluctuations in the Earth’s climate, the rapid draining of natural resources, and the growing scale of globalization, solutions to preserve and protect the Earth’s environment are not just desirable but necessary. While geoengineering presents promising solutions, it also raises a number of economic, political, and environmental concerns that will likely prevent its full-scale adoption. While many geoengineering solutions will likely remain contested and therefore underdeveloped, a greater focus on specific carbon-capture technologies, such as chemically catalyzed reactions for carbon fixation, could achieve the same climate impact without the side effects. Although challenges abound, the development of carbon capture technology must be the next step in the fight to save this planet.

Eesha Khare ‘17 is a junior in Leverett House, concentrating in Engineering Sciences.

Works Cited

  1. Diggles, M. The Cataclysmic 1991 Eruption of Mount Pinatubo, Philippines. U.S. Geological Survey Fact Sheet 113-97. http://pubs.usgs.gov/fs/1997/fs113-97/
  2. Victor, D.G. et al. The Geoengineering Option. Foreign Affairs. Council on Foreign Relations. March/April 2009. http://fsi.stanford.edu/sites/default/files/The_Geoengineering_Option.pdf
  3. What is Geoengineering? Oxford Geoengineering Programme. 2015. http://www.geoengineering.ox.ac.uk/what-is-geoengineering/what-is-geoengineering/
  4. Land Use, Land-Use Change and Forestry. Intergovernmental Panel on Climate Change. http://www.ipcc.ch/ipccreports/sres/land_use/index.php?idp=151
  5. Arora, V.K.; Montenegro, A. Nat. Geoscience 2011, 4, 514-518.
  6. Chisholm, S. W. et al. Science. 2001, 294(5541), 309-310.
  7. Strong, A. et al. Nat. 2009, 461, 347-348.
  8. Crutzen, P. J. Climatic Change 2006, 77(3–4), 211–220.
  9. Morton, O. Nat. 2009, 458, 1097-1100.
  10. Specter, M. The Climate Fixers. The New Yorker [Online]. May 14, 2012.  http://www.newyorker.com/magazine/2012/05/14/the-climate-fixers
  11. Ming, T. et al. Renewable and Sustainable Energy Reviews 2014, 31, 792-834.
  12. Hamilton, C. Geoengineering Is Not a Solution to Climate Change. Scientific American [Online]. March 10, 2015. http://www.scientificamerican.com/article/geoengineering-is-not-a-solution-to-climate-change/

Fetal Microchimerism

by Grace Chen

In Greek mythology, a chimera was a grotesque monster formed of a conglomeration of different animal parts….

With the head of a goat, body of a lion, and tail of a snake, the chimera was a fearsome but reassuringly fictional concept. Today, however, scientists know that real-life chimeras do indeed exist. The term is now used to describe a number of biological phenomena that produce organisms with cells from multiple different individuals.1 Far from being monsters, artificial chimeras include many of the GMO crops that are feeding the world’s growing population, as well as genetically engineered bacteria that produce insulin and other key drugs in marketable quantities.2 Research in human developmental biology is now showing, however, that we ourselves may be naturally occurring chimeras.

The phenomenon of fetal microchimerism describes the presence of living cells from a different individual in the body of a placental mammal. The placenta generally serves as a bridge between the fetus and the mother for the exchange of nutrients and wastes. But that is not all that crosses this bridge—fetal and maternal cells can pass between the two organisms intact. While maternal cells do end up in the fetus, significantly more fetal cells are transferred to the mother.3 The result is that the mother carries a small number of foreign cells belonging to her fetus within her body—hence the name “microchimerism.” While these non-maternal cells are few in comparison to the total number of maternal cells, evidence suggests that these transplanted cells can remain long after the end of gestation. In fact, derivative fetal cells have been found in the mother’s body up to 27 years after pregnancy.4

From an evolutionary standpoint, selective pressures favor traits that increase reproductive fitness of the individual; because the mother and fetus share so much genetic material, these invasive cells ought to share the same interests as the mother’s cells in promoting mutual welfare. Yet, pregnancy in placental mammals can also be seen as a tug-of-war between fetal and maternal interests, as finite biological resources must be allocated between the two organisms. Effects caused by these microchimeric cells that favor the fetus’ well-being, however, might be detrimental to the mother’s welfare, or to the welfare of future offspring.5 This creates an interesting paradox for evolutionary biologists: what is the nature of the interaction between these cells that ought to be cooperative but also conflicting?

Answering such questions will require further research on this poorly understood phenomenon. One easy way that scientists have been able to detect and quantify the presence of non-maternal cells in the mother’s body is by searching for the presence of Y chromosomes, found only in male cells, in the mother’s body. Presumably, any Y chromosomes would indicate the presence of intact cells from a prior male fetus, as female sex chromosomes are exclusively X chromosomes.6 Though feto-maternal microchimerism is the most common source of these invader cells, several hypotheses have also been proposed to explain why Y chromosome microchimerism has also been found in about a fifth of women who have not had a male fetus. Some of these alternative explanations include spontaneously aborted male zygotes, or chimeric cells from an older male sibling acquired in utero from their own mother.7

A common technique for hunting down the location of foreign cells is fluorescent in situ hybridization (FISH), well-known to most genetics students. After a tissue sample is isolated and prepared, nucleic acid probes specific to genes on the Y chromosome are added.8 These probes are attached to a fluorescent dye, providing a visual cue of where they bind and thus where Y chromosomes are found.9 Increasingly refined techniques now allow more specific searches; for instance, fluorescent probes can be used to identify microchimeric cells with specific allele differences from maternal cells.
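At its core, probe binding is sequence complementarity, which can be caricatured as a substring search. The sketch below uses invented sequences (a real Y-specific probe and a real genomic region are far longer, and hybridization chemistry is nothing like string matching), but it conveys why a probe lights up only where its complement occurs.

```python
# Caricature of FISH probe binding as a substring search.
# The probe and sample sequences below are invented for illustration.
def reverse_complement(seq):
    pairs = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

y_probe = "ATGCCTGA"               # hypothetical Y-chromosome-specific probe
sample = "TTCAGGCATTTGGACCAATCGG"  # hypothetical read from maternal tissue

target = reverse_complement(y_probe)  # the strand the probe hybridizes to
position = sample.find(target)
if position >= 0:
    print(f"probe binds at position {position}: Y-chromosome signal detected")
else:
    print("no binding: no Y-chromosome signal in this read")
```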

WHERE DO THESE TINY INVADERS GO?

Invading fetal cells are commonly found in the bloodstream, but they can travel much further than that. Fetal microchimerism has been recorded in the liver, bone marrow, thyroid, heart, and more. A recent study by the Fred Hutchinson Cancer Research Center found that more than 60 percent of autopsied brains contained copies of DNA from another individual.10 There is also interesting evidence that these undifferentiated fetal cells can serve as stem cells within the mother’s body—a study in mice suggested that fetal cells can develop into mature neurons within the mother’s brain.11 These invader cells, it seems, can make themselves fully at home in the host body. The locations where fetal cells tend to settle down may yet reveal more about the evolutionary pressures shaping this phenomenon.

Thus, the presence of microchimeric fetal cells in the mother’s body is now known to be widespread and long-lasting, but their effects remain ambiguous. Conflicting studies have linked the presence of fetal cells to both improved and worsened health outcomes for the mother, depending on the disease and scenario. A richer understanding of these effects could shed light not only on key issues of women’s health, but also, more broadly, on the response of the immune system to invaders.

Grace Chen ‘19 is a freshman in Holworthy Hall.

Works Cited

  1. Bowen, R.A. Mosaicism and Chimerism. Colorado State University Hypertexts for Biomedical Sciences: General and Medical Genetics [Online], August 5, 1988, p 2. http://arbl.cvmbs.colostate.edu/hbooks/genetics/medgen/chromo/mosaics.html (accessed Oct. 1, 2015).
  2. Simpson, T. GMO: Part 2 – The Promise, the Fear, Labeling, Frankenfoods. Your Doctor’s Orders [Online], May 15, 2013, p 1-3. http://www.yourdoctorsorders.com/2013/05/gmo-part-2the-promise-fear-frankenfoods/ (accessed Oct. 1, 2015).
  3. Boddy, A. M. et al. Bioessays 2015, 37, 1106–1118.
  4. Bianchi D.W. et al. Proc Natl Acad Sci U S A  1996, 93, 705–708.
  5. Adams, K. M. et al. Journal of American Med. Assn. 2004, 291, 1127-1131.
  6. Kean, S. You and Me. Psychology Today [Online], March 11, 2013, p 1-4. https://www.psychologytoday.com/articles/201303/the-you-in-me (accessed Oct. 1, 2015).
  7. O’Connor, C. Nature Education. 2008, 1, 171.
  8. Chan, W.F.N. et al. PLoS ONE. 2012, 7.
  9. Zeng, X.X. et al. Stem Cells and Development. 2010, 19, 1819-1830.
  10. Centers for Disease Control and Prevention. http://www.cdc.gov/parasites/naegleria/ (accessed Oct. 4, 2015).

 

Artificial Superintelligence: The Coming Revolution

by William Bryk

The science fiction writer Arthur C. Clarke famously wrote, “Any sufficiently advanced technology is indistinguishable from magic.” Yet humanity may be on the verge of something much greater, a technology so revolutionary that it would be indistinguishable not merely from magic, but from an omnipresent force, a deity here on Earth. It’s known as artificial superintelligence (ASI), and, although it may be hard to imagine, many experts believe it could become a reality within our lifetimes.

We’ve all encountered artificial intelligence (AI) in the media. We hear about it in science fiction movies like “Avengers: Age of Ultron” and in news articles about companies such as Facebook analyzing our behavior. But artificial intelligence has so far remained on the periphery of our lives, nothing as revolutionary to society as portrayed in films.

In recent decades, however, serious technological and computational progress has led many experts to acknowledge a seemingly inevitable conclusion: within a few decades, artificial intelligence could progress from the machine intelligence we currently understand to an unbounded intelligence unlike anything even the smartest among us could grasp. Imagine a mega-brain, electric not organic, with an IQ of 34,597. With perfect memory and unlimited analytical power, this computational beast could read all of the books in the Library of Congress in the first millisecond after you press “enter,” and then integrate all that knowledge into a comprehensive analysis of humanity’s 4,000-year intellectual journey before your next blink.

The history of AI is a similar story of exponential growth in intelligence. In 1936, Alan Turing published his landmark paper on Turing machines, laying the theoretical framework for the modern computer. He introduced the idea that a machine composed of simple switches—on’s and off’s, 0’s and 1’s—could think like a human and perhaps outmatch one.1 Only 75 years later, in 2011, IBM’s AI bot Watson sent shocks around the world when it beat two human competitors on Jeopardy!.3 Recently, big data companies such as Google, Facebook, and Apple have invested heavily in artificial intelligence and helped support a surge in the field. Every time Facebook tags your friend automatically, or Siri correctly interprets your words even when you yell at her incensed, is a testament to how far artificial intelligence has come. Soon, you will sit in the backseat of an Uber without a driver, Siri will listen and speak more eloquently than you do (in every language), and IBM’s Watson will analyze your medical records and become your personal, all-knowing doctor.2

While these soon-to-come achievements are tremendous, there are many who doubt the impressiveness of artificial intelligence, attributing their so-called “intelligence” to the intelligence of the human programmers behind the curtain. Before responding to such reactions, it is worth noting that the gradual advance of technology desensitizes us to the wonders of artificial intelligence that already permeate our technological lives. But skeptics do have a point. Current AI algorithms are only very good at very specific tasks. Siri might respond intelligently to your requests for directions, but if you ask her to help with your math homework, she’ll say “Starting Facetime with Matt Soffer.” A self-driving car can get you anywhere in the United States but make your destination the Gale Crater on Mars, and it will not understand the joke.

This is part of the reason AI scientists and enthusiasts consider Human-Level Machine Intelligence (HLMI)—roughly defined as a machine intelligence that outperforms humans in all intellectual tasks—the holy grail of artificial intelligence. In 2012, a survey was conducted to analyze the wide range of predictions made by artificial intelligence researchers for the onset of HLMI. Participating researchers were asked by what year they would assign a 10%, 50%, and 90% chance of achieving HLMI (assuming human scientific activity encounters no significant negative disruption), or to check “never” if they felt HLMI would never be achieved. The median of the years given at 50% confidence was 2040; at 90% confidence, it was 2080. Around 20% of researchers were confident that machines would never reach HLMI (these responses were not included in the medians). This means that nearly half of the researchers who responded are quite confident HLMI will be created within just 65 years.4
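For readers unfamiliar with how such surveys are summarized, the sketch below computes a median in the same way. The response years are invented stand-ins, not data from the study cited above.

```python
from statistics import median

# Invented stand-in responses: the year each expert assigns a 50% chance
# of HLMI. The real data are in Muller & Bostrom (ref. 4).
years_50pct = [2030, 2035, 2040, 2040, 2045, 2050, 2075, 2100]

# "Never" responses are excluded before taking the median, as in the survey.
print(f"median 50%-confidence year: {median(years_50pct)}")
```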

HLMI is not just another AI milestone to which we would eventually become desensitized. It is unique among AI accomplishments, a crucial tipping point for society, because once we have a machine that outperforms humans in every intellectual task, we can transfer the task of inventing to the computers themselves. The British mathematician I.J. Good said it best: “The first ultraintelligent machine is the last invention that man need ever make ….”5

There are two main routes to HLMI that many researchers view as the most efficient. The first method of achieving a general artificial intelligence across the board relies on complex machine learning algorithms. These machine learning algorithms, often inspired by neural circuitry in the brain, focus on how a program can take inputted data, learn to analyze it, and give a desired output. The premise is that you can teach a program to identify an apple by showing it thousands of pictures of apples in different contexts, in much the same way that a baby learns to identify an apple.6

The second group of researchers might ask why we should go to all this trouble developing algorithms when we have the most advanced computer known in the cosmos sitting right on top of our shoulders. Evolution has already designed a human-level machine intelligence: a human! The goal of “Whole Brain Emulation” is to copy or simulate the brain’s neural networks, taking advantage of nature’s millions of painstaking years of selection for cognitive capacity.7 A neuron is like a switch—it either fires or it doesn’t (see the cartoon below). If we could image every neuron in a brain, and then simulate that data on a computer interface, we would have a human-level artificial intelligence. We could then add more and more neurons, or tweak the design, to maximize capability. This is the concept behind both the White House’s BRAIN Initiative8 and the EU’s Human Brain Project.9 In reality, these two routes to human-level machine intelligence—algorithmic and emulation—are not black and white. Whatever technology achieves HLMI will probably be a combination of the two.
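The “neuron as a switch” idea can be caricatured in a few lines of code: a unit sums its weighted inputs and fires if the total crosses a threshold. This is a textbook threshold unit, not how either brain project actually models neurons; the weights, threshold, and inputs are arbitrary.

```python
# Cartoon of a neuron as a switch: it fires (1) if the weighted sum of
# its inputs reaches a threshold, and stays silent (0) otherwise.
# Weights, threshold, and inputs are arbitrary illustrative values.
def neuron(inputs, weights, threshold=1.0):
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

weights = [0.4, 0.9, -0.5]          # synaptic strengths (negative = inhibitory)
print(neuron([1, 1, 0], weights))   # 0.4 + 0.9 = 1.3 -> fires (1)
print(neuron([1, 0, 1], weights))   # 0.4 - 0.5 = -0.1 -> silent (0)
```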

Once HLMI is achieved, the rate of advancement could increase very quickly. In that same survey of AI researchers, 10% of respondents believed artificial superintelligence (roughly defined as an intelligence that greatly surpasses every human in most professions) would be achieved within two years of HLMI; half believed it would take only 30 years or less.4

Why are these researchers convinced that HLMI would lead to such a greater degree of intelligence so quickly? The answer involves recursive self-improvement. An HLMI that outperforms humans in all intellectual tasks would also outperform humans at creating smarter HLMIs. Thus, once HLMIs truly think better than humans, we will set them to work on themselves, improving their own code or designing more advanced neural networks. Once a more intelligent HLMI is built, the less intelligent HLMIs will set the smarter ones to build the next generation, and so on. Since computers act orders of magnitude more quickly than humans, the exponential growth in intelligence could occur unimaginably fast. This runaway intelligence explosion is called a technological singularity.10 It is the point beyond which we cannot foresee what this intelligence would become.
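A toy model shows the shape of this feedback loop: if each machine generation is a fixed factor smarter and designs its successor proportionally faster, capability explodes in finite time. Every number below is arbitrary; this is an illustration of the dynamic, not a forecast.

```python
# Toy model of recursive self-improvement. All parameters are arbitrary.
intelligence = 1.0     # 1.0 = human level (HLMI)
elapsed = 0.0          # months since HLMI
design_time = 12.0     # months the first HLMI needs to design a successor

for generation in range(1, 11):
    elapsed += design_time
    intelligence *= 1.5     # each generation is 50% smarter...
    design_time /= 1.5      # ...and designs the next one 50% faster
    print(f"gen {generation:2d}: {intelligence:5.1f}x human at month {elapsed:5.1f}")

# The elapsed time converges toward 36 months even as intelligence grows
# without bound: the characteristic "intelligence explosion" shape.
```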

Here is a reimagining of a human-computer dialogue from the short story collection “Angels and Spaceships”:11 The year is 2045. On a bright sunny day, a private group of computer hackers working in a Silicon Valley garage has just completed a program that simulates a massive neural network on a computer interface. They have come up with a novel machine learning algorithm and want to try it out. They give this newborn network the ability to learn and redesign itself with new code, along with internet access so it can search for text to analyze. The college teens start the program, then go out to Chipotle to celebrate. Back at the house, while walking up the pavement to the garage, they are surprised to see FBI trucks approaching their street. They rush inside and check the program. On the terminal window, the computer has already outputted “Program Complete.” A programmer types, “What have you read?” and the program responds, “The entire internet. Ask me anything.” After deliberating for a few seconds, one of the programmers types, hands trembling, “Do you think there’s a God?” The computer instantly responds, “There is now.”

This story demonstrates the explosive nature of recursive self-improvement. Yet, many might still question the possibility of such rapid progression from HLMI to superintelligence that AI researchers predict. Although we often look at past trends to gauge the future, we should not do the same when evaluating future technological progress. Technological progress builds on itself. It is not just the technology that is advancing but the rate at which technology advances that is advancing. So while it may take the field of AI 100 years to reach the intelligence level of a chimpanzee, the step toward human intelligence could take only a few years. Humans think on a linear scale. To grasp the potential of what is to come, we must think exponentially.10

Another understandable doubt is that it’s hard to believe, even given unlimited scientific research, that computers will ever be able to think like humans—that 0’s and 1’s could have consciousness, self-awareness, or sensory perception. It is certainly true that these dimensions of self are difficult to explain, if not currently totally unexplainable by science—it is called the hard problem of consciousness for a reason! But assuming that consciousness is an emergent property—the result of a billion-year evolutionary process starting from the first self-replicating molecules, which were themselves the result of the molecular motions of inanimate matter—then computer consciousness does not seem so crazy. If we who emerged from a soup of inanimate atoms cannot believe that inanimate 0’s and 1’s could lead to consciousness no matter how intricate the setup, we should try telling that to the atoms. Machine intelligence is really just a switch of hardware, from organic tissue to much faster and more efficient silicon and metal. Supposing consciousness is an emergent property on one medium, why can’t it be on another?

Thus, under the assumption that superintelligence is possible and may happen within a century or so, the world is reaching a critical point in history. First were atoms, then organic molecules, then single-celled organisms, then multicellular organisms, then animal neural networks, then human-level intelligence limited only by our biology, and, soon, unbounded machine intelligence. Many feel we are now living at the beginning of a new era in the history of the cosmos.

The implications of this intelligence for society would be far-reaching—in some cases, very destructive. Political structure might fall apart if we knew we were no longer the smartest species on Earth, if we were overshadowed by an intelligence of galactic proportions. A superintelligence might view humans as we do insects— and we all know what humans do to bugs when they overstep their boundaries! This year, many renowned scientists, academics, and CEOs, including Stephen Hawking and Elon Musk, signed a letter, which was presented at the International Joint Conference on Artificial Intelligence. The letter warns about the coming dangers of artificial intelligence, urging that we should be prudent as we venture into the unknowns of an alien intelligence.12

When the AI researchers were asked to assign probabilities to the overall impact of ASI on humanity in the long run, the mean values were 24% “extremely good,” 28% “good,” 17% “neutral,” 13% “bad,” and 18% “extremely bad” (existential catastrophe).4 18% is not a statistic to take lightly.

Although artificial superintelligence surely comes with its existential threats that could make for a frightening future, it could also bring a utopian one. ASI has the capability to unlock some of the most profound mysteries of the universe. It will discover in one second what the brightest minds throughout history would need millions of years to even scrape the surface of. It could demonstrate to us higher levels of consciousness or thinking that we are not aware of, like the philosopher who brings the prisoners out of Plato’s cave into the light of a world previously unknown. There may be much more to this universe than we currently understand. There must be, for we don’t even know where the universe came from in the first place! This artificial superintelligence is a ticket to that understanding. There is a real chance that, within a century, we could bear witness to the greatest answers of all time. Are we ready to take the risk?

William Bryk ‘19 is a freshman in Canaday Hall.

Works Cited

  1. Turing, A. On Computable Numbers, with an Application to the Entscheidungsproblem. Oxford Journal. 1936, 33.
  2. Plambeck, J. A Peek at the Future of Artificial Intelligence and Human Relationships. The New York Times, Aug. 7, 2015.
  3. Markoff, J. Computer Wins on ‘Jeopardy!’: Trivial, It’s Not. The New York Times, Feb. 16, 2011.
  4. Müller, V. C.; Bostrom, N. Future Progress in Artificial Intelligence: A Survey of Expert Opinion. Synthese Library. 2014, 9-13.
  5. Good, I. J. Speculations Concerning the First Ultraintelligent Machine. Academic Press. 1965, 33.
  6. Sharpe, L. Now You Can Turn Your Photos Into Computerized Nightmares with ‘Deep Dream’. Popular Science, July 2, 2015.
  7. Bostrom, N. Superintelligence Paths, Dangers, Strategies; Oxford University Press: Oxford, 2014; pp. 30-36.
  8. Brain Initiative. The White House [Online], Sept. 30, 2014, whitehouse.gov/share/brain-initiative (accessed Oct. 20, 2015).
  9. Human Brain Project [Online], humanbrainproject.eu (accessed Oct. 21, 2015).
  10. Kurzweil, R. The Singularity is Near; Penguin Books: England, 2005; pp. 10-14.
  11. Brown, F. “Answer.” Angels and Spaceships; Dutton: New York, 1954.
  12. Pagliery, J. Elon Musk and Stephen Hawking warn over ‘killer robots’. CNNMoney, July 28, 2015.

“Invaders from Earth!”: Exploring the Possibilities of Extraterrestrial Colonization

by J. Rodrigo Leal

We’ve all seen films or heard stories about the “Invaders from Mars”: aliens coming from other galaxies to colonize Earth and take advantage of its bountiful natural resources. But what if the story happened the other way around? Organizations like the National Aeronautics and Space Administration (NASA) and private companies like SpaceX have looked seriously at the idea of space exploration for colonization, particularly the possibility of colonizing artificial satellites, the Moon, or even our planetary neighbor, Mars. With worries that the habitability of Earth may be in jeopardy in the not-too-distant future—due to the effects of environmental degradation, lack of resources, and climate change—individuals and institutions are exploring the modern frontiers of space technology that could soon transform humankind into the “Invaders from Earth.”

FROM SCIENCE FICTION TO SCIENCE REALITY

Extraterrestrial colonization seems like a concept straight out of a sci-fi movie—Christopher Nolan’s 2014 science fiction drama Interstellar features protagonist Matthew McConaughey trying to find a new home for mankind as food shortages and crop failures threaten society’s existence on Earth. Surprisingly, though, the idea of space colonization has been around for quite some time. In 1869, The Atlantic Monthly published a short story by Edward Everett Hale entitled “The Brick Moon,” in which an artificial satellite is accidentally launched into space with people still inside it, leading to the establishment of the first space colony.1 The idea of a “space colony” eventually took physical shape in 1971 with the Soviet Salyut program, which produced the first crewed space station in history.

In the mid-to-late 1970s, not long after the United States put astronauts on the Moon in the famous Apollo missions, scientists and engineers began to seriously consider the idea of extraterrestrial colonies living in artificial space habitats just outside of Earth. One of the first, and indeed one of the most influential, scientific papers on the topic was “The Colonization of Space” by Dr. Gerard K. O’Neill of Princeton University, published in the popular magazine Physics Today in 1974. Through careful calculations and consideration of the physics and economics behind the construction of a space habitat, Dr. O’Neill concluded that mankind could “build pleasant, self-sufficient dwelling places in space within the next two decades, solving many of Earth’s problems.”2 As a result of this study, the NASA Ames Research Center began to conduct space settlement studies in support of the NASA Ames Space Settlement Contest, a space colony design contest for primary and secondary school students.3 The study begins with the following thought-provoking questions: “We have put men on the Moon. Can people live in space? Can permanent communities be built and inhabited off the Earth?”3 And so commenced NASA’s formal research into the idea of sustaining human civilization outside of our planet.

LOOKING FOR A NEW HOME

Why would it even be necessary to have mankind live in colonies outside of Earth? Perhaps the most compelling reason to explore the possibility of extraterrestrial colonization has to do with our own environment. As the Earth’s population continues to grow—it is expected to balloon to over 11 billion by 2100—and as resources like freshwater and food become more scarce, many thinkers fear our planet will become far less capable of sustaining human life. Famed scientist Stephen Hawking has even delivered a lecture titled “Why We Should Go Into Space,” urging society to continue space exploration to ensure humanity’s survival. As Dr. Hawking states in one of his lectures on space colonization, “if the human race is to continue for another million years, we will have to boldly go where no one has gone before”.4

Climate change is also threatening to make Earth much less hospitable in the future. Average land and sea surface temperatures are increasing, sea levels around the globe are rising, atmospheric carbon dioxide levels have reached all-time highs, and precipitation patterns are shifting.5 In the long term, these changes could lead to detrimental effects on human health, crop production, and species survival, among other major environmental catastrophes. On top of this, the risk of nuclear war leaving Earth inhospitable, or the threat of a major asteroid consigning us to the same fate as the dinosaurs, are further reasons why advocates of extraterrestrial colonization suggest that we, as a society, should start looking at a Plan(et) B.

MODERN FRONTIERS IN SPACE COLONIZATION

Colonizing other Planets

American company SpaceX (short for Space Exploration Technologies Corporation), led by business mogul Elon Musk, was founded in 2002 with the primary end goal of enabling the human colonization of Mars.6 By designing, manufacturing, and launching space transportation products, SpaceX has become one of the modern leaders in spaceflight. Since its founding, SpaceX has been the first private company to successfully launch and return a spacecraft from low-Earth orbit, as well as the first private company to send a spacecraft to the International Space Station. This is changing the way society engages in space exploration, moving the business of space travel from government agencies to private industry. With these developments in space technology occurring within the private sector, the possibility of sending humans to other planets is one step closer to becoming a reality.

Other companies, like the Mars One corporation, are joining the quest to tackle the Mars problem as well. With the goal of establishing a “permanent human settlement on Mars,” Mars One is seeking capital investment, manufacturing partners, intellectual advisors, and other critical partnerships that could make the settlement of Mars a reality within the 21st century.7

The Radical Idea of Terraforming

In the curious world of extraterrestrial colonization, there is another conceptual framework for colonizing planetary bodies, one that involves the complete transformation of entire planets: terraforming. Terraforming a planet would require significant human engineering and manual alteration of environmental conditions: modifying the planet’s climate, ensuring an atmosphere with sufficient oxygen, altering rugged landscapes, and promoting vegetation and natural biological growth. Scientists have deliberated making a planet like Mars habitable to human life through various transformations of the environment on a planetary scale. One possible scenario would be to introduce greenhouse gases into the atmosphere of Mars, leading to steady increases in surface temperature—an induced greenhouse—that would perhaps make the planet hospitable to plants within 100 to 10,000 years.8 The next step would involve carbon sequestration and the introduction of plant life. Vegetation on Mars would lead to the photosynthetic conversion of CO2 into O2, eventually rendering the Martian atmosphere “human-breathable.”8 Humans would also have to shield against ultraviolet radiation (since the Martian atmosphere is so thin), maintain a warm air temperature, and address a myriad of other complications in order to sustain a human-compatible Martian environment.8 While NASA scientist James Pollack and legendary astrophysicist Carl Sagan thought it was within the realm of human capability to engineer other planets, they strongly believed the “first step in engineering the solar system is to guarantee the habitability of the Earth.”9 In other words, science must first commit itself to the conservation of this planet before attempting to alter or flee to another.

CONCLUSION

So what’s holding us back from actually getting to Mars and starting a colony? One big reason is the sheer cost of such an operation. In 1989, President George H.W. Bush introduced the NASA Space Exploration Initiative, which would have culminated in sending humans to Mars (after landing astronauts on the Moon again). The initiative was shut down almost immediately over concerns about its cost: over $500 billion.10 Mars Direct, a plan created by the Mars Society to colonize the Red Planet through a “minimalist, live-off-the-land approach to space exploration,” is estimated to cost about $30 billion.11 While this is substantially less than what a NASA mission to Mars would cost, it is still a significant sum that requires huge investment and careful consideration of the costs and benefits of a project of this scale.

Recent breakthroughs in space exploration technology, in both the private and public sectors, are making the possibility of extraterrestrial colonization much more realistic. Space colonization is no longer merely the plot of a hit sci-fi flick: entire corporations are investing millions of dollars in the technologies and engineering advancements that could make reaching Mars possible before the end of the century. Pretty soon, instead of looking up at the night sky from Earth, perhaps we will be looking up from Mars, staring at the blue dot that we used to call “home.”

Rodrigo Leal ‘16 is a senior in Kirkland House, concentrating in Earth and Planetary Sciences.

Works Cited

  1. Hale, E. E. Atlantic Monthly 1869, 24.
  2. O’Neill, G.K.  Physics Today, 1974, 27.9, 32-40.
  3. Johnson, R.; Holbrow, C. Space Settlements: A Design Study. NASA. 1977.
  4. Hawking, S. Why We Should Go into Space; NASA’s 50th Anniversary Lecture Series, 2008.
  5. NASA. Global Climate Change. climate.nasa.gov
  6. SpaceX. spacex.com
  7. Mars One. mars-one.com
  8. McKay, C.P. et al. Nature 1991, 352, 489-496.
  9. Pollack, J.B.; Sagan, C. Arizona Univ., Resources of Near-Earth Space. 1991, 921-950.
  10. NASA. Summary of Space Exploration Initiative. history.nasa.gov
  11. The Mars Society. marssociety.org

Parasitic Cancer: Paradox and Perspective

by Audrey Effenberger

Cancer. It’s a big subject, with a dizzying array of forms and manifestations that can affect all parts of the body. As populations around the world age, cancer’s prevalence will continue to grow, and it will become more and more important to understand and treat it. One lesser-known variation is parasitic cancer. While its name may seem to combine two totally different ailments, understanding parasitic cancer can actually shed light on the concept of cancer altogether.

AN OVERVIEW

So what is cancer in the first place? On the most basic level, it’s abnormal or uncontrolled cell growth. The cell, the most fundamental unit of life, is a fantastically complicated and tightly regulated machine of DNA, RNA, protein, and all kinds of molecules in between. When any part of the system fails, the entire system can be compromised. The mechanism by which this occurs is known as oncogenesis. Mutations in proto-oncogenes (genes that normally activate some part of the cell cycle) can transform these normal genes into oncogenes, resulting in abnormal proliferation of the cell. On the other hand, damage to tumor suppressor genes means that a repressing mechanism no longer works, and the cell will fail to stop dividing or die at the appropriate times. Either kind of mutation can lead to unwanted cell growth, known as a tumor. Most cells within a tumor are clones, having originated from a single rapidly dividing cell, so the tumor can be called a clonal population.

In fact, because mutation is a random process, the likelihood of a cell incurring a critical mutation in an important gene is quite low. Additionally, cells have various enzymes to proofread and repair DNA. The immune system works to recognize markers on the cell membrane and destroy misbehaving cells. Some tumors are relatively benign. However, no system of defense mechanisms is perfect. As people age or encounter carcinogens in the environment, the rate of damage can increase. Damaged cells that go unchecked can give rise to malignant and invasive tumors that spread throughout the body by traveling through the bloodstream, a process known as metastasis.

THE SPREAD OF CANCER

Though cancer can spread throughout one’s body in this manner, it’s thought of as a largely non-contagious disease. The only way in which cancer is “transmitted,” typically, is by transmission of a pathogen that increases the likelihood of developing cancer. In this way, cancer can only be spread indirectly.

Some bacteria damage tissues and increase risk of carcinogenesis, or cancer formation. For example, the H. pylori bacterium is known to cause stomach ulcers and inflammation that increase relative risk of gastric cancer by 65%.1 Viruses are another culprit; they damage the cell’s DNA by inserting their own and disrupting key sequences or triggering inflammatory responses that lead to rapid cell division. Known oncoviruses include Epstein-Barr virus, hepatitis B and C, and the human herpesviruses.2,3

Parasites, confusingly enough, can also cause cancer, though not the “parasitic cancer” of this article’s title; for example, the Asian liver fluke is known to cause a fatal bile duct cancer.4 Again, however, all of these transmissible causes of cancer only increase risk; at most, they heighten the probability that the organism’s own cells will become cancerous.

PARASITIC CANCER: THE TRULY TRANSMISSIBLE

Parasitic cancer is defined by its cause: the transfer of cancer cells between organisms. This is comparable to metastasis, in which cancer cells migrate throughout the body; however, the new tumor originates from a tumor cell of another organism and is markedly, genetically different. As defined earlier, cancers are often clonal populations arising from a single abnormal cell. In the case of parasitic cancer, the new tumor is populated by clones of another organism’s cancer; therefore, parasitic cancer is also known as clonally transmissible cancer.

While parasitic cancers are very rare, examples can be found in a few animal species: devil facial tumor disease (DFTD),5 canine transmissible venereal tumor (CTVT),6 a Syrian hamster connective tissue sarcoma induced in the lab,7 and a form of soft-shell clam leukemia.8 Some cases of parasitic cancer have been documented in humans as well; while extremely rare, cancer cells can be transferred during organ transplants or pregnancies.9

A NEW PERSPECTIVE

Given the unique attributes of parasitic cancer, researchers can reframe their conceptual understanding of cancer and cell organization as a whole. All cells of a particular species have the same basic genetic information, but each cell may be slightly unique, just as human individuals have different eye colors or heights. We can extend the metaphor to bridge the macro- and microscopic. Every organism can be considered its own population of cells cooperating to sustain life, and most cells divide at a regular pace, correcting errors in DNA replication and preserving the overall homogeneity of the organism’s genome.

However, when a cell mutates and becomes cancerous, it changes notably. Given the known mechanisms of oncogenesis, similar types of mutations occur in specific genes to give rise to specific cancers; cells that are able to reproduce after suffering genetic damage have a different, stable genome of their own. Molecular analysis confirms this.10 All cancer of a certain tissue can thus be defined as its own species.11 This species reproduces, competes, and evolves. Tumors thus act as parasites on the rest of the population, sapping resources and occasionally causing direct harm. Benign tumors are analogous to “successful” parasites, coexisting indefinitely with their hosts, while malignant tumors eventually lead to the death of the organism.

The conceptual similarities and differences between parasitic cancer and parasitic organisms lead to important lines of questioning. This is seen in the vastly distinct effects of parasitic cancers on the aforementioned animal species known to have them. DFTD has devastated the Tasmanian devil population and could lead to extinction within three decades of the disease’s emergence, while CTVT has successfully coexisted with dogs for possibly thousands of years. Researchers speculate that reasons for this extreme divergence in outcomes are related to differences in the afflicted species’ genomes.5 Because the Tasmanian devil population lacks the genetic diversity that canines possess, their immune systems are less likely to recognize foreign cancer cells.

Furthermore, this immunological insight can be applied to human cases of parasitic cancer. For example, the genetic similarity between mother and child or transplant donor and recipient is naturally high or engineered to be; while this is necessary to prevent immune system rejection, it allows parasitic cancers more leeway to invade the body. Awareness of this can improve medical treatment in the future.

With the rapid advances in science and technology of the past century, physicians have gained a panoply of weapons to combat cancer. Modern cancer treatment includes everything from surgery to radiation and chemotherapy. However, these measures are imperfect. A paradigm shift spurred by the study of parasitic cancer may guide the medical research community’s efforts to cure cancer conclusively. By treating all cancers as distinct organisms parasitizing the body, physicians can approach treatment differently, combining immunological and genetic therapy with techniques similar to those used against invaders of other species. In this way, parasitic cancer is paradoxical in not only name but also action, and thus brings hope for the future of cancer research.

Audrey Effenberger ‘19 is a freshman in Greenough Hall.

Works Cited

  1. Peter, S.; Beglinger, C. Digestion. 2007, 75, 25-35.
  2. Moore, P.S.; Chang, Y. Nat. Rev. Cancer. 2010, 10, 878-889.
  3. Liao, J.B. Yale J Biol Med. 2006, 79(3-4), 115-122.
  4. Young, N.D. et al. Nat. Comms. 2014, 5.
  5. Dybas, C. Tasmanian devils: Will rare infectious cancer lead to their extinction? National Science Foundation [Online], Nov. 13, 2013. http://nsf.gov/discoveries/disc_summ.jsp?cntn_id=129508 (accessed Oct. 4, 2015).
  6. Ganguly, B. et al. Vet. and Comp. Oncol. 2013, 11.
  7. Murchison, E.P. Oncogene. 2009, 27, 19-30.
  8. Yong, E. Selfish Shellfish Cells Cause Contagious Clam Cancer. Natl. Geog [Online], Apr. 9 2015. http://phenomena.nationalgeographic.com/2015/04/09/selfishshellfish-cells-cause-contagiousclam-cancer/ (accessed Oct. 4, 2015).
  9. Welsh, J.S. Oncologist. 2011, 16(1), 1-4.
  10. Murgia, C. et al. Cell 2006, 126(3), 477-487.
  11. Duesberg, P. et al. Cell Cycle. 2011, 10(13), 2100-2114.

Laws of Nature Defending Our Information: Quantum Cryptography

by Felipe Flores

Secure communications and data encryption have been very important topics in the public eye for the past few years, especially after Edward Snowden revealed that the NSA attempts to intercept most communications. I, for instance, never thought my information would be that vulnerable and accessible to potential hackers, sponsored by a government or not. Nevertheless, I realized, my information is not as “valuable” or, better said, as sensitive as the information that banks, hospitals, and governments manage every day. Some information just needs to remain inaccessible to hackers, like the transaction history of bank accounts, the medical records of all patients, or the vote count of an election. All information needs to be heavily encrypted, I figured.

Then, the need to encrypt data became even more relevant in 2014, after several media outlets reported that quantum computers (powerful computers that exploit fundamental concepts of quantum mechanics) might be used by organizations such as the NSA to break our most sophisticated ciphers and gain access to our information.1 The almost unlimited processing power of such computers seemed to threaten our right to virtual security and privacy, and quantum computation was cast as the enemy of the cyber universe. However, code-breaking is only one of the many applications of quantum mechanics in computer science. In the process of learning how to use physics to crack codes, we have also learned how to use it in favor of cryptography, the science of hiding messages and protecting information from third parties. Nature itself has the potential to protect information through quantum mechanics, if used correctly. Although the power of quantum computers is (in theory) potentially enough to break any classic method of encryption, quantum cryptography provides an alternate pathway that has proven effective and seems to be becoming affordable. We are, in a way, using quantum mechanics to protect our information from the power of quantum computers. How ironic is that?

WHAT IS QUANTUM MECHANICS AND WHAT IS SO SPECIAL ABOUT IT?

One of the most bizarre and counterintuitive yet fundamental ideas of quantum mechanics is the quantum superposition principle. The idea is that a particle whose qualities are not being measured—a particle that is not being observed—is in all of its possible states at the same time, but only as long as you are not looking! Whenever you observe a particle, the superposition collapses and the object ‘decides’ to be in only one state; it is not the observer who interferes with the particle’s superposition, but the act of measuring itself. To make this clearer, let’s pretend a coin is a quantum-mechanical object, even though superposition only works at the quantum scale—the scale of electrons or photons. If you haven’t looked at the coin yet, it will be in a superposition of both states, heads and tails, at the same time; if you observe it, the coin will choose to be in only one of the states, either heads or tails. This means that the sole act of observing a particle produces a change in the state of that particle; it’s almost as if nature itself were protecting the superposition from eavesdropping! (Does it sound like something you’d want in a secure communication channel?) Scientists, financial institutions, medical facilities, and other organizations that require highly secure channels can use this property of quantum-mechanical particles to prevent their messages from being intercepted by a potential hacker. Currently, the most widely used system is called Quantum Key Distribution (QKD).
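
To make the collapse concrete, here is a toy simulation in Python. This is a sketch only: real quantum states are described by amplitudes rather than the bare probabilities used here, and the “quantum coin” is an invented stand-in for a photon or electron.

    import random

    def measure(state):
        # 'state' maps each possible outcome to its probability.
        # Measurement picks one outcome at random; afterwards the
        # superposition is gone and every re-measurement agrees.
        outcome = random.choices(list(state), weights=list(state.values()))[0]
        return outcome, {outcome: 1.0}

    coin = {"heads": 0.5, "tails": 0.5}   # an unobserved "quantum coin"
    result, coin = measure(coin)
    print(result)                # heads or tails, 50/50
    print(measure(coin)[0])      # the same face, every time from now on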

WHY QKD AND HOW DOES IT WORK?

The classic method of information encryption in point-to-point communication works by encoding and locking a message with a predetermined “key” that the receiver can then use to unlock the message. Fundamentally, encryption transforms the message into unreadable gibberish, and the key holds the instructions to convert it back to normal. This method has several vulnerabilities: any hacker could potentially intercept the communication, copy the data, and (with enough processing power, such as that provided by a quantum computer) figure out the key to decrypt the message.1 Technology seems to be advancing more quickly in computing power and code-breaking than the science of cryptography is; after all, cryptography rests on very complex mathematical algorithms that, unlike computer processors, cannot be produced at an industrial scale. So science has to come up with an encryption mechanism that amounts to more than a very complicated math problem (like our current algorithms), and quantum mechanics seems to have the answer.
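
As a minimal sketch of key-based (symmetric) encryption, consider a one-time pad in Python, where the shared key is a string of random bytes as long as the message. The message and account number below are made up for illustration.

    import secrets

    def xor_bytes(data, key):
        # XOR every message byte with the matching key byte; applying
        # the same operation twice with the same key undoes it.
        return bytes(m ^ k for m, k in zip(data, key))

    message = b"transfer $500 to account 1234"   # hypothetical message
    key = secrets.token_bytes(len(message))      # shared secret key

    ciphertext = xor_bytes(message, key)         # lock: gibberish without the key
    recovered = xor_bytes(ciphertext, key)       # unlock with the same key
    assert recovered == message
    print(ciphertext.hex())

A one-time pad with a truly random, never-reused key is unbreakable even by a quantum computer; the practical problem is distributing such keys securely, which is exactly the gap QKD addresses.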

QKD requires two channels of communication. One of them is a regular data channel (like a regular internet connection between two computers), while the other is a direct optic fiber connection between the sender and the receiver of the message—essentially, a cable that can transmit quantum particles from one computer to another. The QKD mechanism continuously generates new keys (at intervals of less than a minute) to encode the message, and data encrypted with those keys is sent through the regular communication channel. QKD, at the same time, uses the direct optic fiber connection to send the receiver the key needed to decrypt the message; the mechanism sends the key as plane-polarized photons, which are particles in the fragile superposition explained above. The concept is that any eavesdropper trying to interfere with the connection would observe the photons before they reach their final destination, which makes the superposition “collapse” and alters the information sent through this channel. The observer would never obtain the key needed to decrypt the message, so the information remains secure. The receiver, on the other hand, could then detect these alterations in the photons, make the valid assumption that the connection has been compromised, and take the appropriate measures to re-establish a secure and private connection. The very fact that quantum mechanics “protects” the superposition from being seen protects the message at the same time. A common analogy is to imagine the key being sent on delicate soap bubbles: if a third-party observer tried to reach those bubbles, they would easily pop, preventing the eavesdropper from decrypting the message. At the same time, the receiver on the other end of the channel is expecting a bubble to read; the receiver would immediately know if the bubble was popped along the way.2,3
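
The best-known QKD protocol is BB84, in which sender and receiver each choose random measurement bases and later keep only the bits where their bases matched. The Python sketch below is a simplified simulation (ideal hardware, no noise; a real channel tolerates some error before aborting): an eavesdropper who measures the photons raises the error rate in the shared key to about 25%, which the legitimate parties can detect by publicly comparing a sample of bits.

    import random

    def bb84(n, eavesdrop):
        # Sender encodes random bits in random bases ('+' or 'x').
        alice_bits = [random.randint(0, 1) for _ in range(n)]
        alice_bases = [random.choice("+x") for _ in range(n)]
        photons = list(zip(alice_bits, alice_bases))

        if eavesdrop:
            # Eve measures each photon in a random basis; a wrong guess
            # collapses the state and randomizes the bit she re-sends.
            resent = []
            for bit, basis in photons:
                eve_basis = random.choice("+x")
                eve_bit = bit if eve_basis == basis else random.randint(0, 1)
                resent.append((eve_bit, eve_basis))
            photons = resent

        # Receiver also measures each photon in a random basis.
        bob_bases = [random.choice("+x") for _ in range(n)]
        bob_bits = [bit if basis == bb else random.randint(0, 1)
                    for (bit, basis), bb in zip(photons, bob_bases)]

        # Sifting: keep positions where sender and receiver bases match,
        # then estimate the error rate by comparing bits.
        kept = [(a, b) for a, b, ab, bb in
                zip(alice_bits, bob_bits, alice_bases, bob_bases) if ab == bb]
        return sum(a != b for a, b in kept) / len(kept)

    print(bb84(10000, eavesdrop=False))  # ~0.00: clean channel
    print(bb84(10000, eavesdrop=True))   # ~0.25: eavesdropper revealed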

WHAT ARE THE LIMITATIONS OF QKD?

QKD is not yet a perfect mechanism. In 2010, a team of researchers in Norway and Germany showed for the first time that it is possible to obtain a full key from (in other words, to “hack”) a commercial QKD channel. The discovery led to even more intense research into the communication protocols of QKD. Even so, QKD’s vulnerabilities are far fewer than those of other encryption systems, and they can mostly be fixed by modifying the protocols of key generation and communication rather than the principle of QKD itself.4 That said, the system’s remaining limitations are mostly financial. Implementing such a secure system is, unsurprisingly, expensive and complicated. In terms of infrastructure, any QKD network needs direct optic fiber connections between every node (every participant) in the network, which presents a difficult challenge over great distances. QKD signals lose power very easily, as photons may be absorbed by the material of the cable, and absorption becomes ever more likely at longer distances. A secure network spanning a few hundred miles would require a large network of quantum repeaters—devices that replicate a signal to maintain it at the appropriate intensity—which also makes it harder for photons to remain in superposition. Avoiding the consequences of extended networks therefore requires investing large sums in devices that ensure the viability of QKD over long distances.
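
The distance problem is easy to quantify. Standard telecom fiber attenuates light by roughly 0.2 dB per kilometer (a typical figure at the 1550 nm wavelength; exact values vary by fiber), and since single photons cannot simply be amplified without destroying their quantum state, the loss compounds exponentially, as this back-of-the-envelope Python sketch shows:

    def photon_survival(distance_km, loss_db_per_km=0.2):
        # Fraction of photons that survive the run: each 10 dB of loss
        # cuts the signal by a factor of ten.
        return 10 ** (-loss_db_per_km * distance_km / 10)

    for d in (50, 100, 300, 500):
        print(f"{d:4d} km: {photon_survival(d):.0e}")
    #   50 km: 1e-01  (one photon in ten arrives)
    #  100 km: 1e-02
    #  300 km: 1e-06
    #  500 km: 1e-10  -> repeaters or satellites become unavoidable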

WHAT IS THE CURRENT STATUS OF QKD? WHAT CAN WE EXPECT FROM IT IN THE FUTURE?

QKD is already in use at several research institutions, technology companies, and telecommunications corporations that require highly secure data transfer around the globe.5 In fact, the first quantum-encrypted network was established in Cambridge, MA: in 2005, the Defense Advanced Research Projects Agency (DARPA) built it in a collaborative effort between Harvard University, Boston University, and BBN Technology.6 QKD has developed steadily ever since. For example, the city of Geneva used QKD channels to securely count votes in its 2007 elections. Once the network’s hardware and connectivity were ready, deploying QKD to encrypt a standard internet connection took only 30 minutes, and the system continued to operate for more than seven weeks. In 2010, the city of Tokyo built a quantum network that covered distances of over 90 miles and even supported special QKD smartphones (designed mainly to prove that the technology applies even to mobile devices). As of 2015, China is making major advances in the field: it is working on a QKD channel running from Beijing to Shanghai (1,240 mi) to be finished by 2016, and has set a schedule for the network to extend globally by 2030. China has also confirmed its desire to be the first country to launch a satellite with quantum communication systems, using a design similar to QKD.7

Nowadays the public is willing to invest more in virtual privacy, but the price of QKD systems is currently out of reach for most companies that would like to secure their information. Researchers and developers expect the technology to become more accessible in the near future as the devices involved begin to be produced at industrial scale; accessible prices will bring secure quantum channels closer to all users, not only high-tech companies and institutions around the world. The field of quantum cryptography is growing at a time when security and privacy of information are more important than ever before.8 Highly sensitive information needs better protection as code-cracking techniques advance, and quantum cryptography seems to be key to the future of transmitting it. The development and application of quantum cryptography may one day allow us to rest assured that our sensitive information in banks, hospitals, and government institutions is not accessible to hackers. The more widespread this technology becomes, the more secure we will all feel. For now, we can only be impressed by nature’s quantum-mechanical weirdness and by the applications we, humans, find for it.

Felipe Flores ‘19 is a freshman in Hollis Hall.

Works Cited

  1. Rich, S.; Gellman, B. NSA Seeks to Build Quantum Computer That Could Crack Most Types of Encryption. The Washington Post, Jan. 2, 2014. https://www.washingtonpost.com (accessed Sep. 29, 2015).
  2. Lance, A.; Leiseboer, J. What is Quantum Key Distribution?; Quintessence Labs, 2014, 4-7.
  3. The Project UQCC. On Security Issues of QKD; Updating Quantum Cryptography and Communications, n.d.; http://www.uqcc.org/images/towards.pdf (accessed Sep. 29, 2015).
  4. Dianati, M.; Alléaume, R. Architecture of the Secoqc Quantum Key Distribution Network; GET-ENST, Network and Computer Science Department: Paris, Feb. 1, 2008; pp 3-6; http://arxiv.org (accessed Sep. 30, 2015).
  5. Lydersen, L. et al. Nature Photonics. 2010, 4, 686-688.
  6. Qiu, J. Nature. 2014, 508, 441-442.
  7. Elliott, C. et al. Current status of the DARPA Quantum Network; BBN Technologies: Cambridge, 2005, 9-11.
  8. Dzimwasha, T. Quantum revolution: China set to launch ‘hack proof’ quantum communications network. International Business Times, Aug. 30, 2015. http://www.ibtimes.co.uk (accessed Oct. 1, 2015).
  9. Stebila, D. et al. The Case for Quantum Key Distribution. International Association for Cryptologic Research; IACR: Nevada, 2009, 4-6.  https://eprint.iacr.org (accessed Oct.1, 2015).

Genetically Engineered Viruses Combat Invasive Cancer

by Caroline Wechsler

58-year-old Georgia resident Nancy Justice was diagnosed with glioblastoma, a tumor of the brain, back in 2012. Though her doctors immediately combated the cancer with surgery, radiation, and chemotherapy, the tumor relapsed in late 2014, stronger than ever. According to her doctors, Justice had only seven months to live because the tumor would double in size every two weeks.1

Invasive cancers are now among the leading causes of death in America. The American Cancer Society reported over 1.65 million new cases of cancer in 2015, with just over 589,000 deaths per year.2 So it is unsurprising that over $130 billion is now spent each year on cancer care and the search for new treatments.3 Particularly frustrating, though, are tumors that resurface even after being “cured,” like that of Nancy Justice. But a new, cutting-edge treatment is giving people hope: using viruses, something normally thought to be harmful, as a cancer combatant.

Nancy Justice was the 17th patient entered into a revolutionary study at Duke University Medical Center using a genetically modified version of the polio virus to combat glioblastoma. After several months of treatment, her tumor has—seemingly miraculously—begun to shrink away. This project has been a work in progress for almost three decades. It is the brainchild of Matthias Gromeier, a molecular biologist who has been working on viral treatments for cancer for the last 25 years. He described the difficulty of proposing this idea, originally unthinkable. “Most people just thought it was too dangerous,” he remembers in an interview with CBS.1 For the past 15 years, Gromeier has been at Duke, working with Dr. Henry Friedman, deputy director of the Tisch Brain Tumor Center at Duke, the other half of the duo that pushed this project through to completion. Though he too was originally skeptical about the project, Dr. Friedman now calls the polio treatment “the most promising therapy I’ve seen in my career, period.”1

MAKING AN INVADER INTO A DEFENDER

The treatment takes advantage of viruses’ infection mechanisms. Normally, a virus infects a cell by piercing its outer membrane and injecting viral genetic material into the cell.4 That genetic material then hijacks the host cell’s replicating machinery, forcing it to produce viral genomes and capsids, ultimately assembling the parts into many copies of the virus that eventually burst out of the cell in a process called lysis, killing the cell.5 If researchers can take control of this process, they can direct it to attack and lyse particular cells, such as those in a tumor. The polio virus is particularly effective for this purpose because it specifically attaches to surface receptors on cells that make up most types of solid tumors.1

In order to achieve control of the viral mechanism, the researchers used a technique called recombination—essentially, taking desired parts of viral genomes and fusing them together to create a new viral DNA sequence. This technique of DNA recombination is used in many different fields. To create recombinant genetic material, the desired sequence is isolated using restriction enzymes and then inserted into the desired vector (in this case the polio virus) with DNA ligase, an enzyme that links strands of DNA together.6 The team at Duke removed a key sequence from the polio genome that makes the virus deadly and patched it with a piece of genetic material from a relatively harmless cold virus, creating the recombinant form called PVS-RIPO.1 Not only is PVS-RIPO effective at killing cancerous cells, but, because of this recombination, it is also incapable of reproducing in normal cells, which makes it safe to use in humans.7
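
The cut-and-paste logic is easy to caricature in Python. In the sketch below, the recognition site, backbone, and insert sequences are all invented for illustration; real cloning uses specific enzymes (such as EcoRI), sticky ends, and sequencing to verify the product.

    SITE = "GAATTC"  # EcoRI-style recognition sequence (illustrative)

    def digest(dna, site):
        # Restriction enzyme: cut the strand at every recognition site.
        return dna.split(site)

    def ligate(fragments, site):
        # DNA ligase: re-join fragments, restoring the site at each junction.
        return site.join(fragments)

    vector = "ATGCCC" + SITE + "TTTAAA" + SITE + "GGGCAT"  # hypothetical backbone
    insert = "CCCGGGTTT"                                   # hypothetical payload

    pieces = digest(vector, SITE)       # ['ATGCCC', 'TTTAAA', 'GGGCAT']
    pieces[1] = insert                  # swap the excised fragment for the payload
    recombinant = ligate(pieces, SITE)
    print(recombinant)                  # backbone now carries the new sequence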

STARTING THE BODY’S DEFENSES

Once the virus has been engineered to attack the cancer, the problem becomes getting it to the site of the cancer. This process must be carefully tailored because the virus can still cause swelling in the area where it is inserted. To do this, the Chief of Neurosurgery at Duke, Dr. John Sampson, uses 3-D MRI images to plot the course of the catheter that releases the genetically engineered virus. “It’s just like a sniper’s bullet,” he said in an interview.1 Once the virus has been inserted, the cancer killing begins. As Gromeier explains, “All human cancers develop a shield or shroud of protective measures that make them invisible to the immune system. By infecting the tumor, we are actually removing this protective shield.”1 Once the immune system is alerted to the polio infection, it shifts into gear and attacks the cancerous cells, effectively neutralizing the tumor.

The polio virus clinical trial began at Duke in 2011 and continues today. However, the trial is only in its first phase, a human safety trial. In order to produce a marketable, certified treatment, the trial must go through Phase II and III testing. The Food and Drug Administration is understandably very cautious in granting approval for medical treatments—the Duke group had to submit seven years’ worth of safety studies in monkey models to receive approval for the first human safety trial.7

LOOKING TO THE FUTURE

Success stories are impressive, especially for a Phase I trial, which normally exists simply to test dosage. Take the example of Stephanie Lipscomb, a 24-year-old nursing student who was diagnosed with glioblastoma in 2011. Though surgeons removed 98% of her tumor, the cancer quickly returned only a year later. With no other options, she enrolled in the Duke trial as its first patient. It was a risk, to be sure—Dr. Friedman himself admits, “We had no idea what it would do in the long haul.”1 Though Lipscomb initially suffered some swelling, after 21 months her tumor shrank until it was completely gone. Despite such success stories, results have been mixed. One patient who received a particularly potent dose of the virus experienced extreme inflammation that caused swelling in her brain. Out of 22 patients, eleven died—most of them had received extremely high doses of the virus. However, the eleven who survived have been in remission for over six months, unprecedented for recurrent glioblastoma.

In light of the Duke trial’s success, researchers are exploring the use of recombinant viruses against other forms of invasive cancer. Concurrently, the Gene and Virus Therapy Program at the Mayo Clinic has made several breakthroughs in clinical treatments, including a version of the measles virus to combat recurrent ovarian cancer and an adenovirus encoding a gene to combat prostate cancer.8 Moreover, the Mayo Clinic’s Department of Oncology has been using a modified version of the measles virus to combat glioblastoma, bringing the project from animal models to human Phase I testing in just under three years.9 A group in the UK completed a study in May demonstrating that a genetically modified form of the herpes virus improves survival rates in melanoma, a type of skin cancer.10 And researchers back at Duke are looking into using PVS-RIPO itself to treat other types of cancer, including prostate, lung, and colon cancers, and determining how treatment should differ for children, since all trials thus far have treated adults.11 Furthermore, research must extend to aspects of treatment beyond the viral vector itself—the Mayo Clinic is investigating how to chaperone vectors to tumor sites inside protective cell carriers like macrophages or stem cells.8

A CURE FOR CANCER?

It is clear that this research is the start of a new and exciting age of cancer treatment. However, many caution against hailing it as the “cure for cancer.” A pressing concern is that high doses of the recombinant virus can cause massive swelling.12 This is especially problematic in treating cancers like glioblastoma, where the swelling occurs inside the skull and can itself be life-threatening. However, Dr. Friedman emphasized that the point of this initial trial was to get the right dose—not to determine the virus’s effectiveness.1

Though these are legitimate concerns, they are the hallmark worries of any cutting-edge treatment. And it is almost certain that Stephanie Lipscomb was not dwelling on them when she found out that her cancer was completely gone. “I wanted to cry with excitement,” she said in an interview with CBS.1 Invasive cancer is still a difficult and dangerous disease. However, with innovative new research approaches like those involving viruses, we are certainly on the way to finding a cure.

Caroline Wechsler ‘19 is a freshman in Weld Hall.

Works Cited

  1. Pelley, S. CBS News. Polio to treat cancer? Scott Pelley reports on Duke clinical trial. http://www.cbsnews.com/news/polio-cancer-treatment-duke-university-60-minutes-scott-pelley/ (accessed Oct. 5, 2015).
  2. American Cancer Society. Cancer Facts and Figures 2015; American Cancer Society, 2015; pp 4-8.
  3. National Institutes of Health. Cancer costs projected to reach at least $158 billion in 2020. http://www.nih.gov/news-events/news-releases/cancercosts-projected-reach-least-158-billion-2020 (accessed Oct. 5, 2015).
  4. National Science Foundation. How do viruses attack cells? https://www.nsf.gov/news/overviews/biology/bio_q01.jsp (accessed Oct. 5, 2015).
  5. Wessner, D. R. The Origins of Viruses. Nature Education 2010, 3(9), 37.
  6. Rensselaer Polytechnic Institute. The Basics of Recombinant DNA. http://www.rpi.edu/dept/chem-eng/Biotech-Environ/Projects00/rdna/rdna.html (accessed Oct. 5, 2015).
  7. Caba, J. Medical Daily. Once-Deadly Polio Virus Could End Up Curing Brain Cancer. http://www.medicaldaily.com/polio-virus-may-curebrain-cancer-thanks-genetic-re-engineering-327620 (accessed Oct. 5, 2015).
  8. Mayo Clinic. Gene and Virus Therapy Program. http://www.mayo.edu/research/centers-programs/cancer-research/ (accessed Oct. 5, 2015).
  9. Mayo Clinic. Neurosciences Update [Online] 2012, 9(1), 3 (accessed Oct. 5, 2015).
  10. Knapton, S. The Telegraph. Genetically engineered virus ‘cures’ patients of skin cancer. http://www.telegraph.co.uk/news/science/science-news/11631626/v.html (accessed October 5, 2015).
  11. Preston Robert Tisch Brain Tumor Center at Duke. Targeting Cancer with Genetically Engineered Poliovirus (PVS-RIPO). http://www.cancer.duke.edu/btc/modules/research3/index.php?id=41 (accessed Oct. 5, 2015).
  12. Kroll, D. Forbes. What ‘60 Minutes’ Got Right And Wrong On Duke’s Polio Virus Trial Against Glioblastoma. http://www.forbes.com/sites/davidkroll/2015/03/30/60-minutes-covers-dukes-polio-virus-clinical-trial-against-glioblastoma/ (accessed Oct. 5, 2015).

Genetically Modified Crops as Invaders and Allies

by Sophie Westbrook

It’s not hard to tell frightening stories about genetically modified crops. These days, there is even a formula to follow: the soulless company creates dangerous variants, silences the protests of right-thinking environmentalists, and sends biodiversity and public health down the drain. This scenario’s proponents tend to be horrified by transgenic organisms. Unfortunately, this can polarize their conversations with any agricultural scientists responsible for “Frankenstein’s monster.” The fiery controversies often marginalize a key idea: genetically modified crops are part of the biosphere. This has more complex implications than the popular “they’ll drive everything else extinct” hypothesis. We cannot understand what transgenic variants will do to—or for—us without examining when, how, and why the organisms around them react to invasion.

Genetically modified (“GM”) crops were a cornerstone of the 1980s push for more efficient agriculture. Initial experiments with pest and disease resistance were quickly followed by qualitative modifications: engineered crops could grow quickly into big, attractive specimens with a predetermined chemical makeup.1 Almost immediately, the technology spawned concerns rooted in food safety, economics, and environmental impact. A number of these issues are still with us. In particular, scientists and citizens alike struggle to understand the implications of resistance evolution, gene flow between hybrid and natural populations, and competitive advantages.2 A nuanced discussion of these topics is critical to developing a modern crop management plan.

RESISTANCE

To GM proponents, pest resistance is one of the technology’s best success stories. Modified crops can repel not only bacterial invaders but also undesirable insects and weeds. This trait improves output by increasing survival rates and facilitating “double-cropping,” planting twice a year and letting growth extend into insect-heavy seasons.3 It also has the potential to reduce non-point source pollution: GM plants need less protection from sprayed pesticides and herbicides. These developments have produced higher yields at lower costs worldwide.

Naturally, the introduction of “hostile” GM organisms into the environment has consequences. Shifting land use patterns can drive “pest” species away from areas used for GM crop cultivation. This is worth keeping in mind as the amount of land used for GM crops continues to grow. In 2010, GM crops covered an estimated 140 million hectares (346 million acres).3 Larger-scale cultivation could destabilize the ecosystems surrounding cultivation sites by removing key producers and primary consumers.

There are more immediate concerns, though. Where anti-insect crops grow, some insect species are developing resistances to their artificially introduced toxins. For instance, corn plants modified with Bt toxins are increasingly unable to repel caterpillars. This effect has been observed globally.2 Adding increasingly poisonous compounds would only prompt more evolution by the insect species. Such ideas also raise questions about health impacts and environmental contamination.

Other GM crops are not themselves toxic, but have been engineered to resist cheap, relatively safe herbicides. Glyphosate-resistant crops in the United States are a notable example. Since their introduction and initial success, they have been viewed as an easy solution to weed problems.4 Some staple crop species, like corn and soybeans, are seldom cultivated without glyphosates. Now, these herbicides are increasingly ineffective (and consequently applied in increasing concentrations). This is evidence that weed species are experiencing strong selective pressures to develop their own herbicide resistance.

The fact that GM crops prompt pest evolution is neither shocking nor devastating. After all, unmodified plots also promote shifting gene frequencies; some organisms are better suited to take advantage of crop growth than others. However, the GM era has seen an unusually violent “arms race” between scientists and pests. Acknowledging that native insects and weeds can always evolve in response to invading species’ biochemistry means investigating alternative, multi-layered management solutions.2

HYBRIDIZATION

Plants share DNA. Gene flow, the transfer of genetic material from one population to another, is one of their fundamental mechanisms for generating biodiversity. When this happens between GM and natural crops, it can lead to transgene introgression: the fixation of an “invasive” modified gene in a native species’ genome.5 Transgene introgression is never guaranteed. At a minimum, it tends to require population compatibility on a sub-genomic scale, strong selective pressure to retain the transferred trait, time, and luck.5 Even “useful,” artificially inserted genes have a relatively low probability of leaping to nearby organisms.

There are two key barriers to spreading transgenes. First, many modern crops lack genetically compatible wild types, so they simply cannot spread their modifications. Second, “domestication genes” are frequently unsuccessful in natural populations.2

That said, transgene introgression does occur. One of the most famous cases took place between maize populations in Oaxaca, Mexico.6 There was widespread alarm when remotely situated wild maize was found to contain artificial genes. The discovery called into question our ability to safeguard unmodified plant varieties, which would become critical if a commonplace GM species proved unviable or unsafe.

The Oaxaca case has been analyzed extensively. Unfortunately, data on specific events cannot help us prevent transgene introgression everywhere. The process depends heavily on which species are involved, so one-size-fits-all policies for discouraging gene flow are inadequate.7 A more specialized understanding would help us manage the possibility of dangerous escapee genes and better answer questions about legal and ethical responsibility.

COMPETITION

When functionally similar invasive and native species do not hybridize, they often compete for the same resources. If the native species is wholly outclassed, it may be driven to extinction. This is the idea behind discussions of GM crops’ threat to biodiversity. Biodiversity is indisputably necessary: it is the foundation of stability and flexibility in an ecosystem. Allowing a single variant to overcome its peers would leave any community more vulnerable to stresses like disease, climate change, and natural disaster.

Do GM crops have an advantage over natural ones in the wild? They tend to incorporate traits, such as fast growth and temperature tolerance, that promote greater survivorship. However, as mentioned above, they are primarily adapted to living in cultivated areas. This means they lack characteristics like seed dormancy and phenotypic plasticity (the ability to take different forms) that would make them more effective invasive weeds.8

Looking forward, extreme crop modifications mean GM variants may entirely lose metabolic capabilities they would need to survive in nature.7 This suggests that they should become increasingly unlikely to succeed after accidental dispersal. Nonetheless, hard-to-predict factors such as mutations within modified crops could always lead to the loss of native populations. Once a species—natural or transgenic—becomes invasive, it is nearly impossible to bring back under control. Unfortunately, GM plot isolation is a difficult proposition, especially given the crops’ prevalence throughout the developing world.

CONCLUSION

Wild organisms have a surprisingly diverse menu of responses to transgenic invaders. They may evolve in response to the crops’ new traits, hybridize to access the traits themselves, or exploit the variants’ weaknesses to outcompete them. The strategy adopted depends primarily on the native species’ ecological position, but also on the characteristics of the invader. To develop a comprehensive understanding of the ways GM crops affect the communities they enter, we need to analyze these relationships in all their variety. This examination may lay the groundwork for a safer, more sustainable food supply in the future.

Sophie Westbrook ‘19 is a freshman in Hurlbut Hall.

Works Cited

  1. Nap, J. P. et al. Plant J. 2003, 33, 1-18.
  2. Goldstein, D. A. J. Med. Toxicol. 2014, 10, 194-201.
  3. Barrows, G. et al. J. Econ. Perspect. 2014, 28, 99-120.
  4. Beckie, H. J.; Hall, L. M. Crop Prot. 2014, 66, 40-45.
  5. Stewart, C. N., Jr. et al. Nat. Rev. Genet. 2003, 4, 806-817.
  6. Quist, D.; Chapela, I.H. Nature 2001, 414, 541-543.
  7. Messeguer, J. Plant Cell Tiss. Org. 2003, 73, 201-212.
  8. Conner, A. J. et al. Plant J. 2003, 33, 19-46.

Microchimerism – The More, The Merrier

by Una Choi

Microchimerism, or the presence of genetically distinct cell populations within a single organism, throws a wrench into the biological concept of sex. Although we traditionally learn that biological females possess two X sex chromosomes and males possess X and Y sex chromosomes, microchimerism is responsible for the presence of cells with Y chromosomes in females. Microchimerism can result from a variety of events ranging from organ transplants to in-utero transfer between twins. Recent research has focused primarily on maternal microchimerism (MMc) in relation to cord blood transplantation and on fetal microchimerism (FMc), the two most common forms of microchimerism.

BI-DIRECTIONAL EXCHANGE DURING PREGNANCY

The placenta connects the mother and fetus, facilitating the bi-directional exchange of cells. Low-level fetal Y-chromosome DNA is found in maternal cellular and cell-free compartments starting at the seventh week of pregnancy and peaks at childbirth.1 Although fetal DNA rapidly disappears from the mother’s body after labor, fetal cells can persist in the mother’s body for decades and vice versa.2 Indeed, there are around two to six male fetal nucleated cells per milliliter of maternal blood,3 and 63% of autopsied female brains exhibited male microchimerism.4

The cells crossing the placenta vary in their physical features and in how long they persist in the host body. Highly differentiated cells like nucleated placental trophoblasts, which ferry nutrients across the placenta, do not remain long in maternal circulation. In contrast, pregnancy-associated progenitor cells (PAPCs) can persist for decades after birth. These microchimeric progenitor cells, like stem cells, can differentiate into specific types of cells: PAPCs can later become hematopoietic, or blood-forming, cells and epithelial cells.5 PAPCs have also been found in murine brains. A 2010 study found that PAPCs remained in the maternal brain for up to seven months; these PAPCs developed mature neuronal markers, suggesting their active integration into the maternal brain.6

BENEFITS OF MMC IN CORD BLOOD TRANSPLANTATION

Maternal microchimerism, or the presence of maternal cells in the fetus, contributes to the consistent success of cord blood transplants. Cord blood is extracted from the umbilical cord and placenta. Because cord blood is rich in hematopoietic stem cells, it is often used as a treatment for leukemia. Transplants, however, are not without risk; the introduction of foreign material may cause graft-versus-host disease (GVHD), which occurs when the donor’s immune cells attack the patient’s healthy tissue.

Cord blood inherently contains both maternal and fetal cells as a result of the bi-directional exchange described above. This built-in MMc can diminish the risks accompanying cord blood transplants.7 The fetus benefits from human leukocyte antigens (HLAs) present on the maternal cells. HLA genes encode regulatory proteins of the human immune system; the HLA system presents antigens to T-lymphocytes, which trigger B-cells to produce antibodies.8

Unlike bone marrow and peripheral blood transplants, HLA matching between cord blood donor and recipient does not have to be exact. Indeed, it is often imprecise due to the large variety of HLA polymorphisms;9 parents are often HLA heterozygous because HLA loci are extremely variable. While foreign maternal cells could in principle aggravate GVHD, cord blood recipients actually exhibit low rates of relapse. Indeed, maternal immune elements directed against inherited paternal antigens (IPAs) may produce a graft-versus-leukemia effect,10 in which donated cytotoxic T lymphocytes attack malignant cells.

Exposing a fetus to foreign antigens can result in lifelong tolerance, and fetal tolerance is strongest against maternal antigens.7 In HLA-mismatched cord blood transplants, patients displayed more rapid engraftment (the growth of new blood-forming cells, a marker of transplant recovery), diminished GVHD, and decreased odds of leukemia relapse. Indeed, the relapse rate was 2.5 times lower in allogeneic-marrow recipients with graft-versus-host disease than in recipients without the disease.11

IMMUNE SURVEILLANCE AND FMc

The benefits of microchimerism are not limited to the recipients of maternal cells. The mothers themselves often benefit from increased immune surveillance. Indeed, fetal microchimeric T cells can eradicate malignant host T cells.

Microchimeric cells can also provide protection against various forms of cancer. During pregnancy, mothers can develop T- and B-cell immunity against the fetus’s IPAs. This anti-IPA immunity persists for decades after birth, reducing risk of leukemia relapse. PAPCs can differentiate into hematopoietic cells, which are predicted to have a role in destroying malignant tumors.12 In a study of tissue section specimens from women who had borne sons, 90% of hematopoietic tissues like lymph nodes and spleen expressed CD45, a leukocyte antigen previously identified in the male cells.13

PAPCs are also associated with decreased risk of breast cancer: circulating fetal cells are found in only 11-26% of mothers with breast cancer, while a study of 272 healthy women found male microchimerism in 70% of participants, suggesting a role for microchimerism in maintaining a healthy state.14,15 The depletion of PAPCs in breast cancer patients may result from the migration of PAPCs from the bloodstream to the tumor.16

AUTOIMMUNE CONDITIONS

FMc and MMc are common in healthy individuals and are associated with the repression of autoimmune conditions. Rheumatoid arthritis (RA) is a disorder whose genetic risk stems largely from the HLA region. Some of the molecules encoded in the HLA region contain the amino acid sequence DERAA, which is associated with protection against RA. In a study of 179 families, the odds that a DERAA-positive mother had a DERAA-negative child were significantly lower than the corresponding odds for a DERAA-positive father, suggesting a protective benefit of non-inherited maternal HLA-DR antigens in decreasing susceptibility to RA.17

ORGAN REGENERATION

Fetal stem cells have longer telomeres and greater osteogenic potential than their adult counterparts, and they express embryonic pluripotency markers like Oct4.16 These fetal cells have been linked to the alleviation of myocardial disease. In a 2011 study, pregnant mice with myocardial injuries exhibited a transfer of fetal cells from the bloodstream to the site of injury, where the fetal cells differentiated into various types of cardiac cells.18 Forty percent of PAPCs extracted from the heart expressed Cdx2, a caudal homeobox gene expressed during development. Cdx2 distinguishes cells that will become the trophectoderm—the outer layer of the blastocyst, which gives rise to the placenta—from cells that will become the inner cell mass. Because Cdx2 is absent in the mature trophoblast, the extracted cells likely originated in the placenta.19

A recent study used fluorescence-activated cell sorting (FACS) to analyze the in vitro behavior of fetal cells tagged with enhanced green fluorescent protein (eGFP+). These fetal cells exhibited clonality and differentiated into smooth muscle cells and endothelial cells, with promising implications for organ regeneration.

PAPCs selectively travel to damaged organs, further emphasizing their role in healing. eGFP+ cells were present in low quantities in all tissues until the 4.5th day after the injury; 1.1% of cells were eGFP+ before injury, while 6.3% were eGFP+ after delivery, a significant increase. These findings carry significant implications for maternal health: PAPCs may be at least partly responsible for the spontaneous recovery from heart failure exhibited by 50% of affected women.18

FUTURE IMPLICATIONS

Microchimerism poses important implications for cord blood transplants. If we know the maternal and fetal HLA types, we can match recipients with donors whose IPAs are included in the recipient’s HLA type, promoting graft acceptance.
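
As a hypothetical sketch of this matching rule in Python (the HLA allele names below are illustrative, and real matching weighs many loci, typing resolution, and clinical factors):

    def ipa_compatible(donor_ipas, recipient_hla):
        # Prefer donors whose inherited paternal antigens (IPAs) are
        # already part of the recipient's own HLA type.
        return donor_ipas <= recipient_hla

    recipient_hla = {"A*02:01", "A*24:02", "B*07:02", "B*35:01"}
    donors = {
        "donor_1": {"A*02:01", "B*07:02"},  # IPAs covered by the recipient
        "donor_2": {"A*01:01", "B*08:01"},  # foreign IPAs
    }

    for name, ipas in donors.items():
        verdict = "favorable" if ipa_compatible(ipas, recipient_hla) else "riskier"
        print(name, verdict)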

Although cord blood is typically preserved for transplants, the placenta is often discarded after childbirth. If PAPCs can be traced back to the placenta, the placenta may provide a valuable source of undifferentiated stem cells capable of organ regeneration. Although placental tissue and amniotic fluid have less differentiation potential than fetal tissue from pregnancy terminations, they are less controversial sources of stem cells.16

Because FMc plays an active role in the mother’s body for decades, it can impart significant benefits. The selective migration of PAPCs to damaged organs suggests the existence of a specific targeting mechanism. The ability of extracted PAPCs to differentiate in vitro into working cardiovascular structures also poses exciting implications for organ synthesis.

Una Choi ‘19 is a freshman in Greenough Hall.

Works Cited

  1. Ariga, H. et al. Transfusion 2001, 41, 1524-1531.
  2. Martone, R. Scientists Discover Children’s Cells Living in Mothers’ Brains. Scientific American, Dec. 4, 2012. http://www.scientificamerican.com/article/scientists-discover-childrens-cells-living-in-mothers-brain/ (accessed Sept. 25, 2015).
  3. Krabchi, K. et al. Clinical Genetics 2001, 60, 145-150.
  4. Chan, W. et al. PLoS ONE [Online] 2012, 7, 1-7. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0045592 (accessed Sept. 25, 2015).
  5. Gilliam, A. J. Investigative Dermatology [Online] 2006, 126, 239-241. http://www.nature.com/jid/journal/v126/n2/full/5700061a.html (accessed Sept. 26, 2015).
  6. Zeng, X.X. et al. Stem Cells and Dev. 2010, 19, 1819-1830.
  7. van Besien, K. et al. Chimerism [Online] 2013, 4, 109-110. http://pubmedcentralcanada.ca/pmcc/articles/PMC3782544/ (accessed Sept. 28, 2015).
  8. Burlingham, W. et al. PNAS. 2012, 109, 2190-2191.
  9. Leukemia & Lymphoma Society. https://www.lls.org/sites/default/files/file_assets/cordbloodstemcelltransplantation.pdf (accessed Sept. 28, 2015).
  10. van Rood, J. et al. PNAS. 2011, 109, 2509-2514.
  11. Weiden, P. et al. New England Journal of Medicine 1979, 300, 1068-1073.
  12. Fugazzola, L. et al. Nature 2011, 7, 89-97.
  13. Khosrotehrani, K. et al. JAMA 2004, 292, 75-80.
  14. Gadi, V. et al. Cancer Research 2007, 67, 9035-9038.
  15. Kamper-Jørgensen, M. et al. Eur. J. Cancer 2012, 48, 2227-2235.
  16. Lee, E. et al. MHR 2010, 16, 869-878.
  17. Feitsma, A. et al. PNAS 2007, 104, 19966-19970.
  18. Kara, R. et al. AHA. [Online] 2011, 3-15.
  19. Pritchard, S. et al. Fetal Cell Microchimerism in the Maternal Heart. http://circres.ahajournals.org/content/110/1/3.full (accessed Sept. 25, 2015).

Fight or Flight: When Stress Becomes Our Own Worst Enemy

by Anjali Chandra

We have all heard of the amazing fight-or-flight response: the man lifting a 3,000-pound stock Camaro, the woman fending off a bear with just a backpack, the man outrunning a flaming sphere. Adrenaline surging, our body prepares to defend itself against a perceived threat. Our brain engages our muscles, sensory modalities, and chemical signaling pathways in such a way that we can perform feats beyond any imaginable human strength.

Stress can activate the sympathetic nervous system in an extraordinary manner. The synchrony of hormones, nerve cells, musculoskeletal tissues and cognitive processes associated with the sympathetic nervous system is awesome, and in many ways, beautiful.

However, repeated or continued exposure to stress can paralyze the body in this hyper-aroused state. Over time, this condition can have detrimental effects on one’s overall health and immunity, leaving the body susceptible to disease.1

CHRONIC STRESS AND IMMUNE RESPONSE

Stress is a non-homeostatic condition, something that pushes an organism out of its realm of comfort—physically, psychologically, or both. According to the American Psychological Association, psychological stress is an uncomfortable “emotional experience accompanied by predictable, biochemical, physiological and behavioral changes.”2

External stressors include abuse or discrimination in the workplace; anxiety about a particular event, by contrast, constitutes an internal stressor.

Stressors can also be classified according to the duration and onset of their effects.3 Acute time-limited stressors include things like seeing a threatening animal, hunger, or public speaking. Stressful event sequences begin with a catastrophic event and unfold as a series of smaller, related challenges.

Chronic stress can be defined as a relatively stable, pervasive force that leads an individual to transform his or her identity and modify his or her social roles. One distinguishing feature of a chronic stressor is that the individual experiencing it does not know when, or if, it will end. Caring for a loved one with dementia, an abusive relationship, refugee status, and physical disability due to a traumatic injury are all examples of chronic stressors.

Before delving into how stress impacts the immune system, it is important to review the immune system’s basic workings. Immune response consists of two broad domains: natural immunity and specific immunity. Natural immunity is the body’s general defense against a variety of pathogens. It consists of the neutrophils and macrophages involved in the inflammatory response, along with cytokines, which facilitate communication between immune cells and support wound healing. Another important element of the natural immune mechanism is the natural killer cell, which lyses cells lacking the molecular marker that designates cells as native to the organism.4 Complement proteins are the final part of the natural defense team and are important in antibody-mediated immunity.

Specific immunity, on the other hand, is, as the name implies, tailored to particular threats. Lymphocytes are antigen-specific, meaning each responds to one particular invader. T-cytotoxic lymphocytes recognize and lyse antigen-expressing cells; B cells generate antibodies that target the antigen and reinforce natural immunity; and T-helper cells generate cytokines to enhance the rest of the response mechanisms.

Immunity in Homo sapiens can be further divided into cellular and humoral response. Th1 helper lymphocytes, T-cytotoxic cells, and natural killer cells regulate cellular immunity, a response to intracellular pathogens like viruses. Humoral response, which is focused on extracellular invaders like bacteria and parasites, is coordinated by Th2 helper cells, B cells, and mast cells.4

PSYCHONEUROIMMUNOLOGY

Psychoneuroimmunology (PNI), also known as psychoendoneuroimmunology (PENI), is an interdisciplinary field that seeks to unearth the relationship between the brain’s emotional and cognitive processes and the body, in both health and disease.5 PNI focuses on the interaction between the nervous system, endocrine function, and immune response.6

University of Rochester Medical Center faculty member Dr. Robert Ader coined the term “psychoneuroimmunology” in the early 1970s, in light of his research on immunosuppression in classical conditioning.5 Dr. Ader’s study revealed that pairing saccharin with cyclophosphamide, a gastrointestinal irritant used to suppress immunity, conditioned not only taste aversion but also immune suppression in rats.6

Following Dr. Ader, a husband-and-wife team of researchers at the Ohio State University College of Medicine extended the association between stress and weakened immunity to human subjects. They measured immunity in students across a decade of examination periods and found that, year after year, during the three-day exam span the students’ natural killer cell levels dropped, they nearly stopped synthesizing gamma interferon, an immune protectant, and their T-cells were largely unresponsive. This was the first time that stress was directly correlated with compromised immunity in humans in a laboratory setting.

It is important to note, however, that these researchers studied individuals only over a three-day period. The first corroborated set of conclusions regarding the effect of chronic stress or a compromised mental state on immunity came in Dr. Suzanne Segerstrom and Dr. Gregory Miller’s meta-analysis of some 300 papers connecting psychological stress to elements of immune response. Their investigation revealed that chronic stress negatively impacted both cellular and body-fluid-mediated immunity.7

Segerstrom and Miller report that early studies demonstrate a reduction in the activity of natural killer cells, stunted lymphocyte proliferation, and blunted response to vaccine-based inoculations in chronically stressed individuals. The increased prevalence of infectious and neoplastic diseases among the chronically stressed was attributed to this weakened immunity.

On the other hand, acute stress has been shown to enhance immune response by triggering a redistribution of immune cells to maximize efficiency. This model has since been refined to suggest that, instead of spending energy on reorganizing efforts, the immune system shifts from specific processes to natural immune processes, which “are better suited to managing the potential complications of life-threatening situations” because they are rapid response mechanisms.

However, to determine how chronic stress may be increasing disease prevalence, if at all, researchers needed to determine how chronic stressors shift the balance of immune response. An early proposal was that chronic stress has a paradoxical effect, simultaneously enhancing and suppressing immune response through cytokine mediation: suppressing the Th1 cytokines that fight neoplastic disease while enhancing the production of Th2 cytokines, which increase allergy and autoimmune disease risk. Thus far, this model appears consistent with the increased susceptibility to infection and neoplastic disease and the heightened allergic and autoimmune pathology seen in chronically stressed individuals.8

PATHOLOGY OF STRESS

Dr. Firdaus Dhabhar’s 2009 article in the Journal of Neuroimmunology confirms the deleterious effect of chronic stress on Th1 function. Dhabhar’s investigation found that chronic stress, in addition to promoting pro-inflammatory pathways and specifically Th2 cytokine activity, also increased the body’s natural immunosuppressive mechanisms, such as regulatory T cells.

Cortisol may be a key player in this Th1-to-Th2 shift. Dhabhar reports that glucocorticoids, or stress-related hormones, suppress immune response when administered at pharmacological levels, whereas their effect is mixed at physiological levels. More generally, cortisol is an indicator of immune system functioning: within a certain range, the lower the baseline cortisol level, the healthier the immune response mechanisms.9

In acute stress, low doses of adrenal stress hormones enhance skin cell-mediated immunity (CMI). Chronic corticosterone exposure, on the other hand, significantly suppresses CMI. These conclusions were supported by another study demonstrating that low levels of corticosterone may enhance anti-T-cell-receptor-induced lymphocyte proliferation during early stages of T-cell activation, as well as T-cell responses to IL-2, both of which indicate immune enhancement.

However, a bevy of articles and academic sources reports that glucocorticoids can also be immunosuppressive.10 More support for the immunosuppressive effects of chronic stress comes from a recent study by Jianfeng Jin and colleagues, who found that chronic stress induces the accumulation of myeloid-derived suppressor cells in the bone marrow of mice. These cells “inhibited the cytokine release of macrophages as well as T cell responsiveness.”11

Thus, whether by way of glucocorticoids or myeloid-derived suppressor cells, chronic stress has a strong association with immunosuppression.

MANIFESTATIONS OF IMMUNOSUPPRESSION

One manifestation of immunosuppression is decreased vaccine efficacy. One investigation gave the influenza vaccine to a cohort of older adults that included a group caring for spouses with dementia. The efficacy period of the vaccine for the caregivers was only six months, whereas the non-caregivers’ influenza antibody levels remained stable.

A study published in the New England Journal of Medicine reports that chronically high levels of stress, after accounting for other variables such as season, substance use, sleep, exercise, and baseline antibody levels, increased susceptibility to acute rhinitis. This links chronic-stress-induced immunosuppression with increased susceptibility to infection.

The relationship between stress and cancer has been an area of much interest and investigation. Dr. David Spiegel, a researcher and psychiatrist at Stanford University, reports that natural killer cell activity is markedly low among chronically stressed individuals.

Furthermore, Dhabhar illustrates that chronic stress may reinforce pro-inflammatory pathways.12 Chronically elevated cortisol levels desensitize cellular receptors for cortisol and other stress hormones that normally regulate the inflammatory response. This glucocorticoid resistance increases interleukin-1 and tumor necrosis factor activity, promoting chronic inflammation.13

IMPLICATIONS

Broadly speaking, these investigations suggest that chronic stress has a two-fold effect: (1) increased Th2 activity, associated with inflammation and elevated risk of allergic and autoimmune responses, and (2) reduced Th1 and leukocyte activity, along with enhanced immunosuppressive mechanisms, which raises cancer and infection risk while decreasing the efficacy of vaccines and wound healing. The enhanced immunosuppression may, however, decrease the risk of autoimmune disease to some degree.

CONTEXT

In explaining the bidirectional effect of stress hormones on immune response, Dhabhar suggests that their effect is context-dependent. Critical factors that influence whether a hormone enhances or suppresses immunity include:

(1) Duration of stress. Acute or short-term stress experienced at the time of immune activation can enhance innate and adaptive immune responses, whereas chronic or long-term stress can suppress immunity by decreasing immune cell numbers and function and/or increasing active immunosuppressive mechanisms (e.g., regulatory T cells). Chronic stress can also dysregulate immune function by promoting pro-inflammatory and type-2 cytokine-driven responses.

(2) Effects of stress on leukocyte distribution. Compartments that are enriched with immune cells during acute stress show immune enhancement, while those that are depleted of leukocytes show immunosuppression.

(3) Physiological versus pharmacological concentrations of glucocorticoids, and endogenous versus synthetic glucocorticoids. Endogenous hormones at physiological concentrations can have immunoenhancing effects; endogenous hormones at pharmacological concentrations, and synthetic hormones, are immunosuppressive.

(4) Timing of stressor or stress hormone exposure relative to the time of activation and the time course of the immune response. Immunoenhancement is observed when acute stress is experienced at early stages of immune activation, while immunosuppression may be observed at late stages of the immune response.14

Dhabhar’s contingencies reflect a larger trend in the field of psychoneuroimmunology: the effect of hormone levels, or of stress generally, depends heavily on the duration of the stressor as well as on each individual’s biological defense mechanisms. Apart from factors that researchers can control or mediate, much of the relationship between chronic stress and the immune system depends on external variables like diet, sleep, exercise, substance use, and other lifestyle factors.

The most valuable lesson we can take from this is to design studies that account, as much as possible, for these individual variabilities, as was done in the New England Journal of Medicine study on chronic stress and acute rhinitis. It is these sorts of studies that can bring us as close as humanly possible to untangling the relationships between mind and brain, and between brain and body.

MOVING FORWARD

Repeated studies over the past 20 years have confirmed the correlation between chronic stress and suppressed immune response. But the next logical question is: why? What advantage does immunosuppression confer in the fight for survival?

The body may be conserving resources to combat the perceived threat to which salience has been attributed. To mount a targeted response, it shuts down competing mechanisms that would otherwise divert energy from what the brain has designated the most important goal or stimulus.

Dr. Spiegel, the lead researcher in the study that demonstrated a marked decrease in natural killer cell activity among chronically stressed populations, also illustrates that group counseling or stress-management therapy for those already diagnosed with cancer may help to boost immune response. Cancer patients who had an especially strong support network had lower levels of cortisol than those who did not have the same support system.15

Yes, chronic stress has been associated with increased susceptibility to disease, but contrary to the perception of the anxious mind, much is still within one’s own locus of control. This control chiefly manifests itself in two ways: one, immunosuppression may depend on how we ourselves ascribe salience, and two, we can make lifestyle changes to reverse or prevent the immune-related effects of stress.

Without realizing it, the college student stressing about the next day’s midterm may be consolidating maladaptive patterns that give rise to chronic stress. Whether it be immunosuppression or the increased risk of cancer, the consequences of constant hyperarousal are grave. On the other hand, reforming false attributions of salience and seeking a supportive social network may thwart the consolidation of acute stressors.

So, take a deep breath. Calm down. Is it really worth the immuno-compromise?

Anjali Chandra ‘19 is a freshman in Canaday Hall.

Works Cited

  1. American Psychological Association. Stress Weakens the Immune System. http://www.apa.org/research/action/immune.aspx (accessed Oct. 4, 2015).
  2. American Psychological Association. http://www.apa.org/helpcenter/understanding-chronic-stress.aspx (accessed Oct. 4, 2015)
  3. Stress. University of Maryland Medical Center. http://umm.edu/health/medical/reports/articles/stress (accessed Oct. 4, 2015).
  4. Segerstrom, S.C.; Miller, G.E. Psychol Bull. 2004, 130(4), 601-630.
  5. Ray, O. Ann N Y Acad Sci. 2004, 1032, 35-51.
  6. Ader, R. ILAR J. 1998, 39(1), 27-29.
  7. Robert Ader, Founder of Psychoneuroimmunology, Dies. University of Rochester Medical Center. https://www.urmc.rochester.edu/news/story/3370/robert-ader-founder-of-psychoneuroimmunology-dies.aspx (accessed Oct. 4, 2015).
  8. Ader, R; Cohen, N. Psychosom Med. 1975, 37(4), 333-340.
  9. Dhabhar, F.S. Neuroimmunomodulation. 2009, 16, 300-301.
  10. Jin, J. PLoS ONE [Online] 2013, 8(9), http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0074497 (accessed Oct. 4, 2015).
  11. Jin, J. PLoS ONE [Online] 2013, 8(9), http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0074497 (accessed Oct. 4, 2015).
  12. Jockers, D. Natural News. http://www.naturalnews.com/041556_chronic_stress_immune_system_inflammation.html (accessed Oct. 4, 2015).

Laws of Nature Defending Our Information: Quantum Cryptography

by Felipe Flores

Secure communications and data encryption have been prominent topics in the popular eye for the past few years, especially after Edward Snowden revealed that the NSA attempts to intercept most communications. I, for instance, never thought my information would be that vulnerable and accessible to potential hackers, government-sponsored or not. Nevertheless, I realized, my information is not as “valuable” or, better said, as sensitive as the information that banks, hospitals, and governments manage every day. Some information just needs to remain inaccessible to hackers, like the transaction history of bank accounts, the medical records of patients, or the vote count of an election. All of it needs to be heavily encrypted, I figured.

Then, the need to encrypt data became even more pressing in 2014, after several media outlets reported that quantum computers (powerful computers that exploit fundamental concepts of quantum mechanics) might be used by organizations such as the NSA to break our most sophisticated ciphers and gain access to our information.1 The almost unlimited processing power of such computers seemed to threaten our right to virtual security and privacy, and quantum computation was cast as the enemy of the cyber universe. However, code-breaking is only one of the many applications of quantum mechanics in computer science. In the process of learning how to use physics to crack codes, we have also learned how to use it in favor of cryptography, the science of hiding messages and protecting information from third parties. Nature itself has the potential to protect information through quantum mechanics, if used correctly. Although the power of quantum computers is (in theory) potentially enough to break any classic method of encryption, quantum cryptography provides an alternate pathway that has proven effective and seems to be becoming cost-permissive. We are, in a way, using quantum mechanics to protect our information from the power of quantum computers. How ironic is that?

WHAT IS QUANTUM MECHANICS AND WHAT IS SO SPECIAL ABOUT IT?

One of the most bizarre and counterintuitive yet fundamental ideas of quantum mechanics is the quantum superposition principle. The idea is that a particle whose qualities are not being measured—a particle that is not being observed—is in all of its possible states at the same time, but only as long as you are not looking! Whenever you observe a particle, the superposition collapses and the object ‘decides’ to be in only one state; the observer is not interfering with the particle’s superposition, but the act of measuring itself is. To make this clearer, let’s pretend a coin is a quantum-mechanical object, even though superposition only works at the quantum scale—the scale of electrons or photons. If you haven’t looked at the coin yet, it will be in a superposition of both states, heads and tails, at the same time; if you observe it, the coin will choose to be in only one of the states, either heads or tails. This means that the sole act of observing a particle produces a change in the state of that particle; it’s almost as if nature itself were protecting the superposition from being eavesdropped! (Does it sound like something you’d want in a secure communication channel?) Scientists, financial institutions, medical facilities, and other organizations that require highly secure channels can use this property of quantum-mechanical particles to prevent their messages from being intercepted by a potential hacker. Currently, the most widely used system is called Quantum Key Distribution (QKD).
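
To make the “collapse on measurement” idea concrete, here is a minimal Python sketch (my own illustration, not drawn from any real quantum library): a one-qubit state is just a pair of amplitudes, and measuring it picks an outcome with probability given by the squared amplitude, after which the superposition is gone.

import math
import random

def measure(amplitudes):
    """Collapse a one-qubit state [a0, a1] to '0' or '1'."""
    p0 = abs(amplitudes[0]) ** 2          # Born rule: P(0) = |a0|^2
    outcome = "0" if random.random() < p0 else "1"
    # After measurement the superposition is gone: the state is definite.
    collapsed = [1.0, 0.0] if outcome == "0" else [0.0, 1.0]
    return outcome, collapsed

# An equal superposition -- the quantum 'coin' before anyone looks at it.
plus = [1 / math.sqrt(2), 1 / math.sqrt(2)]
print(measure(plus))   # e.g. ('0', [1.0, 0.0]) -- heads, and it stays heads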

WHY QKD AND HOW DOES IT WORK?

The classic method of information encryption in point-to-point communication works by encoding and locking a message with a predetermined “key” that the receiver can then use to unlock the message. Fundamentally, encryption transforms the message into gibberish, and the key holds the instructions to convert it back to normal. This method has several vulnerabilities: any hacker could potentially intercept the communication, copy the data, and (with enough processing power, such as that provided by a quantum computer) figure out the key to decrypt the message.1 It seems that technology is advancing more quickly in computing power and code-breaking than the science of cryptography is; after all, cryptography requires very complex mathematical algorithms that, unlike computer processors, cannot be produced at an industrial scale. So science has to come up with an encryption mechanism that rests on something beyond a very complicated math problem (like our current algorithms), and quantum mechanics seems to have the answer.
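
As a toy illustration of key-based encryption (a sketch of the general idea, not of any specific system mentioned in this article): XOR-ing each byte of a message with a secret key turns it into gibberish, and XOR-ing again with the same key restores it. Whoever recovers the key reads the message.

import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR each byte with the corresponding key byte.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

message = b"transfer $500 to account 1234"
key = secrets.token_bytes(len(message))       # the shared secret
ciphertext = xor_bytes(message, key)          # looks like random noise
assert xor_bytes(ciphertext, key) == message  # the key unlocks it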

QKD requires two channels of communication. One of them is a regular data channel (like a regular internet connection between two computers), while the other is a direct optic fiber connection between the sender and the receiver of the message—essentially, a cable able to transmit quantum particles from one computer to another. The QKD mechanism continuously generates new keys (at intervals of less than a minute) to encode the message, and data is sent using those keys through the regular communication channel. QKD, at the same time, uses the direct optic fiber connection to send the receiver the key needed to decrypt the message; the mechanism sends the key as plane-polarized photons, particles in the fragile superposition explained above. The concept is that any eavesdropper trying to interfere with the connection would observe the photons before they reach their final destination, which makes the superposition “collapse” and alters the information sent through this channel. The observer in this situation would never obtain the key necessary to decrypt the message, so the information remains secure. The receiver, on the other hand, can detect these alterations in the photons, make the valid assumption that the connection has been compromised, and take the appropriate measures to re-establish a secure and private connection. The very fact that quantum mechanics “protects” the superposition from being seen protects the message at the same time. A common analogy is to imagine the key being sent on delicate soap bubbles: if a third-party observer tried to reach those bubbles, they would easily pop, preventing the observer from decrypting the message. At the same time, the receiver on the other end of the channel is expecting a bubble to read; the receiver would immediately know if the bubble was popped along the way.2,3
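
For readers who like protocols spelled out, below is a heavily simplified simulation in the spirit of BB84, the best-known QKD protocol (this reduction is my own illustration, not a commercial implementation). Random bits are encoded in random polarization bases; only measurements made in the matching basis are kept; an intercept-and-resend eavesdropper randomizes roughly a quarter of the kept bits, which a public comparison step exposes.

import random

def send_photons(n):
    # Sender picks random bits and random polarization bases ('+' or 'x').
    bits = [random.randint(0, 1) for _ in range(n)]
    bases = [random.choice("+x") for _ in range(n)]
    return bits, bases

def measure_photons(bits, send_bases, eavesdrop=False):
    if eavesdrop:
        # Eve measures in random bases; a wrong basis randomizes the photon.
        for i in range(len(bits)):
            if random.choice("+x") != send_bases[i]:
                bits[i] = random.randint(0, 1)
    recv_bases = [random.choice("+x") for _ in range(len(bits))]
    # Receiver gets the bit only when bases match; otherwise a random result.
    results = [bits[i] if recv_bases[i] == send_bases[i] else random.randint(0, 1)
               for i in range(len(bits))]
    return recv_bases, results

bits, bases = send_photons(1000)
recv_bases, results = measure_photons(list(bits), bases, eavesdrop=True)
# Keep only positions where sender and receiver used the same basis...
sift = [i for i in range(len(bits)) if recv_bases[i] == bases[i]]
# ...and publicly compare a sample: mismatches betray the eavesdropper.
errors = sum(bits[i] != results[i] for i in sift) / len(sift)
print(f"error rate in sifted key: {errors:.0%}")   # ~25% with Eve, ~0% without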

WHAT ARE THE LIMITATIONS OF QKD?

QKD is not yet a perfect mechanism. In 2010, a team of researchers in Norway and Germany showed for the first time that it is possible to obtain a full key from (in other words, to “hack”) a commercial QKD channel. The discovery led to even more intense research to modify the communication protocols of QKD. Even though QKD has limitations, they are far fewer than those of other encryption systems, and the vulnerabilities can mostly be fixed by modifying the protocol of key generation and communication, not the principle of QKD itself.4 That said, the system’s remaining limitations are mostly financial. Implementing such a secure system is expensive and complicated. In terms of infrastructure, any QKD network needs direct optic fiber communication between every node (every participant) in the network, which presents a difficult challenge over great distances. QKD communication through optic fiber loses power easily, as photons can be absorbed by the material in the cable, which becomes ever more likely at longer distances. If we wanted to build a secure network over a distance of a few hundred miles, we would need a large network of quantum repeaters—devices that replicate a signal to maintain it at the appropriate intensity—which also makes it harder for photons to remain in superposition. To avoid all the consequences of extended networks, it is necessary to invest large sums in devices that ensure the viability of QKD over long distances.
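
The distance problem is easy to quantify. A back-of-the-envelope sketch (the 0.2 dB/km figure is an assumed, typical loss for standard telecom fiber, not a number from this article):

loss_db_per_km = 0.2   # assumption: typical attenuation of single-mode fiber
for km in (50, 100, 300, 500):
    surviving = 10 ** (-loss_db_per_km * km / 10)
    print(f"{km:>4} km: {surviving:.2e} of photons survive")
# At 500 km only ~1 photon in 10^10 arrives -- hence the need for repeaters.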

WHAT IS THE CURRENT STATUS OF QKD? WHAT CAN WE EXPECT FROM IT IN THE FUTURE?

QKD is already in use at several research institutions, technology companies, and telecommunications corporations that require highly secure data transfer around the globe.5 In fact, the first quantum-encrypted network was established in Cambridge, MA: in 2005, the Defense Advanced Research Projects Agency (DARPA) established the first quantum network in a collaborative effort between Harvard University, Boston University, and BBN Technology.6 QKD has seen non-stop development ever since. For example, the city of Geneva used QKD channels to securely count votes in its 2007 elections. Once the network’s hardware and connectivity were ready, the deployment of QKD enabled the encryption of a standard internet connection, taking only 30 minutes to become fully operational and continuing to operate for more than seven weeks. In 2010, the city of Tokyo built a quantum network that covered distances of over 90 miles and even supported special QKD smartphones (though these smartphones were designed only to prove that the technology is applicable even to mobile devices). As of 2015, China is making major advances in the field, currently working on a QKD channel running from Beijing to Shanghai (1,240 mi) to be finished by 2016, with a schedule for the network to extend globally by 2030. China has also confirmed its desire to be the first country to launch a satellite with quantum communication systems, using a design similar to QKD.7

Nowadays the public is willing to invest more in virtual privacy, but the price of QKD systems is currently out of reach for most companies that would like to secure their information. Researchers and developers expect this technology to become more accessible in the near future as the devices involved begin to be produced at an industrial scale; accessible prices will bring secure quantum channels to all users, not only high-tech companies and institutions around the world. The field of quantum cryptography is growing at a time when the security and privacy of information are more important than ever before.8 Highly sensitive information needs better protection as code-cracking techniques develop, and quantum cryptography seems to be key to the future of transmitting it. In the future, the development and applications of quantum cryptography may allow us to rest assured that our sensitive information in banks, hospitals, or government institutions will not be accessible to hackers. The more widespread this technology becomes, the more secure we will all feel. For now, we can only be impressed by nature’s quantum-mechanical weirdness and by the applications we, humans, find for it.

Felipe Flores ‘19 is a freshman in Hollis Hall.

Works Cited

  1. Rich, S.; Gellman, B. NSA Seeks to Build Quantum Computer That Could Crack Most Types of Encryption. The Washington Post, Jan. 2, 2014. https://www.washingtonpost.com (accessed Sep. 29, 2015).
  2. Lance, A.; Leiseboer, J. What is Quantum Key Distribution?; Quintessence Labs, 2014, 4-7.
  3. The Project UQCC. On Security Issues of QKD; Updating Quantum Cryptography and Communications, n.d.; http://www.uqcc.org/images/towards.pdf (accessed Sep. 29, 2015).
  4. Dianati, M.; Alléaume, R. Architecture of the Secoqc Quantum Key Distribution Network; GET-ENST, Network and Computer Science Department; GET-ENST: Paris, Feb. 1, 2008; pp 3-6; http://arxiv.org (accessed Sep. 30, 2015).
  5. Lydersen, L. et al. Nature Photonics. 2010, 4, 686-688.
  6. Qiu, J. Nature. 2014, 508, 441-442.
  7. Elliott, C. et al. Current status of the DARPA Quantum Network; BBN Technologies: Cambridge, 2005, 9-11.
  8. Dzimwasha, T. Quantum revolution: China set to launch ‘hack proof’ quantum communications network. International Business Times, Aug. 30, 2015. http://www.ibtimes.co.uk (accessed Oct. 1, 2015).
  9. Stebila, D. et al. The Case for Quantum Key Distribution. International Association for Cryptologic Research; IACR: Nevada, 2009, 4-6. https://eprint.iacr.org (accessed Oct. 1, 2015).

“Invaders from Earth!”: Exploring the Possibilities of Extraterrestrial Colonization

by J. Rodrigo Leal

We’ve all seen films or heard stories about the “Invaders from Mars”: aliens coming from other galaxies to colonize Earth and take advantage of its bountiful natural resources. But what if the story happened the other way around? Organizations like the National Aeronautics and Space Administration (NASA) and private companies like SpaceX have looked seriously at the idea of space exploration for colonization, particularly when it comes to the possibility of colonizing artificial satellites, the Moon, or even our planetary neighbor, Mars. With worries that the habitability of Earth will be in jeopardy in the not-too-distant future—due to the effects of environmental degradation, lack of resources, and climate change—individuals and institutions are exploring the modern frontiers of space technology that could soon transform humankind into the “Invaders from Earth.”

FROM SCIENCE FICTION TO SCIENCE REALITY

Extraterrestrial colonization seems like a concept straight out of a sci-fi movie—Christopher Nolan’s 2014 science fiction drama Interstellar features protagonist Matthew McConaughey trying to find a new home for mankind as food shortages and crop failures threaten society’s existence on Earth. Surprisingly though, the idea of space colonization has been around for quite some time now. In 1869, The Atlantic Monthly published a short story by Edward Everett Hale entitled “The Brick Moon,” in which an artificial satellite gets accidentally launched into space with people still inside of it, leading to the establishment of the first space colony.1 The idea of a “space colony” eventually took shape in 1971 in the form of the Soviet Salyut program, which produced the first crewed space station in history.

In the mid to late-1970s, not long after the United States put astronauts on the Moon in the famous Apollo missions, scientists and engineers began to seriously consider the idea of extraterrestrial colonies living in artificial space habitats just outside of Earth. One of the first, and indeed one of the most influential, scientific papers on the topic was “The Colonization of Space” by Dr. Gerard K. O’Neill of Princeton University, which was published in the popular magazine Physics Today in 1974. Through careful calculations and consideration of the physics and economics behind the construction of a space habitat, Dr. O’Neill concluded that mankind could “build pleasant, self-sufficient dwelling places in space within the next two decades, solving many of Earth’s problems”.2 As a result of this study, the NASA Ames Research Center began to conduct space settlement studies with the intention of supporting the NASA Ames Space Settlement Contest, a space colony design contest for primary and secondary school students.3 The study begins with the following thought-provoking questions: “We have put men on the Moon. Can people live in space? Can permanent communities be built and inhabited off the Earth?”3 And so commenced NASA’s formal research into the idea of sustaining human civilization outside of our planet.

LOOKING FOR A NEW HOME

Why would it even be necessary to have mankind live in colonies outside of Earth? Perhaps the most compelling reason to explore the possibility of extraterrestrial colonization has to do with our own environment. As the Earth’s population continues to grow—it is expected to balloon to over 11 billion by 2100—and as resources like freshwater and food become more scarce, many thinkers fear our planet will become far less capable of sustaining human life. Famed scientist Stephen Hawking has even delivered a lecture titled “Why We Should Go Into Space,” urging society to continue space exploration to ensure humanity’s survival. As Dr. Hawking states in one of his lectures on space colonization, “if the human race is to continue for another million years, we will have to boldly go where no one has gone before”.4

Climate change is also threatening to make Earth much less hospitable in the future. Average land and sea surface temperatures are increasing, sea levels around the globe are rising, atmospheric carbon dioxide levels have reached all-time highs, and precipitation patterns are shifting.5 In the long term, these changes could lead to detrimental effects on human health, crop production, species extinction, and other major environmental catastrophes. On top of this, the risk of a nuclear war leaving Earth inhospitable, or of a major asteroid leading us to the same fate as the dinosaurs, is another reason why advocates of extraterrestrial colonization suggest we, as a society, should start looking at a Plan(et) B.

MODERN FRONTIERS IN SPACE COLONIZATION

Colonizing other Planets

American company SpaceX (short for Space Exploration Technologies Corporation), led by business mogul Elon Musk, was founded in 2002 with the primary end goal of enabling the human colonization of Mars.6 By designing, manufacturing, and launching space transportation products, SpaceX has become one of the modern leaders in spaceflight. Since its founding, SpaceX has been the first private company to successfully launch and return a spacecraft from low-Earth orbit, as well as the first company to send a spacecraft to the International Space Station. This is changing the way society has traditionally engaged in space exploration, moving the business of space travel from government agencies to private industries. With these developments in space technology occurring within the private sector, the possibility of sending humans to other planets is one step closer to becoming a reality.

Other companies, like the Mars One Corporation, are joining in on the quest to tackle the Mars problem as well. With the goal of establishing a “permanent human settlement on Mars,” Mars One is seeking capital investment, manufacturing partners, intellectual advisors, and other critical partnerships that could make the settlement of Mars a reality within the 21st century.7

The Radical Idea of Terraforming

In the curious world of extraterrestrial colonization, there is another conceptual framework for colonizing planetary bodies that involves a complete transformation of entire planets: terraforming. Terraforming a planet would require significant human engineering and manual alteration of environmental conditions: modifying a planet’s climate, ensuring an atmosphere with sufficient oxygen, altering rugged landscapes, and promoting vegetation and natural biological growth. Scientists have deliberated over the idea of making a planet like Mars habitable to human life through various transformations of the environment on a planetary scale. One possible scenario would be to introduce greenhouse gases into the atmosphere of Mars, leading to steady increases in surface temperatures—an induced greenhouse—that would perhaps make the planet hospitable to plants within a time period of 100 to 10,000 years.8 The next step would involve carbon sequestration and the introduction of plant life. Vegetation on Mars would lead to the photosynthetic conversion of CO2 into O2, eventually rendering the Martian atmosphere “human-breathable.”8 Humans would also have to solve the problems of shielding against ultraviolet radiation (since the Martian atmosphere is so thin), maintaining a warm air temperature, and a myriad of other complications in order to sustain a human-compatible Martian environment.8 While NASA scientist James Pollack and legendary astrophysicist Carl Sagan thought it was within the realm of human capabilities to engineer other planets, they strongly believed the “first step in engineering the solar system is to guarantee the habitability of the Earth.”9 In other words, science must first commit itself to the conservation of this planet before attempting to alter or flee to another.
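
The physics of the induced greenhouse can be sketched with a standard radiative-balance formula (my own illustration; the flux, albedo, and greenhouse increments below are rough assumed values, not figures from the cited studies). Without an atmosphere, a planet’s temperature settles where absorbed sunlight equals emitted heat; terraforming amounts to raising the greenhouse increment on top of that baseline.

SIGMA = 5.67e-8      # Stefan-Boltzmann constant, W m^-2 K^-4
S_MARS = 590         # approximate solar flux at Mars, W/m^2 (assumed)
ALBEDO = 0.25        # approximate fraction of sunlight Mars reflects (assumed)

# Equilibrium: T = (S * (1 - A) / (4 * sigma)) ** (1/4)
t_eff = (S_MARS * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(f"airless equilibrium: {t_eff:.0f} K ({t_eff - 273:.0f} C)")  # ~210 K

# Terraforming proposals amount to raising the greenhouse warming dT:
for dT in (5, 30, 70):   # thin CO2 today ... Earthlike ... engineered (assumed)
    print(f"greenhouse +{dT} K -> surface ~{t_eff + dT - 273:.0f} C")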

CONCLUSION

So what’s holding us back from actually getting to Mars and starting a colony? One big reason is the sheer cost of such an operation. In 1989, President George H.W. Bush introduced the NASA Space Exploration Initiative, which would have culminated in sending humans to Mars (after landing astronauts on the Moon again). This initiative was immediately shut down due to concerns over the costs: over $500 billion.10 Mars Direct, a plan created by the Mars Society to colonize the Red Planet through a “minimalist, live-off-the-land approach to space exploration,” is estimated to come at a cost of about $30 billion.11 While this is substantially less than what a NASA mission to Mars would cost, it is still a significant sum that requires huge investment and careful consideration of the costs and benefits of a project of this scale.

Recent breakthroughs in space exploration technology, both in the private and public sectors, are making the possibility of extraterrestrial colonization much more realistic. Space colonization is no longer merely the plot of a hit sci-fi flick, as entire corporations are investing millions of dollars into the technologies and engineering advancements that will make reaching Mars possible before the end of the century. Pretty soon, instead of looking up at the night sky from Earth, perhaps we will be looking up from Mars and staring at the blue dot that we used to call “home.”

Rodrigo Leal ‘16 is a senior in Kirkland House, concentrating in Earth and Planetary Sciences.

Works Cited

  1. Hale, E. E. Atlantic Monthly 1869, 24.
  2. O’Neill, G.K. Physics Today 1974, 27(9), 32-40.
  3. Johnson, R.; Holbrow, C. Space Settlements: A Design Study. NASA. 1977.
  4. Hawking, S. Why We Should Go into Space; NASA’s 50th Anniversary Lecture Series, 2008.
  5. NASA. Global Climate Change. climate.nasa.gov
  6. SpaceX. spacex.com
  7. Mars One. mars-one.com
  8. McKay, C.P. et al. Nature 1991, 352, 489-496.
  9. Pollack, J.B.; Sagan, C. Arizona Univ., Resources of Near-Earth Space. 1991, 921-950.
  10. NASA. Summary of Space Exploration Initiative. history.nasa.gov
  11. The Mars Society. marssociety.org

Earth’s Missiles, Ready to Go?

by Eesha Khare

In 1991, an unusual phenomenon was observed following the volcanic eruption of Mount Pinatubo in the Philippines. After the eruption—the second largest of the twentieth century—launched nearly 20 million tons of sulfur dioxide into the stratosphere,1 global temperatures temporarily dropped by 1°F. Amid the large-scale destruction, it seemed the Earth was fighting back.

The incident at Pinatubo was a learning opportunity for scientists worldwide. They realized that by manipulating various factors in the Earth’s environment, they could work to fight the climate change slowly taking over the planet. Since the 1940s, scientists have worked on a range of techniques to modify weather conditions: in the context of the Cold War, both the US and the Soviet Union developed methods such as seeding clouds with substances, which allowed scientists to force more rain, create advantageous conditions for battle, and help agriculture in dry regions.2

This was the birth of geoengineering, or climate engineering, in which artificial modifications of the Earth’s climate systems are made in response to changing climate conditions.3 Geoengineering is focused on two main areas: carbon capture and solar radiation management. Since its advent, geoengineering has become a hot, controversial topic, as the risks and rewards of geoengineering solutions are slowly being detailed. On one hand, geoengineering solutions offer a promising approach to artificially reversing recent climate trends, especially in light of the Pinatubo eruption. Yet on the other hand, these same solutions present a number of risks regarding the side effects and controllability of geoengineering. As we move into the future, the need to counteract increasing climate disturbances is becoming even more pressing, making our search for a solution all the more important.

TECHNOLOGY IN BRIEF

As previously stated, geoengineering solutions can be broken into two main areas: carbon capture and solar radiation management. Within each area, the stages of research are broken down into theory and modeling, subscale field-testing, and low-level climatic intervention. Of these, the latter two stages are seldom reached.3

Carbon capture techniques work to reduce the amount of carbon dioxide in the atmosphere, thereby counteracting the carbon dioxide emissions that drive the greenhouse effect. At the simplest level, there is a movement to encourage increased planting of trees, termed afforestation, so that trees consume carbon dioxide through photosynthesis. While initially economical and practical, afforestation would not produce very large reductions in temperature. In a comprehensive 1996 study, researchers found that the average maximum carbon sequestration rate would be between 1.1 and 1.6 gigatons per year, compared to the 9.9 gigatons per year currently released into the atmosphere: a mere 11 to 16%.4 On top of that, annual sequestration rates would change from year to year, as they are highly dependent on annual weather conditions. Furthermore, the location of tree planting is critical, as forests absorb incoming solar radiation; when planted at high latitudes, trees can actually lead to net climate warming.5
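
The arithmetic behind those percentages is simple enough to check (figures as quoted above):

emissions = 9.9                      # gigatons of carbon released per year
for seq in (1.1, 1.6):               # sequestration range from the 1996 study
    print(f"{seq} Gt/yr offsets {seq / emissions:.0%} of emissions")
# -> 11% and 16%: afforestation alone cannot keep pace with emissions.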

Other techniques have focused on re-engineering plant life to capture carbon. These include biochar (charring biomass and burying it so that its carbon content is kept in the soil) and bio-energy carbon capture (growing biomass and then burning it to capture energy and store carbon dioxide). Many treatments have also focused on ocean life, particularly the populations of phytoplankton that are responsible for nearly half of the carbon fixation in the world.6 Ocean fertilization, or adding iron to parts of the ocean to promote phytoplankton growth and subsequent carbon dioxide uptake, and ocean alkalinity enhancement, or adding limestone and other alkaline rocks to enhance carbon capture and counteract increasing ocean acidification, have also come up as possible techniques. However, the limiting factor is the lack of translation from small-scale ocean fertilization to larger-scale consequences.7

Solar radiation management is another broad category that has gained prominence over the past few years. Here, various measures are used to reflect some of the Sun’s energy back into space and thereby prevent the Earth’s temperature from rising. Albedo engineering, the main subset of this category, focuses on enhancing the albedo, or the fraction of short-wave solar energy reflected back into space. Harvard Professor David Keith is a strong advocate of albedo engineering achieved by launching sulfate particles above the ozone layer, mimicking the eruption and effects of Pinatubo. Nearly one million tons of SO2 would have to be delivered every year, using balloons and rockets, to see an effect. While the sulfur does not reduce the amount of carbon dioxide in the atmosphere, it helps offset its effects by reflecting solar radiation away from the Earth. The cost is also quoted as relatively inexpensive, at only $25-50 billion a year.8 Another solar radiation management technique is cloud whitening, where turbines spray a fine mist of salt particulates into low-lying clouds above the oceans, making them whiter and increasing the scattering of light. While this technique would change precipitation patterns, it localizes the intervention to the oceans,9 unlike the sulfate launch, which targets the whole stratosphere.
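
A crude way to gauge the sulfate numbers is to scale from Pinatubo itself. In the sketch below, the 20-million-ton injection and 1°F cooling come from this article; the aerosol residence time is an assumed round number, and the linear scaling is deliberately naive:

PINATUBO_MT, PINATUBO_COOLING_F = 20, 1.0    # figures quoted in this article
RESIDENCE_YR = 1.5                           # assumed stratospheric lifetime
for rate in (1, 5, 10):                      # Mt of SO2 injected per year
    burden = rate * RESIDENCE_YR             # steady-state Mt aloft
    cooling = PINATUBO_COOLING_F * burden / PINATUBO_MT  # naive linear scaling
    print(f"{rate} Mt/yr -> ~{burden:.1f} Mt aloft, ~{cooling:.2f} F cooling")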

Russell Seitz, a former Harvard physicist, wants to trap tiny air bubbles in ocean water, increasing the sunlight the water reflects and thereby whitening it. This process would produce “undershine,” similar to previously proposed ideas of brightening roofs to increase reflectivity and offset the warming effects of CO2 in the air. Again, his solution poses a series of technical challenges but still highlights the core principles of the geoengineering movement.

One of the main challenges in such work is that testing climate engineering solutions at a small scale is very difficult, if not nearly impossible. This raises the question: can we really test geoengineering? Cambridge University Professor Hugh Hunt works to answer that exact question; in 2011, he tried to launch a small-scale experiment dispersing sulfur dioxide over Norfolk as part of the SPICE (Stratospheric Particle Injection for Climate Engineering) consortium in the UK.10 Even this experiment was met with high levels of resistance and was ultimately stopped before it could be carried out. Since then, a number of other interesting projects have been developed, such as an ice-protecting textile draped over a small glacier in the Swiss Alps to reflect light and slow ice melt.11

LEVELS OF RISK

The radical and far-reaching geoengineering solutions presented in this article have raised a number of technical and political issues among researchers and local communities. For example, while sulfate aerosols would last for only a couple of years, concerns have been raised about the side effects of acid rain and ozone damage. Beyond the technical problems, geoengineering has also become extremely controversial in the political sphere, as political leaders are forced to deal with the debate over the use of these techniques. From a policy standpoint, opponents of geoengineering fear that introducing such solutions would disincentivize governments and corporations from reducing anthropogenic carbon dioxide emissions, the root of the environmental problem.11 They argue that “technofixes,” or technical solutions to social problems, are falsely appealing, as they do not provide real solutions to the social and political conditions causing these problems.12 Further, they worry that the low cost and speed of implementation may lead certain countries to adopt solar radiation management without consulting their neighbors, thereby indirectly setting international policy through national measures. It is clear that, with the increasing likelihood that geoengineering solutions will need to be implemented, developing a new national and international policy framework is necessary for further action.

LOOKING FORWARD

It is clear that, with the increasing fluctuations in Earth’s climate, the rapid draining of natural resources, and the growing degree of globalization, solutions to preserve and protect the Earth’s environment are not just desirable but extremely necessary. While geoengineering presents promising solutions, it also raises a number of economic, political, and environmental concerns that will likely prevent its full-scale integration. While many geoengineering solutions will likely continue to be contested and therefore underdeveloped, it is worth noting that a greater focus on specific carbon capture technologies, such as chemically catalyzed carbon-fixation reactions, could enable the same level of climate impact without the side effects. Although challenges abound, the development of carbon capture technology must be the next step in the fight to save this planet.

Eesha Khare ‘17 is a junior in Leverett House, concentrating in Engineering Sciences.

Works Cited

  1. Diggles, M. The Cataclysmic 1991 Eruption of Mount Pinatubo, Philippines. U.S. Geological Survey Fact Sheet 113-97. http://pubs.usgs.gov/fs/1997/fs113-97/
  2. Victor, D.G. et al. The Geoengineering Option. Foreign Affairs. Council on Foreign Relations. March/April 2009. http://fsi.stanford.edu/sites/default/files/The_Geoengineering_Option.pdf
  3. What is Geoengineering? Oxford Geoengineering Programme. 2015. http://www.geoengineering.ox.ac.uk/what-is-geoengineering/what-is-geoengineering/
  4. Land Use, Land-Use Change and Forestry. Intergovernmental Panel on Climate Change. http://www.ipcc.ch/ipccreports/sres/land_use/index.php?idp=151
  5. Arora, V.K.; Montenegro, A. Nat. Geosci. 2011, 4, 514-518.
  6. Chisholm, S. W. et al. Science 2001, 294(5541), 309-310.
  7. Strong, A. et al. Nature 2009, 461, 347-348.
  8. Crutzen, P. J. Climatic Change 2006, 77(3–4), 211–220.
  9. Morton, O. Nature 2009, 458, 1097-1100.
  10. Specter, M. The Climate Fixers. The New Yorker [Online], May 14, 2012. http://www.newyorker.com/magazine/2012/05/14/the-climate-fixers
  11. Ming, T. et al. Renewable and Sustainable Energy Reviews 2014, 31, 792-834.
  12. Hamilton, C. Geoengineering Is Not a Solution to Climate Change. Scientific American [Online]. March 10, 2015. http://www.scientificamerican.com/article/geoengineering-is-not-a-solution-to-climate-change/

Genetically Modified Crops as Invaders and Allies

by Sophie Westbrook

It’s not hard to tell frightening stories about genetically modified crops. These days, there is even a formula to follow: the soulless company creates dangerous variants, silences the protests of right-thinking environmentalists, and sends biodiversity and public health down the drain. This scenario’s proponents tend to be horrified by transgenic organisms. Unfortunately, this can polarize their conversations with any agricultural scientists responsible for “Frankenstein’s monster.” The fiery controversies often marginalize a key idea: genetically modified crops are part of the biosphere. This has more complex implications than the popular “they’ll drive everything else extinct” hypothesis. We cannot understand what transgenic variants will do to—or for—us without examining when, how, and why the organisms around them react to invasion.

Genetically modified (“GM”) crops were a cornerstone of the 1980s push for more efficient agriculture. Initial experiments with pest and disease resistance were quickly followed by qualitative modifications: engineered crops could grow quickly into big, attractive specimens with a predetermined chemical makeup.1 Almost immediately, the technology spawned concerns rooted in food safety, economics, and environmental impact. A number of these issues are still with us. In particular, scientists and citizens alike struggle to understand the implications of resistance evolution, gene flow between hybrid and natural populations, and competitive advantages.2 A nuanced discussion of these topics is critical to developing a modern crop management plan.

RESISTANCE

To GM proponents, pest resistance is one of the technology’s best success stories. Modified crops can repel not only bacterial invaders but also undesirable insects and weeds. This trait improves output by increasing survival rates and facilitating “double-cropping,” planting twice a year and letting growth extend into insect-heavy seasons.3 It also has the potential to reduce non-point source pollution: GM plants need less protection from sprayed pesticides and herbicides. These developments have produced higher yields at lower costs worldwide.

Naturally, the introduction of “hostile” GM organisms into the environment has consequences. Shifting land use patterns can drive “pest” species away from areas used for GM crop cultivation. This is worth keeping in mind as the amount of land used for GM crops continues to grow. In 2010, GM crops covered an estimated 140 million hectares (346 million acres).3 Larger-scale cultivation could destabilize the ecosystems surrounding cultivation sites by removing key producers and primary consumers.

There are more immediate concerns, though. Where anti-insect crops grow, some insect species are developing resistances to their artificially introduced toxins. For instance, corn plants modified with Bt toxins are increasingly unable to repel caterpillars. This effect has been observed globally.2 Adding increasingly poisonous compounds would only prompt more evolution by the insect species. Such ideas also raise questions about health impacts and environmental contamination.

Other GM crops are not themselves toxic, but have been engineered to resist cheap, relatively safe herbicides. Glyphosate-resistant crops in the United States are a notable example. Since their introduction and initial success, they have been viewed as an easy solution to weed problems.4 Some staple crop species, like corn and soybeans, are seldom cultivated without glyphosates. Now, these herbicides are increasingly ineffective (and consequently applied in increasing concentrations). This is evidence that weed species are experiencing strong selective pressures to develop their own herbicide resistance.

The fact that GM crops prompt pest evolution is neither shocking nor devastating. After all, unmodified plots also promote shifting gene frequencies; some organisms are better suited to take advantage of crop growth than others. However, the GM era has seen an unusually violent “arms race” between scientists and pests. Acknowledging that native insects and weeds can always evolve in response to invading species’ biochemistry means investigating alternative, multi-layered management solutions.2

HYBRIDIZATION

Plants share DNA. Gene flow, the transfer of genetic material from one population to another, is one of their fundamental mechanisms for generating biodiversity. When this happens between GM and natural crops, it can lead to transgene introgression: the fixation of an “invasive” modified gene in a native species’ genome.5 Transgene introgression is never sure to happen. At a minimum, it tends to require population compatibility on a sub-genomic scale, strong selective pressure to retain the transferred trait, time, and luck.5 Even “useful,” artificially inserted genes have a relatively low probability of leaping to nearby organisms.
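
Why “strong selective pressure” matters can be made concrete with a textbook selection model (an illustrative sketch of my own, not taken from the cited studies): a transgene starting at low frequency spreads only as fast as the fitness advantage s it confers on its carriers.

def generations_to_fixation(p=0.01, s=0.1, threshold=0.99):
    """Haploid selection: each generation, p' = p(1+s) / (1 + p*s)."""
    gens = 0
    while p < threshold:
        p = p * (1 + s) / (1 + p * s)
        gens += 1
    return gens

for s in (0.5, 0.1, 0.02):
    print(f"advantage s={s}: ~{generations_to_fixation(s=s)} generations")
# Weakly advantageous 'domestication genes' need hundreds of generations,
# and a gene that is costly in the wild (s < 0) never spreads at all.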

There are two key barriers to spreading transgenes. First, many modern crops lack genetically compatible wild types, so they simply cannot spread their modifications. Second, “domestication genes” are frequently unsuccessful in natural populations.2

That said, transgene introgression does occur. One of the most famous cases took place between maize populations in Oaxaca, Mexico.6 There was widespread alarm when remotely situated wild maize was found to contain artificial genes. The finding called into question our ability to safeguard unmodified plant varieties, which would become critical if a commonplace GM species proved unviable or unsafe.

Oaxaca has been analyzed extensively. Unfortunately, data on specific events cannot help us prevent transgene introgression everywhere. The process depends heavily on which species are involved, so one-size-fits-all policies for discouraging gene flow are inadequate.7 A more specialized understanding would help us to manage the possibility of dangerous escapee genes and better answer questions about legal and ethical responsibility.

COMPETITION

When functionally similar invasive and native species do not hybridize, they often compete for the same resources. If the native species is wholly out-classed, it may be driven to extinction. This is the idea behind discussions about GM crops’ threat to biodiversity. Biodiversity is indisputably necessary: it is the foundation of stability and flexibility in an ecosystem. Allowing a single variant to overcome its peers would leave any community more vulnerable to stresses like disease, climate change, and natural disaster.

Do GM crops have an advantage over natural ones in the wild? They tend to incorporate some traits, such as fast growth and temperature tolerance, that promote greater survivorship. However, as mentioned above, they are primarily adapted to living in cultivated areas. This means that they lack characteristics like seed dormancy and phenotypic plasticity (the ability to take different forms) that would make them more effective invasive weeds.8

Looking forward, extreme crop modifications mean GM variants may entirely lose metabolic capabilities they would need to survive in nature.7 This suggests that they should become increasingly unlikely to succeed after accidental dispersal. Nonetheless, hard-to-predict factors such as mutations within modified crops could always lead to the loss of native populations. Once a species—natural or transgenic—becomes invasive, it is nearly impossible to recapture. Unfortunately, GM plot isolation is a difficult proposition, especially given the crops’ prevalence throughout the developing world.

CONCLUSION

Wild organisms have a surprisingly diverse menu of responses to transgenic invaders. They may evolve in response to the crops’ new traits, hybridize to access the traits themselves, or exploit the invaders’ weaknesses to outcompete them. The strategy adopted depends primarily on the native species’ ecological position, but also on the characteristics of the invader. To develop a comprehensive understanding of the ways GM crops affect the communities they enter, we need to analyze these relationships in all their variety. This examination may lay the groundwork for a safer, more sustainable food supply in the future.

Sophie Westbrook ‘19 is a freshman in Hurlbut Hall.

Works Cited

  1. Nap, J. P. et al. Plant J. 2003, 33, 1-18.
  2. Goldstein, D. A. J. Med. Toxicol. 2014, 10, 194-201.
  3. Barrows, G. et al. J. Econ. Perspect. 2014, 28, 99-120.
  4. Beckie, H. J.; Hall, L. M. Crop Prot. 2014, 66, 40-45.
  5. Stewart, C. N., Jr. et al. Nat. Rev. Genet. 2003, 4, 806-817.
  6. Quist, J.; Chapela, I.H. Nature 2001, 414, 541-543.
  7. Messeguer, J. Plant Cell Tiss. Org. 2003, 73, 201-212.
  8. Conner, A. J. et al. Plant J. 2003, 33, 19-46.

To the Rescue: Insects in Sustainable Agriculture

by Ada Bielawski

In 1798, Thomas Malthus published his Essay on the Principle of Population and described the limits of human population growth: the population will continue to grow exponentially while the Earth’s resources are able to sustain the increasing food production needed to feed this population. He concluded that, as the population approaches 8 billion, the poorest will suffer the most from limited resources.1 Currently, over 14% of the world’s population is underfed, and the growing population is expected to reach 9 billion less than 50 years from now.2 Thus, there is a dire need to increase crop yields to feed the growing population. This must be done while also mitigating the effects of agricultural production on the Earth’s limited resources. Therefore, instead of relying on destructive tools—such as deforestation to create more farmland—increasing crop yields through sustainable agriculture is the key to a better future.2

We can increase crop yields and decrease environmental stress through Integrated Pest Management (IPM), an ecological approach to pest defense that aims to minimize the use of chemical pesticides and maximize environmental and consumer safety. Farmers practicing IPM use their knowledge of pests, and of how pests interact with their habitat, to eradicate them most efficiently.3 IPM is more sustainable than chemical pesticides but can be less effective, so farmers are often reluctant to implement even those IPM measures that demonstrably work.

ANT IPM

Oecophylla smaragdina—commonly known as the weaver ant—has been used as a crop protector against pests since 304 AD, when Chinese markets sold ants to protect citrus fruit.5,6 Today, after decades of chemical pesticide use, ant IPM has reemerged as a sustainable option for crop defense.4,5,6 Ants are a great tool for many reasons: (1) they are a natural, accessible resource, responsible for 33% of insect biomass on Earth; (2) they can quickly and efficiently build populations at a nest site due to behavioral habits such as path-making, worker recruitment, and pheromone attraction; and (3) they encompass a range of roles and behaviors that make them capable of attacking a variety of pests at many stages of their life cycle.4,5 With these characteristics, ants form a mutualistic relationship with their host plant: the plant attracts their food source and provides a home, while the ants attack the pests that would cause the plant harm.7

Ants do the work of chemical pesticides with increased safety and damage control.4,6,8 Seventeen studies have evaluated the success of ant pest management across nine crops in a total of eight countries. Of these studies, 94.1% showed a decrease in pest numbers and in the damage done by pests. One of these studies, conducted on cashew trees in Tanzania, showed that damage to trees with ants was 81% lower than on control trees, and nut damage was 82% lower. Furthermore, 92.3% of reports studying crop yields favored ant IPM over chemical pesticides. Of the studies that directly compared ant pest control to chemical-based pest control, 77.8% favored ants.4

Moreover, ants as pest control can cost less than their chemical counterparts. In Northern Australia, researchers compared the cost and crop yields of cashew plants treated with chemical pesticides to those under ant IPM treatment. The weaver ants showed a 57% reduction in cost over a period of 4 years and a crop yield 1.5 times that of the chemical treatment. This resulted in savings of over $1500/hectare/year, which led to a 71% increase in revenue for farmers.4,8 These results suggest that ants have the potential to be not only a more sustainable tool for agriculture, but also a more cost-effective method of pest management.
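
The cost arithmetic in miniature (the baseline spend below is a hypothetical figure chosen only to make the study’s percentages concrete; the 57% reduction is from the study):

chem_cost = 2600.0                       # hypothetical chemical spend, $/ha/yr
ant_cost = chem_cost * (1 - 0.57)        # ants cut pest-control costs by 57%
print(f"savings: ${chem_cost - ant_cost:,.0f}/ha/yr")   # ~$1,500, as reported
# Revenue then rises on both ends: 1.5x the yield at 43% of the control cost.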

Ant IPM has demonstrated promise for the future of sustainable agriculture. Future research should: (1) focus on identifying all the crops that could benefit from ant IPM; and (2) study more of the 13,000 ant species, whose unique attributes could target a wider variety of crops.4,6

MOTH IPM

Plutella xylostella, the diamondback moth, is a pest that targets cruciferous crops such as cabbage.9,10,14 The larvae feed on the green sprouts and reduce not only crop yield, but also crop quality.10 To protect against these moths, scientists created genetically modified (GM) crops producing toxins from the bacterium Bacillus thuringiensis (Bt), which are lethal to the pests they target9,11,12 but safe for other insects, animals, and humans to consume.13 This was an effective method for controlling diamondback moth populations until the pests developed resistance to the Bt toxins.9

Scientists from Oxitec set out to solve this perpetual resistance problem by inserting a transgene into the diamondback moth genome.9,14 The transgene has three main components: (1) a tetracycline-repressible dominant female-specific lethal transgene: larvae are fed tetracycline while they mature, and then when released, female GM moths die due to insufficient levels of tetracycline in the wild, whereas males survive (this process also occurs with all the female progeny of the GM moths); (2) a susceptibility gene: this gene makes GM moths susceptible to Bt; and (3) a fluorescent tag: this allows scientists in the field to distinguish which moths have the transgene.9

In the Oxitec study, GM moths were released into a caged, wild-type moth population in high numbers every week. Researchers recorded the number of eggs collected, the number of dead females, and the proportion of the transgenic progeny to wild-type progeny. Wild-type females mated with the GM male moths, and all of the second-generation females died before they reached reproductive age. Since the number of females decreased in subsequent generations, the population became 100% transgenic in ~8 weeks, and then went extinct ~10 weeks from the initial release of GM moths. Thus, GM moths have the potential to not only reverse Bt resistance in their species, but also eliminate the use of Bt crops.9
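
A toy generational model captures the logic of the female-lethal release (my own sketch; the parameters are illustrative, and transgene-carrying sons are ignored, which actually makes the modeled decline slower than the real one):

def simulate(females=1000.0, release_ratio=5.0, daughters_per_female=1.2):
    """release_ratio: GM males released per wild male, held constant.
    A female that mates with a GM male leaves no surviving daughters."""
    gen = 0
    while females >= 1 and gen < 30:
        gen += 1
        p_wild_mate = 1 / (1 + release_ratio)   # chance her mate is wild-type
        females = females * p_wild_mate * daughters_per_female
        print(f"generation {gen}: {females:7.1f} fertile females")

simulate()   # the female count collapses within a handful of generations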

IPM offers a path to more sustainable food production as the Earth’s population continues to grow against the bounds of available resources. Weaver ants have proved to be efficient and cost-effective crop defenders, while new research applying GM technology to diamondback moths has shown major promise in reducing targeted pest populations and reversing their resistance to Bt crops. These two examples clearly illustrate the potential of IPM for pest management in the near future.

Ada Bielawski ‘18 is a sophomore in Mather House, concentrating in Integrative Biology.

Works Cited

  1. Malthus, T.R. An Essay on the Principle of Population; J. Johnson: London, 1798, 6-11.
  2. Godfray, H.C.J. et al. Food Security: The Challenge of Feeding 9 Billion People. Science 2010, 327, 812-818.
  3. U.S. Environmental Protection Agency. Integrated Pest Management (IPM) Principles. http://www.epa.gov/pesticides/factsheets/ipm.htm (accessed Oct. 4, 2015).
  4. Offenberg, J. Review: Ants as tools in sustainable agriculture. J. Appl. Ecol. 2015, 52, 1197-1205.
  5. Van Mele, P. A historical review of research on the weaver ant Oecophylla in biological control. Agric. For. Entomol. 2008, 10, 13-22.
  6. Pennisi, E. Tiny ant takes on pesticide industry. Science [Online], Aug. 30, 2015. http://news.sciencemag.org/plants-animals/2015/08/tiny-ant-takes-pesticide-industry (accessed Oct. 9, 2015).
  7. Offenberg, J. et al. Observations on the Ecology of Weaver Ants (Oecophylla smaragdina Fabricius) in a Thai Mangrove Ecosystem and Their Effect on Herbivory of Rhizophora mucronata Lam. Biotropica. 2004, 3, 344-351.
  8. Peng, R.K., et al. Implementing ant technology in commercial cashew plantations. Australian Government Rural Industries Research and Development Corporation. 2004, 1-72.
  9. Harvey-Samuel, T. et al. Pest control and resistance management through release of insects carrying a male-selecting transgene. BMC Biol. 2015, 13, 49.
  10. The Asian Vegetable Research and Development Center. Diamondback Moth Management. 1986, x. http://pdf.usaid.gov/pdf_docs/pnaav729.pdf (accessed Oct. 7, 2015).
  11. University of California San Diego. What is Bt. http://www.bt.ucsd.edu/what_is_bt.html (accessed Oct. 4, 2015).
  12. University of California San Diego. How does Bt Work. http://www.bt.ucsd.edu/how_bt_work.html (accessed Oct. 4, 2015).
  13. University of California San Diego. Bt Safety. http://www.bt.ucsd.edu/bt_safety.html (accessed Oct. 4, 2015).
  14. Oxitec. Press Release- Oxitec announces breakthrough in GM insect technology for agricultural pest control. http://www.oxitec.com/press-release-oxitec-announces-breakthrough-in-gm-insect-technology-for-agricultural-pest-control/ (accessed Oct. 4, 2015).

Genetically Engineered Viruses Combat Invasive Cancer

by Caroline Wechsler

58-year-old Georgia resident Nancy Justice was diagnosed with glioblastoma, a tumor of the brain, back in 2012. Though her doctors immediately combated the cancer with surgery, radiation, and chemotherapy, the tumor relapsed in late 2014, stronger than ever. According to her doctors, Justice had only seven months to live because the tumor would double in size every two weeks.1
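
The arithmetic behind that prognosis is stark. A quick sketch using only the two-week doubling time quoted above:

doublings = 7 * 30 // 14                       # ~15 two-week periods in 7 months
print(f"fold increase: {2 ** doublings:,}x")   # ~32,768x the starting size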

Invasive cancers are now among the leading causes of death in America. The American Cancer Society reported that in 2015 there were over 1.65 million new cases of cancer, with just over 589,000 deaths per year.2 So it is unsurprising that over $130 billion is now spent on researching new treatments for cancer.3 Particularly frustrating, though, are tumors that resurface even after being “cured,” like that of Nancy Justice. But a new, cutting-edge treatment is giving people hope: using viruses, something normally thought to be harmful, as a cancer combatant.

Nancy Justice was the 17th patient enrolled in a revolutionary study at Duke University Medical Center using a genetically modified version of the poliovirus to combat glioblastoma. After several months of treatment, her tumor has—seemingly miraculously—begun to shrink away. The project has been a work in progress for almost three decades. It is the brainchild of Matthias Gromeier, a molecular biologist who has been working on viral treatments for cancer for the last 25 years. He recalls the difficulty of proposing an idea that originally seemed unthinkable: “Most people just thought it was too dangerous,” he said in an interview with CBS.1 For the past 15 years, Gromeier has been at Duke, working with Dr. Henry Friedman, deputy director of the Tisch Brain Tumor Center and the other half of the duo that pushed this project through to completion. Though he too was originally skeptical, Dr. Friedman now calls the polio treatment “the most promising therapy I’ve seen in my career, period.”1

MAKING AN INVADER INTO A DEFENDER

The treatment takes advantage of viruses’ infection mechanisms. Normally, a virus infects a cell by piercing its outer membrane and injecting viral genetic material into the cell.4 That genetic material then hijacks the host cell’s replicating machinery, forcing it to produce viral genomes and capsids and ultimately to assemble the parts into many copies of the virus, which eventually burst out of the cell in a process called lysis, killing the cell.5 If researchers can take control of this process, they can attack and lyse particular cells, such as those in a tumor. The poliovirus is particularly effective for this purpose because it specifically attaches to surface receptors on cells that make up most types of solid tumors.1

In order to achieve control of the viral mechanism, the researchers used a technique called recombination: taking desired parts of viral genomes and fusing them together to create a new viral genetic sequence. DNA recombination is used in many different fields. To create recombinant genetic material, the desired sequence is isolated using restriction enzymes and then inserted into the desired vector (in this case the polio virus) with DNA ligase, an enzyme that links strands of DNA together.6 The team at Duke removed from the polio virus a key genetic sequence that makes it deadly and patched the gap with a piece of genetic material from a relatively harmless cold virus, creating the recombinant form called PVS-RIPO.1 Not only is PVS-RIPO effective at killing cancerous cells, but, because of this recombination, it is also incapable of reproducing in normal cells, which makes it safe to use in humans.7
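
The cut-and-paste logic of restriction digestion and ligation can be sketched in a few lines of code. The toy Python model below is purely illustrative: the sequences are made-up placeholders (not the PVS-RIPO sequence), and it ignores the sticky ends a real restriction enzyme leaves behind.

```python
# A toy illustration of the restriction-digest-and-ligate logic described
# above. The 'genome' strings are made-up placeholders, and real digests
# leave sticky ends that this simple string model ignores.

SITE = "GAATTC"  # a real restriction site (EcoRI), used here only as a marker

def digest(sequence: str, site: str) -> list[str]:
    """Cut a sequence at every occurrence of a restriction site."""
    return sequence.split(site)

def ligate(fragments: list[str]) -> str:
    """Join fragments end to end, as DNA ligase seals the backbone."""
    return "".join(fragments)

# Placeholder 'genome': lowercase marks the virulence element to be removed.
polio_genome = "ATGCCG" + SITE + "tttttt" + SITE + "GGCATT"
cold_virus_insert = "aaaaaa"  # placeholder sequence from a harmless virus

left, _virulence, right = digest(polio_genome, SITE)
recombinant = ligate([left, cold_virus_insert, right])
print(recombinant)  # ATGCCGaaaaaaGGCATT
```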

STARTING THE BODY’S DEFENSES

Once the virus has been engineered to attack the cancer, the problem becomes getting it to the site of the tumor. Delivery must be precisely tailored because the virus can still cause swelling in the area where it is inserted. To do this, the Chief of Neurosurgery at Duke, Dr. John Sampson, uses 3-D MRI images to plot the course of the catheter that releases the genetically engineered virus. “It’s just like a sniper’s bullet,” he said in an interview.1 Once the virus has been inserted, the cancer killing begins. As Gromeier explains, “All human cancers develop a shield or shroud of protective measures that make them invisible to the immune system. By infecting the tumor, we are actually removing this protective shield.”1 Once the immune system is alerted to the polio infection, it shifts into gear and attacks the cancerous cells, effectively neutralizing the tumor.

The polio virus clinical trial began at Duke in 2011 and is still under way. However, the trial is only in its first phase, a human safety trial. In order to produce a marketable, certified treatment, the therapy must also pass Phase II and III testing. The Food and Drug Administration is understandably cautious in granting approval for medical treatments; the Duke group had to submit seven years’ worth of safety studies in monkey models to receive approval for the first human safety trial.7

LOOKING TO THE FUTURE

The success stories are impressive, especially for a Phase I trial, which normally exists simply to test dosage. Take the example of Stephanie Lipscomb, a 24-year-old nursing student who was diagnosed with glioblastoma in 2011. Though surgeons removed 98% of her tumor, the cancer quickly returned only a year later. With no other options, she enrolled in the Duke trial as its first patient. It was a risk, to be sure; Dr. Friedman himself admits, “We had no idea what it would do in the long haul.”1 Though Lipscomb initially suffered some swelling, over 21 months her tumor shrank until it was completely gone. Despite such success stories, the results have been mixed. One patient who received a particularly potent dose of the virus experienced extreme inflammation and swelling in her brain. Of 22 patients, 11 died, most of them after receiving extremely high doses of the virus. The 11 who survived, however, have been in remission for over six months, unprecedented for recurrent glioblastoma.

In light of the Duke trial’s success, researchers are exploring the use of recombinant viruses against other forms of invasive cancer. Concurrently, the Gene and Virus Therapy Program at the Mayo Clinic has made several breakthroughs in clinical treatments, including a version of the measles virus to combat recurrent ovarian cancer and an adenovirus encoding a gene to combat prostate cancer.8 Moreover, the Mayo Clinic’s Department of Oncology has been using a modified version of the measles virus to combat glioblastoma, bringing the project from animal models to human Phase I testing in just under three years.9 A group in the UK completed a study in May demonstrating that a genetically modified form of the herpes virus increases survival rates in patients with melanoma, a type of skin cancer.10 And researchers back at Duke are looking into using PVS-RIPO itself against other types of cancer, including prostate, lung, and colon cancers, and into how treatment should differ for children, since all trials thus far have treated adults.11 Research must also extend beyond the viral vector itself: the Mayo Clinic is investigating how to chaperone vectors to tumor sites inside protective cell carriers like macrophages or stem cells.8

A CURE FOR CANCER?

It is clear that this research marks the start of a new and exciting age of cancer treatment. However, many caution against hailing it as the “cure for cancer.” A pressing concern is that high doses of the recombinant virus can cause massive swelling.12 This is especially problematic in cancers like glioblastoma, where the swelling occurs inside the confined space of the skull. Dr. Friedman, for his part, emphasized that the point of this initial trial was to find the right dose, not to determine the virus’s effectiveness.1

Though these are legitimate concerns, they are the hallmark worries of any cutting-edge treatment. And it is almost certain that Stephanie Lipscomb was not dwelling on them when she found out that her cancer was completely gone. “I wanted to cry with excitement,” she said in an interview with CBS.1 Invasive cancer remains a difficult and dangerous disease, but with innovative new research approaches like those involving viruses, we are certainly on the way to finding a cure.

Caroline Wechsler ‘19 is a freshman in Weld Hall.

Works Cited

  1. Pelley, S. CBS News. Polio to treat cancer? Scott Pelley reports on Duke clinical trial. http://www.cbsnews.com/news/polio-cancer-treatment-duke-university-60-minutes-scott-pelley/ (accessed Oct. 5, 2015).
  2. American Cancer Society. Cancer Facts and Figures 2015; 2015; pp 4-8.
  3. National Institutes of Health. Cancer costs projected to reach at least $158 billion in 2020. http://www.nih.gov/news-events/news-releases/cancercosts-projected-reach-least-158-billion-2020 (accessed Oct. 5, 2015).
  4. National Science Foundation. How do viruses attack cells? https://www.nsf.gov/news/overviews/biology/bio_q01.jsp (accessed Oct. 5, 2015).
  5. Wessner, D. R. The Origins of Viruses. Nature Education 2010, 3(9), 37.
  6. Rensselaer Polytechnic Institute. The Basics of Recombinant DNA. http://www.rpi.edu/dept/chem-eng/Biotech-Environ/Projects00/rdna/rdna.html (accessed Oct. 5, 2015).
  7. Caba, J. Medical Daily. Once-Deadly Polio Virus Could End Up Curing Brain Cancer. http://www.medicaldaily.com/polio-virus-may-curebrain-cancer-thanks-genetic-re-engineering-327620 (accessed Oct. 5, 2015).
  8. Mayo Clinic. Gene and Virus Therapy Program. http://www.mayo.edu/research/centers-programs/cancer-research/ (accessed Oct. 5, 2015).
  9. Mayo Clinic. Neurosciences Update. [Online] 2012, 9(1), 3 (accessed Oct. 5, 2015).
  10. Knapton, S. The Telegraph. Genetically engineered virus ‘cures’ patients of skin cancer. http://www.telegraph.co.uk/news/science/science-news/11631626/v.html (accessed October 5, 2015).
  11. Preston Robert Tisch Brain Tumor Center at Duke. Targeting Cancer with Genetically Engineered Poliovirus (PVS-RIPO). http://www.cancer.duke.edu/btc/modules/research3/index.php?id=41 (accessed Oct. 5, 2015).
  12. Kroll, D. Forbes. What ‘60 Minutes’ Got Right And Wrong On Duke’s Polio Virus Trial Against Glioblastoma. http://www.forbes.com/sites/davidkroll/2015/03/30/60-minutes-covers-dukes-polio-virus-clinical-trial-against-glioblastoma/ (accessed Oct. 5, 2015).

Parasitic Cancer: Paradox and Perspective

by Audrey Effenberger

Cancer. It’s a big subject, with a dizzying array of forms and manifestations that can affect all parts of the body. As populations around the world age, cancer’s prevalence will continue to grow, and it will become more and more important to understand and treat it. One lesser-known variation is parasitic cancer. While its name may seem to combine two totally different ailments, understanding parasitic cancer can actually shed light on the concept of cancer altogether.

AN OVERVIEW

So what is cancer in the first place? On the most basic level, it’s abnormal or uncontrolled cell growth. The cell, the most fundamental unit of life, is a fantastically complicated and regulated machine of DNA, RNA, protein, and all kinds of molecules in between. When any part of the system fails, the entire system can be compromised. The mechanism by which this failure produces cancer is known as oncogenesis. Mutations in proto-oncogenes, genes that normally activate some part of the cell cycle, can transform them into oncogenes, resulting in abnormal proliferation of the cell. On the other hand, damage to tumor suppressor genes means that a repressing mechanism no longer works, and the cell will fail to stop dividing or die at the appropriate times. Either type of mutation can lead to unwanted cell growth, known as a tumor. Most cells within a tumor are clones, having originated from a single rapidly dividing cell, so the tumor can be called a clonal population.
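
The gas-pedal-versus-brake logic of these two mutation classes can be summarized in a short sketch. The toy Python model below is a conceptual illustration under those simplifying assumptions, not a biological simulation.

```python
# A conceptual toy model of the two mutation classes described above: an
# oncogene is a gas pedal stuck on "go," while a lost tumor suppressor is a
# broken brake. This illustrates the control logic only.

from dataclasses import dataclass

@dataclass
class Cell:
    oncogenic: bool = False          # activating mutation in a proto-oncogene
    suppressor_intact: bool = True   # functional tumor suppressor "brake"
    dna_damaged: bool = False

    def divides(self, external_growth_signal: bool) -> bool:
        """A cell divides on a 'go' signal, unless a working brake stops it."""
        go = external_growth_signal or self.oncogenic  # oncogene: always "go"
        brake = self.suppressor_intact and self.dna_damaged
        return go and not brake

# A healthy damaged cell halts; either class of mutation removes a safeguard.
print(Cell(dna_damaged=True).divides(external_growth_signal=True))    # False
print(Cell(oncogenic=True).divides(external_growth_signal=False))     # True
print(Cell(suppressor_intact=False, dna_damaged=True).divides(True))  # True
```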

In fact, because mutation is a random process, the likelihood of a cell incurring a critical mutation in an important gene is quite low. Additionally, cells have various enzymes to proofread and repair DNA, and the immune system works to recognize markers on the cell membrane and destroy misbehaving cells. Some tumors, moreover, are relatively benign. However, no system of defense mechanisms is perfect. As people age or encounter carcinogens in the environment, the rate of damage can increase. Damaged cells that go unchecked can give rise to malignant and invasive tumors that spread throughout the body by traveling through the bloodstream, a process known as metastasis.

THE SPREAD OF CANCER

Though cancer can spread throughout one’s body in this manner, it’s thought of as a largely non-contagious disease. The only way in which cancer is “transmitted,” typically, is by transmission of a pathogen that increases the likelihood of developing cancer. In this way, cancer can only be spread indirectly.

Some bacteria damage tissues and increase the risk of carcinogenesis, or cancer formation. For example, the H. pylori bacterium is known to cause stomach ulcers and inflammation that increase the relative risk of gastric cancer by 65%;1 a baseline lifetime risk of 1%, for instance, would rise to about 1.65%. Viruses are another culprit; they damage the cell’s DNA by inserting their own, disrupting key sequences or triggering inflammatory responses that lead to rapid cell division. Known oncoviruses include Epstein-Barr virus, hepatitis B and C, and the human herpesviruses.2,3

Parasites, confusingly enough, can also cause cancer, albeit not the parasitic kind; for example, the Asian liver fluke is known to cause a fatal bile duct cancer.4 Again, however, all of these transmissible causes of cancer only increase risk; they can at most heighten the probability that the organism’s cells themselves will become cancerous.

PARASITIC CANCER: THE TRULY TRANSMISSIBLE

Parasitic cancer is defined by its cause: the transfer of cancer cells between organisms. This is comparable to metastasis, in which cancer cells migrate throughout the body; however, the new tumor originates from a tumor cell of another organism and is genetically distinct from its host. As defined earlier, cancers are often clonal populations arising from a single abnormal cell. In the case of parasitic cancer, the new tumor is populated by clones of another organism’s cancer; therefore, parasitic cancer is also known as clonally transmissible cancer.

While parasitic cancers are very rare, examples can be found in a few animal species. These include devil facial tumor disease (DFTD),5 canine transmissible venereal tumor (CTVT),6 a Syrian hamster connective tissue sarcoma induced in the lab,7 and a form of soft-shell clam leukemia.8 Some cases of parasitic cancer have been documented in humans as well: while extremely rare, cancer cells can be transferred during organ transplants or pregnancies.9

A NEW PERSPECTIVE

Given the unique attributes of parasitic cancer, researchers can reframe their conceptual understanding of cancer and cell organization as a whole. All cells of an organism share the same basic genetic information, but each cell may be slightly unique, just as human individuals have different eye colors or heights. We can extend the metaphor to bridge the macro- and microscopic: every organism can be considered its own population of cells cooperating to sustain life, with most cells dividing at a regular pace, correcting errors in DNA replication, and preserving the overall homogeneity of the organism’s genome.

However, when a cell mutates and becomes cancerous, it changes notably. Given the known mechanisms of oncogenesis, similar types of mutations in specific genes give rise to specific cancers; cells that are able to reproduce after suffering genetic damage have a different, stable genome of their own, and molecular analysis confirms this.10 A cancer of a given tissue can thus be defined as its own species.11 This species reproduces, competes, and evolves. Tumors thus act as parasites on the rest of the population, sapping resources and occasionally causing direct harm. Benign tumors are analogous to “successful” parasites, coexisting indefinitely with their hosts, while malignant tumors eventually lead to the death of the organism.

The conceptual similarities and differences between parasitic cancers and parasitic organisms lead to important lines of questioning. Consider the vastly different effects of parasitic cancers on the animal species known to harbor them: DFTD has devastated the Tasmanian devil population and could drive the species to extinction within three decades of the disease’s emergence, while CTVT has coexisted with dogs for possibly thousands of years. Researchers speculate that this extreme divergence in outcomes reflects differences in the afflicted species’ genomes.5 Because the Tasmanian devil population lacks the genetic diversity that canines possess, their immune systems are less likely to recognize foreign cancer cells.

Furthermore, this immunological insight can be applied to human cases of parasitic cancer. The genetic similarity between mother and child, or between transplant donor and recipient, is either naturally high or engineered to be so; while this similarity is necessary to prevent immune rejection, it also gives parasitic cancers more leeway to invade the body. Awareness of this vulnerability can improve medical treatment in the future.

With the rapid advances in science and technology of the past century, physicians have gained a panoply of weapons to combat cancer. Modern cancer treatment includes everything from surgery to radiation and chemotherapy. However, these measures are imperfect. A paradigm shift spurred by the study of parasitic cancer may guide the medical research community’s efforts to cure cancer conclusively. By treating all cancers as distinct organisms parasitizing the body, physicians can approach treatment differently, combining immunological and genetic therapy with techniques similar to those used against invaders of other species. In this way, parasitic cancer is paradoxical in not only name but also action, and thus brings hope for the future of cancer research.

Audrey Effenberger ‘19 is a freshman in Greenough Hall.

Works Cited

  1. Peter, S.; Beglinger, C. Digestion. 2007, 75, 25-35.
  2. Moore, P.S.; Chang, Y. Nat. Rev. Cancer. 2010, 10, 878-889.
  3. Liao, J.B. Yale J Biol Med. 2006, 79(3-4), 115-122.
  4. Young, N.D. et al. Nat. Comms. 2014, 5.
  5. Dybas, C. Tasmanian devils: Will rare infectious cancer lead to their extinction? National Science Foundation [Online], Nov. 13, 2013. http://nsf.gov/discoveries/disc_summ.jsp?cntn_id=129508 (accessed Oct. 4, 2015).
  6. Ganguly, B. et al. Vet. and Comp. Oncol. 2013, 11.
  7. Murchison, E.P. Oncogene. 2009, 27, 19-30.
  8. Yong, E. Selfish Shellfish Cells Cause Contagious Clam Cancer. Natl. Geog. [Online], Apr. 9, 2015. http://phenomena.nationalgeographic.com/2015/04/09/selfishshellfish-cells-cause-contagiousclam-cancer/ (accessed Oct. 4, 2015).
  9. Welsh, J.S. Oncologist. 2011, 16(1), 1-4.
  10. Murgia, C. et al. Cell 2006, 126(3), 477-487.
  11. Duesberg, P. et al. Cell Cycle. 2011, 10(13), 2100-2114.

Microchimerism – The More, The Merrier

by Una Choi

Microchimerism, the presence of genetically distinct cell populations within a single organism, throws a wrench into the biological concept of sex. Although we traditionally learn that biological females possess two X chromosomes and males possess an X and a Y, microchimerism can place cells carrying Y chromosomes inside female bodies. Microchimerism can result from a variety of events, ranging from organ transplants to in-utero transfer between twins. Recent research has focused primarily on the two most common forms: maternal microchimerism (MMc), particularly in relation to cord blood transplantation, and fetal microchimerism (FMc).

BI-DIRECTIONAL EXCHANGE DURING PREGNANCY

The placenta connects the mother and fetus, facilitating a bi-directional exchange of cells. Low levels of fetal Y-chromosome DNA are found in maternal cellular and cell-free compartments starting in the seventh week of pregnancy, peaking at childbirth.1 Although cell-free fetal DNA rapidly disappears from the mother’s body after labor, fetal cells can persist in the mother’s body for decades, and vice versa.2 Indeed, there are around two to six male fetal nucleated cells per milliliter of maternal blood3 (given roughly five liters of blood, on the order of 10,000 to 30,000 fetal cells in circulation), and 63% of autopsied female brains exhibited male microchimerism.4

The cells crossing the placenta vary in their physical features and in how long they persist in the host body. Highly differentiated cells like nucleated placental trophoblasts, which help transfer nutrients across the placenta, do not remain long in maternal circulation. In contrast, pregnancy-associated progenitor cells (PAPCs) can persist for decades after birth. These microchimeric progenitor cells, like stem cells, can differentiate into specific cell types; PAPCs can later become hematopoietic (blood-forming) cells and epithelial cells.5 PAPCs have also been found in murine brains: a 2010 study found that PAPCs remained in the maternal brain for up to seven months and developed mature neuronal markers, suggesting their active integration into the maternal brain.6

BENEFITS OF MMC IN CORD BLOOD TRANSPLANTATION

Maternal microchimerism, the presence of maternal cells in the fetus, contributes to the consistent success of cord blood transplants. Cord blood is extracted from the umbilical cord and placenta. Because cord blood is rich in hematopoietic stem cells, it is often used as a treatment for leukemia. Transplants, however, are not without risk; the introduction of foreign material may cause graft-versus-host disease (GVHD), which occurs when the donor’s immune cells target the patient’s healthy tissue.

Cord blood inherently contains both fetal cells and maternal cells, thanks to the bi-directional exchange described above, and this built-in MMc can diminish the risks accompanying cord blood transplants.7 The fetus benefits from exposure to the human leukocyte antigens (HLAs) present on the maternal cells. HLA genes encode regulatory proteins of the human immune system; the HLA system presents antigens to T-lymphocytes, which in turn trigger B-cells to produce antibodies.8

Unlike in bone marrow and peripheral blood transplants, HLA matching between cord blood donor and recipient does not have to be exact. Indeed, it is often imprecise due to the large variety of HLA polymorphisms;9 parents are often HLA heterozygous because HLA loci are extremely variable. While foreign maternal cells could in principle aggravate GVHD, cord blood recipients actually exhibit low rates of relapse. Indeed, maternal immunity against inherited paternal antigens (IPAs) may produce a graft-versus-leukemia effect,10 in which donated cytotoxic T lymphocytes attack malignant cells.

Exposing a fetus to foreign antigens can result in lifelong tolerance, and fetal tolerance is strongest against maternal antigens.7 In HLA-mismatched cord blood transplants, patients displayed more rapid engraftment (the growth of new blood-forming cells, a marker of transplant recovery), diminished GVHD, and decreased odds of leukemia relapse. Indeed, in one study the relapse rate was 2.5 times lower in allogeneic-marrow recipients with graft-versus-host disease than in recipients without the disease.11

IMMUNE SURVEILLANCE AND FMc

The benefits of microchimerism are not limited to the recipients of maternal cells. The mothers themselves often benefit from increased immune surveillance; indeed, fetal microchimeric T cells can eradicate malignant host T cells.

Microchimeric cells can also provide protection against various forms of cancer. During pregnancy, mothers can develop T- and B-cell immunity against the fetus’s IPAs, and this anti-IPA immunity persists for decades after birth, reducing the risk of leukemia relapse. PAPCs can differentiate into hematopoietic cells, which are predicted to play a role in destroying malignant tumors.12 In a study of tissue specimens from women who had borne sons, the male cells found in 90% of hematopoietic tissues, such as lymph nodes and spleen, expressed CD45, a leukocyte antigen.13

PAPCs are also associated with decreased risk of breast cancer: circulating fetal cells are found in only 11-26% of mothers with breast cancer, while a study of 272 healthy women found male microchimerism in 70% of participants, suggesting a role for microchimerism in maintaining a healthy steady state.14,15 The depletion of PAPCs in breast cancer patients may result from their migration out of the bloodstream and into the tumor.16

AUTOIMMUNE CONDITIONS

FMc and MMc are common in healthy individuals and are associated with the repression of autoimmune conditions. Susceptibility to rheumatoid arthritis (RA) is largely genetic, stemming mostly from genes in the HLA region. Some of the molecules encoded in the HLA region contain the amino acid sequence DERAA, which is associated with protection against RA. In a study of 179 families, the odds of a DERAA-positive mother producing at least one DERAA-negative child were significantly lower than the odds of a DERAA-positive father doing so. This suggests a protective benefit of non-inherited maternal HLA-DR antigens in decreasing susceptibility to RA.17

ORGAN REGENERATION

Fetal stem cells feature longer telomeres and greater osteogenic potential than their adult counterparts, and they express embryonic pluripotency markers like Oct4.16 These fetal cells have also been connected to the alleviation of myocardial disease. In a 2011 study, pregnant mice with myocardial injuries exhibited a transfer of fetal cells from the bloodstream to the site of injury, where the fetal cells differentiated into various types of cardiac cells.18 Forty percent of the PAPCs extracted from the heart expressed Cdx2, a caudal homeobox gene expressed during development. Cdx2 distinguishes cells that will become the trophectoderm, the outer layer of the blastocyst that gives rise to the placenta, from cells that will become the inner cell mass. Because Cdx2 is absent in the mature trophoblast, the extracted cells likely originated in the placenta.19

A recent study used fluorescence-activated cell sorting (FACS) to analyze the in vitro behavior of fetal cells tagged with enhanced green fluorescent protein (eGFP+). These fetal cells exhibited clonality and differentiated into smooth muscle cells and endothelial cells, with promising implications for organ regeneration.

PAPCs selectively travel to damaged organs, further emphasizing their role in healing. In the study above, eGFP+ cells were present at low levels in all tissues until about 4.5 days after injury; only 1.1% of cells were eGFP+ before injury, versus 6.3% after delivery, a nearly six-fold increase. These findings have significant implications for maternal health: PAPCs may be at least partly responsible for the spontaneous recovery from pregnancy-associated heart failure exhibited by about 50% of affected women.18

FUTURE IMPLICATIONS

Microchimerism has important implications for cord blood transplants. If the maternal and fetal HLA types are known, recipients can be matched with donors whose IPAs are included in the recipient’s HLA type, promoting graft acceptance.
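
That matching rule is simple enough to state in code. The Python sketch below is a minimal illustration, assuming simplified one-field typings and hypothetical allele names; real HLA matching involves several loci and high-resolution typing.

```python
# A minimal sketch of the matching rule described above, using simplified
# set-based HLA typings. All allele names are hypothetical placeholders.

def inherited_paternal_antigens(donor_hla: set[str],
                                donor_mother_hla: set[str]) -> set[str]:
    """A donor's IPAs are the HLA antigens not shared with the donor's mother."""
    return donor_hla - donor_mother_hla

def favorable_match(donor_ipa: set[str], recipient_hla: set[str]) -> bool:
    """Apply the rule above: prefer donors whose IPAs all appear in the
    recipient's own HLA type, to promote graft acceptance."""
    return donor_ipa <= recipient_hla

# Hypothetical typings for a cord blood donor, the donor's mother, and a patient.
donor = {"A1", "A2", "B7", "B8"}
donor_mother = {"A1", "A3", "B7", "B44"}
recipient = {"A2", "A24", "B8", "B35"}

ipa = inherited_paternal_antigens(donor, donor_mother)  # {'A2', 'B8'}
print(favorable_match(ipa, recipient))                  # True
```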

Although cord blood is typically preserved for transplants, the placenta is often discarded after childbirth. If PAPCs can be traced back to the placenta, the placenta may provide a valuable source of undifferentiated stem cells capable of organ regeneration. Although placental tissue and amniotic fluid have less differentiation potential than fetal tissue from pregnancy terminations, they are less controversial sources of stem cells.16

Because FMc plays an active role in the mother’s body for decades, it can impart significant benefits. The selective migration of PAPCs to damaged organs suggests the existence of a specific targeting mechanism, and the ability of extracted PAPCs to differentiate in vitro into working cardiovascular structures has exciting implications for organ synthesis.

Una Choi ‘19 is a freshman in Greenough Hall.

Works Cited

  1. Ariga, H. et al. Transfusion 2001, 41, 1524-1531.
  2. Martone, R. Scientists Discover Children’s Cells Living in Mothers’ Brains. Scientific American, Dec. 4, 2012. http://www.scientificamerican.com/article/scientists-discover-childrens-cells-living-in-mothers-brain/ (accessed Sept. 25, 2015).
  3. Krabchi, K. et al. Clinical Genetics 2001, 60, 145-150.
  4. Chan, W. et al. PLoS ONE [Online] 2012, 7, 1-7. http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0045592 (accessed Sept. 25, 2015).
  5. Gilliam, A. J. Investigative Dermatology [Online] 2006, 126, 239-241. http://www.nature.com/jid/journal/v126/n2/full/5700061a.html (accessed Sept. 26, 2015).
  6. Zeng, X.X. et al. Stem Cells and Dev. 2010, 19, 1819-1830.
  7. van Besien, K. et al. Chimerism [Online] 2013, 4, 109-110. http://pubmedcentralcanada.ca/pmcc/articles/PMC3782544/ (accessed Sept. 28, 2015).
  8. Burlingham, W. et al. PNAS. 2012, 109, 2190-2191.
  9. Leukemia & Lymphoma Society. https://www.lls.org/sites/default/files/file_assets/cordbloodstemcelltransplantation.pdf (accessed Sept. 28, 2015).
  10. van Rood, J. et al. PNAS. 2011, 109, 2509-2514.
  11. Weiden, P. et al. New England Journal of Medicine 1979, 300, 1068-1073.
  12. Fugazzola, L. et al. Nat. Rev. Endocrinol. 2011, 7, 89-97.
  13. Khosrotehrani, K, M.D. et al. JAMA 2004, 292, 75-80.
  14. Gadi, V. et al. Cancer Res. 2007, 67, 9035-9038.
  15. Kamper-Jørgensen, M. et al. Eur. J. Cancer 2012, 48, 2227-2235.
  16. Lee, E. et al. MHR 2010, 16, 869-878.
  17. Feitsma, A. et al. PNAS 2007, 104, 19966-19970.
  18. Kara, R. et al. AHA. [Online] 2011, 3-15.
  19. Pritchard, S. et al. Fetal Cell Microchimerism in the Maternal Heart. http://circres.ahajournals.org/content/110/1/3.full (accessed Sept. 25, 2015).

Fetal Microchimerism

by Grace Chen

In Greek mythology, a chimera was a grotesque monster formed of a conglomeration of different animal parts….

With the head of a goat, body of a lion, and tail of a snake, the chimera was a fearsome but reassuringly fictional concept. Today, however, scientists know that real-life chimeras do indeed exist. The term is now used to describe a number of biological phenomena that produce organisms containing cells from multiple different individuals.1 Far from being monsters, artificial chimeras include many of the GMO crops feeding the world’s growing population, as well as genetically engineered bacteria that produce insulin and other key drugs in marketable quantities.2 Research in human developmental biology is now showing, however, that we ourselves may be naturally occurring chimeras.

The phenomenon of fetal microchimerism, seen in placental mammals, is the presence of living cells from one individual in the body of another. The placenta generally serves as a bridge between the fetus and the mother for the exchange of nutrients and wastes. But nutrients and wastes are not all that cross this bridge: fetal and maternal cells can pass between the two organisms intact. While maternal cells do end up in the fetus, significantly more fetal cells are transferred to the mother.3 The result is that the mother carries a small number of foreign cells belonging to her fetus within her body, hence the name “microchimerism.” While these non-maternal cells are few in comparison to the total number of maternal cells, evidence suggests that the transplanted cells can remain long after the end of gestation. In fact, fetal-derived cells have been found in the mother’s body up to 27 years after pregnancy.4

From an evolutionary standpoint, selective pressures favor traits that increase the reproductive fitness of the individual; because mother and fetus share so much genetic material, these invasive cells ought to share the maternal cells’ interest in promoting mutual welfare. Yet pregnancy in placental mammals can also be seen as a tug-of-war between fetal and maternal interests, as finite biological resources must be allocated between the two organisms. Effects of microchimeric cells that favor the fetus’s well-being might therefore be detrimental to the mother’s welfare, or to the welfare of future offspring.5 This creates an interesting paradox for evolutionary biologists: what is the nature of the interaction between cells that ought to be both cooperative and conflicting?

Answering such questions will require further research on this poorly understood phenomenon. One straightforward way that scientists detect and quantify non-maternal cells in the mother’s body is by searching it for Y chromosomes, found only in male cells. Presumably, any Y chromosome would indicate the presence of intact cells from a prior male fetus, as female sex chromosomes are exclusively X chromosomes.6 Though feto-maternal microchimerism is the most common source of these invader cells, several hypotheses have been proposed to explain why Y-chromosome microchimerism is also found in about a fifth of women who have never carried a male fetus. Alternative explanations include spontaneously aborted male zygotes and chimeric cells from an older male sibling, acquired in utero via their shared mother.7

A common technique for hunting down the location of foreign cells is fluorescence in situ hybridization (FISH), well known to most genetics students. After a tissue sample is isolated and prepared, nucleic acid probes specific to genes on the Y chromosome are added.8 These probes are attached to a fluorescent dye, providing a visual cue of where they bind and thus where Y chromosomes are found.9 Increasingly refined techniques now allow more specific searches; for instance, fluorescent probes can be used to identify microchimeric cells carrying specific allele differences from maternal cells.
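
The logic behind probe design is simple base-pairing. The toy Python sketch below illustrates it with a made-up marker sequence; real FISH probes target actual Y-chromosome genes and are read out as fluorescent spots under a microscope, not by string search.

```python
# A toy model of how a FISH probe finds its target: a probe hybridizes where
# the sample DNA contains its reverse complement. All sequences below are
# made-up placeholders.

COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def probe_binds(sample_dna: str, probe: str) -> bool:
    """True if the probe's reverse complement occurs in the sample strand."""
    return reverse_complement(probe) in sample_dna

y_marker = "ATTACAGG"                          # placeholder Y-chromosome sequence
probe = reverse_complement(y_marker)           # probe designed against the marker
maternal_tissue = "GGC" + y_marker + "TACCTT"  # sample containing fetal male DNA

print(probe_binds(maternal_tissue, probe))  # True -> a fluorescent signal
```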

WHERE DO THESE TINY INVADERS GO?

Invading fetal cells are commonly found in the bloodstream but can travel much further than that. Fetal microchimerism has been recorded in the liver, bone marrow, thyroid, heart, and more. A recent study by the Fred Hutchinson Cancer Research Center found that more than 60 percent of autopsied female brains contained copies of DNA from another individual.10 There is also intriguing evidence that these undifferentiated fetal cells can serve as stem cells within the mother’s body: a study in mice suggested that fetal cells can develop into mature neurons within the mother’s brain.11 These invader cells, it seems, can make themselves fully at home in the host body. The locations where fetal cells tend to settle may yet reveal more about the evolutionary pressures shaping this phenomenon.

The presence of microchimeric fetal cells in the mother’s body is thus known to be widespread and long-lasting, but their effects remain ambiguous: conflicting studies have linked fetal cells to both improved and worsened maternal health outcomes for different diseases in different scenarios. A richer understanding of these effects on maternal health can shed light not only on key issues of women’s health, but also, more broadly, on the response of the immune system to invaders.

Grace Chen ‘19 is a freshman in Holworthy Hall.

Works Cited

  1. Bowen, R.A. Mosaicism and Chimerism. Colorado State University Hypertexts for Biomedical Sciences: General and Medical Genetics [Online], August 5, 1988, p 2. http://arbl.cvmbs.colostate.edu/hbooks/genetics/medgen/chromo/mosaics.html (accessed Oct. 1, 2015).
  2. Simpson, T. GMO: Part 2 – The Promise, the Fear, Labeling, Frankenfoods. Your Doctor’s Orders [Online], May 15, 2013, p 1-3. http://www.yourdoctorsorders.com/2013/05/gmo-part-2the-promise-fear-frankenfoods/ (accessed Oct. 1, 2015).
  3. Boddy, A. M. et al. Bioessays 2015, 37, 1106–1118.
  4. Bianchi D.W. et al. Proc Natl Acad Sci U S A  1996, 93, 705–708.
  5. Adams, K. M. et al. Journal of American Med. Assn. 2004, 291, 1127-1131.
  6. Kean, S. You and Me. Psychology Today [Online], March 11, 2013, p 1-4. https://www.psychologytoday.com/articles/201303/the-you-in-me (accessed Oct. 1, 2015).
  7. O’Connor, C. Nature Education. 2008, 1, 171.
  8. Chan, W.F.N. et al. PLoS ONE. 2012, 7.
  9. Zeng, X.X. et al. Stem Cells and Development. 2010, 19, 1819-1830.
  10. Centers for Disease Control and Prevention. http://www.cdc.gov/parasites/naegleria/ (accessed Oct. 4, 2015).

Kinesics: What Are You Really Saying?

by Priya Amin

What do shoulder shrugs or crossed arms really communicate? Kinesics, the systematic study of body motion as communication,1 is a relatively new subsection of the study of language. More specifically, kinesics describes the importance of body motion behavior in social communication; it is the study of communication through “silent” language. Facial expressions, posture, hand motions, and gestures are some examples of the body behaviors included in kinesics. In this field of study, the body is viewed as an instrument of adaptive behavior, the collection of conceptual, social, and practical skills that all people learn in order to function in their daily lives.2 The collective analysis of these adaptations creates a social personality, a temporo-spatial system dependent on both time and space. All behaviors evinced by any such system are components of the system, and they act both dependently and independently of each other.3 Often, the social personality communicates vital information that is never verbally acknowledged; in effect, body behavior can entirely change the meaning of a sentence. For example, crossed arms convey a defensive tone, and varying degrees of eyebrow lift can communicate incredulity.

Charles Darwin is often credited as the father of modern communicative studies of body motion thanks to his The Expression of the Emotions in Man and Animals (1872).1 In this early yet comprehensive study of facial expressions and the effects of emotion on body language, he recorded some of the first written research on kinesics. Since then, the field has developed considerably through new assertions and research findings. In 1921, researcher Wilhelm Wundt proposed a universal language of gestural communication, and in 1952 the anthropologist Ray L. Birdwhistell published his first major work, Introduction to Kinesics. Terminology such as kine (the smallest identifiable unit in a stream of body motion) and kineme (a group of movements that may be used interchangeably without affecting social meaning)4 is now used to formalize research findings.

Most recently, research has been directed toward the similarities and differences in the body language of dance across cultures, where the movements of the eyes, eyelids, eyebrows, lips, hands, feet, and waist across a set rhythm are used to convey a feeling or a message. Since subconscious changes in the skin contribute to body behavior as well, more specific research will soon be conducted on matters such as the oiliness, wetness, and dryness of the skin; the tension and laxity of the skin and musculature; variable and shifting vascularity in the skin’s surface; and shifts in the underlying fatty tissue. Although much remains undiscovered, researchers are steadily working to reveal the secrets of body language.

So, what do you think you’re really saying?

Priya Amin ‘19 is a freshman in Wigglesworth Hall.

Works Cited

  1. “Kinesics.” International Encyclopedia of the Social Sciences. 1968. Encyclopedia. com. http://www.encyclopedia.com (accessed Oct. 6, 2015).
  2. Birdwhistell, R. (1952). Introduction to Kinesics: An Annotation System for Analysis of Body Motion and Gesture. Louisville, KY: University of Louisville; (1970). Kinesics and Context: Essays on Body Motion Communication. Philadelphia, PA: Penn Press.
  3. Padula, A. “Kinesics.” Encyclopedia of Communication Theory. Thousand Oaks, CA: SAGE, 2009. 582-84. SAGE Reference Online. Web. (accessed Jun. 29, 2012).
  4. “Diagnostic Adaptive Behavior Scale.” American Association of Intellectual and Developmental Disabilities. AAIDD, n.d. Web. (accessed Oct. 17, 2015).