Featured

Fall 2020/Spring 2021: Seeing Light

After many months of hard work, we are thrilled to present our Fall 2020/Spring 2021 issue, Seeing Light! Containing fascinating pieces about quantum realms, plant communication, and outer space, this issue reminds us that there are exciting scientific discoveries to be found in the most unexpected places. Download at the link!

Featured

Fall/Spring 2019: Growth

WE’RE BACK!

Amidst the COVID-19 global pandemic, the Harvard Science Review team has been hard at work putting together this new issue. While this particular volume does not cover the pandemic, we are incredibly excited to share with you an issue focused on regrowth, regeneration, and rebirth. The full issue can be accessed through our Archive page.

Check out our Submissions page for more information on writing for us this upcoming school year!

Featured

Fall 2018: How Far is Too Far?

Our Fall 2018 issue is now available online! Articles are posted individually as blog posts (the articles are linked below), a PDF version is currently displayed on our Archives page, and print issues will be available around Harvard’s campus starting today. We hope you enjoy browsing through this issue as much as we enjoyed putting it together! A big thank you to our dedicated staff and sponsors.

Featured

A Fantastic Voyage Through a Nano-Sized Universe

The 1960s gave rise to many tumultuous events, among them the Vietnam War, the civil rights movement, and the Kennedy assassination. A more obscure and far smaller event, however, was the making of the sci-fi movie Fantastic Voyage. Born from the political upheaval of the Cold War, the movie was a fantastic popularization of futuristic medicine: the idea of a doctor and his team, shrunk to the size of a microbe, flying through a patient’s bloodstream to save his life (1).

While undeniably controversial and extremely far-fetched, this idea of a miniaturized doctor swimming through the body has intrigued scientists for quite some time. A few years before Fantastic Voyage hit theaters, the great physicist Richard Feynman had already begun to play with the idea of a tiny doctor. In his 1959 lecture at Caltech, he presented the idea of “swallowing the doctor” – essentially building a tiny, ingestible robot to perform surgery on hard-to-reach parts of the body (2).

Although he approached the subject from a quantum physics standpoint, Feynman provided a launching point for nanotechnology, and in particular for nanomedicine. The idea of a tiny, functionalized material was captivating. What if you could build a microscopic machine that achieves a function on its own? What if human control and direction were no longer needed? What if this machine, no longer needing any sort of stimulation, could think and act on its own, regardless of the roadblocks thrown in its way?

Although we are far from answering those questions, nanotechnology has brought us closer than ever to building a truly functionalized nanomaterial. Nanoparticles – minute particles measured in billionths of a meter – are the bridge from bulk materials to atomic and molecular structures. Ranging from 1 to 100 nanometers in size (a human hair is about 100,000 nanometers in diameter), nanoparticles are currently being scrutinized, manipulated, and prodded in hundreds of labs across the country. Found in artifacts as far back as 4th-century Rome, they range from naturally derived materials such as polymers and carbons to chemically synthesized compounds such as metal oxides (3).

The hydrophilic and/or hydrophobic nature of nanoparticles has enabled great advances in disease therapeutics. Nanoparticles are better suited to drug delivery than free-drug approaches, as shown by enhanced tumor accumulation, reduced systemic exposure, and fewer side effects. Nanoparticles also have a long circulation half-life, making it easier to reach a particular tumor or diseased site (4).

One interesting avenue for nanoparticle engineering is coating particles in different cell membranes. A promising technique is to encase the nanoparticle in a red blood cell (RBC) membrane so that the immune system does not recognize it as a foreign substance to attack (5). These nanoparticles can be engineered to release their cargo when triggered by contact with a specific substance, or they can circulate freely throughout the body unrecognized by bodily cells. Because the coatings are biocompatible and biodegradable, they are expected to have little toxic effect on the body. Moreover, the particles can be synthesized to degrade after delivering a drug molecule, or after a certain time point (5).

Personalized medicine can also be applied to RBC membrane-coated nanoparticles. Blood drawn from a patient can be used to coat nanoparticles, creating a patient-specific nanoparticle coating. Furthermore, transfusions from blood banks can be used to coat nanoparticles with specific blood types, allowing for universal coating materials through blood matching and O-type blood (6). On average, there are around one billion RBCs in 1 mL of human blood, which provides an abundance of coating material for a plethora of nanoparticles. Moreover, this patient-specific technique would maximize immune tolerance and minimize immune system interference (6).
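To make the claim of abundance concrete, a rough back-of-envelope comparison of membrane surface areas is sketched below in Python. The numbers used (a typical RBC surface area of roughly 140 μm² and a 100 nm particle diameter) are illustrative, order-of-magnitude values assumed for this estimate rather than figures taken from the cited studies.

```python
import math

# Illustrative order-of-magnitude estimate (assumed values, not from the cited studies):
# roughly how many 100 nm nanoparticles could one red blood cell membrane wrap?
RBC_SURFACE_AREA_UM2 = 140.0      # typical surface area of a human RBC, ~140 square micrometers
NANOPARTICLE_DIAMETER_UM = 0.1    # 100 nm expressed in micrometers
RBCS_PER_ML_BLOOD = 1e9           # "around one billion RBCs in 1 mL" (see text)

# Surface area of a single spherical nanoparticle: 4 * pi * r^2
particle_area_um2 = 4 * math.pi * (NANOPARTICLE_DIAMETER_UM / 2) ** 2

particles_per_rbc = RBC_SURFACE_AREA_UM2 / particle_area_um2
particles_per_ml = particles_per_rbc * RBCS_PER_ML_BLOOD

print(f"One RBC membrane could, in principle, coat ~{particles_per_rbc:,.0f} particles")
print(f"One mL of blood could therefore coat on the order of {particles_per_ml:.1e} particles")
```

Even allowing for substantial losses during membrane extraction and coating, the estimate (thousands of particles per cell, trillions per milliliter) suggests that coating material itself is unlikely to be the limiting resource.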

Red blood cells were the first type of cell to be sacrificed as clothing for their newly disguised nano-counterparts. In the few years since then, the membranes of platelets, white blood cells, and even cancer and bacterial cells have become cloaking mechanisms for nanoparticles. Platelets, beyond their role in maintaining hemostasis, were found to attract bacteria; coating nanoparticles with platelet membranes was shown to increase antibiotic delivery and eliminate bacteria in a controlled, localized fashion (5). White blood cells, in particular, showcase site-specific targeting of tumors, providing a fail-safe route for cancer drug delivery as opposed to circulation-dependent chemotherapy. This targeted cancer treatment has been seen again in cancer cell membrane-coated nanoparticles. In a somewhat suicidal move, cancer cells have been shown to “self-target” – accumulating with other cancer cells at the main tumor site. Because the nanoparticles are disguised as cancer cells, this “self-targeting” acts as a GPS locator, delivering the stealthily coated nanoparticles to the tumor site for targeted cancer drug delivery (7).

Nanoparticles are also small and stealthy enough to penetrate and manipulate cancer cells. A common approach to treating cancer is to use nanoparticles loaded with antibodies, drugs, vaccines, or metallic particles. These nanoparticles can carry multiple drugs for combination therapy, which is known to suppress cancer chemoresistance (cancer cells resisting a single chemotherapeutic drug) (8). The enhanced permeability and retention (EPR) effect allows for enhanced accumulation in tumors while decreasing accumulation in healthy organs. Tumor vasculature is riddled with pores ranging from 200 nm to 2 μm (a human hair is about 100 μm, or 100,000 nm, across), a size well suited for nanoparticles to pass through into the tumor. The lack of a lymphatic drainage system prevents the nanoparticles from travelling back out of the tumor, which further aids accumulation. This effect is not present in healthy organs, because a protective lining of tightly packed endothelial cells prevents migration of the nanoparticles into healthy tissue (9).

An older technique of nanoparticle cancer treatment is to expose metal-coated nanoparticles to magnetic fields, infrared light, or radio waves, causing them to radiate heat. Under such exposure, the nanoparticles’ magnetic orientation oscillates wildly, absorbing electromagnetic energy and converting it into thermal energy (10). The idea of using heat to treat cancer was first exploited in ancient Egypt, Greece, and Rome, where heat was used to treat breast cancer masses. In Greece, Hippocrates, known as the Father of Medicine, reported successfully treating breast cancer using heat. In fact, he coined the phrase, “What medicines do not heal, the lance will; what the lance does not heal, fire will” (11).

Heat can kill cancer cells and, with a bit of manipulation, can also awaken the body’s immune system. First, an inactive nanoparticle is coated with a metal such as gold, iron, or silver. Once in the body, the nanoparticle can be activated by a light or energy source. The metallic coating then gives off heat, which can kill a portion of the malignant cells. By manipulating this heat, the immune system can be alerted to the presence of the cancer, prompting it to identify and kill the cancer cells not affected by the heat (12). This approach reverses, and essentially demolishes, the cancer cell’s primary offensive/defensive strategy, in which it tricks the immune system into believing everything is normal while the cancer cells multiply, take over, and destroy the body (12).

Nanoparticles open up a wide range of possibilities in which treatment is achieved with minimal harm to the healthy parts of the body. They show great promise as a novel, biologically relevant, and biocompatible drug delivery platform – and as functionalized vehicles for disease treatment.

Amid this great advance in technology, revisiting some earlier questions casts these innovations in a more cautious light. Currently, the FDA has mainly approved liposomal or polymeric nanoparticle treatments – essentially limiting the scope of clinically applied nanomedicine to passive, biologically friendly compounds that act less as unique treatments than as additives to pre-existing drug molecules (13). The advantages of these treatments are characteristic of nanoparticle technologies in general: longer circulation, increased drug delivery for site-specific targeting, lower systemic toxicity, and controlled or localized delivery (14). Beyond treatment, however, metallic and inorganic nanoparticles have been utilized as imaging or ultrasound agents and have been applied in thermal cancer therapy (15).

Gene therapy combined with nanotherapeutics, however, is an interesting cross-disciplinary area that has only just begun to enter the clinical stage. Nanoparticles provide a highly compatible carrier for genes; varying the surface charge of the nanoparticle allows for extremely stable interactions between the gene and the vesicle, while also increasing circulation time and protecting the internal contents from degradation (16). Although extremely promising, this technology faces issues beyond the original problems of gene therapy itself (such as immune-mediated inflammation or lack of specificity) (17). By combining nanotechnology with gene therapy, issues from both technologies have arisen to complicate future application. Nanoparticles tend to aggregate and are occasionally absorbed by nonspecific tissues. This lack of targeting ability creates a risk of delivering the gene to the wrong cells, followed by cellular uptake. Furthermore, immune system recognition of the delivery system remains a major problem, leading to adverse side effects and toxic byproducts (17).

Quite recently, an attempt to functionalize nanocarriers borrowed from a centuries-old paper art – origami. Scientists built a nanorobot out of DNA, exploiting DNA’s inherent ability to self-assemble. The DNA folded into a nanotube, like a Pixie Stix straw, entrapping thrombin (a clotting agent) within its folds. Upon exposure to the tumor marker nucleolin, the structure unfolds, allowing subsequent drug release and tumor eradication. As demonstrated in small pigs, release of the clotting agent occurred specifically at the tumor sites, with little to no effect on other organ systems. Furthermore, liver uptake of the nanotube did not show extreme toxicity, and the nanotubes were successfully cleared or degraded (18).

Functionalized nanocarrier delivery systems have yet to be approved by the FDA and have yet to become a staple of modern disease treatment. Although recent advances have been both successful and exciting in their scope and impact, considering the course of this research going forward raises questions about how small we can go. Without knowledge of a nanoparticle’s whereabouts, it is difficult to fully track the pathway of treatment through the body. While metallic particles can be traced through photoacoustic and magnetic resonance/computed tomography techniques, non-metallic particles cannot be visualized non-invasively unless coupled with a fluorescent tracker (19–20). And despite the miraculously tiny size of these nanoparticles, immune system recognition can lead to greater problems than the original disease (21).

Fantastic Voyage, though highly fictionalized, highlights the unexpected problems a small vehicle can face while racing through the body. The crew of doctors takes a detour through the heart (inducing a brief cardiac arrest to avoid turbulence), passes through the lungs to regain oxygen, and travels through the inner ear (while pleading for outside silence to minimize turbulence) (1). Although the resulting six-minute operation is successful and the crew escapes to normality shortly after, the movie offers a glimpse of just how wondrous and complex the human body really is. It is often unknown how the body will react to something so tiny and foreign disrupting its activity. If that tiny, foreign object in turn reacts to the body’s actions, a chain of negative side effects can occur – much like a line of dominoes falling over one another.

Although the future of functionalized nanotechnology is as bright as the metallic nanoparticles heralding it, this juxtaposition between size, intelligence, and feasibility is something that must be carefully considered. Tricking Mother Nature often has unintended consequences – no matter how tiny the vehicle.

Maggie Chen is a first year in Wigglesworth

Works Cited

[1] IMDB. https://www.imdb.com/title/tt0060397/ (accessed Sept. 26, 2018).

[2] Feynman, R. There’s Plenty of Room at the Bottom. December 29, 1959.

https://www.zyvex.com/nanotech/feynman.html (accessed Sept. 26, 2018).

[3] Krukmeyer M.G. et al. J Nanomed Nanotechnol. 2015, 6, 1-7.

[4] Blanco E. et al. Nature Biotech. 2015, 33, 941-951.

[5] Fang F. et al. Adv. Mater. 2018, 30, 1-34.

[6] Hu C. M. J. et al. PNAS. 2011, 108, 10980-10985.

[7] Li R. et al. APSB. 2018, 8, 14-22.

[8] Swain S. et al. Curr. Drug Deliv. 2016, 13, 1290-1302.

[9] Wang M. et al. Pharma. Research. 2010, 62, 90-99.

[10] Jain S. et al. Br. J. Radiol. 2012, 85, 101-113.

[11] DeNardo G. L. et al. Cancer Biotherm. Radiopharm. 2008, 23, 671-679.

[12] Fekrazad R. et al. J. Lasers Med. Sci. 2016, 7, 62-75.

[13] Bobo D. et al. Pharma. Research. 2016, 33, 2373-2387.

[14] Anselmo A. et al. Bioeng. & Trans. Med. 2016, 1, 10-29.

[15] Parvanian S. et al. Sen. And Bio Sens. Research. 2017, 13, 81-87.

[16] Rosenblum D. et al. Nat. Comm. 2018, 9, 1-12.

[17] Chen J. et al. Mol. Therapy. 2016, 3, 1-8.

[18] Li S. et al. Nat. Biotech. 2018, 36, 258-264.

[19] Meir R. et al. Nanomed. 2014, 9, 2059-69.

[20] Lee J. M. et al. Mol. Therapy. 2012, 20, 1434-42.

[21] Shi J. et al. Nat. Rev. Cancer. 2017, 17, 20-37.

Image credit: Wikimedia Commons

Featured

AI: Building Trust or Threatening a Cataclysm?

An Introduction

Artificial Intelligence (AI) is still a nascent technology with astronomical promise. Nevertheless, AI seems to have kicked up a storm of debate, with people passionately arguing for and against it. The fear of losing one’s identity, way of life, or livelihood seems to be a primary motivator of resistance (1,2). From tractors to cell phones, all forms of innovation have suffered this rite of passage. Will AI have to submit to the same resistance?

While Stephen Hawking and Elon Musk warn about AI being an existential threat to humans, Mark Zuckerberg is optimistic about its phenomenal potential (3,4). Scrutinizing issues that lie within these extremes will allow us to explore the full range of the benefits and harms of AI. Could the past help us in this journey? What did the founding fathers of AI have in mind? A peek into the pages of AI history would help unearth their perspectives.

Tracing the Roots

According to Pamela McCorduck, an author who has written about the history and philosophical significance of AI, AI began with “an ancient wish to forge the gods” (5). In modern times, the birth of AI can be attributed to the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) in 1956, where attendees broke ground on the assumption that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it” (6). AI has been on an upswing since the late 1990s thanks to three technological breakthroughs: affordable parallel computing, big data, and deep learning.

At the DSRPAI, the agenda was automating the functions of the brain (6). The attendees’ goal was to address issues like teaching computers the intricacies of language, rules of reasoning, conjecture, and the manipulation of words. Among the problems they intended to analyze was simulating a network of neurons in order to model the brain and teach this model human skills like self-improvement, handling randomness, hunches, intuition, and making educated guesses. They also wanted to work on removing computational barriers by improving the speed and capacity of the available computers (6).

The Good & The Bad

It is naive to assume that the applications of a new invention will only be virtuous. In fact, the flavors have always been good, bad and ugly. It takes a collaborative effort between scientists, businesses and the general public to translate an invention into successful products that serve the needs of society.

“Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don’t know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it,” said professor Stephen Hawking (4). This is a perfect example of the necessity of going through the good, bad and ugly.

Benefits to Society

Food, healthcare and personal safety. These form the core of the basic necessities for all human beings. AI’s success in these areas allows it to live up to its promises.

The Food and Agriculture Organization of the United Nations (FAO) projects that food production will need to increase by 50% by 2050 (7). A growing population, shrinking arable land, and an aging, dwindling agricultural labor force all drive this need. AI has helped farmers increase agricultural productivity.

Transfer learning is being used to teach AI to recognize diseases and pest damage in crops. AI systems can identify crop diseases with up to 98% accuracy, process images captured by cameras and sensors, identify weeds and spray the right herbicides, guide robots that pick fruit, sample soil, analyze data to detect nutrient deficiencies, and undertake remedial measures (8). Microsoft’s FarmBeats is an example of such an AI-based agriculture platform (9).
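As an illustration of the transfer-learning idea mentioned above (and not of FarmBeats or any of the systems cited), the sketch below reuses an ImageNet-pretrained image model in PyTorch and retrains only a new classification head to label leaf images by disease. The number of classes and the training data are hypothetical.

```python
# Minimal transfer-learning sketch (PyTorch); illustrative only, not a cited system.
import torch
import torch.nn as nn
from torchvision import models

NUM_DISEASE_CLASSES = 10  # hypothetical number of disease/pest categories

# Start from a backbone pretrained on ImageNet.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a fresh one for our classes.
model.fc = nn.Linear(model.fc.in_features, NUM_DISEASE_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def training_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One gradient step on a batch of (leaf image, disease label) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The appeal of this approach in agriculture is that relatively few labeled field images are needed, because the pretrained backbone already encodes generic visual features.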

Additionally, according to the National Institutes of Health (NIH), “precision medicine is an emerging approach for disease treatment and prevention that takes into account individual variability in genes, environment, and lifestyle” (10). Google’s AI-based tool, DeepVariant, builds a more accurate picture of a personal genome faster and more cost-effectively (11). Companies such as Deep Genomics use deep learning to analyze personalized genomic data, identify patterns that might contribute to diseases such as Alzheimer’s, cancer, and heart disease, and help drug companies with drug discovery (11).

The drug discovery process involves gigantic amounts of data, images, and research papers. It is slow due to the limitations of human researchers who can read only between 200 and 300 papers per year. With AI, such data can be processed by natural language processing algorithms, assimilated, correlated, and connected to related databases to expedite drug discovery. Moreover, the discovery of a drug component is extremely labyrinthine because it involves checking the numerous combinations between that component and other biological factors. AI aids in cutting down the time taken (12).

AI has also become an invaluable tool in protecting data. Algorithms using machine learning scan repositories of malicious programs identified from previous attacks, learn what to look for, and predict future attacks. AI is being used in the war against spam and phishing, to filter out violent images and illegal financial transactions, protect data, and detect computers infected with malware (13).
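A minimal sketch of the kind of learned filtering described above might look like the following scikit-learn pipeline. The handful of example messages and labels is invented for illustration; a real spam or phishing filter would be trained on millions of messages and far richer features.

```python
# Toy supervised text filter (scikit-learn): learn patterns from previously
# labeled messages, then flag new ones. The tiny dataset is purely illustrative.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

messages = [
    "Your account is locked, click here to verify your password",
    "Quarterly report attached, see you at the 3pm meeting",
    "You won a prize! Send your bank details to claim it",
    "Lunch tomorrow? The usual place works for me",
]
labels = [1, 0, 1, 0]  # 1 = phishing/spam, 0 = legitimate

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(messages, labels)

new_message = ["Urgent: confirm your login credentials at this link"]
print(classifier.predict(new_message))        # predicted label for the new message
print(classifier.predict_proba(new_message))  # model confidence for each class
```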

What do the Naysayers Think?

Naysayers are raising an alarm about AI creating a society of a few haves, who will own strong AI tools, skills, and devices, and many have-nots. Unlike previous revolutions, such as the industrial and computer revolutions, which replaced certain types of jobs with others, the AI revolution is poised to destroy numerous blue-collar jobs. The few workers left safe would share the profits, leaving a disproportionate number of have-nots (14, 15). Exploring how modern life could be harmed by AI is a worthy exercise.

As of 2015, the share of the population employed in agriculture in the world’s two most populous countries, China and India, was approximately 29% and 51%, respectively (16). Even though AI is mainly taking root in developed countries, it will not be long before it spreads to countries like India and China, where it could disrupt the livelihoods of major portions of the population (17).

An often overlooked fact is that a majority of doctors are not trained to interpret the results of AI-based precision medicine or to relay these results to patients (18). This can lead to misdiagnosis and mistreatment. Additionally, precision medicine needs access to the most personal data of all: one’s genome sequence. It therefore demands the utmost vigilance, because it touches on issues of ethics and security. Who should be allowed access to this data? How are its privacy and security ensured?

In the frenzy to import AI into precision medicine, the associated cost is often overlooked. Mass adoption of a product or service drastically reduces its price; precision medicine, which by definition delivers individualized treatment, cannot benefit from such economies of scale and is therefore expensive (19). Are AI and precision medicine worsening the burden of soaring healthcare costs?

AI is also gaining an upper hand over humans in committing cybercrimes. Personally Identifiable Information (PII) is information that can be used to identify, contact, or locate a single person. AI-aided cybercriminals mine enormous amounts of data, extract PII, use it to steal identities, and enable hackers to mount personalized attacks (20). Phishing, considered to be a lower-level crime, has been empowered by AI. Experimental results showed that AI is remarkably superior to humans in distributing phishing messages over social media (21).

Hindering the livelihoods of large chunks of society and stealing private data could cause turmoil in societies that are unprepared to handle them. It is worth considering the opinions of the naysayers.

The Future

AI offers hope for providing cures for diseases such as Alzheimer’s, Parkinson’s, ALS, cancer, etc., increasing food production, making a commendable dent in hunger in poorer countries, and launching powerful defenses against cybercrimes. But is this a promise of utopia?

At the other end of the spectrum is the prediction of an AI-fueled dystopia: AI bots capturing PII, conquering human identities, overtaking human intelligence, starting wars, and causing high unemployment and societal unrest.

Having gone through the exercise of examining the good, bad and the ugly, it is prudent to ask the next logical questions. Should AI be regulated? What might be the issues within the purview of such regulations? Has the train already left the station?

Considering the speed at which AI is penetrating our lives, it is judicious to accept that AI cannot be turned back and work towards establishing a global consensus on AI regulation (22). It is wise to work to turn this powerful technology into an ally and prevent it from attaining complete autonomy.

Securing data, the lifeblood of AI, is paramount. The General Data Protection Regulation (GDPR), recently implemented by the EU, is gaining momentum worldwide. However, will it be universally accepted? Scrutinizing AI algorithms for bias and deciding which AI decisions are ethical are some of the other critical issues to be regulated (23). Achieving a global consensus on these regulations expeditiously would benefit everyone.

Mythri is a first year in Matthews Hall

Works Cited

[1]  Steven Overly. Washington Post https://www.washingtonpost.com/news/innovations/wp/2016/07/21/humans-once-opposed-coffee-and-refrigeration-heres-why-we-often-hate-new-stuff/?noredirect=on&utm_term=.316e1bbaef21 (accessed Oct. 2, 2018).

[2] Calestous Juma. World Economic Forum. https://www.weforum.org/agenda/2016/07/why-do-people-resist-new-technologies-history-has-answer/ (accessed Sept. 22, 2018).

[3] Camila Domonoske. NPR. https://www.npr.org/sections/thetwo-way/2017/07/17/537686649/elon-musk-warns-governors-artificial-intelligence-poses-existential-risk (accessed Sept. 20, 2018).

[4] Osborne, Hannah. Stephen Hawking AI Warning: Artificial Intelligence Could Destroy Civilization. Newsweek [Online], November 7, 2017. https://www.newsweek.com/stephen-hawking-artificial-intelligence-warning-destroy-civilization-703630 (accessed Sept. 18, 2018).

[5] McCorduck, P. Machines Who Think, 2nd ed; A.K. Peters, Ltd.: Massachusetts, 2004.

[6] McCarthy, John. et. al. AI Magazine. 2006, 27, 12-14.

[7] Food and Agriculture Organization of the United Nations. http://www.fao.org/news/story/en/item/471169/icode/ (accessed Sept. 19, 2018).

[8] Food and Agricultural Organization of the United Nations. http://www.fao.org/e-agriculture/news/can-artificial-intelligence-help-improve-agricultural-productivity (accessed Sept. 20, 2018).

[9] FarmBeats: AI & IoT for Agriculture. https://ai.intel.com/powering-precision-medicine-artificial-intelligence/ (accessed Sept. 19, 2018).

[10] Intel AI. https://ai.intel.com/powering-precision-medicine-artificial-intelligence/ (accessed Sept. 19, 2018).

[11] Knight, W. Google Has Released An AI Tool That Can Make Sense Of Your Genome. MIT Technology Review [Online]. 2017.  https://www.technologyreview.com/s/609647/google-has-released-an-ai-tool-that-makes-sense-of-your-genome/ (accessed Sept. 20, 2018).

[12] Fan, P. How AI Can Speed Up Drug Discovery. AI Technology and Industry Review [Online]. https://medium.com/syncedreview/how-ai-can-speed-up-drug-discovery-3c7f01654625 (accessed Sept. 20, 2018).

[13] Lily Hay Newman. Wired. https://www.wired.com/story/ai-machine-learning-cybersecurity/ (accessed Sept. 18, 2018).

[14] Polonski, V. People Don’t Trust AI–Here’s How We Can Change That. Scientific American [Online], January 20, 2018, https://www.scientificamerican.com/article/people-dont-trust-ai-heres-how-we-can-change-that/ (accessed September 14, 2018).

[15] Darrell M. West. Brookings. https://www.brookings.edu/blog/techtank/2018/04/18/will-robots-and-ai-take-your-job-the-economic-and-political-consequences-of-automation/ (accessed Sept. 20, 2018).

[16] Max Roser. Our World in Data. https://ourworldindata.org/employment-in-agriculture (accessed Sept. 19, 2018).

[17] Evan Fraser and Sylvain Charlebois. The Guardian. https://www.theguardian.com/sustainable-business/2016/feb/18/automated-farming-food-security-rural-jobs-unemployment-technology (accessed Oct. 3, 2018).

[18] Advisory Board. https://www.advisory.com/daily-briefing/2015/02/10/the-challenges-for-personalized-medicine (accessed Sept. 19, 2018).

[19] Dan Mangan. CNBC. https://www.cnbc.com/2015/12/04/personalized-medicine-better-results-but-at-what-cost.html (accessed Sept. 21, 2018).

[20] George Dvorsky. Gizmodo. https://gizmodo.com/hackers-have-already-started-to-weaponize-artificial-in-1797688425 (accessed Sept. 21, 2018).

[21] Norton, S. Era of AI-Powered Cyberattacks Has Started. Wall Street Journal. https://blogs.wsj.com/cio/2017/11/15/artificial-intelligence-transforms-hacker-arsenal/ (accessed Sept. 22, 2018).

[22] Etzioni, O. How to Regulate Artificial Intelligence. The New York Times. https://www.nytimes.com/2017/09/01/opinion/artificial-intelligence-regulations-rules.html (accessed Sept. 23, 2018).

[23] Elizabeth Sablich. Brookings. https://www.brookings.edu/research/how-artificial-intelligence-is-transforming-the-world/#_edn3 (accessed Sept. 24, 2018).

Featured

Emerging Mitochondrial Therapies and Their Ethicality

Dubbed the “three-parent baby” by the media, a paper published in Reproductive Biomedicine Online in April 2017 details the live birth of a child through experimental spindle transfer, a procedure that involves the use of mitochondrial DNA from a donor (1). Aside from garnering much publicity, the study has raised important questions in the scientific community about ethics and the limits of science.

The Study

Doctors at the New Hope Fertility Center (with locations in Mexico, China, and New York) received a patient seeking help to conceive a child (1). The 36-year-old woman had a history of miscarriages and had lost two children to Leigh syndrome, a neurological disease of young children characterized by a progressive loss of motor abilities that usually results in death within two to three years (2). In this mother’s case, Leigh syndrome was transmitted to her offspring by a mutation in her mitochondrial DNA (1).

The doctors proposed mitochondrial replacement therapy, which aims to prevent the transmission of mitochondrial diseases from a mother to her biological child by using a mitochondrial donor (3). Several procedures exist for this therapy, two of which are spindle transfer and pronuclear transfer. The key difference between them is that spindle transfer involves the transfer of nuclear DNA before fertilization, while pronuclear transfer is the transfer of pronuclei from zygotes (and, as such, involves discarding the donor zygote) (4). In this case, the authors indicate that the patient decided on spindle transfer over pronuclear transfer for religious reasons (1).

Spindle transfer was therefore performed: the spindle from the patient’s metaphase II oocyte (immature egg cell) was inserted into the cytoplasm of the donor oocyte (1). The new oocyte was then fertilized with a single sperm from the patient’s partner (1). Once blastocysts (early-stage embryos) formed, they underwent testing for aneuploidy (abnormalities in the number of chromosomes), and the embryo found to be euploid (having the normal 46 chromosomes) was transferred into the patient at New Hope Fertility Center’s clinic in Mexico (1).

The Results

The patient delivered the baby at 37 weeks of gestation after an “uneventful pregnancy” (1). The child’s mitochondrial DNA (mtDNA) mutation load ranged from undetectable to 9.23%, depending on the tissue surveyed, which is low compared to the mother’s mtDNA mutation load of up to 34% (1). (For reference, carriers are typically asymptomatic if their mutation load is less than 30%) (1).

As such, the patient was able to give birth to a healthy male child in the first instance of mitochondrial spindle transfer to reduce mtDNA mutation transmission. The authors wrote that “at the time of writing, the baby is healthy at 7 months of age” but “we are still following the baby closely” (1).

Interestingly, the authors conclude at the end of their paper that “there is certain to be much controversy over this treatment, and further study is mandatory” (1).

The Legality

A few months after the article appeared in Reproductive Biomedicine Online, in August 2017, Mary Malarkey of the U.S. Food and Drug Administration’s Center for Biologics Evaluation and Research sent John Zhang (principal author of the article and founder and CEO of New Hope Fertility Center) a warning letter. The letter explains that, after the publication of the article, Zhang submitted a request for a pre-investigational new drug (IND) meeting to begin a clinical investigation of spindle transfer therapy for assisted pregnancy (5). However, the letter states, the FDA, per regulations set forth by Congress, is prohibited from accepting IND submissions that involve the use of human embryos with “intentionally created . . . heritable genetic modifications”, and as such, the FDA declined his request (5). The letter further details violations by New Hope Fertility Clinic, including marketing mitochondrial replacement therapy in the United States (5).

This was not the first warning letter New Hope Fertility Clinic had received from the FDA; it had previously been cited for violations of the regulations governing the handling of human cells and tissues (6). Nonetheless, a spokesperson for New Hope Fertility Clinic said that they were taking the matter seriously and would work with the FDA to resolve it (6).

Backlash and Controversy

Shortly after the results of the study came out, several news outlets published articles with headlines drawing attention to a “three parent baby” (7). Meanwhile, John D. Loike and Nancy Reame, part of the Bioethics faculty at Columbia University, wrote an article for The Scientist in which they discuss the ethical dangers of mitochondrial replacement therapy (8).

They bring up issues of parental rights, or more specifically the question of whether the mtDNA donor has parental rights (they state that the donor’s genetic contribution, although small, still exists and should be considered as such) (8). Loike and Reame also worry about the extreme cost of this therapy, which can range from $25,000 to $50,000; this would not only leave the therapy almost exclusively available to the wealthy, but could also exploit impoverished women, who would be paid extremely well for mtDNA donation (8). They conclude, however, that mitochondrial replacement therapy “should not be banned because of presumed social or ethical complexities” but that “governments and the scientific community should invest time and money into making [it] widely available to patients” (8).

On the other hand, the United Mitochondrial Disease Foundation (UMDF) states on its website that they believe that mitochondrial replacement therapy is “NOT genetic manipulation, but rather a technological innovation and an expansion of in vitro fertilization, a clinically-approved technique used for four decades” (9). This patient advocacy group, however, says that the technique should be made available and accessible to patients with mtDNA mutations only “if demonstrated to be safe and efficacious” (9).

Conclusion

Contrasting opinions and views make this a complicated issue to tackle. Moving forward, it will be of particular interest to consider the role of the U.S. government in the use of mitochondrial replacement therapies. The United Kingdom approved the use of mitochondrial donation techniques as part of in vitro fertilization in 2015 (3). The National Academies of Sciences, Engineering, and Medicine have stated that mitochondrial replacement therapy is considered ethical when done in male embryos for mothers with mitochondrial diseases (presumably limited to males so that the modification of mtDNA will not be passed down to future generations) (8). With different organizations rallying to support mitochondrial replacement therapy, the government will likely be forced to reevaluate its laws soon, and lawmakers will have to decide whether they believe the technique is ethical. Issues of parental rights, patient rights, accessibility, and exploitation will all have to be considered. Should we deny this therapy to patients because of a resulting 0.1% difference in the child’s DNA? Should genetic editing of embryos remain off-limits altogether? Is all of this a step towards modification and selection of the human population’s gene pool? Moving forward, there are many factors to be considered and debated on the issue of mitochondrial replacement therapies and spindle transfer.

Ana is a first year in Thayer

Works Cited

[1] Zhang, J. et al. RBM Online 2017, 34, 361-368.

[2] Genetics Home Reference. https://ghr.nlm.nih.gov/condition/leigh-syndrome (accessed October 2018).

[3] Castro, R. Journal of Law and the Biosciences 2016, 726-735.

[4] Reznichenko A. S., et al. Applied & Translational Genomics 2016, 11, 40-47.

[5] FDA. https://www.fda.gov/downloads/BiologicsBloodVaccines/GuidanceComplianceRegulatoryInformation/ComplianceActivities/Enforcement/UntitledLetters/UCM570225.pdf (accessed Oct 2018).

[6] FDA Warning Letters. https://www.fda.gov/ICECI/EnforcementActions/WarningLetters/ucm424065.htm (accessed Oct 2018).

[7] Neimark, J. WBUR [Online]. 2017. http://www.wbur.org/npr/523020895/a-baby-with-3-genetic-parents-seems-healthy-but-questions-remain (accessed Oct 2018).

[8] Loike, J. et al. The Scientist [Online]. 2016. https://www.the-scientist.com/critic-at-large/opinion-ethical-considerations-of-three-parent-babies-32320 (accessed Oct 2018).

[9] UMDF’s Position. United Mitochondrial Disease Foundation [Online]. 2017. https://www.umdf.org/mitochondrial-replacement-therapy/ (accessed Oct 2018).

Image credit: NICHD NIH

Featured

CRISPR: How Far is too Far?

It is 2018, only six years after CRISPR-Cas9 was first harnessed for genome editing, and genome engineering has become a rapidly evolving, incredibly exciting field. To understand why CRISPR-Cas9 is considered a revolutionary technology, it is important to look at the history of genome editing.

The field of genetics was originally pioneered by Austrian scientist Gregor Mendel in the late 19th century; his work relied on the analysis of breeding patterns and spontaneous mutations in the genome. During the mid-twentieth century, many scientists demonstrated that random gene mutations could be induced by intense radiation and specific chemical treatments, but they could not control which mutations were produced. In the 1970s, targeted genetic changes were first created in both yeast and mouse models by taking advantage of a process known as homologous recombination. Scientists would insert a fragment of a specific gene into an organism, and during cell replication this fragment was incorporated into the genome, but the process had incredibly low efficiency (1).

Modern genome engineering methods, such as zinc-finger nucleases (ZFNs), transcription activator-like effector nucleases (TALENs), and the CRISPR-Cas9 system (based on clustered regularly interspaced short palindromic repeats), are all engineered nucleases capable of introducing double-stranded breaks in DNA (2). They essentially act as programmable molecular scissors that cut DNA at a precise location. CRISPR is revolutionary compared with the older ZFN and TALEN technologies because it can simply be paired with tailor-made guide-RNA sequences that lead the Cas9 nuclease to its target. With TALENs and ZFNs, the proteins themselves must be reengineered for each new target, which is enormously time-consuming, difficult, and expensive (3). CRISPR is far cheaper than the alternatives and functions in the vast majority of organisms.
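To make the “programmable molecular scissors” idea concrete, the sketch below shows the simplest step of guide design for the commonly used SpCas9 enzyme: scanning a DNA strand for 20-nucleotide stretches that sit immediately upstream of an “NGG” PAM motif, the positions where a matching guide RNA could direct the nuclease to cut. The DNA sequence is invented, and real guide-design tools layer off-target scoring and other filters on top of this basic search.

```python
# Minimal sketch of CRISPR-Cas9 target selection: SpCas9 cuts where a
# 20-nucleotide protospacer is immediately followed by an "NGG" PAM motif.
# The example DNA sequence below is made up for illustration.
import re

def find_candidate_targets(dna: str, protospacer_len: int = 20):
    """Return (protospacer, PAM, position) tuples found on the given strand."""
    dna = dna.upper()
    targets = []
    for i in range(len(dna) - protospacer_len - 3 + 1):
        protospacer = dna[i:i + protospacer_len]
        pam = dna[i + protospacer_len:i + protospacer_len + 3]
        if re.fullmatch(r"[ACGT]GG", pam):  # "NGG": any base followed by two Gs
            targets.append((protospacer, pam, i))
    return targets

example_dna = "ATGCGTACCGGTTAGCTAGCATCGATCGGAGCTTACGGATCCGTTAA"
for protospacer, pam, pos in find_candidate_targets(example_dna):
    print(f"position {pos}: guide target {protospacer} | PAM {pam}")
```

A guide RNA matching any of the printed protospacers would, in principle, direct Cas9 to that site; choosing among candidates is then largely a matter of minimizing matches elsewhere in the genome.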

Since the advent of CRISPR, billions of dollars have been invested in CRISPR-based startups such as Editas Medicine and CRISPR Therapeutics (4). These companies are working diligently to translate lab-based studies into therapeutics for patients around the world. The idea of being able to modify our genome, and possibly enhance or disrupt the function of any gene, has led to much speculation about medicine’s future. Many journalists report that in just two or three decades, parents will be able to choose their child’s eye color, hair color, and possibly even their intelligence (5–6). However, the biological reality starkly contrasts with these speculations.

Many of our traits, including intelligence and height, are not encoded by a single gene. They are polygenic – or, as some argue, omnigenic – meaning that thousands of genetic variants (in some cases, more than 93,000) influence the expression of these traits (7). Identifying each locus correlated with a specific trait and then using genome engineering to selectively modify each of those locations would be incredibly difficult, and is very unlikely to happen anytime soon (7–8). That being said, over 10,000 of the world’s most devastating genetic diseases, including Huntington’s disease, sickle cell anemia, and progeria, are linked to very specific mutations in the genome (7,9).

Chinese physicians have already employed CRISPR-based therapeutics to treat cancer and HIV, according to Quartz (10). Cambridge-based biotech companies have recently launched a trial using CRISPR therapy for the inherited blood disorder beta-thalassemia (11–12). However, these trials truly are just the tip of the iceberg. Many scientists believe that CRISPR technologies could be key to developing therapies for thousands of other diseases (5,9).

With these rapid advancements in genome engineering, it is incredibly important to consider the vast ethical complications that arise from the use of these technologies. Many individuals have moral objections to germline editing, that is, changing the genome of an embryo and thereby of future generations (13). Germline therapy would make it impossible to obtain informed consent, since the patient isn’t even born yet (14). Additionally, while few people would disagree with using gene editing to cure a devastating disease, many would object to using it for prophylactic purposes. If it were possible to reduce the chance of developing Alzheimer’s from 5% to 1% using CRISPR, would it be ethical to administer this treatment to patients? If only the rich can afford these optional therapies, will the less fortunate be left behind (9,15)? Balancing these key factors will be crucial to the successful deployment of genome editing technology, and scientists and ethicists around the world are grappling with these difficult questions.

In summary, CRISPR is a revolutionary technology and has the potential to improve the quality of life of billions. However, the scientific and medical community must agree on a set of moral standards which define the way gene editing therapeutics can be used in humans. In the age of CRISPR, how far is too far?

Sreekar is a first year in Weld

Works Cited

[1] Caroll, D. Genome Editing: Past, Present, and Future. Yale J Biol Med. Jan 23, 2018. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5733845/ (accessed Sep 28, 2018).

[2] Brad P. A simple guide to CRISPR, one of the biggest science stories of the decade. Vox Media. July 23, 2018. https://www.vox.com/2018/7/23/17594864/crispr-cas9-gene-editing  (accessed Sep 15, 2018).

[3] Yeadon, J. PROS AND CONS OF ZNFS, TALENS, AND CRISPR/CAS. JAX Blogs. https://www.jax.org/news-and-insights/jax-blog/2014/march/pros-and-cons-of-znfs-talens-and-crispr-cas (accessed Sep 28, 2018).

[4] Terry, M. The 3 Small CRISPR Biotechs That Could Cure 10,000 Diseases. Biospace. Feb 5, 2018. https://www.biospace.com/article/unique-the-3-small-crispr-biotechs-that-could-cure-10-000-diseases/ (accessed Sep 28, 2018).

[5] Salkever, A.; Wadhwa, Vis. When Baby Genes Are for Sale, the Rich Will Pay. Fortune. Oct 23, 2017. http://fortune.com/2017/10/23/designer-babies-inequality-crispr-gene-editing/ (accessed Sep 28, 2018).

[6] Belluck, P. Gene Editing for ‘Designer Babies’? The New York Times, Aug. 4, 2017, p. A14. https://www.nytimes.com/2017/08/04/science/gene-editing-embryos-designer-babies.html (accessed Sep 15, 2018).

[7] Boyle, E.; Yang, L; Pritchard, J. An Expanded View of Complex Traits: From Polygenic to Omnigenic. Cell. June 15, 2017. https://www.cell.com/cell/fulltext/S0092-8674(17)30629-3 (accessed Sep 28, 2018).

[8] Greenwood, V. Theory Suggests That All Genes Affect Every Complex Trait. Quanta Magazine. June 20, 2018. https://www.quantamagazine.org/omnigenic-model-suggests-that-all-genes-affect-every-complex-trait-20180620/ (accessed Sep 28, 2018).

[9] Kolata, G. Swift Gene-Editing Method May Revolutionize Treatments for Cancer and Infectious Diseases. The New York Times, July 11, 2018, p. A14. https://www.nytimes.com/2018/07/11/health/gene-editing-cancer.html (accessed Sep 15, 2018).

[10] Foley, K. Chinese scientists used Crispr gene editing on 86 human patients Quartz Magazine. Jan 23, 2018. https://qz.com/1185488/chinese-scientists-used-crispr-gene-editing-on-86-human-patients/ (accessed Sep 28, 2018).

[11] Hignett, K. Breakthrough CRISPR Gene Editing Trial Set to Begin This Year. NewsWeek. Apr 16, 2018. https://www.newsweek.com/crispr-therapeutics-crispr-cas9-gene-editing-beta-thalassaemia-887051 (accessed Sep 28, 2018).

[12] Baylis, F. First-in-human Phase 1 CRISPR Gene Editing Cancer Trials: Are We Ready? Current Gene Therapy Review. Aug 27, 2017. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5769084/ (accessed Sep 15, 2018).

[13] Rogers, A. Scientists Take A Harder Look at Genetic Engineering Embryos. WIRED Magazine. Aug 8, 2018. https://www.wired.com/story/scientists-take-a-harder-look-at-genetic-engineering-of-human-embryos/ (accessed Sep 15, 2018).

[14] NA. Genome Editing – Ethical Concerns. NIH. ND. https://www.genome.gov/27569225/what-are-the-ethical-concerns-about-genome-editing/ (accessed Sep 28, 2018).

[15] Lunshof, J. Gene editing is now outpacing ethics. The Washington Post. July 12, 2012. https://www.washingtonpost.com/news/theworldpost/wp/2017/12/12/bioethics/?noredirect=on  (accessed Sep 15, 2018).

Image credit: Wikimedia Commons

Featured

Asimov and AI: Investigating the Potential of Robotic Consciousness

Introduction

Robots have pervaded popular culture since the dawn of the technological revolution; for nearly a century, authors, filmmakers, artists, and conspiracy theorists have prophesied that robots will someday break free from mankind’s control and wreak apocalyptic havoc. The popular TV show Black Mirror tells stories of futuristic tech run amok, from powerful and dangerous brain implants to robotic dogs with incredible killing efficiency. Terminator portrays a post-apocalyptic world destroyed by machines, and Ex Machina tests the limits of human-robot interaction and boundaries of our definition of ‘robot’ in grotesque ways. These examples represent the near-universal fear that robots will become so advanced in the future that they will be able to turn against their creators with unstoppable force. What drives this obsession with the future destructive potential of robots? Is it truly possible for robots to become conscious? To answer these questions and more, we will investigate the possibility of robots becoming self-aware and the implications of potential robotic consciousness through the lens of the scientific and cultural history of robots, the status of AI today, and budding technologies that could change the trajectory of machine learning.

Origins of Robot and Portrayals in the Past

The concept of a robot was conceived long before technology allowed the creation of the autonomous ‘thinking’ machines we call robots today. The combination of mythological motifs of an all-powerful divine being breathing life into inanimate materials, Frankenstein’s warning against humanity attempting to play the role of God, and the advent of industrialization spurred fascination with the animation of technology in the early 20th century (1). The term robot itself, derived from the Czech word robota meaning “forced work” or “compulsory service”, was first used in a play and short story by science fiction author Karel Capek in 1920 (2). In the century that followed, science fiction authors investigated the potential of humanoid robots, androids, and cyborgs. One such author, Isaac Asimov, rose to fame as he published a series of novellas with a common theme: all robots have three unbreakable laws programmed into their ‘brains.’ The laws, used as a basis for countless science fiction works in the decades to follow, are:

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law (2).

Because Asimov’s robots are unable to think for themselves or weigh future consequences from their actions, many moral complications result from the strict adherence to these laws. Asimov’s grim outlook on robots’ cognitive capacity, combined with the relative rigidity of AI today, still influences popular opinion on whether robots will ever truly match humans’ mental abilities (2).

Robots and AI Today

N.B. For the purposes of this article, a “robot” is defined as a programmable machine that can sense its surroundings in some way, “think” or analyze the information presented to it, decide on a solution or action, and act on the solution in a way that manipulates its physical surroundings.

Contrary to Asimov and Capek’s views but consistent with the origin of the word ‘robot’, artificial intelligence and robots today are primarily beneficial to the advancement of humanity. With superior processing capacity, physical endurance, and precision of movement, robots have infiltrated healthcare, agriculture, industry, research ventures, households, transportation, and defense. Already, Intuitive Surgical’s da Vinci Surgical System is performing surgeries at hundreds of hospitals around the United States, Waymo’s autonomous cars are gaining traction in the automobile industry, and NASA’s Robonaut is being sent to the International Space Station to carry out dangerous missions (3). Robots have rescued humans from catastrophic situations, explored the deepest depths of the ocean, and even vacuumed the floors of millions of homes. Humanoid robots and androids have also been making strides in their mimicry and analysis of human emotion. For example, Softbank Robotics’ NAO is an excellent companion to autistic children, and Honda’s 3E robot series focuses on an ‘empathetic’ design for its robots; one day, the 3E-A18 model may soothe distressed children and help regulate their emotions.

As close as we may be to creating robots with their own emotions and consciousness, AI’s proximity to human capabilities has landed many robots in an “uncanny valley”. This refers to the phenomenon that occurs when a human is repulsed by the almost-but-not-quite humanlike qualities of machines (4). We are currently between the age of endearing machines with distinctly mechanical features, such as R2D2 from Star Wars, and terrifyingly realistic androids like Sophia (5). Even Siri could fall into the category of ‘uncanny’: her manner of speaking is very similar to that of a human, but not similar enough that one would mistake her for another human. Our position within the uncanny valley represents our failure to pass the original Turing test: “if an AI machine could fool people into believing it is human in conversation…then it would have reached an important milestone” (6). Even though supercomputers have defeated chess grandmasters and systems like Watson assimilate data more quickly and efficiently than any human, progress in the emotional aspect of computing, termed “affective computing”, is still slow (5).

Even if a robot passes the Turing test with flying colors, is it really thinking like a human? What abilities would AI have to possess in order to truly resemble humans in emotional and empathetic capacity?

Defining Consciousness: Where the Lines Blur

The question of what qualifies as consciousness has been asked by philosophers for millennia, and now software designers and mechanical engineers are confronting it in their development of AI. Some thinkers focus on the psychological aspect of consciousness, arguing that true consciousness results from self-awareness and the ability to reflect on past decisions (7). Others, most notably Christof Koch and David Chalmers, argue that consciousness arises from experiences and interactions with the outside world combined with an inner sense of purpose; in order for a robot to think and act as freely as a human, it would have to process the sensory information it obtains subjectively and outside the constraints of an algorithm (8). Still others believe that a machine cannot behave like a human unless it is treated as one and introduced to human culture, including religion; this would allow it to become more than the sum of its mechanical parts and even develop a soul (7). Of course, some pragmatic scientists think that defining consciousness as a guideline for AI is irrelevant, because artificial intelligence will always be artificial. In addition, the mental abilities of humans developed over millions of years of evolution along a natural biochemical trajectory, so pragmatists argue that it is impossible to mimic this level of complexity in the span of a few decades (1).

From all of the varying definitions and qualifications of consciousness, it is evident that our robots and AI today are nowhere near singularity – the fabled moment at which a machine has its own goals outside of what was programmed into it – but many tech startups and even government organizations today are using new approaches to dig us out of the uncanny valley (4).

How Can We Improve AI?

Today, most AI systems use a family of techniques generalized under the term “deep learning” to collect and analyze data (9). Deep learning involves recognition of patterns, identification of objects and symbols, and perception of the outside world, but these processes are entirely driven by algorithms that many engineers criticize for their inflexibility. Project Alexandria attempts to combat the rigidity of AI algorithms by introducing a component of human intelligence that is commonly overlooked: common sense. Drawing on facts, heuristics, and observations, the project is working towards creating AI machines with the fluidity of the human mind and a more flexible approach to solving real-world problems (9). Similarly, the startup Kyndi is building more adaptable AI, focusing on advanced reasoning abilities rather than conventional data consolidation. DARPA (the Defense Advanced Research Projects Agency, a branch of the U.S. Department of Defense) is developing the Machine Common Sense initiative with goals similar to Project Alexandria’s, recognizing the importance of more fluid AI for the future of robotics (9). Although true robotic independence and consciousness remain in the relatively distant future, rapid strides are being taken to eliminate the barriers to making science fiction a reality – so it is worth considering the cultural and ethical implications of conscious AI.

Future Considerations

The future of our robots and AI has been the subject of speculation by philosophers, scientists, and screenwriters alike. According to the episode “Be Right Back” of Black Mirror, even the most advanced android that easily passes the Turing test cannot possibly mirror a human’s personality and mannerisms – and this failure can result in emotional catastrophe. Ex Machina explores the consequences that could arise from confining highly intelligent robots and using them for research: the android Ava ultimately murders her creator and escapes captivity with a vengeful spirit. AI expert Aleksandra Przegalinska speculates that the best outcome for robots would be the optimization of programming without the potential side effect of gaining consciousness, and that the worst-case scenario would resemble the future depicted in The Terminator – a violent rebellion against human oppressors (4). In the event that robots gain consciousness and do not act out violently, a political divide could arise over the ethical treatment of these machines. Regardless of the eventual outcome of our work with AI, politicians and civilians alike should be aware of the rate of progress being made, as well as the divide between fact and fiction.

Hannah is a first year in Holworthy planning to concentrate in MCB or Neuroscience.

Works Cited

[1] Ambrosino, Brandon. What Would It Mean for AI to Have a Soul? BBC, BBC, 18 June 2018, www.bbc.com/future/story/20180615-can-artificial-intelligence-have-a-soul-and-religion (accessed Oct. 05, 2018).

[2] Clarke, R. Asimov’s Laws of Robotics: Implications for Information Technology-Part I.  Computer, vol. 26, no. 12, 1993, pp. 53–61.

[3] Robonaut. NASA, NASA, 24 Sept. 2018, https://robonaut.jsc.nasa.gov/R2/ (accessed Oct. 9, 2018).

[4] Bricis, Larissa. A Philosopher Predicts How and When Robots Will Destroy Humanity. Techly, 4 Oct. 2017, www.techly.com.au/2017/09/22/philosopher-predicts-robots-will-destroy-humanity/ (accessed 5 Oct. 2018).

[5] Caughill, Patrick. SophiaBot Asks You to Be Nice So She Can Learn Human Compassion. Futurism, Futurism, 12 June 2017, https://futurism.com/sophiabot-asks-you-to-be-nice-so-she-can-learn-human-compassion (accessed Oct. 10, 2018).

[6] Ball, Philip. Future – The Truth about the Turing Test. BBC, BBC, 24 July 2015, www.bbc.com/future/story/20150724-the-problem-with-the-turing-test (accessed Oct 04, 2018).

[7] Robitzski, Dan. Artificial Consciousness: How to Give a Robot a Soul. Futurism, Futurism, 25 June 2018, https://futurism.com/artificial-consciousness (accessed Oct. 11, 2018).

[8] Robitzski, Dan. The Frustrating Quest to Define Consciousness. Scienceline, 25 June 2017, https://scienceline.org/2017/06/frustrating-quest-define-consciousness/ (accessed Oct. 11, 2018).

[9] Lohr, Steve. Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So. The New York Times, The New York Times, 20 June 2018, www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html (accessed Oct. 10, 2018).

Image credit: Wikimedia Commons

 

Featured

Body-to-Head Transplants: The Future of Medicine or a Modern-day Frankenstein Fantasy?

Scientific history has shown that the greatest medical breakthroughs often go against the conventions of their time. Today, society is faced with a pivotal, yet controversial, development in medicine: body-to-head transplantation (BHT). Some believe that transplanting a head and a brain could be the final frontier in organ transplantation (1), while others remain skeptical, as BHT is no ordinary surgery: it entails enormously complex medical procedures as well as many ethical dilemmas. Not only is the surgery extremely dangerous, with no exit strategy, but it also goes against many of society’s conventional moral standards. However, if executed, the benefits the procedure could provide to terminally ill patients would be incalculable.

The goal of body-to-head transplantation is to sustain the life of individuals who suffer from a terminal disease but whose head and brain are healthy. Ideally, BHT could provide a life-saving treatment for several conditions where none currently exists (1). The surgical plan involves identifying a brain-dead body donor with healthy organs, removing the head from the recipient, and transplanting it onto the donor body (1). The surgery itself would require complex surgical skills, highly specialized and trained surgeons, and the most cutting-edge medical technology. Both the head of the recipient and the body of the donor would have to be cooled so that the brain cells could survive minutes without oxygen. Then, the neck and arteries would be cut and the spinal cord fused. Dr. Sergio Canavero, an Italian neurosurgeon who has been working towards BHT, believes that it is a scientifically sound endeavor that has been carefully conceived (2).

The ethics of this procedure center primarily on the physical implications for the patient. Even if the procedure is successful, the patient will face risks and a recovery process so long that, for some, they outweigh the benefits. The patient will require extensive post-transplant support and will be maintained in the intensive care unit under strict isolation, with ventilator and full circulatory support (1). Assuming that the spinal cord connection succeeds, the patient will need to take large amounts of immunosuppressive drugs, which still may not solve the problem of immune rejection (3). Considering the high mortality rate this surgery could entail, the question ultimately becomes whether the procedure should be legally allowed, even for a volunteering patient, or whether legislation should prohibit this type of surgery.

The possibility of replacing an incurably ill body with a healthy one tests not only medicine’s limits, but also the social and psychological boundaries of physical life (1). The procedure presumes that transplanting a head with a brain would automatically transplant the whole person, with their mind, personality, and consciousness. Some scholars, like Čartolovni and Spagnolo, argue that BHT patients would experience an extreme disharmony between mind and body that could lead to either insanity or death. They argue that the body represents the corporeality of existence, and that individuals would fail to adjust to a new and dramatically different physical presence. However, experience with previous organ transplants has not borne out these predictions: in internal medicine and transplantation journals, organ transplantation is generally viewed and described in positive terms (4). The concern is raised nonetheless, because having somebody’s entire body transplanted onto one’s own is not the same as receiving a liver, kidney, or heart that is not yours. The novelty of the procedure makes it extremely hard to predict what psychological outcomes to expect for the recipient.

Canavero himself addresses an ethical issue that could emerge from BHT: the donor’s gonads and the transmission of genetic inheritance to the donor’s offspring. In fact, since gonads are considered human identity organs, some legislations forbid their transplantation (3). The transplanted gonads would continue to produce gametes carrying the donor’s genetic material, and since the donor is deceased, it is impossible to acquire their consent unless it was obtained prior to their death. Transplanting gonads means transplanting not only the organs themselves, but also the donor’s genetic material, to be passed on to the recipient’s offspring (1). All sorts of ethical concerns arise from BHT, considering that it transplants an entire person’s body, including the reproductive organs.

From an economic viewpoint, BHT is also considered an inefficient use of resources. One donor body could save only one life per BHT, while the same body could save or enhance up to 15 lives through multiple organ transplants (1). Considering the large number of patients waiting to receive organ transplants, BHT could be seen not only as a waste of economic resources, but also as a poor way of addressing the population’s general medical needs, spending valuable organs on a procedure that the patient is unlikely to survive or recover from.

All of the psychological, physical, ethical, and economic concerns mentioned above have been raised since Canavero published his research on the “Gemini” spinal cord fusion protocol (5). Since then, scientists and scholars from around the world have questioned not only whether this procedure is feasible, but also whether it should be done at all. BHT, however, is an extremely novel procedure, meaning that there is no clear-cut answer to the questions underlying the surgery (5). The fact remains that no surgeon has yet successfully completed the procedure on a living patient, and the future of BHT is highly uncertain, given Canavero’s most recent research and the methods being proposed by other scientists. Regardless of whether the surgery is feasible, the ethics of BHT should be extensively analyzed before moving forward with the proposed procedure.

Andrea Rivera is a first year in Stoughton.

Works Cited

[1] Furr, A., Hardy, M. A., Barret, J. P., & Barker, J. H. (2017). Surgical, Ethical, and Psychosocial Considerations in Human Head Transplantation. International Journal of Surgery (London, England), 41, 190–195. http://doi.org/10.1016/j.ijsu.2017.01.077.

[2] Ren, X., & Canavero, S. (2017). From Hysteria to Hope: The Rise of Head Transplantation. International Journal of Surgery, 41. http://doi.org/10.1016/j.ijsu.2017.02.003.

[3] Čartolovni, A., & Spagnolo, A. G. (2015). Ethical Considerations Regarding Head Transplantation. Surgical Neurology International, 6, 103. http://doi.org/10.4103/2152-7806.158785.

[4] Durand, C., Duplantie, A., Chabot, Y., Doucet, H., & Fortin, M.-C. (2013). How is Organ Transplantation Depicted in Internal Medicine and Transplantation Journals. BMC Medical Ethics, 14, 39. http://doi.org/10.1186/1472-6939-14-39.

[5] Canavero, S. (2015). The “Gemini” Spinal Cord Fusion Protocol: Reloaded. Surgical Neurology International, 6, 18. http://doi.org/10.4103/2152-7806.150674.

Image credit: Wikipedia—Universal

Featured

Was It a Bounce or a Bang?

When most of us think of the idea of the “big bang”, a massive explosion emerging from nothingness comes to mind. While much of the physics community agrees that such a description is relatively accurate for the start of the universe as we know it, much of the context around the Big Bang remains unknown, namely what came before and how our universe evolved to what we see today. Three Harvard scientists, Avi Loeb, Xingang Chen, and Zhong-Zhi Xianyu, set out to narrow down the list of possible explanations for how the Big Bang happened (1).

One description of the Big Bang is the theory of cosmic inflation, according to which an infinitesimal point inflated almost instantaneously to create space itself (2). This theory is consistent with all experimental data and observations, but we have yet to see definitive proof in the form of primordial gravitational waves, the ripples created by the rapid expansion of the fabric of space in the early universe (2). While it could be that these waves are simply too weak to observe, others, taking advantage of the lack of direct evidence, have proposed alternative theories, one of them being the “big bounce” (3).

The theory of the big bounce posits that the universe had been contracting for a long time until it reached the smallest possible size and then “bounced back” in what we now call the Big Bang (3). Distinguishing between these two theories that describe the universe at a time when time itself was essentially meaningless is not an easy task, yet Loeb, Chen, and Xianyu recently published a paper that elucidates a possible method for doing just that (1).

According to quantum uncertainty, there is no such thing as truly empty space, which means that quantum fields in the early universe were filled with ripples of varying wavelength. The constructive and destructive interference of these ripples over time together with their interaction with the expanding or shrinking space determined the distribution of matter throughout the universe (2-3). We can observe this today by looking at the matter density of different parts of the universe on different scales. Loeb, Chen, and Xianyu postulate that the variations of matter density can tell us about the nature of space at the time when the “ripple”, from which the matter density pattern emerged, was formed (1). Specifically, they believe that by analyzing these scale-dependent matter density variation patterns, they can determine “whether the primordial universe was actually expanding or contracting, and whether it did it inflationary fast or extremely slowly,” according to Chen (1).
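To make the notion of scale-dependent density variation patterns a little more concrete, here is a minimal, purely illustrative sketch (in Python with NumPy; the toy field, box size, and ripple wavelength are invented for the example, not taken from the paper) of how one can measure the strength of fluctuations at each spatial scale by computing a power spectrum of a one-dimensional density field.

```python
import numpy as np

# Hypothetical toy density field (not the authors' analysis): random fluctuations
# plus one weak oscillatory "ripple" with a 50-unit wavelength.
rng = np.random.default_rng(0)
n, box_size = 1024, 1000.0                      # sample count and box length, arbitrary units
x = np.linspace(0.0, box_size, n, endpoint=False)
delta = rng.normal(0.0, 1.0, n) + 0.3 * np.sin(2 * np.pi * x / 50.0)

# Power spectrum: squared amplitude of each Fourier mode, i.e. how strong the
# fluctuations are at each spatial scale (wavenumber k).
modes = np.fft.rfft(delta)
k = np.fft.rfftfreq(n, d=box_size / n)
power = np.abs(modes) ** 2 / n

# The injected ripple appears as excess power at k = 1/50, so its wavelength is recovered.
peak_k = k[np.argmax(power[1:]) + 1]            # skip the k = 0 (mean) mode
print(f"strongest fluctuation wavelength ~ {1.0 / peak_k:.1f} length units")
```

In the real analysis the data are three-dimensional and the question is far subtler, but the basic idea is the same: the shape of the power spectrum across scales carries information about the conditions under which the fluctuations were laid down.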

While their idea appears theoretically sound, the oscillating density patterns they hope will be observed may not be pronounced enough to detect with today’s technology. Even if this turns out to be the case, Loeb, Chen, and Xianyu are pushing the field of cosmology in the right direction, creatively fostering the use of direct evidence to fill in the gaps in our story of the universe’s history.

Lucia Gordon is a first-year in Weld Hall and is planning to concentrate in Physics and Mathematics.

Works Cited

[1] Chen, X.; Loeb, A.; Xianyu, Z. Fingerprints of Alternatives to Inflation in the Primordial Power Spectrum. Cornell University Library, Sep. 16, 2018. arXiv:1809.02603 (accessed Oct. 8, 2018).

[2] Wolchover, N. A New Test for the Leading Big Bang Theory. Quanta Magazine, Sept. 11, 2018. https://www.quantamagazine.org/a-new-test-for-the-leading-big-bang-theory-20180911/ (accessed Sept. 28, 2018).

[3] Wolchover, N. Big Bounce Models Reignite Big Bang Debate. Quanta Magazine, Jan. 31, 2018. https://www.quantamagazine.org/big-bounce-models-reignite-big-bang-debate-20180131/.

Image credit: Wikimedia Commons

 

Featured

Americans Must Fix Their Health Habits Now—or Face the Consequences Tomorrow

Currently, 70.2% of the American population is either obese or overweight (1). Obesity is defined as having a BMI of 30 or higher, while being overweight means a BMI of at least 25 (2). This may not surprise many, but we have to examine this statistic closely to understand the harmful health impacts it could have on society. Heart disease, the number one cause of death in the United States (3), is driven in large part by the negative health effects that arise from obesity (4). Ultimately, we must educate ourselves on the potentially life-threatening health consequences of obesity.
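For readers unfamiliar with the cutoffs just cited, BMI is simply weight in kilograms divided by the square of height in meters; the short sketch below (hypothetical helper names, illustrative numbers only) applies the thresholds mentioned above.

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    """Map a BMI value onto the categories used in this article."""
    if bmi_value >= 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "not overweight"

# Example: 95 kg at 1.75 m gives a BMI of about 31, which falls in the "obese" range.
print(classify(bmi(95, 1.75)))
```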

Many factors contribute to this health epidemic, but a large share come down to convenience and enjoyment: many sources of entertainment and pleasure are unhealthy and easily accessible.

One issue contributing to widespread obesity in the United States is that the foods most responsible for it are the easiest to obtain. With the growing popularity of fast food, the opportunities to lead an unhealthy lifestyle are greater than ever before (5). Fast food restaurants are omnipresent along highways, in cities, and in the countryside (6). This convenient access to unhealthy food, combined with its relative affordability, makes it easy to order an unhealthy meal. In the United States, 34% of children between the ages of 2 and 19 consume fast food daily (7). This is a problem because fast food contains many of the least healthy ingredients available, such as palm oil, high fructose corn syrup, and partially hydrogenated oil (8). Eating this food consistently can lead to obesity and harmful effects on the heart. In addition, fast food is addictive and can create unhealthy patterns in a person’s life: its convenience, relatively low prices, and ingredients such as sugar make it easy to become hooked (9). A prolonged fast food habit can cement consistently unhealthy behaviors, which often contribute to weight gain and obesity. Eating fast food occasionally, say once a week, is manageable; however, eating fast food more than twice a week has been linked to weight gain and insulin resistance among young adults (10).

Another cause of obesity in America is the prevalence of digital and electronic forms of entertainment (11). The growing popularity of these inactive pastimes, such as television, video games, and online entertainment, leads many people to remain sedentary throughout the day. This idleness can have harmful health impacts: when muscles go unused, blood glucose levels can rise, because inactive muscles no longer need to take up as much sugar from the blood to function (11).

Obesity can lead to consequences like heart disease, type 2 diabetes, and an increased risk of certain cancers (12). In addition, many people who struggle with obesity are at greater risk for depression and anxiety (13). The bottom line is this: obesity can cause serious negative health effects, which hurt people both physically and mentally.

While there are many reasons people struggle with obesity, there are also many changes that can improve a person’s health regardless of background. Healthier eating habits and daily exercise can be built into most people’s lives and can produce dramatic health benefits. For those for whom healthy eating seems unaffordable, there are many cheap options: brown rice, whole-wheat pasta, whole-wheat bread, and non-fat yogurt all cost less than $2 for a standard size at a supermarket (14). And if even these foods are out of reach, some form of exercise can be incorporated into almost anyone’s life. Taking just a thirty-minute walk daily has many health benefits, including a decreased risk of high blood sugar and a healthier body composition (15).

If Americans are better educated about obesity’s significant health effects, they may be more likely to make healthier decisions. In addition, by understanding the causes of the problem, Americans can identify specific bad practices in their lives and commit to changing those habits. Ultimately, a movement encouraging Americans to adopt healthier habits, like former First Lady Michelle Obama’s “Let’s Move!” campaign, could make the country healthier as a whole (16).

Alec is a first-year in Thayer.

Works Cited

[1] National Institute of Diabetes and Digestive and Kidney Diseases. www.niddk.nih.gov/health-information/health-statistics/overweight-obesity (accessed Oct. 14, 2018).

[2] Center for Disease Control. www.cdc.gov/obesity/adult/defining.html (accessed Oct. 15, 2018).

[3] MedicalNewsToday. www.medicalnewstoday.com/articles/282929.php (accessed Oct. 14, 2018).

[4] World Health Organization. www.who.int/features/qa/49/en/ (accessed Oct. 14, 2018).

[5] TransparencyMarketResearch. www.transparencymarketresearch.com/article/global-fast-food-market.htm (accessed Oct. 14, 2018).

[6] Statista. www.statista.com/statistics/196619/total-number-of-fast-food-restaurants-in-the-us-since-2002/ (accessed Oct. 14, 2018).

[7] Toasttab. pos.toasttab.com/blog/10-fast-food-industry-statistics (accessed Oct. 14, 2018).

[8] TheBetterHealthStore. www.thebetterhealthstore.com/043011_top-ten-toxic-ingredients-in-processed-food_01.html (accessed Oct. 14, 2018).

[9] Psychology Today. www.psychologytoday.com/us/blog/you-illuminated/201108/7-reasons-we-cant-turn-down-fast-food (accessed Oct. 14, 2018).

[10] ScienceDaily. www.sciencedaily.com/releases/2005/01/050104105659.htm (accessed Oct. 14, 2018).

[11] LiveScience. www.livescience.com/15324-ssedentary-behavior-health-risks.html (accessed Oct. 14, 2018).

[12] World Health Organization. www.who.int/features/qa/49/en/ (accessed Oct. 14, 2018).

[13] Healthline. www.healthline.com/health/depression/obesity-and-depression (accessed Oct. 14, 2018).

[14] WebMD. www.webmd.com/food-recipes/features/cheap-healthy-15-nutritious-foods-about-2-dollars#2 (accessed Oct. 14, 2018).

[15] Washington Post. www.washingtonpost.com/lifestyle/wellness/the-many-benefits-of-walking-30-minutes-a-day/2015/10/19/cf12c938-71e1-11e5-9cbb-790369643cf9_story.html?noredirect=on&utm_term=.6fd016ed8696 (accessed Oct. 14, 2018).

[16] Let’s Move!. letsmove.obamawhitehouse.archives.gov/ (accessed Oct. 14, 2018).

Image credit: Wikimedia Commons

Featured

Disagreeing on Brain Death

“Death has been dissected, cut to bits by a series of little steps, which finally makes it impossible to know which step was the real death, the one in which consciousness was lost, or the one in which breathing stopped.” – Philippe Aries, 1975

On June 22, 2018, a 17-year-old girl named Jahi McMath went into acute liver failure and passed away. But according to the state of California, McMath had already been legally dead for five years (1). How is this possible?

The answer lies in the fact that McMath was declared brain dead in 2013 but remained on life support for the next five years. Her case has deeply shaken medical and ethical frameworks of what is considered death for physicians as well as patients.

It seems like it should be obvious whether someone is alive or dead. And yet, the determination has gotten progressively more complicated and controversial as medical technologies have improved. As St. Louis University Professor of Philosophy Jeffrey Bishop writes, “On the surface, ‘brain death’ appears to be a very stable concept, but, in practice, we see it frays at the edges” (1). What does a designation of brain death mean, and how has that definition changed over the years? Does a physician’s authority extend to removing life support from a brain dead individual? And how far is too far to keep someone alive?

Definitions of Brain Death, Then and Now

What is brain death? Current guidelines define it as “the irreversible loss of all functions of the brain, including the brainstem,” generally typified by coma (absence of response to sensory stimuli), absence of brainstem reflexes, and apnea (inability to sustain breathing) (2). This definition emerges almost wholly from a 1968 ad hoc committee convened at Harvard Medical School to discuss and examine definitions of brain death (3).  Since then, every state in the US has adopted legislation, through the 1981 Uniform Determination of Death Act (UDDA), that defines death by meeting at least one of two criteria: cessation of heartbeat and respiration, or “irreversible cessation of all functions of the entire brain, including the brain stem” (4).

The rationale behind this is the idea that the brain is the control center of the body: without it, the body cannot survive for long (3). Technological advances, however, throw this into question. While in 1981 the cessation of brain function may have implied, and was necessarily followed by, cardiac arrest, improved ventilators, tube feeding, and other life support technologies have made it increasingly unclear what our understanding of “death” should be: the body can now survive long after the brain ceases to function. Thus, the divide between biological death and brain death grows.

As Harvard Bioethicist Robert Truog describes the particular challenge of brain death, “Although legal definitions are typically defined by bright lines, biology tends to be continuous” (5). Essentially, brain death is a designation existing for legal reasons that does not necessarily fit biological criteria and understandings of bodily functions. And yet, this definition maintains medical and biological importance, though the line may be inherently arbitrary.

Part of the difficulty, especially for the patients’ loved ones, is that brain dead individuals often do not appear “dead” to the observer – their heart may still beat, and they may still breathe with assistance from machines. It is also difficult to distinguish between the irreversible unconsciousness that characterizes brain death and potentially reversible states of coma (6). The main difference is that brain dead individuals are incapable of breathing on their own and would quickly arrest and die if removed from supportive care. Physicians are still undecided, however, on the minimum acceptable observation period to determine cessation of neurologic function – that is, it still comes down to a judgment call (7).

Beyond the cognitive dissonance that inevitably makes accepting brain death difficult for patients’ loved ones, it is also challenging to reconcile brain death with other, often religious, definitions of death. For example, many Orthodox Jews and Native Americans maintain that death occurs only upon cessation of breathing, not upon cessation of brain function alone (8). Thus, current definitions of death, and whether brain death constitutes death, remain multiple and controversial.

The Case of Jahi McMath

Discussions of death and dying can seem abstract and theoretical, but they come to real-life importance frighteningly quickly as in the case of Jahi McMath. After a routine tonsillectomy operation to treat sleep apnea in 2013, then-thirteen-year-old Jahi McMath began to bleed and went into cardiac arrest, falling into a coma from which she would never awaken (9). She was declared by physicians and then a judge to be “brain dead.” By California law, which follows the UDDA, persons declared brain dead are deemed legally dead and must be disconnected from ventilators after a “reasonably brief period of accommodation” (10).

McMath’s family fought bitterly for her to be sustained on a ventilator and other supportive technologies – a judge issued a temporary restraining order to prevent the hospital from disconnecting McMath’s ventilator (which they were compelled to do in adherence to UDDA and to maintain their organ donation program), and she was evaluated by an independent physician who also declared her brain dead (11). In January, the hospital agreed to release McMath to the county coroner, who could then release the (legally dead) body to her family, with the understanding that her family would take full responsibility of her care (11).

Two states, New York and New Jersey, allow exceptions to the UDDA for religious reasons (6). For this reason, McMath’s family transferred her to a hospital in New Jersey, where she remained for several months to stabilize her (as a result of the court case and transfer, she had not been fed in three weeks and several organs were failing) (10). For the next four years, she sustained a heartbeat and other vital functions, receiving round-the-clock nursing care in an apartment (10).

Over the course of this period, she grew and went through puberty, even beginning menstruation, opening up still more questions about the ability of the body to live on after brain death (10). Indeed, in the few recorded cases of brain dead individuals sustained on life support, others have also demonstrated this: for example, one individual who had been declared brain dead survived for nearly 20 years, growing and functioning even though his brain demonstrated no tenable structure and showed calcification (12).

For five years, the McMath family has battled the Children’s Hospital of Oakland, where she was treated, and plans to continue doing so after her death (9). The case remains the subject of heated debate within and outside the medical community, centered on questions of boundaries and rights around death.

A key difficulty in the McMath case is autonomy and choice: McMath could not choose whether she wanted to continue living in this way because, of course, she could not communicate. In this way, the McMath case recalls many other controversial right-to-life cases, such as that of Terri Schiavo in 2005, whose family maintained her right to life support in an irreversible comatose state (slightly different from McMath’s in that she retained some brainstem function) (13). The perennial question that plagues the medical establishment is: who holds the right to make decisions about the patient’s care?

These decisions and debates are made more difficult by a constant imbalance of information and expertise between physicians and families. Indeed, one of the difficulties of McMath’s case is the refusal of her family to accept her brain dead status as permanent. McMath’s mother, Nailah Winkfield, has said that after her transfer to New Jersey “I didn’t have a clue. I had really thought that I would get her a feeding tube and a tracheotomy, and she would just get up, and we would be good” (10). This is one reason why many physicians and bioethicists vehemently oppose contestations of the term brain death – they worry that it may give families false hope of their relatives’ potential for recovery.  

The case of Jahi McMath demonstrates the obvious difficulty of delineating death. But perhaps one of the most important lessons it demonstrates is the importance of clear, open, and honest communication between patients and families.

Consequences of Changing Definitions

What are the consequences of allowing families like McMath’s to continue life support for a brain-dead individual? While the debate can easily turn towards arguments and discussions of autonomy and who has the right to end or maintain a life, there are many practical concerns that come with a declaration of death – for example, execution of wills and burial proceedings.

In the landmark 1968 document defining brain death from the Harvard Medical School Committee, the reasons for making this delineation were clearly laid out: first, because “improvements in resuscitative and supportive measures have led to…[the existence of] an individual whose heart continues to beat but whose brain is irreversibly damaged” which is burdensome to families and healthcare systems; and second, because “Obsolete criteria for the definition of death can lead to controversy in obtaining organs for transplantation” (14).

In fact, it was the first successful heart transplant in 1967 that prompted the formation of the committee on brain death at Harvard in the first place (3). This is no coincidence; with increasing success rates in organ transplantation surgeries, the medical establishment found itself in a paradoxical predicament – needing recently alive organs, but not being able to ethically remove organs from a live person. As one New Yorker editorial put it, “the need for both a living body and a dead donor” (15). Brain dead individuals provide just that.

This may seem a bit morbid, but as Harvard ethicist Robert Truog notes, “Since 1968 literally hundreds of thousands of lives have been saved or improved because we’ve been able to view this diagnosis as a legitimate point for saying that these patients may be considered legally dead” (3). Indeed, a change or limit in the definition of brain death would certainly hinder transplantation to a significant degree – even McMath’s lawyer expressed concern that “we may screw up organ donation” (10).

Beyond organ donation, there is the question of hospital resources. As one bioethicist puts it, “every extra hour of nursing time that goes into one of these dead patients is an hour of nursing time that didn’t go to somebody else” (10). Looking at it financially, McMath’s ICU care in New Jersey cost, on average, $150,000 per week, all paid for by Medicaid (10). Given the already limited resources available in intensive care units, many question the ethics of continuing to support and sustain patients with no chance of recovery at the expense of helping others, with potentially better chances at recovery.  

And yet, is this premise of delivering the most good to the most people enough to justify a definition of brain death? The philosophical debate between utilitarian principles and notions of individual rights to long-term support rages on. But perhaps the most important consequence of the definition of brain death is the opportunity to initiate the beginnings of closure for the family – something that is often difficult if the definition remains blurry.

Is There Hope for the Brain Dead?

One of the most controversial aspects of the McMath case was the claim by a neurologist who examined her that she may have shown signs of consciousness and some restored brain activity before she died. Calixto Machado, president of the Cuban Society of Clinical Neurophysiology, observed in McMath’s scans that though her brain stem was almost entirely destroyed, significant portions of her cerebrum were intact (10). This is unexpected, because in the few documented cases like McMath’s, nearly all brain matter is destroyed because of poor circulation on a ventilator. Beyond this, video recordings taken by McMath’s mother seem to show the girl responding to commands and perhaps even recognizing her mother’s voice by a change in heart rate (10). 

But a meta-analysis of neurology scholarship by the American Academy of Neurology (AAN) concluded that there are no cases of recovery of brain function after being declared brain dead (7). In this analysis, they did find several reports of apparently-stimulated motor movements in brain dead patients, much like what McMath’s family observed, but concluded that these “falsely suggest retained brain function” and are not in fact indicative of consciousness (7).

There remains interest in treatments to reverse brain death. A company called Bioquark is pursuing a clinical trial to inject stem cells into the spinal cords of brain dead individuals among other treatments (16). Similar treatments have been somewhat successful in patients with other sorts of brain damage (stroke patients, children with brain injuries), but these trials have been vehemently opposed by several neurologists as “border[ing] on quackery” and “creat[ing] room for the exploitation of grieving family and friends and falsely suggest[ing] science where none exists” (17). Indeed, the fact remains that at this moment in time, brain dead patients have no hope for recovery.

Altogether, though, it seems that brain death is still an open question – and one that will continue to get more complicated as medical innovations improve. Though the definitions of brain death continue to be delineated based on AAN guidelines heavily similar to the original 1968 document, researchers are still working to determine what scans and tests can more completely demonstrate what is happening in a brain dead patient. But as our ability to bring individuals back from dire situations improves, the need for clarity on what it means to die – and beyond, what constitutes being alive – becomes ever more pressing.  

Caroline Wechsler is a senior in Currier House studying History and Science.

Bibliography

[1] Bishop, Jeffrey. Why “Brain Death” Is Contested Ground. Accessed September 30, 2018. https://bulletin.hds.harvard.edu/articles/winterspring2015/why-brain-death-contested-ground.

[2] Goila, Ajay Kumar, and Mridula Pawar. “The Diagnosis of Brain Death.” Indian Journal of Critical Care Medicine : Peer-Reviewed, Official Publication of Indian Society of Critical Care Medicine 13, no. 1 (2009): 7–11. https://doi.org/10.4103/0972-5229.53108.

[3] Powell, Alvin. “Harvard Ethicist Robert Truog on Ambiguities of Brain Death – Harvard Gazette.” Accessed September 30, 2018. https://news.harvard.edu/gazette/story/2018/07/harvard-ethicist-robert-truog-on-why-brain-death-remains-controversial/.

[4] Sade, Robert M. “BRAIN DEATH, CARDIAC DEATH, AND THE DEAD DONOR RULE.” Journal of the South Carolina Medical Association (1975) 107, no. 4 (August 2011): 146–49.

[5] Truog, Robert D. “Defining Death—Making Sense of the Case of Jahi McMath.” JAMA 319, no. 18 (May 8, 2018): 1859–60. https://doi.org/10.1001/jama.2018.3441.

[6] Powell, Tia. “Brain Death: What Health Professionals Should Know.” American Journal of Critical Care 23, no. 3 (May 1, 2014): 263–66. https://doi.org/10.4037/ajcc2014721.

[7] Machado, Calixto, and Mario Estevez Jesús Pérez-Nellar. “Evidence-Based Guideline Update: Determining Brain Death in Adults.” Neurology, September 30, 2018. http://n.neurology.org/content/evidence-based-guideline-update-determining-brain-death-adults-1.

[8] Singh, Maanvi. “Why Hospitals And Families Still Struggle To Define Death.” NPR.org. Accessed October 1, 2018. https://www.npr.org/sections/health-shots/2014/01/10/261391130/why-hospitals-and-families-still-struggle-to-define-death.

[9] Goldschmidt, Debra. “Jahi McMath, California Teen at Center of Brain-Death Controversy, Has Died.” CNN. Accessed September 26, 2018. https://www.cnn.com/2018/06/29/health/jahi-mcmath-brain-dead-teen-death/index.html.

[10] Aviv, Rachel. “What Does It Mean to Die?” The New Yorker, January 29, 2018. https://www.newyorker.com/magazine/2018/02/05/what-does-it-mean-to-die.

[11] Burkle, Christopher M., Richard R. Sharp, and Eelco F. Wijdicks. “Why Brain Death Is Considered Death and Why There Should Be No Confusion.” Neurology 83, no. 16 (October 14, 2014): 1464–69. https://doi.org/10.1212/WNL.0000000000000883.

[12] Shewmon, D. Alan. “Recovery from ‘Brain Death’: A Neurologist’s Apologia.” The Linacre Quarterly 64, no. 1 (February 1997): 30–96. https://doi.org/10.1080/20508549.1999.11878373.

[13] Grossman, Cathy Lynn. “Family, Ethics, Medicine and Law Collide in Jahi McMath’s Life _ or Death.” Washington Post. Accessed September 26, 2018. https://www.washingtonpost.com/national/religion/family-ethics-medicine-and-law-collide-in-jahi-mcmaths-life-_-or-death/2014/01/03/3b8ced32-74be-11e3-bc6b-712d770c3715_story.html.

[14] “A Definition of Irreversible Coma: Report of the Ad Hoc Committee of the Harvard Medical School to Examine the Definition of Brain Death.” JAMA 205, no. 6 (August 5, 1968): 337–40. https://doi.org/10.1001/jama.1968.03140320031009.

[15] Greenberg, Gary. “As Good as Dead.” The New Yorker, August 6, 2001. https://www.newyorker.com/magazine/2001/08/13/as-good-as-dead.

[16] Sheridan, Kate. “Resurrected: A Controversial Trial to Bring the Dead Back to Life.” Scientific American. Accessed September 26, 2018. https://www.scientificamerican.com/article/resurrected-a-controversial-trial-to-bring-the-dead-back-to-life/.

[17] Lewis, Ariane, and Arthur Caplan. “Response to a Trial on Reversal of Death by Neurologic Criteria.” Critical Care 20 (November 22, 2016). https://doi.org/10.1186/s13054-016-1561-5.

Image credit: Wikimedia Commons

Featured

(Baby You Can’t) Drive My Car: The Ethical Implications of Driverless Cars

Driverless cars are one of the hottest topics in media reporting on science today. The idea that a person may be able to tell a car where to go without having to operate it is alluring, and these vehicles have the potential to increase the efficiency and safety of travel. Scientists have many promising ideas for how driverless cars will operate: the vehicles are envisioned to utilize GPS, radar maps, laser ranging systems, and vehicle-to-vehicle communication in order to make for the safest and most efficient transport (1).
These exciting prospects, however, do not come without understandable doubts about whether these cars will be safe and what it means for transport machines to have an unprecedented level of autonomy. One major issue is the question of how to handle the many ethical predicaments that arise when considering these machines. To tackle this problem, researchers have begun to craft algorithms that might “tell” a car how to react in a specific scenario; to some, these formulas, though imperfect, seem to be the most pragmatic solutions to these ethical difficulties (2).
An example: a driverless car is about to crash. The crash is inevitable, but the car can veer one of two ways: it can hit an eight-year-old girl or an eighty-year-old woman. Which way should it go?
This example, crafted by Patrick Lin in his review of ethics in autonomous cars, has no categorically correct solution. While the young girl has her whole life ahead of her and is considered a “moral innocent,” the older woman also has a right to life and respect (2). Even the professional code of ethics has no true recommendation, nor does it seem morally sound to take no stance and let the situation play out arbitrarily and unpredictably.
Scenarios like this have led to disagreement among scholars, who often adhere to different approaches when thinking through ethical dilemmas. Should a car be utilitarian, or is the raw number of lives saved not enough of a metric? Should a car have the option for a human driver to override the autonomous system? Should a car swerve to miss a crash, even if missing the crash would end up causing a different collision? These are all questions that researchers must consider as they form the algorithms that will govern the outcomes of these situations (2).
The implications of these decisions extend beyond the puzzle of determining who should fall victim to a crash. For example, some would argue that the ethically sound choice is for a driverless car to sacrifice itself if doing so would save more people than continuing on its original trajectory would. Surveys, however, report that the average buyer is highly unlikely to purchase and ride in a vehicle programmed to take this course of action (3).
Another question is one of individual human decisions: consider a car that is inevitably going to hit a bicycle rider. It has two options: hit a biker wearing a helmet or hit a biker who is not. Many would advise that the car be programmed to hit the rider wearing the helmet, the one with the greater chance of survival; but the choice has repercussions: this type of programming essentially penalizes riders for wearing helmets and could deter people from wearing helmets at all (2).
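To see how such a rule could backfire, consider a deliberately naive, hypothetical scoring function (invented for illustration; it does not reflect any real vehicle's software) that always picks the crash option with the lowest estimated chance of a fatality. Fed the two cyclists above, it mechanically selects the helmeted rider, which is exactly the perverse incentive described here.

```python
from dataclasses import dataclass

@dataclass
class CrashOption:
    """One possible trajectory and the party it would strike (hypothetical)."""
    description: str
    probability_of_fatality: float  # assumed estimate between 0.0 and 1.0

def least_expected_harm(options: list[CrashOption]) -> CrashOption:
    """Naive utilitarian rule: choose the option with the lowest estimated
    probability of a fatality. Real systems would need far richer inputs."""
    return min(options, key=lambda o: o.probability_of_fatality)

choice = least_expected_harm([
    CrashOption("swerve toward the cyclist wearing a helmet", 0.2),
    CrashOption("swerve toward the cyclist without a helmet", 0.6),
])
print(choice.description)  # prints the helmeted rider: the rule penalizes the safer behavior
```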
Last, there is the issue of who is responsible should an accident occur. Though one reasonable suggestion is to hold manufacturers responsible for fatalities in driverless car crashes, this might dampen their willingness to develop and optimize new products out of fear of liability. An alternative might be to hold the “driver” responsible no matter what, but this also seems unfair, given that the rider may have no way to intervene (4).
A major difficulty of ethical questions like these is that they do not have a “right” answer; as scientists write algorithms that feel the most reasonable, it will be important to keep in mind that there exists no solution that will please everyone. Through this moral murkiness, however, one thing is certain: driverless cars will transform our world, and programming them to reflect generally acknowledged standards of public safety will be revolutionary.

Julia Canick is a senior in Adams House studying molecular and cellular biology.
Works Cited

[1] Waldrop, M. Mitchell. “No drivers required.” Nature 518.7537 (2015): 20.

[2] Lin, Patrick. “Why ethics matters for autonomous cars.” Autonomes fahren. Springer Vieweg, Berlin, Heidelberg, 2015. 69-85.

[3] Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “The Social Dilemma of Autonomous Vehicles.” Science 352.6293 (2016): 1573-1576.

[4] Hevelke, Alexander, and Julian Nida-Rümelin. “Responsibility for crashes of autonomous vehicles: an ethical analysis.” Science and engineering ethics 21.3 (2015): 619-630.

Image credit: Wikimedia Commons

Featured

Spring 2018: Can Science Save Us?

Our Spring 2018 issue is now available online! Articles are posted individually as blog posts (the articles are linked below), a PDF version is currently displayed on our Archives page, and print issues will be available around Harvard’s campus starting Fall 2018. We hope you enjoy browsing through this issue as much as we enjoyed putting it together! A big thank you to our dedicated staff and supporting sponsors.

Table of Contents:

NEWS BRIEFS
1 A Shocking Revelation: Deep Brain Stimulation as a Treatment for Alzheimer’s Disease by Julia Canick ’17-‘19

2 One Dose, One Day: The Magic of Xofluza by Hanson Tam ‘19

FEATURE ARTICLES

4 Aging and Debilitation: The Grave Reality, and Hopeful Future, of Treating Neurodegenerative Disease by Leon Yang ‘21

7 Can Blockchain Save our Healthcare System? by Puneet Gupta ‘18

10 The Race to Quantum Supremacy by Kidus Negesse ‘21

14 No More Penicillin: A Future Without Antibiotics? by Kelvin Li ‘21

COMMENTARY
18 Space Biology and the Future of the Human Race by Connie Cai ‘21

20 Vaccines Against Opioids: A Solution or a Problem? by Felipe Flores ‘19

 

Featured

Vaccines against Opioids: A Solution or a Problem?

Opioid abuse is responsible for billions of dollars of additional healthcare expenditure and, more importantly, for about 90 deaths every day in the US alone (1). In the treatment of addiction and the effort to end the current opioid crisis, vaccination against street drugs has become a potential therapeutic option. Scientists in the Janda lab (Skaggs Institute) have developed a vaccine that exploits features of the immune system to prevent the adverse effects of opioids like heroin. Learning from a similar effort to develop a vaccine against cocaine (2), the team created a structure that could be injected into mice and sequester heroin by covalently conjugating a heroin analog (a hapten) to a carrier protein (3). The haptens tested, bound to carrier proteins such as diphtheria toxoid or tetanus toxoid, produced a strong immune response that resulted in a high concentration of antibodies in the blood; these antibodies ultimately prevented the opioids from reaching the brain by sequestering them. Each antibody’s specificity depends on which drug or metabolite its hapten was designed to mimic structurally, giving the scientists fine control over specificity. The vaccine, when administered in mice, initially reduced heroin’s potency by over 3-fold (3), and was made longer-lasting and more effective against high (even lethal) doses of heroin in subsequent work (4). In parallel, the Matyas lab (US Military HIV Research Program) optimized the hapten synthesis, making the process scalable and the vaccine protective against other abused opioids like oxycodone, hydrocodone, and hydromorphone, while not binding to endogenous opioid peptides or to drugs used to treat overdose and addiction such as naloxone or methadone (5).

If translated to the clinic, these vaccines, combined with withdrawal-alleviating drugs, could prove useful in treating opioid addiction and alleviating the epidemic. However, if they are to become a therapeutic, and more generally if vaccination against drug addiction is to become the standard of care, the paradigm will need to face several questions. For instance, will patients and clinical trial volunteers be cooperative? Most likely not, as a failed clinical trial for a cocaine vaccine demonstrated, or at least not without a proper psychosocial support network (6). At the same time, while efficient against multiple opioids, neither vaccine responds to fentanyl or sufentanil, drugs that are abused by addicts but are also critical in severe pain management. Importantly: should they? If a vaccinated patient is determined to relapse with heroin, believing they need a higher dose after vaccination, they could easily and inadvertently overdose on the high fentanyl doses present in laced street heroin. On the other hand, if a vaccine is made effective against fentanyl, a patient in need of emergency pain management may be completely out of options after vaccination; the last resort in such a situation, a higher dose of opioids, could prove extremely dangerous, as the therapeutic dose varies across the population, leaving patients susceptible to accidental overdose. On top of that, how will a vaccination paradigm keep up with newer, more potent, and more elusive “designer drugs” (7)? Considering the radically different time scales of producing and commercializing ever more potent street drugs versus developing FDA-approved treatments, there is a real possibility that science will never catch up to addiction. While promising, vaccines would do little to resolve the opioid crisis until other societal problems are addressed first, and they may in fact prove more dangerous than beneficial. In a situation as delicate as this one, with thousands of lives at risk, the vaccination paradigm may prove ineffective, if not harmful, unless properly assessed by scientists and authorities. Vaccination is nevertheless an option worth considering in parallel with alternative paradigms in order to arrive at the most comprehensive solution.

Felipe Flores ’19 is a junior in Quincy House concentrating in Human Developmental and Regenerative Biology.

Works Cited
[1] Volkow, N. D. & Collins, F. S. The Role of Science in Addressing the Opioid Crisis. New England Journal of Medicine 377, 391–394 (2017).

[2] Kimishima, A., Wenthur, C. J., Eubanks, L. M., Sato, S. & Janda, K. D. Cocaine Vaccine Development: Evaluation of Carrier and Adjuvant Combinations That Activate Multiple Toll-Like Receptors. Molecular Pharmaceutics 13, 3884–3890 (2016).

[3] Bremer, P. T. et al. Development of a Clinically Viable Heroin Vaccine. Journal of the American Chemical Society 139, 8601–8611 (2017).

[4] Hwang, C. S. et al. Enhancing Efficacy and Stability of an Antiheroin Vaccine: Examination of Antinociception, Opioid Binding Profile, and Lethality. Molecular Pharmaceutics 15, 1062–1072 (2018).

[5] Sulima, A. et al. A Stable Heroin Analogue That Can Serve as a Vaccine Hapten to Induce Antibodies That Block the Effects of Heroin and Its Metabolites in Rodents and That Cross-React Immunologically with Related Drugs of Abuse. Journal of Medicinal Chemistry 61, 329–343 (2018).

[6] Kosten, T. R. et al. Vaccine for cocaine dependence: A randomized double-blind placebo-controlled efficacy trial. Drug and Alcohol Dependence 140, 42–47 (2014).

[7] Crews, B. O. & Petrie, M. S. Recent Trends in Designer Drug Abuse. Clinical Chemistry 61, 1000–1001.

Featured

Can Blockchain Save our Healthcare System?

“Did the Bitcoin Bubble Just Burst?” (1) This latest news headline and many others immediately and frequently catch our attention with key terms such as Bitcoin, cryptocurrency, and blockchain. We’ve all heard of the cryptocurrency, or digital asset, Bitcoin, but few understand it. More importantly, even fewer understand the technology that underlies it: blockchain. Although Bitcoin and other cryptocurrencies have great potential to reshape our global economy and financial industry, the underlying blockchain technology has the potential to transform every industry in our world today, especially healthcare, which is currently in a state of disarray. As J.P. Morgan Chase CEO Jamie Dimon recently said, “Blockchain is real.” (2) But what is this blockchain technology, and can it save our healthcare system?

What is blockchain technology?

A blockchain is simply a digital ledger, essentially a digital book, that maintains a series of records of data for all parties involved in the network (3). Blockchains have been designed to be secure, decentralized, and transparent. The security arises from how different blocks, each of which contains certain data, such as from a transaction or contract, are timestamped and cryptographically linked, thereby forming a chain of blocks (3). Each block contains two cryptographic hashes, or digital fingerprints: one is generated from the data in that block, and the second is the hash of the previous block (4). If the data in a block is changed, its hash will change and no longer match the hash value stored in the block ahead of it. Thus, any attempt to change the information in one of the blocks of the chain can be spotted, preventing data removal or tampering (3). These cryptographic hashes form the crux of the immutability of blockchains, though other features of the chain often play a role as well, such as the proof-of-work mechanism.

Figure 1: A sample blockchain model showing how different blocks are linked through cryptographic hashes.
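As a rough, self-contained sketch of the hash-linking idea described above (a toy illustration in Python, not any real blockchain's implementation; it omits consensus, proof-of-work, and networking), the snippet below chains a few records with SHA-256 fingerprints and shows how editing an old block is immediately detectable.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 fingerprint of a block's contents (data, timestamp, previous hash)."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def add_block(chain: list, data: str) -> None:
    """Append a block that stores its predecessor's hash, forming the chain."""
    previous = chain[-1]["hash"] if chain else "0" * 64
    block = {"data": data, "timestamp": time.time(), "previous_hash": previous}
    block["hash"] = block_hash(block)
    chain.append(block)

def is_valid(chain: list) -> bool:
    """Recompute every fingerprint and check each link back to the previous block."""
    for i, block in enumerate(chain):
        body = {key: value for key, value in block.items() if key != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["previous_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain: list = []
add_block(chain, "record A")
add_block(chain, "record B")
print(is_valid(chain))           # True: the untouched chain checks out

chain[0]["data"] = "tampered"    # altering old data changes its fingerprint...
print(is_valid(chain))           # ...so validation fails: False
```

In a real network, every participant would hold a copy of this ledger and re-run the same checks, which is where the tamper evidence and decentralization described in the next paragraphs come from.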

The decentralized nature of blockchains arises from the fact that each party in the network holds a copy of the most up-to-date blockchain and that a block can only be added to the chain if it is verified by a majority of parties within the network (5). Once a block is validated, each party’s copy is updated so that all records match. This distributed design also makes blockchains very difficult to compromise or hack. In contrast, many current systems are far more centralized: a central organization or person has dominant control over the verification and regulation of the databases. In blockchains, since transactions can occur directly between the parties involved in the network, with approval from a majority of the parties, the need for middlemen, such as the government, is often eliminated (6).

The transparency of blockchains arises from the fact that the different blocks of information can be traced and viewed by the different parties in the network. Moreover, each party knows that each block has been validated and has not been modified without their knowledge (3,4). Depending on how the blockchain is designed, either all of the data in these blocks or only certain parts can be viewed by all parties in the network. These variations in design allow blockchains to be applied to different fields and purposes, and novel blockchain models are currently being researched for specific applications, since different blockchains can have features tailored to their purpose. Overall, the security, decentralization, and transparency in the design of blockchains gives people hope that implementing the technology in different industries will bring greater trust, better security, and reduced costs to those industries.

Blockchain in the Financial Ecosystem

With the elimination of intermediaries, transparency, and security among the greatest advantages of blockchain technology, it is no surprise that a majority of the earliest adopters, researchers, and innovators were interested in applying it to our banking and financial systems (7). Financial institutions and fintech startups are partnering and are actively interested in applying blockchain to wholesale payments, clearing and settlements, issuing debt and equity, management, and more (7). Many new cryptocurrencies have been developed in hopes of becoming the most efficient, effective, and widely adopted, whether for general money transfers, for specific financial institutions like investment banks, or even for non-monetary use (8). Some of these Bitcoin alternatives, also referred to as “altcoins,” have been growing rapidly, including Ripple, Litecoin, Monero, and Ether (8). Nonetheless, blockchain technology has begun to disrupt the financial ecosystem, and many large firms, including Accenture, McKinsey & Company, IBM, and Deloitte, are investigating this ripe technology (9).

Blockchain in Healthcare

The advent of blockchain technology and its promising future have spurred unparalleled investments and interest in applying it to healthcare. Blockchain’s ability to disrupt the current healthcare ecosystem and infrastructure has triggered a rise in many startups and businesses entering this unfolding industry.

Within healthcare, blockchain could have a large impact in many areas. Attention has focused especially on electronic health records and patient data. Currently, one of the biggest issues with patient data is the interoperability of different systems and softwares within and between organizations, along with the privacy and security of those data when communicated between systems (10). The consequences of poor healthcare data interoperability are vast. In an era of technological advancement, repeatedly filling out bundles of paper forms at different hospitals and clinics is shameful. Redundant testing is not only a greater financial burden on the system and the patient, but also ineffective and a waste of time. Poor coordination and miscommunication among the multiple providers working on a patient increase the chances of erroneous diagnoses and ineffective therapies.

There is a great potential for blockchain to streamline and function as a distributed database for improving interoperability. In the future, we hope that from a single blockchain, patient data can be readily accessed by all providers, researchers, and patients themselves. The beauty of blockchain technology is its ability to make this secured patient data easily accessible while simultaneously maintaining patient privacy through its cryptographic public and private keys. Moreover, through this blockchain technology, patients can access their life-long health history privately while also being able to share certain aspects of their data with healthcare providers and organizations. Giving patients more access and control of their own data is critical for healthcare’s primary goal of delivering patient-centered care. Patients will become more involved in their own care and will be better able to make informed decisions about their treatment or therapy options.

Integrating patient data on blockchain technology could have a profound impact on the pace of new discoveries in biomedical research, both in laboratories and in clinics. Rapid access to patient data through the blockchain would allow researchers to investigate large amounts of data simultaneously. With these large data sets, researchers could better investigate various topics at the population level, including health trends over time, differences in disease susceptibility among ethnic groups, differences in a drug’s effectiveness between sexes, and much more. Such large patient data sets are also critical for building and training machine learning algorithms that can predict these trends and increased disease susceptibilities. Most importantly, principal investigators and research organizations could rapidly access this patient data without compromising patients’ identities.

Even outside the clinic and research institutions, blockchain has been proposed for many smaller, yet critical, parts of the healthcare infrastructure11. For example, blockchain technology could be used to maintain active records of all healthcare providers, including their certifications, hospital affiliations, education, and more. Providers frequently change where they practice, shift specialties, and recertify; a blockchain-based system would allow for regulation, monitoring, and even fraud prevention11. Blockchains also have the potential to resolve many of the issues we face today in medication adherence and drug abuse, areas of high concern amid the current opioid overdose epidemic. Through blockchain technology, the manufacturing and prescribing of these drugs could be better traced and monitored, helping to prevent drug misuse and duplicate prescriptions12. Blockchain’s function as an immutable digital ledger has even pushed innovators to explore its use in managing counterfeit pills in the pharmaceutical supply chain and in managing clinical trials for novel therapeutics.

Industry interest is already measurable: a recent Deloitte survey found that 35 percent of blockchain-knowledgeable senior executives in healthcare were planning to implement blockchain13. Many new startups are also aiming to apply blockchain to various aspects of the healthcare system, including PokitDok, Gem, and Guardtime. PokitDok is working on interoperability, Gem is investigating reimbursement models, and Guardtime is exploring health data security14.

The ability of blockchain technology to make large amounts of data immutable, secure, private, and accessible inspires many to work toward its large-scale adoption in healthcare, especially for interoperability. That interoperability could accelerate medical research discoveries and innovations by allowing researchers to access large sets of patient data while maintaining patient privacy. The distributed ledger design and the elimination of middlemen have led many to believe that this technology can restore trust in the systems that adopt it15. In healthcare specifically, eliminating middlemen may also drive down costs and reduce fraudulent activity in medical billing and insurance. Because this is such a new and emerging technology, healthcare guidelines and regulations for implementing and integrating blockchain are not yet established, and this regulatory uncertainty could be a barrier to widespread adoption. There is hope that in the coming years a better understanding of this technology will promote its integration and eventual disruption of the industry. Overall, the possibilities for applying blockchain technology to medicine, and thereby transforming the healthcare ecosystem, are vast.

Puneet Gupta ’18 is a senior concentrating in Biology and beginning medical school in Fall 2018.

Works cited

[1] Barlow, S. Did the bitcoin bubble just burst? The Globe and Mail, Jan. 17, 2018. https://www.theglobeandmail.com/globe-investor/inside-the-market/did-the-bitcoin-bubble-just-burst/article37632595/ (accessed Feb. 14, 2018).

[2] Cao, S. JP Morgan CEO Jamie Dimon: Blockchain Is Real, Not Interested in Bitcoin. Observer, Jan. 9, 2018. http://observer.com/2018/01/jp-morgan-jamie-dimon-blockchain-real-not-interested-bitcoin/ (accessed Feb. 17, 2018).

[3] Blockchain: The New Technology of Trust. Goldman Sachs. http://www.goldmansachs.com/our-thinking/pages/blockchain/ (accessed Feb. 14, 2018).

[4] Norton, S. CIO Explainer: What Is Blockchain? The Wall Street Journal, Feb. 2, 2016. https://blogs.wsj.com/cio/2016/02/02/cio-explainer-what-is-blockchain/ (accessed Feb. 14, 2018).

[5] Watters, A. The Blockchain for Education: An Introduction. April 7, 2016. http://hackeducation.com/2016/04/07/blockchain-education-guide (accessed Feb. 14, 2018).

[6] Tapscott, D. How blockchains could change the world. May 2016. McKinsey & Company. https://www.mckinsey.com/industries/high-tech/our-insights/how-blockchains-could-change-the-world (accessed Feb. 15, 2018).

[7] Blockchain rewires financial markets. IBM, 2017. https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=GBP03469USEN& (accessed Feb. 16, 2018).

[8] What is an Altcoin? https://bitcoinmagazine.com/guides/what-altcoin/ (accessed Feb. 16, 2018).

[9] Market Guide for Blockchain Consulting and Proof-of-Concept Development Services. Gartner, Feb. 22, 2017. https://de.nttdata.com/fileadmin/web_data/country/de/News/market_guide_for_blockchain__317612.pdf

[10] Siwicki, B. Biggest EHR challenges for 2018: Security, interoperability, clinician burnout. Healthcare IT News, Dec. 9, 2018. http://www.healthcareitnews.com/news/biggest-ehr-challenges-2018-security-interoperability-clinician-burnout (accessed Feb. 18, 2018).

[11] Bresnick, J. Five Blockchain Use Cases for Healthcare Payers, Providers. https://healthitanalytics.com/news/five-blockchain-use-cases-for-healthcare-payers-providers (accessed Feb. 16, 2018).

[12] Blockchain Technology Holds the Answer for the Prescription Drug Crisis. NewsBTC, Jan. 7, 2018. https://www.newsbtc.com/2018/01/07/blockchain-technology-holds-answer-prescription-drug-crisis/ (accessed Feb. 16, 2018).

[13] Deloitte survey: Blockchain reaches beyond financial services with some industries moving faster. Deloitte, Dec. 13, 2016. https://www2.deloitte.com/us/en/pages/about-deloitte/articles/press-releases/deloitte-survey-blockchain-reaches-beyond-financial-services-with-some-industries-moving-faster.html (accessed Feb. 15, 2018).

[14] CB Insights. Sept. 21, 2017. https://www.cbinsights.com/research/healthcare-blockchain-startups-medicine/ (accessed Feb. 16, 2018).

[15] Blockchain: A new mechanism for trust—no intermediary required. Deloitte. https://qz.com/628581/blockchain-a-new-mechanism-for-trust-no-intermediary-required/ (accessed Feb. 18, 2018).

Featured

Aging and Debilitation: The Grave Reality, and Hopeful Future, of Treating Neurodegenerative Disease

Introduction

At the start of the new year, during a time usually associated with resolutions and new promises, two major pharmaceutical companies, Pfizer and Axovant, announced the discontinuation of their campaigns to develop drugs for Alzheimer’s disease and Parkinson’s disease, two progressive and debilitating neurodegenerative diseases that erode cognitive function and deprive individuals of the ability to complete the most rudimentary physical and mental tasks.1 The stagnation of research into these diseases is especially troubling given the aging of our population. Current projections suggest that by 2060, the number of Americans 65 and older may reach 98 million, more than double the roughly 46 million individuals in this age bracket today.2 Old age is almost inextricable from decay of the brain and neurodegenerative disease. An understanding of current research into remedies for these diseases is therefore essential for appreciating the scientific innovations needed to slow, and perhaps rescue us from, their encroaching grasp.

A Brief Historical Aside

Appreciating how far our knowledge of neurodegenerative disease has progressed requires recognizing how rudimentary treatment options were only a few decades ago. As Anne B. Young notes in a review of the evolution of medicine for neurological disorders over the past 40 years, the field of neurology long centered on diagnosing these diseases rather than treating them.3 The brain is protected by the blood-brain barrier and so was hard to access for diagnostic purposes before the emergence of CT, PET, and MRI scans.3 Many neurodegenerative diseases have no definitive cause; paired with the inaccessibility of the brain, this has made diagnosis and treatment particularly difficult. Progress, however, has been made.

The study of neurodegenerative disease gained momentum with the discovery of methods to trace the connections between nerve fibers, including one that used horseradish peroxidase as a tracer to map axonal pathways, and with the eventual invention and proliferation of novel imaging techniques.4,5 The rise of new imaging platforms, along with the accessibility of thorough genomic data, has spurred more targeted research and the creation of effective transgenic mouse models.3 In general, however, effective therapies and cures remain elusive. Between 2002 and 2012, there were only 413 trials of Alzheimer’s drugs, with a success rate of just 0.4%.6 To put this in perspective, the New York Times published an article in August 2017 with the headline “A Cancer Conundrum: Too Many Drug Trials, Too Few Patients,” describing the difficulty of filling the roughly 1,000 immunotherapy trials then underway.7

So What Research Have We Done?

While it is clear that neurodegenerative disease research requires continued investment of time and resources, the strides that scientists have already made in this field are noteworthy and should be elucidated. The following is a sketch of some common neurodegenerative diseases and possible exciting innovations.

Alzheimer’s Disease

Alzheimer’s Disease, named after Dr. Alois Alzheimer, who first described the disease in 1906, is the most common form of dementia in older people; estimates place it as the third leading cause of death in older individuals, behind heart disease and cancer.8 Two hallmarks of the disease are plaque deposits of the beta-amyloid protein and tangles of the microtubule-binding protein tau; the disease is also associated with the loss of neurons and synapses.9 Beta-amyloid and tau were identified in 1984 and 1986, respectively, leading to the first drug trial, conducted by Pfizer, in 1987. Currently, there are five FDA-approved drugs to treat Alzheimer’s, though none is significantly effective in reversing or even slowing the progression of the disease.10

However, certain research studies have excited the scientific community. Immunotherapy provides one angle: some studies suggest that the immune system can be equipped to target beta-amyloid plaques, in the hope of ridding an individual of this protein associated with the pathogenesis of the disease.11 Although recent trials have not been successful, clinical approaches based on both the humoral (B cell) and cell-mediated (cytotoxic T cell) arms of the immune system remain viable options for future research.

In addition, a study published in February 2018 showed that inhibiting the BACE1 enzyme, by deleting the gene that encodes it, led to a loss of plaques and improved cognitive function in mice; deleting the gene only after early developmental stages left the mice free of side effects, circumventing the harm that gene deletion can otherwise cause.12 Although extrapolating from mouse models to human subjects must be done with caution, hope remains high for a treatment based on this mechanism. Together, these two examples serve as a microcosm of the unique and exciting science aimed at combating Alzheimer’s Disease.

Parkinson’s Disease

Parkinson’s Disease, first described by James Parkinson in 1817, is another relatively common neurological disorder, affecting around one million individuals in the United States and five million worldwide. What frustrates scientists and patients alike is that there is no known cause of the disease; rather, a confluence of genetic and environmental factors seems to drive its emergence and progression.13 The most characteristic symptoms are a tremor in the hands and rigidity of movement.13 Like Alzheimer’s, Parkinson’s disease progressively robs people of their most basic functions. And like Alzheimer’s, it is associated with protein aggregates, in this case Lewy bodies, which contain the presynaptic neuronal protein alpha-synuclein. Researchers believe these aggregates disrupt synaptic function, driving the pathogenesis of the disease.14 Current treatments are directed toward easing symptoms but do nothing to halt or significantly slow the disease itself.

Here too, promising avenues for diagnosis and treatment remain. To improve the diagnosis of Parkinson’s Disease, researchers have turned to CRISPR, the gene-editing technique that has been applied to a wide variety of scientific problems. Scientists at the University of Central Florida used CRISPR to attach a reporter gene encoding NanoLuciferase to alpha-synuclein, generating an effective way to visualize the progression of Parkinson’s Disease.15 Earlier diagnosis would allow treatment to begin in the disease’s early stages, a strategy important in combating any progressive disease. In addition, because Parkinson’s disease destroys dopamine-producing neurons, scientists have generated these cells from induced pluripotent stem cells (iPSCs) in primate models with great success.16 The use of iPSCs circumvents ethical concerns while providing a viable way to combat Parkinson’s disease by restoring the brain’s ability to produce dopamine.

Amyotrophic Lateral Sclerosis

Amyotrophic Lateral Sclerosis, or ALS, rose to the national stage when Yankee legend Lou Gehrig was diagnosed with the disease and passed away shortly thereafter. The disease is characterized by progressive muscle wasting and a loss of the ability to control voluntary movement. Muscle weakness is followed by muscle twitching, which in turn is followed by muscle atrophy.17

Research has moved more slowly for ALS, but major social movements have helped momentum build. The once-famous ice bucket challenge and the now-prevalent hot pepper challenge are a testament to the societal backing of research. In 1994, the drug riluzole was shown to slow the progression of the disease by targeting the neurotransmitter glutamate, which may be involved in its pathogenesis.18 In 2017, the FDA approved edaravone, a drug that reduces the oxidative stress associated with ALS.19

Some Final Thoughts

Neurodegenerative disease is wide in its breadth and devastating in its effects. However, Pfizer’s and Axovant’s discontinuation of their neurodegenerative disease programs should not deter confidence in the potential of biomedical research to unravel the mechanisms behind these diseases. Science has already made great leaps, and it is important for policymakers and scientists to work together to develop therapies. In a sign that this synergy is possible, FDA Commissioner Dr. Scott Gottlieb announced in February 2018 that he supports a “broader, programmatic focus on advancing treatments for neurological disorders that aren’t adequately addressed by available therapies.”20 With continued government support and intellectual initiative from a wide array of scientists, we may one day develop cures for these neurodegenerative diseases.

Leon ‘21 is a freshman in Weld Hall thinking about concentrating in MCB or Neurobiology, but is ultimately undecided.

Works Cited

[1] NPR.org. https://www.npr.org/sections/thetwo-way/2018/01/08/576443442/pfizer-halts-research-efforts-into-alzheimers-and-parkinsons-treatments (accessed February 18, 2018).

[2] Population Reference Bureau. http://www.prb.org/Publications/Media-Guides/2016/aging-unitedstates-fact-sheet.aspx (accessed February 18, 2018).

[3] Young, Anne B. J. Neurosci. 2012, 29, 12722–28.

[4] Graybiel, A. M., and M. Devor. Brain Res. 1974, 68, 167–73.

[5] Jack, Clifford R., et al. Alzheimer’s Assoc. 2015, 11, 740–56.

[6] Cummings, Jeffrey L., Travis Morstorf, and Kate Zhong. Alzheimer’s Res. & Ther. 2014, 37

[7] Kolata, G. New York Times. https://www.nytimes.com/2017/08/12/health/cancer-drug-trials-encounter-a-problem-too-few-patients.html (accessed February 18, 2018).

[8] “Alzheimer’s Disease Fact Sheet.” https://www.nia.nih.gov/health/alzheimers-disease-fact-sheet (accessed February 18, 2018).

[9] Murphy, M. Paul, and Harry LeVine. J Alzheimer’s Dis. 2010, 19, 311.

[10] Alzheimer’s Association. https://www.alz.org/research/science/alzheimers_disease_treatments.asp (accessed February 18, 2018).

[11] Weiner, Howard L., and Dan Frenkel. Nature Rev Immunol, 2006 6, 404–16.

[12] Hu, Xiangyou, Brati Das, Hailong Hou, Wanxia He, and Riqiang Yan. J. Exp. Med. 2018.

[13] The Michael J. Fox Foundation for Parkinson’s Research. https://www.michaeljfox.org/understanding-parkinsons/living-with-pd/topic.php?causes&navid=causes (accessed February 18, 2018).

[14] Stefanis, Leonidas. Cold Spring Harb Perspect Med. 2012, 2.

[15] Basu, Sambuddha, Levi Adams, Subhrangshu Guhathakurta, and Yoon-Seong Kim. Sci. Rep. 2017, 7.

[16] Kikuchi, Tetsuhiro, Asuka Morizane, Daisuke Doi, Hiroaki Magotani, Hirotaka Onoe, Takuya Hayashi, Hiroshi Mizuma, et al. Nature. 2017, 548, 592–96.

[17] National Institute of Neurological Disorders and Stroke. https://www.ninds.nih.gov/Disorders/Patient-Caregiver-Education/Fact-Sheets/Amyotrophic-Lateral-Sclerosis-ALS-Fact-Sheet (accessed February 19, 2018).

[18] Bensimon, G., L. Lacomblez, V. Meininger, and the ALS/Riluzole Study Group. N Engl J Med. 1994, 330, 585–91.

[19] Sawada, Hideyuki. Expert Op. Pharm. 2017, 18, 735-738.

[20] Commissioner, Office of the. https://www.fda.gov/NewsEvents/Newsroom/PressAnnouncements/ucm596897.htm (accessed February 19, 2018).

Featured

The Race to Quantum Supremacy

The computer is one of the most revolutionary devices ever invented, and it distinctively marks the landscape of the digital era. We use it to work, learn, teach, read, write, speak, share information, access the government, power hospitals, run businesses, automate industry, perform research, make purchases, secure data, watch movies, drive cars, fly planes, and even travel to space. Everything we do is centered around it and completely dependent on it. The computer has become an indispensable facet of modern life.

And yet, computers are running up against hard limits. We have reached the point where the industry is approaching physical constraints on how small we can make transistors, the billions of switches that provide our computational power. As a result, the power of our computers is capped, and while that may not be an issue for browsing internet memes, it is affecting work in cryptography, physical simulations, and other computationally demanding fields. As we inch closer to these limits, researchers have been working behind the scenes to create quantum computers, which use the special properties of quantum mechanics to process unimaginable streams of information. To appreciate the importance of these developments, we will first examine the issues with our current computers, then cover the beautiful theory behind quantum computation, and finally explore some of the most remarkable breakthroughs in this field here at Harvard and around the world.

The Limitations of Classical Computers

Classical computers are the bread and butter of modern technology. They are made up of large electronic circuits that contain switches called transistors that can allow or block the flow of electricity by manipulating electric fields. A bit is a piece of information that tells us whether there is electricity flowing through a transistor, and it is binary, meaning it can have one of two states: 1 when there is electricity, and 0 when there is not.1

We can then combine many transistors into a larger structure called a logic gate, which checks whether electricity is flowing through its individual transistors. Using that information and a set of rules, it either allows or blocks electricity out of the gate. Those rules are statements of logic, and since basic math operations can be built from logic, all the computation behind our computers rests on these logic gates, as the sketch below illustrates.2
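As a rough illustration of how logic gates compose into arithmetic, the short Python sketch below builds a half adder out of AND and XOR operations; the function names are hypothetical, and the sketch is conceptual rather than a model of how gates are physically wired.

def AND(a, b):
    return a & b

def XOR(a, b):
    return a ^ b

def half_adder(a, b):
    # Combines two one-bit inputs into a sum bit and a carry bit.
    return XOR(a, b), AND(a, b)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} -> sum={s}, carry={c}")
# Chaining such adders gives multi-bit arithmetic, and ultimately a processor.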

Computing power, the speed with which computer operations are carried out, has grown exponentially during the past few decades as we became able to fit more and more transistors onto chips. In 1971, the Intel 4004 processor had a mere 2300 transistors; in 2016, Intel’s Xeon processor contained 7.2 billion. Gordon Moore, the co-founder of Intel, was so optimistic about this growth that he predicted that the number of transistors in a chip would double every two years. His observation became known as Moore’s Law after it proved to be true throughout the late 20th century. But Moore failed to consider that as we construct smaller and smaller transistors, we start to dive into the bizarre world of quantum mechanics.3
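Moore’s projection can be sanity-checked with a quick back-of-the-envelope calculation; this is a rough illustration that assumes a constant two-year doubling, which did not hold exactly over the decades.

transistors_1971 = 2300            # Intel 4004
doublings = (2016 - 1971) / 2      # one doubling every two years
projected = transistors_1971 * 2 ** doublings
print(f"{projected:.1e}")          # ~1.4e10, the same order of magnitude as the
                                   # 7.2 billion transistors in a 2016 Xeon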

As transistors shrink toward atomic sizes, they begin to experience strange quantum effects. In particular, a transistor on the scale of an electron encounters quantum tunneling, a phenomenon in which a particle can pass through a barrier that it classically could not penetrate. At the quantum scale, particles are described by wave functions that give the probability of finding them in particular regions of space. If a barrier is on the same scale as the particle, the wave function will necessarily show a non-zero probability that the particle exists in the space beyond the barrier. The terminology of “tunneling” through the barrier is really a classical metaphor to help us comprehend what is happening: the particle simply has a chance of showing up on the other side.4
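For readers who want the quantitative flavor, the standard textbook approximation for a rectangular barrier of height $V_0$ and width $L$ says that a particle of energy $E < V_0$ tunnels through with a probability that falls off roughly exponentially in the barrier width:

T \approx e^{-2\kappa L}, \qquad \kappa = \frac{\sqrt{2m(V_0 - E)}}{\hbar}.

As transistor features shrink toward atomic dimensions, $L$ becomes small enough that this leakage probability is no longer negligible.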

At that scale, we would no longer be able to use transistors as binary devices that output 1s and 0s, because the probabilistic location of the electrons would not give definite values for the presence of current. Thus, the physical limit on the size of transistors, our source of computing power, will stall the continuation of Moore’s Law. But why not consider the possibility that the very phenomenon that limits us could be turned to our advantage?5

The Quantum Computer

Quantum mechanics poses a physical limit to the power of a system built on binary bits. But what about non-binary information? The information of a particle, such as its position or momentum, is determined by quantum probability functions. Underlying these functions is the principle of superposition, which states that a particle exists across all of its possible states at the same time, like overlapping waves. Once you measure the system, however, it collapses into a single state; until then, the system exists as a probability distribution across states.6

The bits of information on a computer could also be described by superposition. These theoretical information packets are known as qubits, and unlike binary bits, they can simultaneously be in a state of 1 and 0. This allows a qubit to hold two pieces of information at the same time! For example, a system of 4 classical bits carries 4 bits of information, since each classical bit has one value. But because each qubit is in a superposition of 1 and 0, a system of 4 qubits spans 2^4 = 16 possible states’ worth of information. We can generalize this to say that N qubits contain 2^N bits of information. The power of exponentials is on our side; for example, a system of 20 qubits would be equivalent to the computing power of 2^20 = 1,048,576 classical bits!6
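A small numerical illustration of this scaling (a toy calculation of state-space size, not a simulation of real hardware):

# Number of basis-state amplitudes needed to describe an N-qubit state classically.
for n_qubits in (4, 20, 51):
    print(f"{n_qubits} qubits -> {2 ** n_qubits:,} amplitudes")
# 4 qubits -> 16
# 20 qubits -> 1,048,576
# 51 qubits -> 2,251,799,813,685,248  (why ~50 qubits strains classical simulation)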

Similar to how we use the flow of electricity to encode classical bits, we can use properties of quantum systems, like the spin of electrons or the polarization of light, to represent qubits. Then, quantum gates—the equivalent of logic gates for quantum computers—take in incoming qubits and output a result. To get values from the output, however, we have to measure it. What happens again when you measure a quantum system? It collapses into a single state and gives one classical value—it seems as if the incredible magnitude of information held by the qubits simply disappears. Getting around this is quite difficult and requires quantum algorithms that filter the outputs to return useful results. The development of these quantum algorithms is one of the largest areas of research in quantum computing, and we will cover them in more detail later.6
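The toy Python sketch below mimics, at a cartoon level, what measurement does to an equal superposition: the state holds two amplitudes, but a single measurement returns just one classical bit, sampled according to the squared amplitudes (the Born rule). It is a classical simulation for intuition only, and the variable names are illustrative.

import random

# Equal superposition of |0> and |1>: amplitude 1/sqrt(2) for each outcome.
amplitudes = {0: 2 ** -0.5, 1: 2 ** -0.5}

def measure(state):
    # Born rule: outcome probabilities are the squared magnitudes of the amplitudes.
    outcomes = list(state.keys())
    probabilities = [abs(a) ** 2 for a in state.values()]
    result = random.choices(outcomes, weights=probabilities)[0]
    return result, {result: 1.0}   # the state collapses onto the observed outcome

outcome, collapsed = measure(amplitudes)
print("measured:", outcome, "-> post-measurement state:", collapsed)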

Experts view quantum computers as machines best suited for operations that can take advantage of these superpositions to run many different calculations all at once, a process called parallel computation. The beauty of quantum computation is exactly this—the improvement is not in the speed of each individual operation but rather in the reduction of the total number of operations required.

One quantum algorithm, Shor’s algorithm, exhibits this quality through its ability to crack RSA, the cryptography system used everywhere today. RSA creates secure keys by multiplying large prime numbers, and cracking a key requires figuring out what those factors, the prime numbers, are. It would take classical computers longer than the age of the universe to crack these keys; for quantum computers, it could take on the order of seconds. For some, this has scary implications, and rightfully so: quantum computing seems to be lurking beneath our stable layer of security, ready to crack through at any moment. But for many researchers, as we will see, this perspective is narrow in vision. To them, something of boundless potential, something unseen and not yet realized, lies in the future of quantum computing.6
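As a toy illustration of why RSA’s security rests on factoring (using absurdly small primes; real RSA moduli are hundreds of digits long), the sketch below shows that multiplying the primes is trivial while recovering them from the product by brute force is the expensive step, and it is exactly this step that Shor’s algorithm would accelerate on a quantum computer.

# Toy RSA-style modulus from two small primes (real keys use ~2048-bit moduli).
p, q = 104729, 1299709              # small primes, for illustration only
n = p * q                           # the public modulus: cheap to compute

def factor_by_trial_division(n):
    # Brute-force factoring: feasible here, hopeless for 2048-bit moduli.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(factor_by_trial_division(n))  # recovers (104729, 1299709)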

Breakthroughs

Harvard physicist Mikhail Lukin is already at the forefront of quantum computing, having built a remarkable 51-qubit quantum simulator over the last two years. In an HSR interview, Lukin first explained that a quantum simulator differs from a general quantum computer in that it is a system specially designed to solve a specific set of problems. For Lukin, this specific problem is what’s known as a “many-body problem” in quantum mechanics, where the objective is to figure out how a collection of particles interact with each other. To build this machine, his team focused more than 100 laser beams into a cloud “of rubidium atoms so tightly that the laser beams acted as optical tweezers which could each capture exactly one atom.”7-8

“We take a picture to determine which tweezers are holding atoms and then we rearrange the atoms into a crystal of arbitrary shape,” Lukin said.8 This arrangement is essentially the input that his team programs into the quantum simulator.

After setting this arrangement into place, “the atoms all start in an identical state and transform into a final state that is determined by interactions in the system.” This is how his simulator computes, and the computation is what Lukin describes as a phase transition, like “water that transforms into ice from a temperature change.” In this case, the phase transition is driven by quantum interactions in the system.8

“This transformation finds the lowest energy state,” said Lukin, “and we compare the result of the simulation to what we actually know, which allows us to quantify the accuracy of the machine.”8

Essentially, by putting the configuration into place (the input) and allowing it to evolve, they receive a certain energy state (the output) that is the result of their computation. While this specific quantum simulator may not perform universal computational tasks, it provides a way to simulate a physical quantum mechanical system that was previously very hard to study with classical computers. In the words of Alexander Keesling, a Ph.D. student on the project, “the way around that is to actually build the problem with particles that follow the same rules as the system you’re simulating,” which is precisely what Lukin’s team did.9

But why 51 qubits? Lukin explains that roughly 50 atoms is the point at which classical computers can no longer simulate that many particles interacting with each other.

“What’s exciting about approaching these system sizes is that we are crossing this boundary where quantum machines will outperform classical machines,” he added.8

If quantum machines with this number of qubits already have tremendous capabilities, why not just add more qubits? Unfortunately, the quantum world has never been easy on us. Methods of constructing qubits involve trapping tiny particles and using complex optical equipment, as in Lukin’s project, and they become very hard to scale as more qubits are added. The culprit is quantum decoherence: quantum systems must be isolated from outside disturbances, which can otherwise collapse their superposition states.10

“You must have a high degree of quantum coherence to process these bits of information in spite of the number of qubits,” said Lukin.8

The spotlight in quantum computing was at one point centered on the Canadian start-up D-Wave Systems, which had introduced a 2000-qubit quantum computer. D-Wave relies on a process called quantum annealing, which applies magnetic fields to change the energy states of superconducting loops. Similar to Lukin’s quantum simulator, quantum annealing is used to solve specific optimization problems and does not function as a universal quantum computer. Although the number of qubits D-Wave boasts sounds strikingly impressive, Lukin adds that “their qubits have a very short quantum memory time and decohere very rapidly,” on the order of nanoseconds.8,11 Nonetheless, they’ve caught the attention of Google and NASA, which have partnered to purchase one of D-Wave’s 2000-qubit computers for $15 million for further experimentation.11

Google itself has been ferociously fighting decoherence. The company has created qubits out of supercooled aluminum wires hooked up to classical circuits, and its error-correction methods allowed it to construct a 9-qubit quantum chip in 2015. Its current goal is to come out with a 49-qubit machine within the next few months. The tech giant is primarily focused on reaching quantum supremacy, which Lukin described as the point when quantum systems become accurate and efficient enough to solve, in a reasonable time frame, problems that our best supercomputers cannot. At this point, Google has been able to maintain qubits in coherent states on the order of a microsecond. While that sounds small, operations on quantum computers take nanoseconds to complete, so such coherence times currently allow thousands of operations to run before the state is lost.12-16

“Google’s 49-qubit project is interesting work,” Lukin agrees. “But they haven’t yet operated the machine, and even when implemented will be quite hard to compete with our approach in terms of coherence and programmability.”8

Competition is flaring, however. IBM, the oldest player in the game, has been working on quantum computing for decades and unveiled a prototype 50-qubit quantum computer in November 2017 that could put it ahead of Google in the race. IBM constructed its system in a unique way, supercooling a metal into a state of superconductivity in which a current can simultaneously flow in two different directions, representing a superposition of 1 and 0. As of now, the company has achieved 9 qubits with this setup and hopes that as it scales to 50 qubits, decoherence does not spiral out of control.17

Decoherence is not even the biggest problem: designing algorithms for quantum computation has proved to be a major theoretical challenge for researchers like Lukin. Remember that the beauty of quantum computing is that it drastically reduces the number of operations required to compute something by running many calculations at once. “This is the idea of parallel computing, and that’s where this exponential speed up occurs,” Lukin said, referring to the 2^N scaling described above.8

The issue, however, is that taking a measurement of a quantum system collapses the superposition into a single classical state, so the challenge is to make sure that the classical information returned is accurate and useful.

“You want to design a quantum algorithm such that it’s easy to encode the quantum problem in a state or in a system of interactions,” said Lukin. “And then you can efficiently extract some classical information.”8

These quantum algorithms that return useful classical results do not follow the same logic as our current ones, like Fourier Transform, RSA, or link analysis.18

“That’s what makes it hard, that’s why you can’t just take conventional algorithms and implement it on a quantum machine,” Lukin explained, because there is no guarantee that they will be accurate and efficient.8

Is Lukin’s project a means of developing a universal quantum computer?

“It is on one hand a stepping stone, but it’s also the case that to implement useful and practical algorithms, we might not need a fully universal quantum computer,” he said. “We need it to be programmable to encode a problem into the machine, but it doesn’t necessarily need to be fully universal.”8

The Uncertain Future

Quantum computing, as we have seen so far, is experiencing rapid growth, and the desire for the power it could provide has spread far beyond the tech industry. According to classified documents revealed by Edward Snowden, the NSA has put $80 million toward building “a cryptologically useful quantum computer” as part of a project it calls “Penetrating Hard Targets.” The controversial agency fears that powerful quantum computers running methods like Shor’s algorithm could break the encryption that secures classified government information and communication.19

“Some people are actually worried that this quantum computer power will be mostly destructive, but in practice this might be just the other way around,” Lukin claimed.8

Quantum cryptography can allow one to encrypt information much more securely, so that “this encryption is protected by the laws of quantum mechanics, which would be impossible to break.”8 The NSA thus believes it is crucial to get a head start now and build quantum computers that can protect its information from attacks in the future. Not much is known about the agency’s progress, but its work highlights the threatening implications quantum computing has for security. Lukin, however, remains optimistic.

“I would venture to guess that way before the time that we have quantum computers that are powerful enough to use Shor’s algorithm, we will be using them to do other things, like optimization problems, which are at the root of tasks like machine learning and artificial intelligence,” Lukin explained.8

Google and NASA have already begun researching the application of quantum computing to artificial intelligence, since machine learning relies on difficult optimization problems that quantum machines may be able to solve. The combined power of these two giants might lead us to a breakthrough in machine learning.20

“Shor’s algorithm is important because it was one of the earliest examples showing the power of quantum computers and it got people sort of talking about it,” Lukin stressed, “but if you think about the classical computer, we also initially had some ideas of what classical computers would be good for.”8

When engineers built the first programmable computers, like the Colossus, they used them for wartime tasks such as breaking encrypted German communications and calculating projectile trajectories during WWII. For those purposes, the machines were far more efficient than humans, but after transistors and integrated circuits were developed, computers shrank and became able to store programs and run software that let them do things never thought of before.21-22

“When they tried to implement the algorithms which motivated building these classical machines, they only performed okay, but other algorithms which were completely unproven worked remarkably well, and I think there are some indications that the same story will be true for quantum computers,” Lukin exclaimed.8

The comparison he drew was optimistic, and it put into perspective the potential scale and reach of quantum computing. It has the power to completely restructure data science, it has the power to take us to unimaginable heights in computing efficiency and security, and it has the power to revolutionize the world.

“We don’t realize it yet,” Lukin said. “We just really need to build these machines to figure this out, and that’s what we are going to do.”8

Kidus Negesse ’21 is a freshman living in Pennypacker intending to concentrate in Physics or Applied Mathematics.

Works cited

[1] Bits and Bytes. Stanford University. https://web.stanford.edu/class/cs101/bits-bytes.html (accessed February 17, 2018)

[2] Sangosanya, W. Basic Gates and Functions. University of Surrey, 2005. http://www.ee.surrey.ac.uk/Projects/CAL/digital-logic/gatesfunc/ (accessed February 17, 2018).

[3] Moore, G.E. Proceedings of the IEEE. 1998, 86, 82-85.

[4] Quantum Tunneling. University of Oregon. http://abyss.uoregon.edu/~js/glossary/quantum_tunneling.html (accessed February 17, 2018).

[5] Condliffe, J. World’s Smallest Transistor Is Cool but Won’t Save Moore’s Law. MIT Technology Review, Oct. 7, 2017. https://www.technologyreview.com/s/602592/worlds-smallest-transistor-is-cool-but-wont-save-moores-law/ (accessed February 17, 2018).

[6] Aaronson, S. The Limits of Quantum. Scientific American, Mar. 2008. http://www.cs.virginia.edu/~robins/The_Limits_of_Quantum_Computers.pdf (accessed February 17, 2018).

[7] Lukin, M. D. et al. Nature. 2017, 551, 579-584.

[8] Lukin, M. Personal Interview. Mar. 2, 2018.

[9] Reuell, P. Researchers create quantum calculator. The Harvard Gazette, Nov. 30, 2017. https://news.harvard.edu/gazette/story/2017/11/researchers-create-new-type-of-quantum-computer/ (accessed February 18, 2018).

[10] Neill, C. et al. ArXiv:1709.06678. 2017.

[11] Gibney, E. D-Wave upgrade: How scientists are using the world’s most controversial quantum computer. Nature, Jan. 24, 2017. https://www.nature.com/news/d-wave-upgrade-how-scientists-are-using-the-world-s-most-controversial-quantum-computer-1.21353 (accessed February 18, 2018).

[12] Emerging Technology from the arXiv. Google Reveals Blueprint for Quantum Supremacy. MIT Technology Review, Oct. 4, 2017. https://www.technologyreview.com/s/609035/google-reveals-blueprint-for-quantum-supremacy/ (accessed February 18, 2018).

[13] Hsu, J. Google Tests First Error Correction in Quantum Computing. IEEE SPECTRUM, Mar. 4, 2015. https://spectrum.ieee.org/tech-talk/computing/hardware/google-tests-first-error-correction-in-quantum-computing (accessed February 18, 2018).

[14] Nave, K. Quantum computing is poised to transform our lives. Meet the man leading Google’s charge. Wired, Oct. 31, 2016. http://www.wired.co.uk/article/googles-head-of-quantum-computing (accessed February 18, 2018).

[15] Russell, J. Quantum Bits: D-Wave and VW; Google Quantum Lab; IBM Expands Access. HPC Wire, Mar. 21, 2017. https://www.hpcwire.com/2017/03/21/quantum-bits-d-wave-vw-google-quantum-lab-ibm-expands-access/ (accessed February 18, 2018).

[16] Nicas, J. How Google’s Quantum Computer Could Change the World. Wall Street Journal, Oct. 17, 2017. https://www.technologyreview.com/s/609035/google-reveals-blueprint-for-quantum-supremacy/ (accessed February 18, 2018).

[17] Knight, W. IBM Raises the Bar with a 50-Qubit Quantum Computer. MIT Technology Review, Nov. 10, 2017. https://www.technologyreview.com/s/609451/ibm-raises-the-bar-with-a-50-qubit-quantum-computer/ (accessed February 18, 2018).

[18] Otero, M. The real 10 algorithms that dominate our world. Medium, May 26, 2014. https://medium.com/@_marcos_otero/the-real-10-algorithms-that-dominate-our-world-e95fa9f16c04 (accessed March 14, 2018).

[19] Rich, S.; Gellman, B. NSA seeks to build quantum computer that could crack most types of encryption. Washington Post, Jan. 2, 2014. https://www.washingtonpost.com/world/national-security/nsa-seeks-to-build-quantum-computer-that-could-crack-most-types-of-encryption/2014/01/02/8fff297e-7195-11e3-8def-a33011492df2_story.html?utm_term=.ac7150def805 (accessed February 18, 2018).

[20] Quantum A.I. Google. https://research.google.com/pubs/QuantumAI.html (accessed February 18, 2018).

[21] History of Computers. University of Rhode Island. https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading03.htm (accessed March 17, 2018).

[22] When was the first computer invented? Computer Hope, Jan. 24, 2018. https://www.computerhope.com/issues/ch000984.htm (accessed March 17, 2018).

Featured

No More Penicillin: A Future Without Antibiotics?

INTRODUCTION

The invisible world of microbes is a scary place. For nearly the entirety of humanity’s existence, infectious diseases have been the leading cause of death. Only recently did advances such as antiseptic chemicals, vaccination, and pasteurization (to name a few) arrive to combat lethal pathogens. Because of those advances, life expectancy and global population have skyrocketed in the past century, but one advance in particular has caused the death toll from bacterial infections to all but disappear: the development of antibiotics, small molecules designed to inhibit or kill bacteria.

To borrow a phrase from the 2008 financial crisis, the variety of antibiotics discovered in the 1950s and 1960s seemed “too big to fail.” Unfortunately, medicine is now approaching the point where all currently used antibiotics will succumb to resistance. In the direst scenario, humanity would plunge back into a pre-antibiotic era. That is not to say there is no hope: several promising new drugs in the pipeline look to be effective in killing even the most resistant bacteria. If this research can be sustained and policy changes can be enacted to promote responsible antibiotic usage, the resistance problem can effectively be addressed.

THE BIOCHEM PROBLEM

The antibiotic revolution has afforded great benefits to humanity, including a vast increase in life expectancy, the disappearance of many fatal infections, control over other once-deadly viral diseases, and even increased crop and animal yields.1 Yet in the euphoria of having momentarily defeated bacterial infections, scientists forgot how adaptable bacteria are at processing their chemical surroundings. The so-called “golden age” of antibiotics produced thousands of new compounds by selectively modifying natural antibiotics (such as amoxicillin from penicillin), and this seemingly endless supply was thought to be enough to stave off resistance.1 What followed, however, was a dramatic decrease in antibiotic discovery, driven by growing research costs and stagnating screening and isolation strategies.1 And even though research has increased in recent years, without financial backers to develop these compounds into usable drugs, frustratingly few new drugs are entering the market.

For many drugs, a slow discovery process would not be problematic, but for antibiotics, science is racing against the evolutionary clock driving resistance. The World Bank has estimated that the costs of bacterial resistance could exceed those of the 2008 recession,2 and the CDC has estimated over 23,000 deaths from more than 2 million antibiotic-resistant infections in the last year alone, with that number only projected to rise.3 Naturally occurring resistant bacteria are normally at an evolutionary disadvantage because of the metabolic cost of maintaining resistance mechanisms, and so they are found at low levels in the environment. Once antibiotics are introduced, however, the non-resistant individuals are killed and the resistant organisms proliferate.3 Not only can these resistant colonies go on to cause infections directly, they can also transfer their resistance genes to other bacteria, sometimes of completely different species, furthering the spread of resistance.3 Scientists knew all of this but underestimated the speed at which bacteria could adapt, which is what led to the current crisis.1

Of the resistant bacteria, the most pressing subset are the Gram-negative species, a categorization based on structural differences between bacteria. A useful analogy is to think of Gram-positives as being surrounded by a thick layer of steel mesh while Gram-negatives are surrounded by a thin layer of lead: throwing a rock at either will do nothing, but pouring water on the Gram-positives will allow penetration. In addition to this decreased permeability to drugs, Gram-negatives also have more built-in resistance mechanisms and a greater ability to pass resistance genes around.5 As a result, antibiotic-resistant Gram-negatives are causing more infections worldwide, and those infections are more difficult to treat.2

Figure 1: Gram-positive and Gram-negative bacterial cell walls. Image: https://commons.wikimedia.org/wiki/File:Gram-Cell-wall.svg

THE INDUSTRY PROBLEM

The last, and arguably most concerning, obstacle to solving the resistance problem involves industry pressures. Because antibiotics ideally should be used as sparingly as possible, they present an unfavorable proposition for pharmaceutical companies looking to maximize sales, and much antibiotic research has consequently been relegated to academia.1 Additionally, since antibiotics cure diseases, they are short-course therapies and thus bring in less revenue than, say, a beta-blocker for high blood pressure, which must be taken continually. Lastly, the time and money it takes to develop new antibiotics (even longer and more expensive now that drug-approval protocols have become more stringent), combined with that short treatment window, makes it unattractive for drug companies, or anyone else, to invest in antibiotic development.1

Promisingly though, there has been a rather unprecedented uptick in global awareness of antibiotic resistance and a number of entities have implemented programs designed to incentivize research and development.2

THE PRIORITY ANTIMICROBIAL VALUE AND ENTRY (PAVE) AWARD

The PAVE award concept, developed by the Duke-Margolis Center for Health Policy at Duke University, aims to overcome the low return on investment of antibiotics and to overturn the industry’s current volume-based payment paradigm.5 The group proposed a market entry award that would provide companies with public funding following FDA approval, with the stipulation that they must find other sources of funding tied to the drug’s efficacy and performance.5,6 To generate these public funds, the group considered selling transferable exclusivity vouchers (TEVs), which would allow the owning company to maintain a limited monopoly on a drug of its choice.

Though not expressly included as part of the award, the PAVE group recommended that a comprehensive strategy for addressing antibiotic resistance also include what it calls “push incentives.”5 Push incentives do what their name implies: they push drugs into the market by providing funds for both clinical and pre-clinical research, reducing the burden of the drug approval process on pharmaceutical companies. They thereby increase both academic and industry research and speed up the discovery and safety-testing steps of drug development.

The PAVE group stresses that financial incentives for pharmaceutical companies alone, no matter the amount and no matter the timeline over which they are given, will not be enough to defeat resistance. Proper antibiotic stewardship, minimizing unnecessary antibiotic usage and prescribing the minimally effective antibiotic for a given infection, will also be key.5 This responsibility will lie primarily with doctors and other health professionals, since they have direct contact with patients, but the PAVE award makes it clear that industry must also stop pushing for its antibiotics to be prescribed.

FDA PROGRAMS

The U.S. FDA has created a variety of programs designed to increase the number of antibiotics available to treat resistant infections. As part of the GAIN (Generating Antibiotic Incentives Now) Act, the FDA created the QIDP (Qualified Infectious Disease Product) designation, which offers a five-year exclusivity extension along with Fast Track designation and priority review.7 QIDP status allows drugs to move through the FDA’s review process much more quickly and gives their developers access to more guidance from the FDA on the path to approval.7

A similar program to the FDA’s Fast Track process is the Breakthrough Therapy designation. It is not included with QIDP designation, and to receive the designation, a drug must show that it has a clear advantage over available therapy, examples of which include improved safety profile or an effect on serious symptoms of a disease.8 Breakthrough Therapy designation includes all the benefits of the Fast Track process but also includes more thorough advising on efficient drug development starting in Phase 1 clinical trials.8

UPCOMING DRUGS

In conjunction with industry-modifying policies that spur research and with changes to antibiotic usage paradigms that prevent future resistance, new drugs still need to be developed to deal with the current resistance problem. For now, only a few candidates have shown promise against multi-drug-resistant Gram-negative bacteria, but even one reliable drug is enough to start making a dent in the problem. Some are semisynthetic derivatives of known drugs, some are fully synthetic members of established antibiotic classes, some are new combinations of previously approved drugs, and others are completely new classes of compounds. Here is an overview of a few of the upcoming drugs.

Lefamulin

A novel pleuromutilin, lefamulin is being developed by Nabriva and is currently in Phase 3 clinical trials.9 Notably, it is the first pleuromutilin developed for systemic rather than topical use, and it has proven effective against many drug-resistant bacteria, both Gram-negative and Gram-positive.9,10 Additionally, research has shown that pleuromutilins have a generally low susceptibility to resistance development and that, owing to their unique mechanism of action, they show little cross-resistance with other antibiotics.10,11 Though it is currently being advanced as an agent to treat community-acquired bacterial pneumonia (CABP), Nabriva is also developing it for skin infections and pediatric infections, in both IV and oral formulations.10
Figure 2: Chemical structure of lefamulin. Image: https://commons.wikimedia.org/wiki/File:Lefamulin_skeletal.svg

Eravacycline

Synthesized at Harvard University by Amory Houghton Professor of Chemistry and Chemical Biology Andrew Myers, eravacycline is a tetracycline antibiotic developed to treat complicated intra-abdominal infections (cIAI).12 Its fully synthetic route allowed the substitution of chemical groups that proved key to its activity against carbapenem-resistant Enterobacteriaceae (CRE) and carbapenem-resistant A. baumannii (CRAB).9,12 It is also unaffected by many tetracycline-specific resistance mechanisms and is effective against bacteria resistant to colistin, a potent drug of last resort.9,12 Tetraphase, the company developing it, recently submitted a New Drug Application (NDA) that was granted priority review and included Phase 3 data demonstrating non-inferiority to two carbapenems; the company is aiming for both IV and oral formulations of the drug to treat serious hospital infections.12

Figure 3: Chemical structure of eravacycline. Image: https://commons.wikimedia.org/wiki/File:Eracacycline-.png

Cefiderocol

Antibiotic resistance is not always due to degradation of the drug by bacterial enzymes. Sometimes drugs simply cannot penetrate the cell, either because they are actively effluxed or because the cell wall is too difficult to breach. To get around the latter, scientists at Shionogi have developed cefiderocol, a semisynthetic derivative of the cephalosporin ceftazidime that enters bacterial cells by binding to iron and hitching a ride through the bacterium’s iron-uptake machinery.9,13 Because of the novelty of the compound, it is largely unaffected by known beta-lactamases and has demonstrated activity against all of the multi-drug-resistant Gram-negative infections prioritized by the WHO.13 Phase 3 clinical trials are underway or have been completed in complicated urinary tract infections (cUTIs) and in hospital- and ventilator-acquired pneumonia, and Shionogi is planning to submit an NDA later this year.9,13

Murepavadin

The first in a new class of antibiotics, murepavadin is an outer-membrane-protein-targeting antibiotic developed by Polyphor that inhibits construction of a portion of the cell wall by mimicking the needed compound.9 Because it is the first of its class, the only bacteria it cannot kill are those that are intrinsically resistant, where murepavadin cannot physically reach the membrane; in other words, there are no biochemical resistance mechanisms that would impede its success, and it shows no cross-resistance with other antibiotics.9,14 It is being developed specifically to treat carbapenem-resistant Pseudomonas and has been given QIDP status, with two completed Phase 2 clinical trials indicating high treatment success and low resistance development.14

Recce 327

Taking the membrane-binding idea further, Recce Pharmaceuticals has developed a membrane-binding agent that causes the cell to burst from outward pressure.15 It is unique in that it indiscriminately attacks both Gram-negative and Gram-positive bacteria: it is effective against the outer membrane found in Gram-negative bacteria and is small enough to diffuse through the thick peptidoglycan of Gram-positive bacteria.15 Additionally, Recce claims that its product binds so nonspecifically to the cell wall that even mutations altering the composition of its binding target will not affect its efficacy.15 Having both broad-spectrum activity and a low propensity for resistance development would make this drug a powerful addition to medicine’s arsenal, and having received QIDP designation from the FDA, Recce hopes that its planned clinical trials will repeat the strong results seen in laboratory experiments.15

CONCLUSION

Antibiotics have come a long way since the golden era of their discovery. Though in recent years bacteria have developed startling amounts of resistance to even the most potent antibiotics, the world is starting to respond. With changes to both the research process and the prescribing paradigm, and with increased development of novel antibiotics, science is on track to restore the benefits that the first antibiotic revolution brought about. As the public becomes more aware of the problem of resistance, patients will undoubtedly reduce their demand for unnecessary antibiotics; however, doctors and industry must also do their part. Policy changes may provide the push needed to spur action, but regardless, increased research into antibiotics will only be effective if everyone learns proper antibiotic stewardship. Antibiotics made possible major surgeries and lifesaving procedures that would otherwise have been impossible due to the risk of infection, and it is not an exaggeration to say that they form the cornerstone of many of the greatest medical advances of the last half century. Continued enjoyment of their benefits will require work from pharmaceutical companies, doctors, and patients alike, but it is a worthwhile effort that will continue to bring far-reaching benefits to all of humanity.

Kelvin Li ’21 is a freshman in Wigglesworth Hall.

Works Cited

[1] Bérdy, J. Thoughts and Facts about Antibiotics: Where We are Now and Where We are Heading. J Antibiot. 2012, 65, 385-389.

[2] WHO/EMP/IAU. Prioritization of Pathogens to Guide Discovery, Research and Development of New Antibiotics for Drug-resistant Bacterial Infections Including Tuberculosis; World Health Organization: Geneva, CH, 2017.

[3] About Antimicrobial Resistance. https://www.cdc.gov/drugresistance/about.html (accessed Feb. 18, 2018).

[4] Gram-negative Bacteria Infections in Healthcare Settings. https://www.cdc.gov/hai/organisms/gram-negative-bacteria.html (accessed Feb. 18, 2018).

[5] Daniel, G.W. et al. Value-based Strategies for Encouraging New Development of Antimicrobial Drugs; Duke Margolis Center for Health Policy: Washington, US-DC, 2017.

[6] Daniel, G.W. et al. Addressing Antimicrobial Resistance and Stewardship: The Priority Antimicrobial Value and Entry (PAVE) Award. JAMA. 2017, 318, 1103-1104.

[7] FDA: CDER. Qualified Infectious Disease Product Designation Questions and Answers Guidance for Industry; Food and Drug Administration: Rockville, US-MD, 2018.

[8] Breakthrough Therapy. https://www.fda.gov/ForPatients/Approvals/Fast/ucm405397.htm (accessed Feb. 18, 2018).

[9] WHO/EMP/IAU. Antibacterial Agents in Clinical Development: An Analysis of the Antibacterial Clinical Development Pipeline, Including Tuberculosis; World Health Organization: Geneva, CH, 2017.

[10] Pipeline and Research. https://www.nabriva.com/pipeline-research (accessed Feb. 18, 2018).

[11] Paukner, S.; Ridel, R. Pleuromutilins: Potent Drugs for Resistant Bugs – Mode of Action and Resistance. Cold Spring Harb. Perspect. Med. [Online] 2016, 7, 83-98. http://perspectivesinmedicine.cshlp.org/content/7/1/a027110 (accessed Feb. 18, 2018).

[12] Tetraphase Pharmaceuticals. Tetraphase Pharmaceuticals Announces Submission of New Drug Application to FDA for Eravacycline for the Treatment of Complicated Intra-Abdominal Infections (cIAI); Tetraphase Pharmaceuticals: Watertown, US-MA, 2018.

[13] Shionogi Incorporated. Shionogi Presents Positive Clinical Efficacy Trial Results And In Vitro Data On Cefiderocol, At IDWeek 2017; Shionogi & Co., Ltd.: Osaka, JP and Florham Park, US-NJ, 2018.

[14] Murepavadin (POL7080). https://www.polyphor.com/pol7080/ (accessed Feb. 18, 2018).

[15] Product Candidates: Science. https://www.recce.com.au/index.php/product-candidates/science (accessed Feb. 18, 2018).

Featured

One Dose, One Day: The Magic of Xofluza

On February 23, 2018, amidst a worst-in-a-decade flu season, the Japanese pharmaceutical company Shionogi & Co. received approval to sell a new influenza drug in Japan.1 Xofluza (baloxavir marboxil) works unlike any antiviral previously developed. Instead of preventing infected cells from releasing viral particles, as the established Tamiflu medication does, Xofluza stops the flu from hijacking healthy cells in the first place.2

The strategy of blocking this earlier phase of the viral life cycle was proven effective in the phase 3 clinical trial CAPSTONE-1, which concluded in October 2017.3 Researchers found that taking Xofluza, as opposed to Tamiflu or a placebo, led to better outcomes.4,5 First, patients on the new drug had significantly lower virus levels at all time points after treatment. For example, at the one-day mark, 90% of patients on Tamiflu were still positive for influenza, whereas only 50% of patients on Xofluza carried detectable virus.5 Second, virus production in Xofluza patients ceased after 24 hours, while it continued for 72 and 96 hours in Tamiflu and placebo patients, respectively.4 In terms of clinical signs, patients on the new drug saw their fever subside in a median of 24.5 hours, compared with 42 hours for placebo. Other flu symptoms typically last around 80 hours with placebo, but those taking Xofluza experienced a much quicker recovery, with alleviation occurring on average within 54 hours.4,5

Although the time required for symptom relief was similar between Xofluza and Tamiflu, the medical community is excited about Xofluza because it induces a rapid reduction in viral load. The new drug’s ability to stop viral shedding after only 24 hours could limit transmission and help contain outbreaks.4 Another major benefit is that the Xofluza treatment involves exactly one dose, making it more convenient than Tamiflu’s five-day regimen of two doses per day.2

This breakthrough in antiviral development comes in a year when influenza has been particularly deadly and the vaccine particularly ineffective. As of the end of February 2018, the flu had already caused 114 child deaths in the United States, with record-high hospitalization rates reported by the Centers for Disease Control and Prevention.6 This year’s flu shots have conferred only 36% protection against the circulating strains.7 While researchers are seeking a universal vaccine that can protect against all strains, experts warn that such a development is at least a decade away. In the meantime, pharmaceutical companies including Shionogi, Johnson & Johnson, AstraZeneca, and Visterra are testing potential new compounds to treat the flu post-infection. Johnson & Johnson is investigating pimodivir, which targets viral replication. AstraZeneca and Visterra are working on an injectable antidote that would disable viruses.2 Many of these potential drugs have years of testing ahead, but Shionogi’s Xofluza has already made it through the regulatory process, at least in Japan.

How specifically does Xofluza work its magic? The way influenza replicates in host cells is by hijacking cellular machinery to make copies of the viral genome as well as viral proteins. Influenza carries its genetic information in the form of RNA, a close cousin of DNA. A key player is the influenza virus RNA polymerase, which serves two functions. First, it uses its nuclease activity to cut capped RNA fragments from host cell RNAs. These capped fragments then serve as primers for the copying of viral mRNA, the polymerase’s second function.8 A 2009 study by Dias et al. determined the location of the endonuclease active site within the RNA polymerase, making it a promising target for an influenza drug.9 Subsequently, Xofluza was developed. It is a cap-dependent endonuclease inhibitor that stops the influenza polymerase and thus viral replication.2

Shionogi, partnering with Roche, plans to apply for U.S. approval this summer, with a decision expected in 2019.2 In addition to its proven effectiveness against both influenza A and influenza B strains, Xofluza also has potential against Tamiflu-resistant and avian flu strains.5 If the new drug lives up to the results reported in the clinical trial, it would be a groundbreaking development in antiviral medication. Requiring only one dose and inducing recovery in one day, Xofluza is the future of our fight against influenza.

 

Figure 1. Source: https://www.cdc.gov/media/releases/2018/t0209-flu-update-activity.html

Hanson Tam ’19 is a junior in Lowell House concentrating in Molecular and Cellular Biology.

Works Cited

[1] Shionogi & Co., Ltd. XOFLUZA™ (Baloxavir Marboxil) Tablets 10mg/20mg Approved for the Treatment of Influenza Types A and B in Japan. Feb. 23, 2018. http://www.shionogi.co.jp (accessed Mar. 14, 2018).

[2] Rana, P. Experimental Drug Promises to Kill the Flu Virus in a Day. The Wall Street Journal, Feb. 10, 2018. http://www.wsj.com (accessed Mar. 14, 2018).

[3] A Study of S-033188 (Baloxavir Marboxil) Compared with Placebo or Oseltamivir in Otherwise Healthy Patients With Influenza (CAPSTONE 1). clinicaltrials.gov (accessed Mar. 14, 2018), Identifier: NCT02954354.

[4] Payesko, J. Baloxavir Marboxil Demonstrates Positive Phase 3 Influenza Results. MD Magazine, Oct. 13, 2017. http://www.mdmag.com (accessed Mar. 14, 2018).

[5] Shionogi & Co., Ltd. Shionogi To Present S-033188 Phase 3 CAPSTONE-1 Study Results For Treatment Of Influenza At IDWeek 2017. Oct. 5, 2017. http://www.shionogi.com/newsroom/article.html#122521 (accessed Mar. 14, 2018).

[6] Centers for Disease Control and Prevention. 2017-2018 Influenza Season Week 8 ending February 24, 2018. http://www.cdc.gov (accessed Mar. 14, 2018).

[7] Liu, A. Shionogi’s flu drug wins green light in Japan, but U.S. approval not likely until 2019. http://www.fiercepharma.com (accessed Mar. 14, 2018).

[8] Li, M. et al. EMBO J. 2001, 20, 2078-2086.

[9] Dias, A. et al. Nature. 2009, 458, 914-918.


Featured

Space Biology and the Future of the Human Race

Space biology –– even the name of the field sounds like an oxymoron. Little research has been done in space biology, simply because of how difficult (and expensive) it is to get specimens into space. Moreover, whatever research does come from such experiments is not easily replicable and thus often inconclusive.

Yet space biology is a field that deserves more attention, especially if we consider the future of our planet and the human race. Tech mega-moguls like Jeff Bezos and Elon Musk champion space travel as our future; they envision a time when we can take off in search of a new home once this planet becomes uninhabitable. Yet before this science fiction becomes reality, less fiction and more science need to happen. The first question space biologists, or anyone interested in space travel, must ask is: can humans even colonize space? Our eyes, our cerebrospinal fluid, all the features that evolved specifically for conditions on Earth –– will any of these prove to be limiting factors for human space travel?

In 1991, NASA launched the Spacelab Life Sciences 1 (SLS-1) mission to investigate the effects of microgravity on animals and animal development. The ultimate goal was to examine the possibility of human development in space and, essentially, to determine whether humans could one day give birth in space. The agency blasted over 2,000 jellyfish polyps into space and induced them to strobilate (to progress through their life cycle from polyp to mature jellyfish). Jellyfish, specifically Aurelia aurita, were chosen for the job because of their rapid metamorphosis (less than five days) and because scientists can easily induce metamorphosis by dosing their environment with either iodine or thyroxine.1 Being able to conduct an experiment in space is a privilege; the fact that jellyfish were selected to be grown in space over other animals and experiments demonstrates that NASA believed this particular study would be illuminating for the future of space travel.

The scientists brought these space jellies back to Earth, compared them with their Earth-raised counterparts, and found that slightly more jellyfish in the space sample were unable to swim properly. They hypothesized that microgravity had affected the development of the jellyfish’s statoliths, the specialized gravity-sensing structures that help orient the animal. Because humans have analogous gravity-sensing structures in the inner ear, the scientists believed the jellyfish could serve as a model for humans. If long-term space travel becomes a reality, this research suggests that humans born and raised in space may struggle when returning to Earth or settling on other planets. As strange as it sounds, jellyfish in space have shown us that long-term space travel may not be as glamorous or as simple as we would like to believe.

Research on space biology, particularly human physiology in space, is also conducted through Head-Down Bed Rest (HDBR) simulations. In these simulations, subjects complete all of their daily activities for several months in a bed that tilts their head six degrees below their feet, mimicking a zero-gravity environment without sending anyone to space. These studies have illuminated many effects of microgravity on humans: loss of accurate spatial orientation, loss of head-eye and hand-eye coordination, muscular atrophy, swelling of the head, loss of bone density, and deteriorating bone architecture, among others.2 NASA currently has a fairly good understanding of the effects of space on humans over short periods (around five to six months, the length of most space missions and HDBR simulations). Beyond that, however, research is limited. Here, then, is the frontier of space biology: we do not fully understand how the human body reacts to progressively longer stretches of time in space. Finding the answer to this question is the key to space travel.

NASA is pursuing another research avenue to further this understanding with its twin study of astronauts Scott Kelly and Mark Kelly. One of the twins (Scott) was sent to space for 340 days while his brother Mark remained on Earth as a control. While the full research results have not been released, interesting findings are already emerging. Changes in gene expression, telomere length, and the gut microbiome have been observed alongside expected physiological effects such as changes in bone density. Telomeres are DNA caps that protect the ends of our chromosomes, and longer telomeres are generally associated with cell longevity. Scott’s telomeres were observed to lengthen in space –– and once Scott returned to Earth, their length returned to pre-flight levels.3 This interesting and counterintuitive finding has prompted NASA to plan a 2018 study of telomere length in ten astronauts. Again, most of these studies, while interesting, do not offer concrete conclusions because of their small sample sizes. They do, however, offer intriguing hints about the human consequences of space travel.

NASA, however, is not the only party invested in human space travel. Both Jeff Bezos and Elon Musk run private space exploration companies that explore the possibility of long-term human space colonization. Bezos is the founder of Blue Origin, a private space company that champions space tourism and space travel. Elon Musk’s company, SpaceX, states its ultimate goal “of enabling people to live on other planets.”4 Together these companies are the frontrunners of the burgeoning private space exploration industry, and with the help of their money and fame, their vision for the future is rapidly gaining popularity.

Yet it does require a certain kind of arrogance to believe that we can simply escape this world once we make it uninhabitable and find another to colonize. In addition, the reality of space travel is not as chic –– nor as feasible, yet –– as NASA, Bezos, or Musk would have us believe. In the words of Scott Kelly on his first days back on Earth after a year in space,

“I’m seriously nauseated now, feverish, and my pain has gotten worse. This isn’t like how I felt after my last mission. This is much, much worse… I wonder whether my friend Misha, by now back in Moscow, is also suffering from swollen legs and painful rashes. I suspect so. This is why we volunteered for this mission, after all: to discover more about how the human body is affected by long-term space flight. Our space agencies won’t be able to push out farther into space, to a destination like Mars, until we can learn more about how to strengthen the weakest links in the chain that make space flight possible: the human body and mind.”5

The idea of space travel and living on another planet is impossibly alluring. The reality of space travel, and the bodily wear it entails, isn’t quite as alluring, yet many are working to make us believe in it as our future. And we believe it. We want to make space travel a reality, and so we study jellyfish and gut microbiomes, all in search of the miraculous –– all in search of a new world.

Connie Cai ’21 is a freshman living in Grays Hall intending to concentrate in Chemical and Physical Biology.

Works Cited

  1. Dorothy B. Spangenberg et al., “Development Studies of Aurelia (Jellyfish) Ephyrae Which Developed During the SLS-1 Mission,” Advances in Space Research 14 (1994): 239-247.
  2. Timothy Gushanas, “Human Research Program,” NASA [online]. https://www.nasa.gov/hrp/bodyinspace (accessed March 5, 2018).
  3. Witze, Alexandra, “Astronaut Twin Study Hints at Stress of Space Travel,” Nature [online], January 26, 2017. https://www.nature.com/news/astronaut-twin-study-hints-at-stress-of-space-travel-1.213 (accessed March 5, 2018).
  4. Musk, Elon, “SpaceX,” SpaceX [online]. http://www.spacex.com/about (accessed March 5, 2018).
  5. Kelly, Scott, “Astronaut Scott Kelly on the devastating effects of a year in space,” The Sydney Morning Herald [online], October 6, 2017. https://www.smh.com.au/lifestyle/astronaut-scott-kelly-on-the-devastating-effects-of-a-year-in-space-20170922-gyn9iw.html (accessed March 5, 2018).


Featured

A Shocking Revelation: Deep Brain Stimulation as a Treatment for Alzheimer’s Disease

Alzheimer’s disease, a highly debilitating neurodegenerative disorder, is the most common form of dementia worldwide.1 Those afflicted struggle with memory, thinking, and behavior; the slow, progressive onset of this cognitive decline2 makes it an insidious and emotionally painful illness for patients—nearly 44 million people worldwide3—and their families. More agonizing still: there is currently no known cure.2 A recent study in the Journal of Alzheimer’s Disease, though, points to another way to slow the development of the disease: deep brain stimulation.4

Deep brain stimulation (DBS) involves implanting a “brain pacemaker” in a patient to deliver electrical impulses to specific regions of the brain, altering brain activity by modulating particular action potentials, cells, or chemicals.5 DBS has proven efficacious for treating disorders like Parkinson’s disease, depression, and obsessive-compulsive disorder,5 and now it appears that the therapy may find success in Alzheimer’s treatment as well. Multiple studies investigating DBS as an Alzheimer’s treatment have reported varying degrees of success, finding that stimulation of areas such as the fornix (implicated in the memory circuit of Papez) and the nucleus basalis of Meynert (a perception-related area that degenerates with Alzheimer’s) helped enhance memory by supporting the creation of new neurons in the hippocampus.2

The use of DBS as a treatment for Alzheimer’s disease was a happy, yet accidental, discovery: in 2008, an obese patient treated with hypothalamic DBS (in the hopes that it would regulate his appetite by suppressing his hunger cues) suddenly reported déjà vu, a feeling of rejuvenation, and an ability to recollect old memories in more vivid detail than he ever had before.6 Further imaging revealed that stimulation of his hypothalamus, the brain’s center for hunger and other drives, resulted in increased brain activity in his hippocampus (where short-term memories are consolidated into lasting ones).6 Though it was not intentional, this unexpected consequence opened the floodgates for new ideas for the treatment of Alzheimer’s—a disease characterized by significant memory loss.

Many studies have tested the ability of DBS to treat the symptoms of the condition, and this newest addition in the Journal of Alzheimer’s Disease, from neurologists at The Ohio State University, provides a longer-term study to elaborate on preexisting research. The study found that DBS in the frontal lobe, a region responsible for problem-solving, organization, planning, and judgment, helped slow the cognitive decline of subjects with Alzheimer’s.4 Though the study had a notably small sample size of three participants, the results were promising: subjects’ cognitive functions declined much more slowly than did those of controls without the treatment. One subject, who had been unable to prepare food on her own, was able to plan, organize, and cook a meal independently.4 These types of improvements have given doctors hope for the treatment outcomes of those suffering from Alzheimer’s disease, as well as other conditions that have a negative impact on cognition.

Though Alzheimer’s disease is a hot topic in the medical community, scientists still have much to learn about which routes of treatment will be the most beneficial. The idea that DBS may successfully alleviate symptoms and slow disease progression is exciting and has sparked important discussions about next steps for treatment. Many researchers agree that altering the brain’s higher-processing networks has the potential to improve executive functioning in patients, and the numerous ways to achieve this neuromodulation leave plenty of room for creativity and innovation in methods of treatment. Given the potential side effects of DBS—which can include surgical risks, infections, and neuropsychiatric complications7—an ideal treatment might rely on non-invasive methods.5 That type of therapeutic may become a reality in the future, but for now the best we can do is appreciate the possibilities that DBS has opened up for those treating the most prevalent form of dementia today.

Julia Canick ’17-’19 is a senior studying Molecular and Cellular Biology in Adams House.

Works Cited

  1. What is Alzheimer’s? Alzheimer’s Association [Online], https://www.alz.org/alzheimers_disease_what_is_alzheimers.asp (accessed February 18, 2018).
  2. Mirzadeh, Z. et al. Journal of Neural Transmission 2016, 123(7), 775-783.
  3. 2017 Alzheimer’s Statistics. A Place for Mom [Online], https://www.alzheimers.net/resources/alzheimers-statistics/ (accessed February 18, 2018).
  4. Scharre, D. W. et al. Journal of Alzheimer’s Disease 2016, 1-13.
  5. Benefits of Deep Brain Stimulation for Alzheimer’s. A Place for Mom [Online], https://www.alzheimers.net/8-3-15-benefits-of-deep-brain-stimulation-for-alzheimers/ (accessed February 18, 2018).
  6. Stetka, B. Reviving Memory with an Electrical Current. National Public Radio, Inc. [Online], https://www.npr.org/sections/health-shots/2016/05/14/477934952/can-electricity-be-used-to-treat-alzheimer-s-disease (accessed February 18, 2018).
  7. Chen, X. L. et al. Interventional Neurology 2012, 1(3-4), 200-212.
Featured

Fall 2017: The Evolution of Science

Our Fall 2017 issue is now available online: The Evolution of Science! Articles are posted individually as blog posts (the articles are linked below), a PDF version is currently displayed on our Archives page, and print issues will be available around Harvard’s campus starting Spring 2018. We hope you enjoy browsing through this issue as much as we enjoyed putting it together! A big thank you to our dedicated staff and supporting sponsors.

 

Table of Contents:

NEWS BRIEFS

Feeling Blue: How Instagram Activity Can Provide Insight into Behavioral Health by Julia Canick ’18

Cyborg Bacteria: Catching Light by Michelle Koh ’21

FEATURE ARTICLES

The Theory of Everything, Challenged by Connie Cai ’21

A History of Microscopy by Kelvin Li ’21

The Limitations of Science Where it Matters Most by Will Bryk ’19

GENERAL ARTICLES

The Evolution of the Tetrapod Forelimb by Priya Amin ’19

A Political Symbiosis by Michael Xie ’20

COMMENTARY

Cutting Time: A Brief History of Surgery by Jeongmin Lee ’19

Why Racial Prejudice Isn’t Scientifically Sound: The Evolving Concept of Race in Science by Jessica Moore ’21

Geoengineering: Turning Back the Climate Change Clock? by Sandip Nirmel ’21

Geoengineering: Turning Back the Climate Change Clock?

By: Sandip Nirmel

Global warming and climate change pose significant threats to the future of life on Earth. With temperatures increasing by an average of 0.17 °C each decade, scientists have already begun to observe changes in sea-ice melt patterns and an expanded range of vector-borne diseases, among other consequences (1). To address these concerns, the United Nations convened in 2015 to establish the Paris Agreement, a framework for reducing global greenhouse gas emissions and slowing the global temperature increase. To date, 170 of the 197 participating nations have ratified the Paris Agreement, which stands as the most comprehensive climate change framework the United Nations has ever established (2).

Although the Paris Agreement has noble goals, some members of the scientific community worry that merely slowing down climate change will not be enough to prevent the environment from going “over the edge.” These individuals argue for the proactive implementation of geoengineering mechanisms to reverse some of the most crucial environmental changes that have taken place thus far. As defined by the 2010 United Nations Convention on Biological Diversity, geoengineering mechanisms include “any technologies that deliberately reduce solar insolation or increase carbon sequestration from the atmosphere on a large scale that may affect biodiversity” (2). Put simply, the general goal of geoengineering is to decrease the greenhouse effect. This same United Nations Convention published a moratorium that prohibits field testing of geoengineering techniques by any person; on the other hand, the moratorium allows for “small scale scientific research studies that would be conducted in a controlled setting” (2).

The obvious ambiguities of the United Nations’ geoengineering moratorium have prevented any serious enforcement of its policies. In 2012, American entrepreneur Russ George conducted a unilateral iron fertilization experiment off the coast of western Canada (3). One of the main purposes of this experiment was to determine whether the influx of iron would help boost the phytoplankton population in the area, since iron is a vital nutrient for phytoplankton and other marine primary producers (3). A larger phytoplankton population means more photosynthesis, which translates to a greater intake of carbon dioxide from the atmosphere. The hypothesis, then, was that seeding the ocean with iron leads to greater sequestration and export of atmospheric carbon dioxide, a process that should theoretically help reduce global temperatures over the long run by decreasing the amount of carbon dioxide in the atmosphere (3). George faced considerable public outcry, since he effectively violated the United Nations moratorium on geoengineering (3).

At this point in time, rather than concerning ourselves with the minutiae of international policy, it is much more meaningful for us to debate a more philosophical question: Is geoengineering the future of climate remediation? Geoengineering has only recently burst onto the scientific scene, and it presents a novel approach towards the long-standing issues of global warming and climate change. Below, we shall explore both sides of the debate and see how geoengineering fits into the evolution of science.

Advocates of geoengineering generally argue that efforts to simply slow down global warming have been fairly unsuccessful, thus necessitating proactive corrective action in the form of geoengineering. In a seminal paper, Dutch atmospheric chemist Paul Crutzen points out that global carbon dioxide emissions would need to be reduced by 60-80% to achieve zero net emissions, which is largely unfeasible (4). Crutzen then discusses the merits of spraying sulfur dioxide (SO2) into the atmosphere, since SO2 is known to reflect solar radiation back into space, thus counteracting the greenhouse effect brought about by CO2 (4). There is fairly convincing empirical evidence that corroborates Crutzen’s case: following the 1991 Mount Pinatubo volcanic eruption, which released 17 million tons (15.4 million metric tons) of SO2, global temperatures dropped by about 0.6 °C the next year (5). Although SO2 clearly can have myriad other side effects on the environment (e.g., acid rain), it could potentially be a powerful tool for reversing the current trend of global warming, if we can find a way to harness it correctly. With temperatures increasing at their current rate, such techniques may very well become the only option for keeping global temperatures at a sustainable level.

Opponents of geoengineering present a fairly strong case as well. One common argument is that geoengineering involves excessive meddling with the environment, which more often than not results in environmental damage. As alluded to previously, SO2 injection often results in acid rain, which arguably has far more detrimental effects on the environment than potential benefits. With regard to iron fertilization in oceans, some geoengineering skeptics point out that massive phytoplankton blooms often form around seeded areas, disrupting the balance of the local food chain and sapping other vital nutrients from the waters (3). A second argument against geoengineering involves the notion of a moral hazard (6). Generally speaking, a moral hazard arises when the existence of a solution to a problem makes people think that they no longer need to care about the problem and can behave as unscrupulously as they please (6). In the case of geoengineering, the moral hazard would be that if the general public becomes convinced that geoengineering can resolve all environmental issues, then the public will no longer care about using sustainable practices, believing that geoengineering can reverse any future damage done to the environment. This moral hazard argument would generally come into play at the policymaking stage, since lawmakers would need to decide how to regulate geoengineering practices.

We thus continue to observe a spirited debate about the merits of geoengineering. This field holds immense promise and should be explored; however, the scientific community must take calculated steps to make sure that geoengineering’s power is harnessed in a manner that is beneficial on the whole to all stakeholders. Going forward, we can expect to see more discussion and exploration of geoengineering in the scientific community, as global warming continues to steadily progress.

 

Sandip Nirmel ’21 is a freshman in Thayer Hall.

WORKS CITED

[1] Dahlman, L. Climate Change: Global Temperature. https://www.climate.gov/news-features/understanding-climate/climate-change-global-temperature (accessed Sept. 24, 2017).

[2] Convention on Biological Diversity. Climate-related Geoengineering and Biodiversity. https://www.cbd.int/climate/geoengineering/ (accessed Sept. 24, 2017).

[3] Fountain, H. A Rogue Climate Experiment Outrages Scientists. The New York Times [Online], Oct. 18, 2012. http://www.nytimes.com/2012/10/19/science/earth/iron-dumping-experiment-in-pacific-alarms-marine-experts.html (accessed Sept. 24, 2017).

[4] Crutzen, P. Climatic Change 2006, 77, 211-219.

[5] Harpp, K. How Do Volcanoes Affect World Climate? Scientific American [Online], Apr. 15, 2002. https://www.scientificamerican.com/article/how-do-volcanoes-affect-w/ (accessed Sept. 24, 2017).

[6] Fairbrother, M. Climatic Change 2016, 139, 477-489.

Why Racial Prejudice Isn’t Scientifically Sound: The Evolving Concept of Race in Science

By: Jessica Moore

“Heart failure (HF) is a big problem, especially for African Americans. If you’re African American, you’re more likely than people in other ethnic groups to get HF at a younger age, and you’re more likely than others to be hospitalized. Unfortunately, African Americans are also more likely to die earlier than are other people with HF” (1). This quotation comes directly from the website of BiDil, a heart failure medication marketed specifically to African Americans on the basis of supposed genetic markers. This drug, which insinuates that ‘race’ can influence someone’s health, and therefore that certain races require different healthcare from others, is a recent example of the persistent involvement of the concept of race in science. However, that concept seems to be losing whatever hold it once had. New genomic advancements have taught us that race as a concept, and race-based treatments like BiDil, have little to no place in modern scientific thought.

The concept of race, in modern understanding, began in the Americas with the first race-based slave system in the 1600s (2). Before this, language, religion, and class were the primary determinants of a person’s position within society. It wasn’t until the founding of the American colonies that race was utilized to determine social class.

THE SCIENCE OF PHYSICAL DIFFERENCES

However, ideas of physical differences are much older. Scientific observations on the physical differences between people date back to 400 BC, with Hippocrates (3). Hippocrates advocated that environmental factors determined the differences among human populations. He stated, “The forms and dispositions of mankind correspond with the nature of the country” (4). This thought, though scientifically sound for its time, was accompanied by his belief that geographic factors influenced not only physical appearance but behavioral characteristics as well, representing the first notions of prejudice based on geographic stereotyping alone. Such thinking was further reinforced by the medieval belief in “black sin”: that Africans were descendants of Ham, who was depicted as a sinful man and “his progeny as degenerates” (5).

Moving into the 17th century, there was a distinct increase in the scientific study of humans and a push for naturalists to classify humans and their differences (6). This was the result of a sweeping scientific effort at taxonomy by naturalists and their desire to categorize all living systems. Skin color, stature, and food habits were all considered in classifications, but it was ultimately decided that geographic origin was the best determinant of human differences. However, the concept of “race” was yet to be defined (6).

The concept of a “country of origin” was arguably a positive direction in classifying human differences, as it simply focused on heritage and its impacts on physical stature. However, moving into the 18th century, the development of the Americas was marked by a dramatic increase in the slave trade, leading to a debased approach to this classification (4). Cultural and political notions favoring concepts of racial superiority tainted early scientific discoveries (7). This century was marked by the inclusion of behavioral and psychological observations of human differences, retroactively reflecting the “scientific” views of the Middle Ages (7). Such thinking led people to believe that race, and its supposedly associated behavioral traits, were innate and unchangeable.

With the rise of taxonomy in science, scientists began to believe that humans could be categorized into subgroups of Homo sapiens (4). This is where the concept of race truly arose, along with the suggestion of a hierarchy of races, each with distinct physical, and supposedly behavioral, traits. Johann Friedrich Blumenbach is credited with defining race in 1779 with his division of humanity into five distinct racial groups: the Caucasian race, the Mongoloid race, the Malay race, the Negroid race, and the American race (8). His findings were largely based on cranial research (wherein he evaluated differences in the circumference of the head). Interestingly, he held the then-unpopular view that the races were equal and that “there is no so-called savage nation known under the sun which has so much distinguished itself by such examples of perfectibility and original capacity for scientific culture, and thereby attached itself so closely to the most civilized nations of the earth, as the Negro” (3). However, his work would sadly be misinterpreted, and his definition of race was later used to enforce the concept of innate differences among people (8).

FLAWED RACIAL JUSTIFICATIONS

In the 19th century, there was a movement to shift the understanding of race from a merely taxonomic use to a biological one. This is where racial conflict became particularly rooted in scientific thought (9). Using Blumenbach’s methods, scientists Francis Galton and Alphonse Bertillon measured the shapes and sizes of skulls in order to determine differences between races, ultimately relating these findings to markers of intelligence (10). This gave rise to the idea that one’s race could also reflect group differences in intelligence and character. Many scientists began to assert that these differences were obvious, and the concept of race was used to dramatically divide groups by culture and physical appearance. It is important to remember that this ‘scientific’ belief was widely supported by the political agendas of the 19th and 20th centuries, which needed a socially acceptable justification for actions such as the eugenics movement and widespread campaigns of oppression and genocide (11).

During the late 20th century and early 21st century, tensions rose between individuals who believed race generated a hierarchy of mankind and those who believed in human equality. A deeper understanding of DNA and the human genome led scientists to believe that while race, socially accepted as the general classification term for differences between groups of people, was an inherited characteristic, most behavioral attributes were primarily determined by factors such as environment and culture (11). However, the social construct of race was so deeply ingrained that re-evaluating whether the concept had any truly sound scientific basis took much time and convincing (12). In 1978, UNESCO released a “Declaration on Race and Racial Prejudice,” which declared this important ideal: “Any theory which involves the claim that racial or ethnic groups are inherently superior or inferior, thus implying that some would be entitled to dominate and eliminate others, presumed to be inferior, or which bases value judgments on racial differentiation, has no scientific foundation and is contrary to the moral and ethical principles of humanity” (3).

Reflecting this ideal, the early 21st century has been marked by a period of increased research into how environment and culture, more than race, shape behavior. The Human Genome Project showed that the concept of race has limited basis in genetics. Francis Collins, former head of the National Human Genome Research Institute, called race a “flawed” and “weak” concept that scientists need to move beyond (12). Racism, and the notion that it has a basis in inherited characteristics, produces bad science, bad healthcare, and, furthermore, bad human relations (13).

As we head into the next decades of scientific development, it is important to look back on how race has evolved in science over time and to recognize that it may no longer have a place in the sciences. We must move past the concept of race in society and instead accept what scientists have termed “geographic ancestry” as the descriptor of the physical differences present in the human population. As genetic research has come to show us, the basis on which we have justified so much prejudice and violence needs to be left in history as we take our next steps as scientists and influencers into the future. The concept of race and its prejudices, though a relatively modern construct, has no place in modern scientific society.

 

Jessica Moore ’21 is a freshman in Greenough Hall.

WORKS CITED

[1] Arbor Pharmaceuticals. BiDil. https://www.bidil.com/ (accessed Oct. 2, 2017).

[2] Gossett, T. Race: The History of an Idea in America; Oxford University Press: Oxford, UK, 1963.

[3] Brace, C. “Race” is a Four-Letter Word: the Genesis of the Concept; Oxford University Press: New York, 2005.

[4] Augstein, H., Ed. Race: The Origins of an Idea, 1760–1850; Thoemmes Press: Bristol, England, 1996.

[5] Snowden, F. Before color prejudice: the ancient view of blacks; Harvard University Press: Cambridge, MA, 1983.

[6] Stanton, W. The leopard’s spots: scientific attitudes toward race in America, 1815–1859; University of Chicago Press: Chicago, IL, 1960.

[7] Shipman, P. The Evolution of Racism: Human Differences and the Use and Abuse of Science; Harvard University Press: Cambridge, MA, 1994.

[8] Gould, S. The Mismeasure of Man; Norton: New York, 1996.

[9] Dain, B. A Hideous Monster of the Mind: American Race Theory in the Early Republic; Harvard University Press: Cambridge, MA, 2002.

[10] Banton, M. Racial Theories, 2nd ed; Cambridge University Press: Cambridge, UK, 1998.

[11] Hoover, D. Conspectus of History. 1981, 1.7, 82-100.

[12] Koenig, B. et al. Revisiting Race in a Genomic Age; Rutgers University Press: Piscataway, NJ, 2008.

[13] Malik, K. The Meaning of Race; New York University Press: New York, 1996.

[14] Sarich, V.; Frank, M. Race: the Reality of Human Differences; Westview Press: Boulder, CO, 2004.

Cutting Time: A Brief History of Surgery

By: Jeongmin Lee

We are walking works of art. From the delicate brain to the flow of blood in our veins, the human body is a dense, complex system. The surgeon is expected to cautiously open this living system to mend his or her patient before patching the patient up so that the machinery of the body can resume its work. This delicate task has been handled quite differently over the ages, and we are currently in yet another period of change driven by technological advancements. Revisiting earlier procedures can foster an appreciation of the surgical methods we have today.

Even in prehistoric times, people recognized that there are situations in which one must enter a patient’s body to treat them. Archaeologists have found evidence of trephination, the practice of drilling a hole into the skull in hopes of treating mental diseases, in human remains dating back to 6500 BCE (1). Evidence suggests that trephination was practiced throughout the world. Many of the people treated with trephination appear to have survived the procedure, as their skulls show signs of healing around the perforation. A possible reason behind this procedure may be related to humoralism from ancient Western medicine.

As time passed, records describe increasingly detailed procedures that revolved around the idea of humors. Hippocrates laid the foundation of recorded medicine in ancient Greece, according to the disciples who kept records under him. A common theme in his lectures was the humors, four types of liquid believed to affect one’s personality and well-being: blood, phlegm, yellow bile, and black bile (2). It was believed that when the humors were imbalanced, the patient was ill, so treatment aimed to rebalance the liquids by letting one of them out. Trephination, according to Galen of the Roman Empire, carried many risks, so those civilizations instead practiced bloodletting, in which a feverish patient thought to have too much blood would lose some of it to rebalance his or her humors. This practice remained popular throughout the Middle Ages, as people believed that a disease could be washed out of the body if the blood containing it was drained (3). By the sixteenth century there were other surgical procedures, such as amputations, which removed infected parts of the body before the infection spread. However, patients would die in pain or from further complications if the amputations were not quick enough.

Before the advent of anesthetics, amputations and other surgeries could cause the patient to die in pain, and sooner than if the patient had not been operated on at all. “Time me, gentlemen” is the most recognized quote of Robert Liston, who practiced completing surgeries in record time (4). However, the quick work often left patients dying on the table or soon after. It was not until 1846 that the American dentist William Morton demonstrated how the anesthetic properties of ether could be used to complete a dental procedure painlessly (5). Anesthetics allowed surgeons to take their time when operating, greatly increasing survival rates. Anesthesia is commonly used in surgeries today.

The evolution of surgery continues as surgeons use cutting-edge technology to make their incisions. To minimize exposure of the body’s interior, some medical centers are employing technology that aids the surgeon’s accuracy and efficiency. The Robotic Surgery Center in New York has a machine that magnifies the point of interest and allows the surgeon to control a small but precise blade to make the least invasive maneuvers possible (6). Overall, the history of surgery depicts how procedures are overturned by new discoveries. Just as Liston’s quick work mattered less once anesthetics increased the time a surgeon could spend operating, and just as trephination was abandoned once Western civilizations reasoned out alternatives, automated machines can now find and make ideal cuts that even a skilled human hand could not reliably accomplish. For now, these machines are yet another tool a surgeon can use to minimize risks for the patient. Surgery’s history of constant innovation is one that many fields strive to emulate.

 

Jeongmin Lee ’19 is a junior in Lowell House concentrating in Chemistry.

WORKS CITED

[1] Irving, J. Trephination. https://www.ancient.eu/Trephination/ (accessed Sept. 23, 2017).

[2] Osborn, D. The Four Humors. http://www.greekmedicine.net/b_p/Four_Humors.html (accessed Oct. 15, 2017).

[3] Clunie, A. Surgery…a Violent Profession. https://www.hartfordstage.org/stagenotes/ether-dome/history-of-surgery (accessed Sept. 4, 2017).

[4] Jones, A. et al. Time me, gentlemen! The bravado and bravery of Robert Liston. https://www.facs.org/~/media/files/archives/05_liston.ashx (accessed Sept. 23, 2017).

[5] Markel, H. The painful story behind modern anesthesia. PBS [Online], http://www.pbs.org/newshour/rundown/the-painful-story-behind-modern-anesthesia/ (accessed Oct. 15, 2017).

[6] NYU Langone Health. What is Robotic Surgery? https://med.nyu.edu/robotic-surgery/physicians/what-robotic-surgery (accessed Sept. 23, 2017).

 

A Political Symbiosis

By: Michael Xie

Dr. John P. Holdren is the Teresa and John Heinz Professor of Environmental Policy at the Harvard Kennedy School of Government and Professor of Environmental Science and Policy at Harvard University. During the Obama administration, he served as Director of the Office of Science and Technology Policy and the President’s Science Advisor, becoming the longest-serving science advisor since the position’s creation. Among numerous honors, Dr. Holdren was awarded one of the first MacArthur Prizes in 1981 and, as chair of its executive committee, gave the acceptance speech for the Nobel Peace Prize on behalf of the Pugwash Conferences on Science and World Affairs. HSR sat down with Dr. Holdren to get his take on the ever-evolving relationship between the scientific world and public policy.

MX: When and why did you first get involved in science?

JPH: I was interested in science from the time I was in grade school, and so I decided early on that I wanted to work on issues at the intersection of science and public policy—big issues such as population resources, environment development, and international security. I was interested in those issues already in high school. I went to MIT to get a technical education and majored in aeronautics and astronautics with a minor in physics. Then I did a PhD in theoretical plasma physics and worked in the fusion energy program as a physicist at the Livermore Lab, with a deal with the head of the fusion energy division that I could spend one day a week working on the wider societal implications of fusion: What was this niche in the overall energy picture? Why did we need it? What characteristics would it need to have in order to be an attractive energy resource for society in the long run? I was appointed to my first National Academy of Science committee advising the government in 1970, the same year I got my PhD. I was an early advisor in the Council of Environmental Quality. I was a member of the Energy Research Advisory board of the Department of Energy toward the end of the 1970s. I basically spent my whole life at the boundary of science and public policy but most of it with my day job as an academic.

MX: What made you decide to get involved in public policy in addition to your scientific interests?

JPH: If you really want to change the world, it’s not enough to understand the world better. Science is about increasing our understanding of ourselves and our world and our universe and how it all works, but if you want to fix what’s wrong in terms of afflictions of the human condition—poverty, disease, conflict, and inequity—then you have to be prepared to apply those insights about how the world works and how technology works, and you have to get into the policy debate. You have to be prepared to engage with the different sectors of society rather than staying in your academic Ivory Tower. You need to engage with business, with government, and with civil society to get things done, and I was always interested in getting things done.

MX: What do you see as the major differences or disparities in government work and university work?

JPH: The wonderful thing about an academic setting is you have a tremendous amount of freedom, not only to choose what you work on but to spend whatever fraction of your time you want on a relatively small number of issues, whereas in government you don’t have a lot of choice. If you’re in the position that I was in, which was the Science and Technology Advisor to the President of the United States and the Director of the White House Office of Science and Technology Policy, you have to be engaged in everything that relates to policy issues the President is focused on—that means the relation of science and technology with the economy; with biotechnology and public health; with energy, environment, climate change; with national and homeland security; with environmental conservation and protecting the oceans—and so you don’t have the luxury of spending as much time as you might like on one or two or three things.

You also have to deal, much more than in academic life, with emergencies. Emergencies like the H1N1 flu that materialized early in the first term or even before that with the economic recession, which the Obama administration inherited and needed very quickly to figure out how science and technology could be applied to economic recovery. Then, the Macondo oil spill, the Fukushima nuclear accident, the Ebola outbreak. These emergencies come along when you’re in government, and they always tax your ability to do everything else you were supposed to be doing and deal with the emergency at the same time. Another big difference is, in academic life, you have the fun of having students, teaching classes, advising masters and doctoral dissertations, and working with postdocs. The closest thing to that in government is when you have fellows and interns in offices. You have some interaction with younger people who are in the process of learning new things and applying them to try to influence things for the better, but there’s a lot less of that because your other duties take up so much time. One of the principal functions of the university is teaching and mentoring, whereas that’s a very secondary function in government.

MX: How have you seen science and policy traditionally interact and how has that evolved over the years?

JPH: The interaction of science and policy goes back a very long way—certainly, to the administration of Abraham Lincoln when the National Academy of Sciences was founded. The two-way street between science and technology for policy—that is, how can insights from science and technology assist in the policy process, and, the other side of that coin, how can government choices and investments advance the state of science and technology in society— goes back 150 years.

It underwent a major transformation during WWII when Franklin D. Roosevelt appointed distinguished MIT engineer and technical entrepreneur Vannevar Bush as director of the new Office of Scientific Research and Development in the White House. Bush became the first full-time science and technology advisor to a US President. Most of that advice was on matters related to the war, and because science and technology for the military had a transformative effect during the war, Bush had the idea that similarly applying science and technology to civilian needs could have a transformative effect on the whole society. He wrote a report at the end of WWII called Science, the Endless Frontier. It led ultimately to the creation of the National Science Foundation in 1950. There really emerged a symbiosis of government, academia, and industry, which was responsible for a large part of the economic progress made by the United States over the ensuing decades. The symbiosis involved government paying for fundamental and early-stage applied research in the universities and national laboratories, then the private sector picking up the most promising ideas that had emerged from that effort and converting those ideas into practice. Eric Lander, who was my co-chair of the President’s Council of Advisors on Science and Technology, and Eric Schmidt, who was the executive chairman of Google, wrote an op-ed piece called the “Miracle Machine” about how this symbiosis worked, arguing that it was very important that we not dismantle the “Miracle Machine” by reducing the government’s investments in research and development.

Initially after the war, most of the science and technology advice Presidents needed from their advisors related to issues such as nuclear weapons, early warning systems, bombers, and missiles, but over time as the effects of this “Miracle Machine” became more apparent, the advice that Presidents wanted expanded. In the Clinton administration the OSTP had a maximum of 66 people. That was bigger than in the previous administration, which was bigger than in the administration before that. The reason it kept growing is that domains in which Presidents needed to be paying attention in science and technology kept getting broader.

In the Obama administration, we ultimately had 135 people in the OSTP. I think the Obama administration was probably, of all modern presidencies, the administration most engaged with science and technology issues. President Obama understood how and why science and technology mattered, and so he was very interested in how we could use science and technology to advance society’s interest. We in the Obama administration launched a very wide variety of initiatives in biomedicine and public health, a whole set of initiatives on science and technology for the economy, initiatives on open data, and a whole set of initiatives on energy and climate change.

President Obama was seized with the proposition that our challenges and opportunities were so big and our resources so limited that partnership was essential, and virtually every one of the initiatives was constructed as a partnership with engagement of the government with the private sector, universities, and civil society. The other thing, which has grown over the years, has been international collaboration in science and technology. Some of the earliest collaborations internationally were on fusion energy in the late 1950s. The areas of collaboration expanded subsequently to include space, and the International Space Station today is a terrific example. One of the instructions that President Obama gave me at the beginning of the administration is to build up further our science and technology collaborations with China, Russia, India, Japan, the European Union, Brazil, Korea, and we did that. This dimension of science technology and policy being an issue that is international has become very pervasive.

MX: How has the changing political climate and transition to the Trump administration affected the scientific world? How much does governmental change affect scientific thought?

JPH: The choice of who is going to be the President potentially has quite a lot of impact on what the role of science and technology in government is going to be and what the role of government in supporting science and technology is going to be. So far, President Trump has appointed a number of people to positions of great responsibility who don’t appear to be interested in scientific or technological facts. You’ve got people who deny the reality of climate change running the EPA and the Office of Management and Budget in the White House, and people who are at least skeptical about the reality of climate change running the Department of Interior and the Department of Energy. This is absolutely extraordinary in terms of a lack of interest in the relevance of science to the government decision-making. A lot of the science and technology appointments are still vacant many many months into the administration. You have to assume either the administration has been incredibly distracted by other things or that they’re just not very interested in the role of science and technology in these departments and agencies. I have no successor. There is no one that has been nominated to be the Director of the OSTP, no one who is serving as the Assistant to the President for Science and Technology.

In addition, the Trump administration has proposed budgets that would severely cut government investments in research and development. One has to hope that the Congress, which ultimately appropriates the money, will not accept the Trump administration’s proposals. Trump’s proposals would cut research on energy technology by about 50 percent. This is after we committed in Paris with 19 other countries— now 21 other countries have joined—to double the government’s investments in clean energy research and development in five years, and now President Trump proposes to cut it in half in one year. President Trump proposed to cut the National Institutes of Health by 20 percent. The Congress does not seem inclined to do it, fortunately, but cuts at the National Science Foundation and cuts at the National Oceanic and Atmospheric Administration indicate a lack of understanding about the role of government investments in science.

President Trump and others around him have said, “If research is worth doing, the private sector will do it.” This is not actually realistic. The private sector will never invest in very basic research to the extent that societies need and require, because the uncertainty is too great and the risks of failure too high for the private sector to bear. That’s why we’ve had and needed this symbiosis, with government making investments in basic research and taking those risks on behalf of all of society, because what we know from history is that you can’t predict which basic research project is going to yield big gains. We know, when you look at the whole portfolio, that some of them are transformative. Think about the laser, which emerged from absolutely fundamental research. The people who figured out the laser had no idea that 50 years later lasers would be the way we do eye surgery, the way we cut metal, the way we copy documents, the way we play videos. No private enterprise would have invested in all the research needed to do that. Even more recently, consider the fracking revolution, which has enabled us to displace a lot of coal-fired electricity with much cleaner natural gas. The discoveries that enabled that were all funded by the federal government. They weren’t funded by the energy companies. There’s a lot of reason to worry about what the current administration is doing. Now, we will have, to some extent, the private sector, states, cities, and civil society organizations picking up some of the things that the government drops, but the government’s role is too important and too costly for all of it to be picked up.

MX: Do you think we have enough scientists involved with policy at the current time?

JPH: I have said for a long time that it would be a good thing if more scientists devoted some of their time to thinking about the implications of their science, the implications of technology for society—if they spent some of their time explaining to a wider audience what they do, why they do it, how they know what they know, what the implications are for society. In 2007, when I was the president of the AAAS, I said that I thought scientists and engineers should tithe 10 percent of their time, no matter what their main focus was, to thinking, talking, and writing about the wider implications of what they do, because we have a society now where virtually every issue is infused with science and technology. We need our scientists and technologists to do a better job communicating with everybody else about what those connections are. So, the short answer is, we don’t yet have enough scientists and technologists engaged with public policy. We have a lot of scientists, literally thousands, participating in the studies of the National Academies of Sciences, Engineering, and Medicine, advising the government on different topics. The numbers aren’t small, but they’re not as big as we need them to be. Additionally, we need to get better at STEM education, so that we have not just assurance that the next generation of Nobel Prize winners will be educated and trained, but also the tech-savvy workforce that the jobs of the twenty-first century increasingly require and the science-savvy citizenry that democracy requires if it’s going to work in an era where science and technology infuse virtually every public policy issue.

MX: What areas of science will be the most important for policy and what areas of policy will be the most important for science in the next few years?

JPH: Many people say that the end of the 20th century was the start of the age of information technology and the 21st century is going to be the age of biotechnology. I think increasingly it’s all of the above. I think we’re seeing more and more interactions between biotechnology, information technology, nanotechnology, robotics, and other fields. We need to have ways to regulate these very rapidly moving technologies that protect public safety but at the same time do not stifle innovation. That’s an enormous challenge for policy. A whole array of defense technologies is going to continue to be important: autonomous vehicles, robots of various kinds. Information warfare and cybersecurity have obviously become very important. We have to figure out what we’re going to do to protect privacy in an era when all kinds of information can be put together to learn things about people’s “private business.”

There’s also a great challenge around employment. If we have self-driving cars and trucks, a lot of people are going to become unemployed. What are those people going to do? In the past, we’ve largely succeeded in inventing new jobs just about as fast as old kinds of jobs were made obsolete, and so we don’t have a huge unemployment rate in this country. But the unemployment rate is too high in too many places, and we need to worry about technological unemployment, where technologies replace jobs faster than jobs are created. Artificial intelligence is also becoming more important. There’s lots of specialized artificial intelligence out there already. There are people thinking it will become possible, over the next several decades, to have a more generalized artificial intelligence. How is that going to be used? How are we going to benefit from its upside while protecting ourselves from its downside? These are going to be huge policy issues.

 

Michael Xie ’20 is a sophomore in Leverett House concentrating in Chemistry and Physics.

WORKS CITED

[1] J. P. Holdren, personal interview, Oct. 16, 2017.

The Evolution of the Tetrapod Forelimb

By: Priya Amin

 

What is the distal limb pattern of tetrapod forelimbs? The tetrapod distal limb pattern consists of three segments: the stylopod (the first segment of the limb including the humerus), the zeugopod (the second segment of the limb including the radius and ulna), and the autopod (the hand) (1).

What is the basic phylogenetic breakdown of the fin-to-limb evolution? The vertebrate group Osteichthyes diverges into Sarcopterygii (lobe-finned fish) and Actinopterygii (ray-finned fish) according to several physical characteristics. Most importantly, Sarcopterygii develop lobe fins, which include a central axis and radial fin rays. Within Sarcopterygii, a group named Tetrapodomorpha develops a partial distal limb pattern that lacks digits but includes a humerus, radius, and ulna. Early tetrapods like Acanthostega and other early amphibians develop digits, and the full distal limb pattern seen in humans is born! Diverging groups of derived tetrapods specialize and adapt this ancestral pattern of a stylopod, zeugopod, and autopod, which defines the distal limb pattern.
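For readers who prefer to see the branching at a glance, here is a schematic sketch of the groupings described above, written as a nested Python structure. This is only an illustration added here, not a formal or complete phylogeny; the nesting follows the article's narrative, and sister groups are not ordered.

```python
# Schematic only: nesting means "falls within"; order within a level is not meaningful.
fin_to_limb = {
    "Osteichthyes": {
        "Actinopterygii (ray-finned fish)": {},
        "Sarcopterygii (lobe-finned fish)": {
            "Coelacanths (e.g., Latimeria)": {},
            "Tetrapodomorpha (humerus, radius, ulna; no digits)": {
                "Eusthenopteron": {},
                "Panderichthys": {},
                "Tiktaalik": {},
                "Early tetrapods and their descendants (full stylopod-zeugopod-autopod pattern)": {
                    "Acanthostega (eight digits)": {},
                    "Homo sapiens (carpals, metacarpals, phalanges)": {},
                },
            },
        },
    },
}
```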

COELACANTH (Middle Devonian—Present)

This sarcopterygian (“lobe-finned” fish) showcases the ancestral fin model for tetrapods. Accordingly, the lobed fin of coelacanths has a central axis with fin radials. Because radials branch from both sides of the axis, the fin is considered to be biserial (2).

In addition, because the species Latimeria is considered a ‘living fossil,’ we can study the movement and morphology of the musculature of this lobe-finned fish (3). Accordingly, a study by Miyake et al. revealed that the musculature in the ‘shoulder’ and ‘elbow’ joints of living coelacanths reflects an ancestral condition of the movements associated with the human stylopod (humerus) (4). In particular, the stylopods of Latimeria and Homo sapiens both have paired musculature that is necessary for maintaining stable posture and achieving weight-bearing positions. The presence of this musculature in Latimeria suggests that the basic muscle arrangement needed for terrestrial life may have existed at the earlier stages of tetrapod evolution.

EUSTHENOPTERON (Late Devonian)

Eusthenopteron, a tetrapodomorph, features some of the earliest evidence of a distinct humerus, ulna, and radius, which evolved from the ancestral sarcopterygian lobe fin. Research has shown that its humerus has growth patterns similar to those of tetrapods (5). However, its tetrapod-like humeral characteristics are not terrestrial adaptations. A study by Meunier and Laurin revealed that the long bones of Eusthenopteron are very similar to tetrapod long bones (6). Furthermore, the long bones of Eusthenopteron seem to have been capable of supporting more mechanical stress than the fins of extant actinistians like Latimeria, which has implications for enhanced movement.

PANDERICHTHYS (Late Middle Devonian)

The flat, L-shaped humerus of Panderichthys represents a key early adaptation for tetrapod evolution: the limb would now be able to prop up a large head and support the limited front-to-rear movement needed for walking (7,8). The blade-like radius and the ridge-and-groove surface of the ulna of both Panderichthys and Tiktaalik represent another ancestral condition of the tetrapod forelimb. In addition, in a study by Boisvert et al., a CT scan revealed previously undiscovered distal radials (9). Before this discovery, the prevailing notion was that tetrapod digits were newly evolved structures. Consequently, the distal radials of Panderichthys (as well as those of Tiktaalik) have been identified as the possible ancestral condition for digits and human fingers.

TIKTAALIK (Late Devonian)

The pectoral fin of Tiktaalik represents the key functional and morphological transitional stage between fins and limbs. Like other tetrapodomorphs, Tiktaalik has retained dermal fin rods. Like early tetrapods, its partial distal limb morphology includes a humerus, radius, and ulna; however, it lacks digits. The fin is narrower and stouter than the fins of other tetrapodomorphs. In addition, the fin of Tiktaalik differs significantly from those of other tetrapodomorphs due to its expanded endoskeleton and reduced dermal exoskeleton. This limb morphology suggests a specialized adaptation for locomotion in shallow floodplains, including the capability for flexion and extension (10).

ACANTHOSTEGA (Late Devonian)

Possessing eight digits, Acanthostega was a primarily aquatic early tetrapod with limited limb movement (1). Its relatively flat articular surfaces suggest limited flexibility at the wrist and elbow; as a result, the limb most likely acted more as a swimming paddle than like the limb of a more derived tetrapod. With the presence of digits, the digital arch of Acanthostega is noticeably more curved (2). Furthermore, in comparison to the fins of more primitive tetrapodomorphs, the limb of Acanthostega is distinctly broad and flattened (7). Acanthostega demonstrates the full distal limb pattern ancestral to all derived tetrapods, including a stylopod, zeugopod, and autopod.

HOMO SAPIENS (Present)

Far more derived than Acanthostega, humans have evolved a highly specialized form of the distal limb pattern. The stylopod includes the humerus, the zeugopod includes an elongated radius and ulna, and the autopod consists of carpals, metacarpals, and phalanges (11). Capable of full flexion and extension, the human limb is fully adapted for movement on land.

From lobe-finned fish to humans, the evolution of the distal limb pattern can be traced along the phylogenetic tree. This pattern defines the three segments of the tetrapod forelimb. By studying this transition, we can see how our arms and legs have evolved for movement on land. And, we can appreciate our fish ancestors!

 

Priya Amin ’19 is a junior in Pforzheimer House concentrating in Integrative Biology.

WORKS CITED

[1] Laurin, M. How Vertebrates Left the Water; University of California Press: Berkeley, CA, 2010.

[2] Mednikov, D. N. Paleontol. J. 2014, 48(10), 1092-1103.

[3] Cloutier, R.; Forey, P. L. Environ. Biol. Fish. 1991, 32(1-4), 59-74.

[4] Miyake, T. et al. Anat. Rec. 2016, 299(9), 1203-1223.

[5] Sanchez, S. et al. Proc. R. Soc. Lond. Biol. 2014, 281(1782), 20140299.

[6] Meunier, F. J.; Laurin, M. Acta Zool. 2010, 93(1), 88-97.

[7] Shubin, N. H. Science. 2004, 304(5667), 90-93.

[8] Clack, J. Science. 2004, 304(5667), 57-58.

[9] Boisvert, C. A. et al. Nature. 2008, 456(7222), 636-638.

[10] Shubin, N. H. et al. Nature. 2006, 440, 764-771.

[11] Mariani, F. V.; Martin, G. R. Nature. 2003, 423(6937), 319-25.

The Limitations of Science Where it Matters Most

By: Will Bryk

Without realizing it, you and everyone you know have been desensitized to the biggest questions of existence.

Every human gradually accumulates consciousness and awareness of his or her existence from the time of birth up through the teenage years. A baby simply does not have the means to contemplate its own existence, a child has a bit more, and a teenager more still. To see why this decade-long process desensitizes us, imagine instead that evolution had gone another way. Imagine that, somehow, only once our bodies and brains were fully grown and ready would we “turn on” and open our senses to the world. Being born would feel like popping into the universe with a fully capable brain and body, but without any knowledge of the universe.

Suppose you were just born this way. A second ago, you emerged from non-existence to full existence instantly. What would your first thoughts be? You would probably not casually start living as you do now. You would probably look around frantically, limbs flailing in all directions, blood pumping, brain racing, knees weakening, collapsing to the floor, and screaming out of sheer bewilderment, “WHAT IS THIS? WHERE DID THIS COME FROM?” The urge to know those answers would be so strong that it would fully encapsulate your being for many years. This “awakening” to the mystery of existence would be the most significant moment of your life, and the sheer memory of it would stir the most powerful of emotions. It is plausible that many people in such a hypothetical world would frame their entire life mission around answering these fundamental questions about existence.

We do not live in this hypothetical world. In our world, we are desensitized to the mystery of existence. Despite how essential, how fundamental, how relevant these questions are to our short existence in this place, most people do not think about these questions. You do not often see people walking around in the street with their arms out in shock, wondering what anything is or where everything came from. But sometimes late at night, with the right music playing and not much else distracting your mind, these questions about existence reappear. They become as obvious as daylight, as important as anything in your life could possibly be. In such moments, we yearn for answers.

Many in today’s age look to science as a beacon of hope to figure out the answers to these deep questions. After so much recent scientific progress, it is natural to expect the trend to continue until even such questions are answered. However, these fundamental questions about existence are precisely the questions that science cannot answer. This great mystery is precisely the one that science can never uncover. It is not just that science has not found answers to these questions yet; it is that science will never find answers to these questions. What follows is an argument in support of the position that science is fundamentally limited in that it cannot answer the deepest and, in some sense, most important questions.

HOW FAR WE’VE COME

There is no dispute that the scientific enterprise has become a beaming torch lighting the path forward toward our understanding of the cosmos and our place in it. The past 500 years of scientific progress is a testament to the power of this enterprise. Consider the world in 1500 AD. The universe consisted of the Earth, moon, sun, and the stars, which were visible to the naked eye. Stars were just a name for strange dots in the night sky. The origin of these celestial objects was either some mystical force or some complete unknown, depending on whom you asked. We also had no clue where people came from, or where any living beings came from for that matter. There was no knowledge of a microscopic realm. People fell ill and died without explanation. The fastest mode of communication was horseback or boat.

Compare this to the world of 2017. We now know we live in a universe with hundreds of billions of galaxies, each with hundreds of billions of stars, many of which have their own planets. Stars are gigantic balls of nuclear fusion. We understand how the planets and stars formed from disks of dust. Evolution tells us the wonderfully detailed story of where humans and all living organisms come from. We have discovered a breathtaking microscopic world no less detailed than our own. We’ve even discovered underlying physical laws that explain the motion of matter. With all this new understanding, we have invented technologies that probe unseen worlds from the molecules of life bouncing around in your finger to the planets of stars thousands of light years away. We now understand the causes of most illness. We can communicate by video with anyone on the globe. The totality of human knowledge is at one’s fingertips at any second. We routinely fly in giant birdlike machines we call airplanes. There are human footprints on the moon.

The mark of science on the advancement of mankind is undeniable. If a visitor were brought to 2017 from the 1500s, they might attribute this unbelievable advancement to some sort of sorcery, but it is instead due, in large part, to a very meticulous method of learning about the world—the scientific method. The scientific method has been instrumental in forming our understanding of nature. It acts like a truth sieve allowing false theories to pass through, leaving only the truth behind. Equipped merely with hypotheses, and physical experiments that test the hypotheses, practitioners of the scientific method were able to uncover general laws that the universe follows. That such a simple method can be so successful is somewhat astounding. Nevertheless, humans are interested in more than just finding the laws of nature. Many will not be satisfied until we know not only the laws, but also the explanations behind the laws.

TURTLES ALL THE WAY DOWN

Unfortunately, though the scientific method might be a fantastic tool for discovering fundamental laws of nature, it is simply incapable of providing the explanations for these fundamental laws. For instance, Newton pieced together many observations and hypothesized the law of gravity (F_g = Gm_1m_2/r^2) in the late 1600s. The theory matched observed orbital data and revolutionized our understanding of the universe. Yet Newton also wrote, “Hitherto we have explained the phenomena of the heavens and of our sea by the power of gravity, but have not yet assigned the cause of this power” (1). Newton admitted that the gravitational cause, or explanation, for the motion of the heavens lacks its own explanation. Einstein in fact came along and discovered an explanation. He found that Newton’s law of gravity was an approximation of a deeper truth, namely that gravity was due to curvatures in space-time. However, we now find ourselves facing the same problem. If you ask the natural question of why space-time has such properties, then you are back in Newton’s shoes, a little wiser but no more satisfied. The fundamental law changed from Newton’s equation to Einstein’s, but the explanation for the fundamental law still escapes us. Newton’s quote could very well have been attributed to Einstein.

Take another example of unsatisfied explanation. The electromagnetic force is a fundamental force of nature described by Coulomb’s law. The strength of this force is dependent on a certain constant that we call Coulomb’s constant, or k for short. We have a large body of observations that indicate that the value of k is roughly 9 × 10^9 N m^2 C^-2. Because of that value, the sun continues to shine, buildings do not collapse, and you can get up in the morning. But why does k equal that particular value? Is it possible that k could have instead equaled 8 × 10^9, for example? If it is not impossible to imagine a universe in which k equaled 8 × 10^9, then there must be an explanation for why our universe is governed specifically by k = 9 × 10^9. Yet the scientific method provides no explanation for this particular value. Thinking in this way, there are all sorts of constants and laws whose fundamental explanations science does not currently provide. Why these constant values, and not others? Why this physical law, and not another? These are the more fundamental questions, the ones we must answer in order to really understand the universe, and yet they are precisely the ones for which science does not have answers.
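To put rough numbers on the thought experiment, here is a minimal sketch (added for illustration, not part of the original essay) of Coulomb's law. The electron-proton pair at roughly the Bohr radius is just a convenient example; the "imagined" constant is the hypothetical alternative from the text.

```python
# Coulomb's law: F = k * q1 * q2 / r^2. Changing k rescales every electrostatic
# force in the universe by the same factor, which is why "why this value of k?"
# is a real question.
k_actual = 8.99e9     # approximate measured value, in N m^2 C^-2
k_imagined = 8.0e9    # the hypothetical alternative discussed in the text

def coulomb_force(k, q1, q2, r):
    """Magnitude of the electrostatic force between two point charges."""
    return k * q1 * q2 / r**2

e = 1.602e-19   # elementary charge, C
r = 5.29e-11    # roughly the Bohr radius, m

print(coulomb_force(k_actual, e, e, r))    # ~8.2e-8 N between an electron and a proton
print(coulomb_force(k_imagined, e, e, r))  # ~11% weaker: a noticeably different universe
```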

A physicist might respond that there could be a physics-based explanation for fundamental laws, or for the strengths of constants, but that we have not discovered it yet. It may very well be true that in 100 years, the next Einstein will be able to explain the value of Coulomb’s constant. The problem is that this physics-based explanation would itself rely on some assumed axiom whose explanation would be unknown. At any point in the history or future of science, there will be a set of fundamental laws whose explanations are unknown. Any time an explanation for a fundamental law is discovered, that explanation will itself rely on deeper fundamental laws whose explanations are unknown. Maybe a physicist would instead respond that there are an infinite number of universes, and that we just happen to live in the universe where the laws are the way they are and the constants are the values that they are. Again, we could now ask what the explanation is for the infinite number of universes and why each one has its particular properties. No matter the explanation, there will always necessarily be a deeper explanation that escapes our grasp.

This central limitation of science is humorously represented by the archetypal curious child. The child asks, “Why is the sky blue?” The father, who in this example happens to be a scientist, responds, “Because molecules in the air scatter blue light more than red.” “But why?” “Because of quantum mechanical phenomena.” “Why?” “Because that is how the world works on the quantum level.” “Why?” “Because that’s just how things are.” “Why?” Exhausted, the father pauses, considers the big questions for a few seconds, and then responds, “Because I said so!” The moral of the story is that we can always seek a deeper explanation. The child recognizes this. We adults come to recognize this too when annoyed by the child. We then realize that our foundational explanations of the world are mere assumptions based on observation, not pillars of self-explaining truth.

DISSATISFACTION GUARANTEED

A scientist might now note that if we keep making progress and keep finding deeper and deeper explanations, then we are making progress toward a more fundamental understanding, even though, by the arguments above, we cannot reach fully satisfactory answers. We may come up with a singular theory that explains the physical laws and the strength of physical constants, say the conservation of some quantity X, for example. Of course one can ask for the explanation of why the conservation of X must hold. But at least we have reduced two questions to a singular, more fundamental one. Science has done this for 500 years. At first we had unlimited numbers of questions about how everything worked. Now we have an understanding of physics that reduces physical phenomena to a small set of laws that can be written on a T-shirt. We might not yet know the full nature of these laws, but in a very real sense, we have come to deeper understanding. So maybe it is true that while science cannot find the explanations for the most fundamental laws, we can be satisfied in our advancement toward those explanations.

This viewpoint is satisfying for many questions. A computer scientist does not need to know how computer chips work in order to code up a website. Similarly, when a father wants to know why the sky is blue, he does not keep asking “Why?” until he arrives at the fundamental laws of the universe. The father is satisfied at some explanatory depth, maybe that atoms scatter blue light more than red, and does not keep asking “Why?” Both the computer scientist and the father recognize their ignorance of the deeper explanations, but at some point they stop asking, for practical reasons. They are satisfied not knowing the deepest explanation.

But there are certain questions that can only be satisfied by an explanation which itself needs no further explanation. Questions like “WHAT IS THIS? WHERE DID THIS COME FROM?” are examples of such questions. Unlike in the case of the father or the computer scientist, an explanation X for these questions would only be satisfying if X itself needed no further explanation. Right after popping into existence in the “awakening” scenario described earlier, if someone told you that there was a being that created everything, you would not be satisfied. Even though the two questions were reduced to one (“Where did the being come from?”), you would feel unsatisfied until you knew the explanation behind this being, and the explanation behind that explanation, and so on. You would only be satisfied once you somehow had a self-explained explanation, an explanation that needed no further explanation.

Because science is an observational discipline, it always relies on some assumptions whose explanations are unknown. But we cannot have deeper unknowns if we want satisfying answers to the most essential questions. Science therefore cannot answer the most essential questions. At some depth, the torch burns out.

LIGHT AT THE END OF THE TUNNEL

So far, we have concluded that some questions are unanswerable by science. All this argument implies is that our prized scientific method of arriving at truth is limited. It does not imply that the truth itself is limited. The scientific method cannot answer the deepest of questions, but that does not imply that the answers themselves do not exist. The torch burns out at some depth, but there very well could be more to explore down the tunnel. In fact, there are strong arguments in favor of the position that there must exist answers to these questions. One of the best arguments is simply that the alternative is too absurd to be taken seriously. Though it is certainly possible that answers to recursive questions, such as where the universe comes from, do not exist, it just seems too absurd. It hurts the brain to think that the universe could exist without having an explanation for existing, because we can imagine it being another way and it is not, which warrants an explanation. But this argument is not a proof. It could be a quirk in our brains that we require explanations for all things. One way, still, to get around the potential quirk is to say that even if there is no explanation for the universe’s existence, then that in itself is the deepest answer. In other words, if it is true that there is no fundamental explanation, then that truth becomes the deepest answer, and if you understood that truth you would understand how it could be that there is no fundamental explanation.

We have now come to the conclusion that the scientific method cannot find the answers to the deepest questions, and yet we think these answers must exist. In that case, what do we do if we want to reach those answers? Unfortunately, it may not be possible for human brains to discover these answers. Just because the answers exist does not mean that humans can reach them. For example, we know a fish will never understand general relativity no matter how many millions of years we try to teach it, so we might be like the fish and general relativity like the deepest answers.

Actually, if you think about it, the previous statement was incorrect. There is in fact hope for the fish—because the fish can evolve! The fish and its whole species may not currently be able to understand general relativity, but we know that after a few hundred million years, fish evolved into humans who were indeed able to understand general relativity just 100 years ago. Humans are like the fish. Though the human species doesn’t even have the correct tools to search for the deepest answers, like a fish lacking the tools to understand general relativity, we do currently have the ability to change ourselves until we have the correct tools, like a fish realizing that it could evolve until it has the correct tools.

It seems likely that we would need to enhance our brains in order to comprehend a self-explaining explanation, because such a concept is currently incomprehensible. Artificial intelligence is one example of a technology that presents a realistic opportunity for rapid brain enhancement. For example, we may in the near future be able to mentally connect to a machine with astronomical intelligence at a level we currently cannot grasp. Maybe once we can achieve a certain level of extreme intelligence we would then have the tools to find the deepest answers. If we cannot prove absolutely that this scenario is impossible, then, by definition, it is possible that this artificial intelligence opportunity, or some other brain enhancement opportunity, may arrive within our lifetimes.

It is therefore unlikely but possible that we will discover the answers to the deepest questions within our lifetimes. This is not a conclusion to take lightly. In our lifetimes, it’s possible that we can learn the answer to the totality of existence. From the perspective of someone who went through the “awakening” scenario, it is obvious that there is no greater goal. Maybe it should be obvious to us as well.

 

William Bryk ’19 is a junior in Quincy House concentrating in Physics and Computer Science.

WORKS CITED

[1] Newton, I.; Frost, P. Principia; Macmillan and Co.: Cambridge and London, 1863.

 

A History of Microscopy

By: Kelvin Li

INTRODUCTION

Sight, if not the most brainpower-hungry sense, is certainly the one we use the most. Think about it: our lives are inundated with visual stimulation, from intellectual processes like reading text, watching videos, and interpreting body language and facial expressions to simpler tasks such as walking in a straight line and anything that requires hand-eye coordination. Simply put, being alive depends a great deal upon the power of eyesight.

And yet it’s startling to think about all the things that are invisible to human eyes, the tiny invisible things that power the whole world. After all, eyes are only cameras (albeit extremely versatile ones), and like all cameras, they have a limit to how much they can zoom in. The desire to view the things beyond the ability of the eye has long captivated humans. Only in relatively recent history (compared with the entirety of human existence) have they succeeded in beating the eye in viewing power. An entire field of science has been created to delve into the depths of the physical world; its name is microscopy and here is its history.

SIMPLE AND COMPOUND LIGHT MICROSCOPY

As early as the thirteenth century, there are records of lenses, in the form of water-filled glass spheres, being used as magnifying glasses by jewelers to cut gems. Even before that, Euclid in the third century BCE had elucidated the mechanics of image formation with mirrors, and Ptolemy in the second century CE had posited a description of refraction (1,2). Interest in lenses initially stemmed from the desire to correct vision dysfunctions, and indeed, accurate corrections for nearsightedness and farsightedness with concave and convex lenses, respectively, were described by mid-sixteenth-century Italian scientists (2).

At its simplest, a microscope is just an extremely strong magnifying glass. The technical distinction between vision correction lenses and a microscope is just the degree of magnification. Our eyes can at best see differences of about 0.1 mm. Anything that has a resolving power stronger than that, anything that can clearly show images finer than 0.1 mm, is thus considered a microscope (3). It’s rather difficult to pin down who created the first lenses that had this sort of magnifying power because the 0.1 mm threshold was not a specification that opticians recorded. However, we do know that by the mid-seventeenth century, Antonie van Leeuwenhoek, an amateur Dutch biologist, had essentially perfected the art of crafting single lens microscopes.

Van Leeuwenhoek was not particularly skilled at using the microscope; rather, it was his expert craftsmanship of the instrument itself that secured his place in the history of microscopy. Leeuwenhoek’s ability to grind lenses to minute focal lengths and mount them securely in the instrument resulted in the best single-lens (as opposed to compound or even non-optical) microscopy work (1). His best microscope could resolve close to one micron (10^-6 meters), or a magnification of about 300 times. He is credited with discovering sperm cells and blood cells, producing the first detailed drawings of rotifera and infusoria, and obtaining the first images of bacteria (1,3). The clarity of van Leeuwenhoek’s simple microscope images would not be surpassed until over a century later, a testament to the quality of his microscopes.

Simple microscopes, though, are not the microscopes that most people first encounter in their high school biology classes. Those would be compound microscopes, and allegedly the first was invented by the father and son team of Hans and Zacharias Jansen near the end of the sixteenth century (3).

Compound microscopes, from a mechanical standpoint, are equivalent to an inverted telescope. They consist of two lenses, with the second magnifying the image produced by the first, both affixed to a tube that can move relative to the specimen (4). The system of lenses used in compound microscopes affords them tremendous range in magnification, and some of the earliest work was done by Robert Hooke, an English scientist who produced stunning images of small animals and plant cells that showed off the power of the microscope.
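As a rough illustration of why stacking two lenses multiplies magnifying power, here is a minimal sketch (added here, not from the original article) using the standard textbook approximation for a two-lens microscope; the tube length, near-point distance, and focal lengths are generic illustrative values, not the specifications of any historical instrument.

```python
# Approximate total magnification of a two-lens compound microscope:
# the objective magnifies by ~(tube length / f_objective), and the eyepiece
# magnifies that image again by ~(near point / f_eyepiece).
def compound_magnification(f_objective_cm, f_eyepiece_cm,
                           tube_length_cm=16.0, near_point_cm=25.0):
    m_objective = tube_length_cm / f_objective_cm   # magnification of the first lens
    m_eyepiece = near_point_cm / f_eyepiece_cm      # further magnification by the second
    return m_objective * m_eyepiece

print(compound_magnification(0.4, 2.5))  # ~40x objective * ~10x eyepiece = ~400x overall
```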

ABERRATIONS

Despite the quality of Hooke’s drawings, compound microscopes lagged behind simple microscopes in image clarity. This was due to optical aberrations that emerge when light is passed through multiple lenses, namely chromatic and spherical aberrations. As the name chromatic aberration implies, compound microscopes had trouble with accurately transmitting color. As Newton showed with his famous prism experiment, white light is composed of all the different colors of light, each having a different wavelength. This difference in wavelength is what causes problems for a compound microscope. Light is bent by different amounts depending on its wavelength, with shorter wavelengths refracting the most, and this results in each color focusing at a different point. Obviously, this is not ideal, since the retina is fixed and cannot separate to accommodate different focal points. What results is an image ringed in colored halos (3).
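To make the effect concrete, the following sketch (added for illustration, not part of the original article) uses the thin-lens lensmaker's equation, 1/f = (n - 1)(1/R1 - 1/R2), with approximate refractive indices for a common crown glass; the lens radii are arbitrary illustrative numbers.

```python
# Because the refractive index n depends on wavelength, a single lens has a
# slightly different focal length for blue light than for red light.
def focal_length_mm(n, r1_mm, r2_mm):
    """Focal length of a thin lens in air with surface radii r1, r2."""
    return 1.0 / ((n - 1.0) * (1.0 / r1_mm - 1.0 / r2_mm))

R1, R2 = 50.0, -50.0            # a symmetric biconvex lens
n_blue, n_red = 1.5224, 1.5143  # approximate indices of BK7-like glass at ~486 nm and ~656 nm

f_blue = focal_length_mm(n_blue, R1, R2)
f_red = focal_length_mm(n_red, R1, R2)
print(f_blue, f_red, f_red - f_blue)  # blue focuses ~0.75 mm closer than red: colored halos
```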

The solution to this distortion came from two people, Chester Hall and John Dollond. Hall discovered that different types of glass dispersed colors by different amounts (3). Dollond took that discovery and, by experimenting, found that by combining two glass types and lens powers, he could bring the outlying colors of red and blue into focus and reduce the blur by a factor of ten (2). This setup is called an achromatic doublet: achromatic because it reduces chromatic aberrations, and doublet because the lens is composed of two pieces of glass. Though achromatic doublets were at first used only in telescopes, whose larger lenses were easier to craft, microscopes eventually caught up in the nineteenth century, and the images produced were no longer color distorted.
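The idea behind the doublet can be written down compactly. The sketch below is an illustration added here, not the author's; it uses the standard thin-lens achromat condition, splitting a desired lens power between a crown and a flint element so that their color errors cancel. The Abbe numbers are typical catalog values chosen only for the example.

```python
# Thin-lens achromat condition for two elements in contact:
#   P1/V1 + P2/V2 = 0   and   P1 + P2 = P_total
# where P is lens power (diopters) and V is the glass's Abbe number
# (a measure of how strongly it disperses colors).
def achromat_powers(total_power, V_crown, V_flint):
    p_crown = total_power * V_crown / (V_crown - V_flint)
    p_flint = -total_power * V_flint / (V_crown - V_flint)
    return p_crown, p_flint

# Target: a doublet with a 100 mm focal length (power = 10 diopters).
p1, p2 = achromat_powers(10.0, 64.0, 36.0)
print(p1, p2)                 # ~+22.9 D converging crown, ~-12.9 D diverging flint
print(1000 / p1, 1000 / p2)   # element focal lengths in mm: ~43.8 and ~-77.8
```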

The second major kind of distortion associated with compound microscopes was spherical aberration, which arose from the curved nature of small lenses. Light that strikes the edges of the lens is refracted significantly more than light striking near the center of the lens, resulting in a difference of focal length and a blurry image (5). Before the nineteenth century, there were two imperfect solutions to spherical aberration. One approach was to use a lens with less curvature and consequently a smaller difference in refracting power between the edges and the center of the lens; however, this resulted in a lower magnifying power (3). The second approach was to limit the amount of light that could strike the lens to a narrow angle around the center, but this limited the resolution of the microscope (3). Neither approach allowed for greater enlargement power. It wasn’t until the 1830s that the work of Joseph Lister managed to fix the problem.

Lister demonstrated that by lining up a series of lenses, each with a small degree of magnification, only the first lens contributed to the spherical aberration (3). This resulted in high levels of magnification with minimal aberrations. Though the hassle of positioning and fixing the lenses limited its ease of use, compound microscopy’s stronger resolving power eventually won over scientists. To this day, compound microscopy is used commonly in all sorts of settings, from high school biology classes to research laboratories, and is powerful enough to view bacteria with clarity.

SHIFT AWAY FROM OPTICAL MICROSCOPY

Optical microscopy is wonderfully convenient when viewing objects at the scale of the wavelength of visible light (approximately 400-700 nanometers), but with advances in theory in the early twentieth century, there was a need for better resolution in microscopy. Around this time, the French physicist Louis de Broglie theorized that electron beams could behave like waves and calculated that they would have a wavelength of about 5 picometers, or about 100,000 times smaller than visible light (6). This prediction paved the way for a new era of microscopy. Electron microscopes operate by shooting a beam of electrons rather than photons at a specimen, and they quickly surpassed optical microscopy in magnification.

Before diving into the advances in electron microscopes, it makes sense to reason out why electron microscopy returns such stunning results. The reason is that shorter wavelengths increase microscope resolution. Long waves have large spaces between successive peaks and thus pass over small specimens, returning no signature and therefore no image. Short waves, on the other hand, have peaks spaced much closer together and thus will hit the target and leave a detectable shadow or bounce-back signature, much in the same way that light leaves shadows when it hits an object. Though humans can’t see electrons the way they can see photons, a fluorescent plate placed near the specimen is able to translate the invisible electrons into a visible image.
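The numbers quoted above can be checked with the non-relativistic de Broglie formula, lambda = h / sqrt(2 * m_e * e * V), for an electron accelerated through V volts. The short sketch below is an added illustration, not from the article; at very high accelerating voltages a relativistic correction would be needed, so treat it as an order-of-magnitude estimate.

```python
import math

h = 6.626e-34    # Planck constant, J*s
m_e = 9.109e-31  # electron mass, kg
e = 1.602e-19    # elementary charge, C

def electron_wavelength_m(volts):
    """Non-relativistic de Broglie wavelength of an electron accelerated through `volts`."""
    return h / math.sqrt(2.0 * m_e * e * volts)

lam = electron_wavelength_m(60_000)   # a typical electron-microscope accelerating voltage
print(lam)                            # ~5e-12 m, i.e. a few picometers
print(550e-9 / lam)                   # ~1e5: roughly 100,000 times shorter than green light
```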

Electron microscopes, as one can imagine, are much more advanced than light microscopes. For starters, electrons cannot travel far in the atmosphere, so electron microscopy must take place inside a vacuum (6). This increased difficulty in setup is balanced, however, by the fact that magnification is controlled not by accurately positioning the lenses and specimen, but by focusing the beam of electrons to different degrees (6). Additionally, the microscope must also include a fluorescent plate to translate the scattered electrons into visible data, and as a result of all of the intricacies of electron microscopy, these microscopes tend to be bulky and finicky in operation.

TRANSMISSION ELECTRON MICROSCOPY

In 1931, two German electrical engineers, Max Knoll and Ernst Ruska, succeeded in creating the first transmission electron microscope (TEM) (6). The instrument had essentially three components: a source of electrons, a series of lenses that directed the electrons through the specimen and to the detector, and equipment to interpret the electron signature (7). Similar to the way a light microscope displays a shadow where light could not pass through the specimen, the image produced via TEM shows relative levels of transmission of electrons throughout the specimen. “Darker” areas are more electron opaque while “lighter” areas are less so.

Unlike visible light, which has a longer wavelength and is thus less energetic, electron beams tend to shred lighter-weight atoms, resulting in large amounts of scatter and a lower-contrast image (3). To work around that, scientists must “stain” the specimen with heavier elements such as uranium, lead, and/or osmium (3). Though this allows for higher quality images, the staining process only adds to the list of difficulties of using TEMs. The microscopes are large, expensive, and difficult to operate and maintain. Suitable specimens must be able to withstand the complicated preparation process and vacuum imaging chamber, and the images produced are colorless. And finally, the stains result in a residue contaminated by heavy-metal toxins that is difficult to dispose of properly.

SCANNING ELECTRON MICROSCOPY

As an alternative to the TEM, another German physicist, Manfred von Ardenne, described and produced the first images using a scanning electron microscope (SEM) in 1940 (6). The microscope was smaller than the TEM because it recorded electrons that bounced off the surface of the specimen (secondary electrons), using detectors placed closer to the source than to the target (8). This is also what makes SEM images look three-dimensional: rather than projecting the electron signature onto a single flat surface, the SEM measures electrons that travel at all angles and combines them to form an image with variable depth (3).

SEMs are also more convenient for researchers because the specimen preparation process is less involved. While specimens for TEM require extremely thin slicing and treatment with stains, SEM specimens only need a thin layer of conductive coating, such as a gold film, before being placed in the vacuum chamber, a process not even necessary if the electron beams are strong enough (8). The elimination of destructive preparation techniques has allowed researchers to image larger objects such as insects or small mechanical parts (3). The images produced show everyday objects in such fine detail that they at times seem alien. The microscope is still expensive and needs to be precisely configured for optimal results, but overall, SEMs are easier to use compared to other forms of electron microscopy.

ATOMIC LEVEL MICROSCOPY

As if intracellular imaging weren’t impressive enough, the 1980s brought the invention of the scanning tunneling microscope (STM), which has the ability to view individual molecules and atoms. This level of magnification grew out of a desire to describe the inner workings of the atom. Like electron microscopy, it relies on the wave nature of electrons (9). The first STM was built in 1981 by physicists Gerd Binnig and Heinrich Rohrer and was used to study the topography of a gold surface; it worked by measuring the level of electron tunneling at different points along the surface (9). STM relies on tunneling, a quantum mechanical phenomenon in which electrons have some probability of passing through a barrier that classical mechanics says they cannot cross. What’s useful about tunneling is that it is highly sensitive to distance, specifically, atomic-scale distances.

The STM operates with an extremely fine-pointed probe positioned just above the sample surface (3). Voltage applied to the tip allows electrons to tunnel to the surface, and the number of electrons that make it across the gap is correlated with the distance between the tip and the surface (9). The image produced from all the data is a topographical map of relative “heights,” which is useful because it can reveal the atomic structure of the surfaces of different substances. STM has been used to investigate the arrangement of atoms of different metals and crystals and the details of adsorption and diffusion of various gases on metal surfaces (9). The advantage of STM over other forms of electron microscopy, beyond its resolution, is that tunneling can take place in a variety of environments and does not need an external source of electrons or a lens to focus a beam (3,9). This eliminates much of the specificity involved in undertaking electron microscopy and vastly opens up the domain of imageable surfaces. In fact, STM has been performed in water, ionic solutions, and even insulating fluids that would not allow electrons to travel conventionally (9).
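The "highly sensitive to distance" point can be made quantitative with a simple one-dimensional tunneling model. The sketch below is an added illustration, not part of the original article; the 4 eV barrier height is a typical metal work function chosen only for the example.

```python
import math

# For a simple 1-D barrier, the tunneling current scales roughly as
# exp(-2 * kappa * d), with kappa = sqrt(2 * m_e * phi) / hbar and phi the
# barrier height (work function). That exponential is what makes the STM
# such a sensitive ruler.
hbar = 1.055e-34   # J*s
m_e = 9.109e-31    # kg
eV = 1.602e-19     # J per electronvolt

phi = 4.0 * eV                              # ~4 eV barrier, a typical metal work function
kappa = math.sqrt(2.0 * m_e * phi) / hbar   # ~1e10 m^-1, i.e. roughly 1 per angstrom

def relative_current(gap_m):
    return math.exp(-2.0 * kappa * gap_m)

# Pulling the tip back by a single angstrom (1e-10 m) reduces the current by
# roughly an order of magnitude (about a factor of 8 with these numbers).
print(relative_current(5e-10) / relative_current(6e-10))
```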

As with any technology, there are complications that come with using STM. First, the STM setup is still rather complicated because the probe is extremely sensitive. A series of springs and magnets is used to counteract outside disturbances, and the movement of the probe is controlled via the piezoelectric effect, in which an applied voltage causes compression or expansion of a material (3). The construction of the tip is also challenging because it must be only one atom wide at its apex. This is accomplished by precisely manipulating extremely strong electric fields to shave off excess atoms. Despite the intricate nature of the STM equipment, the versatility of the probe makes it one of the most powerful imaging tools for modern scientists.

CONCLUSION

Microscopy has come a long way from its humble optical beginnings. Though born out of human curiosity and the desire to view the invisible world lurking all around us, microscopy has been instrumental for scientific and societal advancement. Being able to view bacteria and other pathogens validated germ theory, allowing for the development of antibiotics and other therapies that have vastly improved quality of life over the last century. Being able to view chemical reactions at the atomic level has advanced the efficiency of the chemical industry. And the 2017 Nobel Prize in Chemistry was awarded for biomolecular imaging techniques that have allowed for real-time observation of biochemical processes (10). Microscopy will always be integral to scientific discovery, and the future of imaging techniques will undoubtedly have far-reaching benefits for all humanity.

 

Kelvin Li ’21 is a freshman in Wigglesworth Hall.

WORKS CITED

[1] Singer, C. J. R. Soc. Med. 1914, 7, 247-268.

[2] Walker, B. H. In Optical Engineering Fundamentals, 2nd ed.; SPIE Press: Bellingham, Wash, 2008; pp 5-11, 113-116.

[3] Croft, W. J. In Under the Microscope: A Brief History of Microscopy; Weiss, R. J., Ed.; Series in Popular Science; World Scientific Publishing Co.: Singapore, SG, 2006; 5, pp. 5-13, 57-81, 105-112.

[4] In A Dictionary of Physics; Oxford University Press: Oxford, UK, 2014; 7.

[5] Optical Aberrations. https://www.olympus-lifescience.com/en/microscope-resource/primer/anatomy/aberrations/ (accessed Sept. 9, 2017).

[6] Bradbury, S. et al. Electron Microscope. Britannica Academic [Online], Aug. 13, 2017. http://academic.eb.com (accessed Sept. 9, 2017).

[7] Bradbury, S. et al. Transmission Electron Microscope (TEM). Britannica Academic [Online], July 22, 2011. http://academic.eb.com (accessed Sept. 9, 2017).

[8] Bradbury, S. et al. Scanning Electron Microscope (SEM). Britannica Academic [Online], May 17, 2016. http://academic.eb.com (accessed Sept. 9, 2017).

[9] Quate, C. F. Scanning Tunneling Microscope (STM). Britannica Academic [Online], Aug. 13, 2017. http://academic.eb.com (accessed Sept. 9, 2017).

[10] Press Release: The Nobel Prize in Chemistry 2017. https://www.nobelprize.org/nobel_prizes/chemistry/laureates/2017/press.html (accessed Oct. 8, 2017).

 

 

 

The Theory of Everything, Challenged

By: Connie Cai 

Since 2004, there have been 67 anti-evolution education bills introduced by local governments in the United States (1). Three of those bills have been approved, in Mississippi, Louisiana, and Tennessee. These laws make it legal for public school teachers to criticize the theory of evolution, as well as other politicized topics like climate change, and to teach alternative explanations to evolution. The most popular of these alternative explanations is intelligent design, or the belief that natural selection cannot by itself produce animals as complex as those we observe. The success of this legislation hinges upon the fact that today’s debate about evolution in schools is not so much about whether creationism should be taught instead of evolution, but rather about whether teachers should be allowed to provide alternatives.

This shift in argument has allowed anti-evolution legislation to successfully gain a foothold in our education system. The introduction of these regulations and the way they frame the evolution debate are part of a movement that Nick Matzke, former Public Information Project Director at the National Center for Science Education, dubs “stealth creationism” (1). The proposed bills, he says, attempt to distance themselves from any potential religious motivations or undertones; instead, they frame their argument as providing critical analysis of contentious science topics (1). At its core, the stealth creationism movement argues that evolution is just a theory and that students should be allowed to debate its merits.

However, treating evolution as “just a theory” undermines the validity of American science education by challenging one of the fundamental and overwhelmingly evidence-supported tenets of biology. Moreover, it compromises students’ understanding of the scientific process: how evidence is collected, synthesized, and used to create the basic theories through which we understand our world. As James Williams, a science educator, states, “Theories such as gravity [or] evolution are not hypotheses in want of further evidence, but rather the sturdiest truths and descriptions of how the material world works that science has to offer” (2). How the kids of today learn and understand science is crucial for our society’s scientific progress; teaching them alternatives to a theory as well developed as evolution demonstrates to them that even facts can be challenged by opinion.

EVOLUTION IN CLASSROOMS

The stealth creationism movement is the most recent development in the long history of evolution in the classroom. In the 1920s, the teaching of evolution was banned in several states. During the infamous 1925 Scopes Monkey Trial, argued by Clarence Darrow and William Jennings Bryan, the American Civil Liberties Union (ACLU) attempted to challenge the bans. Though the trial did bring the topic of science education to the nation’s attention, the ACLU was unsuccessful. It was not until 1968 that the Supreme Court ruled that bans on teaching evolution in the classroom were unconstitutional. More recently, a United States federal court ruled in the 2005 Kitzmiller v. Dover Area School District case that teaching intelligent design was unconstitutional because intelligent design could not be separated from its creationist and religious precedents (3). In that case, the Dover Area School District had required high school teachers to present intelligent design, through a textbook called Of Pandas and People, alongside evolution. In contrast, the aforementioned laws in Tennessee, Louisiana, and Mississippi do not require teachers to teach intelligent design, but rather protect their right to do so in the classroom.

The support for intelligent design and other evolution alternatives has had a largely religious base. For many fundamentalist Christians who grow up in communities where the line between church and state is often hazy, it is no wonder that the topic of evolution, which is taught in the last years of high school after years of deeply ingrained religious teaching, does not sit well. While these communities form the basis of voter support for anti-evolution regulation, political groups and think tanks like the Discovery Institute take that support and lobby on behalf of the stealth creationism movement. The Discovery Institute is a secular think tank, but in its mission statement, the Institute writes that it supports the “role of faith-based institutions in a pluralistic society” (4). The Institute funds intelligent design research and educational programs; for the Institute, protecting intelligent design is a question of academic freedom and free speech. The logic of anti-evolution legislation follows similar premises: stealth creationism’s success relies on framing the argument as one of academic freedom. However, a debate on the validity of evolution is not truly a matter of academic freedom. Academic freedom applies to the realm of what is true or of seeking what is true; it is not a space for unsubstantiated claims.

SCIENCE AND SOCIETY

Allowing a debate on evolution under the misguided banner of academic freedom has dangerous consequences. In a study conducted at the Technical University of Dortmund, researchers found a strong correlation between “acceptance of science” and “acceptance of evolution” (2). This illustrates the strong tie between what we teach and how we understand the world; the research also stresses the importance of a well-developed and accurate science curriculum if we want a society that trusts science again. In addition, the debate surrounding evolution fits into a much larger cultural context and the growing alternative-facts movement. Recently, more politicians have been vocal about their distrust of the scientific community. Current political dynamics seem to be moving away from evidence- and data-based policies; a clear example of this is persistent climate change denial and the refusal to enact legislation regulating fossil fuel consumption. In these times, how we teach science and how we encourage scientifically literate citizens are becoming ever more pertinent questions. Moreover, the mounting skepticism from politicians and the public has mobilized the scientific community. In April 2017, the first March for Science was held to address these issues; over one million people across the world participated in the rallies, which sought to celebrate science and the role that it has in our society, as well as to call for more evidence-based policymaking and science funding.

The March for Science, while a prominent rallying symbol for the scientific community, also demonstrates how politically charged science has become. Based solely on the evidence, evolution is not debatable; politically, evolution is a hotly contested and controversial topic. Because it has become politicized, the issue of evolution is not one that can easily be resolved simply by presenting the multitude of data supporting the theory.

Yet fixing our current science curriculum and confronting the recently passed legislation are extremely important tasks to undertake. Allowing the laws to stand as is will have potentially dangerous consequences for the role of scientific truth in our society; as of now, the success of the bills in certain states has inspired copycat bills elsewhere.

SHAPING EDUCATION

Though it will be difficult to have an immediate effect on the current political climate, certain proposed changes may increase the acceptance of evolution and other contested science topics. First among those changes is teaching evolution at a younger age. Currently, evolution is not formally taught until high school, and in the eyes of many researchers and science instructors, that is too late. Kids learn about other aspects of biology, but evolution is, for myriad reasons, postponed. Williams states, “To me, [to hold off on teaching evolution] is odd—it’s like trying to teach chemistry but not putting atoms at the center” (2). The argument for teaching evolution sooner is that it helps introduce the concept in students’ minds and prevents them from forming misconceptions about evolution. After all, most of the students who come in not believing in evolution have never been formally taught it; what they know about evolution is given to them by their parents or religious communities. As Dittmar Graf, a researcher at the Technical University of Dortmund, explains, “When somebody has a misconception in science, if it’s embedded, it’s incredibly difficult to change” (2).

Secondly, it is important to recognize stealth creationism for what it is and to understand that intelligent design supporters are relying less and less on overtly religious arguments. It is crucial, in this case, to make clear the distinction between science and nonscience in the classroom: true science must be rigorous, widely accepted, and evidence-based. In a statement made by the InterAcademy Panel (a global organization that represents numerous national science academies), member countries agreed to urge “decision-makers, teachers, and parents [to] educate all children about the methods and discoveries of science” and affirmed that there are clear “evidence-based facts about the origins and evolution of the Earth and of life on this planet” (2).

Building up trust at the intersection of politics, science, and society is a difficult but paramount task for our nation’s progress. It all begins with a solid foundation in the classroom: what is taught there shapes the minds of future generations. Though the debate surrounding evolution has long been seen as a contest between science and religion, it is now turning into a debate about what constitutes scientific truth and about our collective societal understanding of science. Change may be slow, but the truth matters—and everyone deserves to know the truth.

Connie Cai ’21 is a freshman in Grays Hall.

WORKS CITED

[1] Jaffe, E. The Evolution of Teaching Creationism in Public Schools. The Atlantic [Online], Dec. 20, 2015. https://www.theatlantic.com/education/archive/2015/12/the-evolution-of-teaching-creationism-in-public-schools/421197/ (accessed Sept. 4, 2017).

[2] Harmon, K. Evolution Abroad: Creationism Evolves in Science Classrooms around the Globe. Scientific American [Online], Mar. 3, 2011. https://www.scientificamerican.com/article/evolution-education-abroad/ (accessed Sept. 4, 2017).

[3] National Center for Science Education. Kitzmiller v. Dover: Intelligent Design on Trial. https://ncse.com/library-resource/kitzmiller-v-dover-intelligent-design-trial (accessed Sept. 20, 2017).

[4] Discovery Institute. https://discovery.org/id/ (accessed Sept. 20, 2017).

[5] Matzke, N. Science. 2016, 351(6268), 28-30.

 

(The following notes relate to the images in the print copy of the article.)

Fig 1. Embryo drawings by Ernst Haeckel, one of the first developmental biologists. The similarities he noticed between the embryos of different species were used to support the theory of evolution, as they suggested shared ancestry and a common evolutionary history among different species. (Image from Wikimedia Commons)

Fig 2. Proponents of intelligent design often use the watchmaker analogy. A watch (a working system) must be designed by a watchmaker; similarly, nature (a working system) implies a designer. (Image from Wikimedia Commons)

Cyborg Bacteria: Catching Light

By: Michelle Koh

What is a cyborg? One might imagine Terminator-esque half-human, half-machine hybrids or other creatures with fantastic mechanical augmentations, but we must direct our attention down to the cellular level—to cyborgian beings that are far smaller. Despite these cyborgs’ underwhelming size, UC Berkeley researcher Kelsey Sakimoto and his colleagues in Professor Pei-dong Yang’s lab have engineered a new biohybrid bacterium that may make a formidable impact on the solar fuel industry (1). These originally non-photosynthetic organisms are able to grow their own tiny semiconductor “solar panels” to harness solar energy and store it in the chemical bonds of acetate, an essential natural and industrial building block (1).

WHAT IS SOLAR FUEL?

According to the Royal Society of Chemistry, more energy is delivered to Earth by the sun in one hour than we consume from fossil fuels, nuclear power, and renewable sources combined in a year (2). Plants and other photosynthetic organisms have mastered the process of capturing and storing this solar energy in chemical fuels, or “solar fuels” (2). Since the 1950s, scientists have strived to mimic these processes to create more sustainable alternatives to traditional energy sources such as fossil fuels (2). Unlike the energy generated by other renewable sources, such as photovoltaic cells and wind turbines, solar fuels are physical substances, so they can be much more easily stored and transported through existing distribution networks and methods.
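As a rough sanity check on this comparison, the short calculation below estimates the solar energy Earth intercepts in one hour and compares it with a commonly cited figure for annual global primary energy consumption. The solar constant and the consumption figure are assumptions chosen for illustration, not numbers taken from the RSC report.

```python
# Back-of-envelope check: sunlight intercepted by Earth in one hour vs. one year
# of global primary energy consumption. Both input figures are rough assumptions.
import math

SOLAR_CONSTANT = 1361.0        # W/m^2 at the top of the atmosphere (assumed)
EARTH_RADIUS = 6.371e6         # m
ANNUAL_CONSUMPTION_EJ = 580.0  # EJ per year of global primary energy (assumed)

cross_section = math.pi * EARTH_RADIUS ** 2           # m^2 facing the sun
power_intercepted = SOLAR_CONSTANT * cross_section    # W, i.e., joules per second
energy_per_hour_ej = power_intercepted * 3600 / 1e18  # exajoules in one hour

print(f"Sunlight intercepted in one hour: ~{energy_per_hour_ej:.0f} EJ")
print(f"Global energy consumption per year: ~{ANNUAL_CONSUMPTION_EJ:.0f} EJ")
```

With these assumptions the two figures come out to the same order of magnitude (roughly 600 EJ each), consistent with the claim above.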

Acetate and other carbon-based solar fuels can be used as feedstock, or raw material, for the production of many products such as fertilizers, pharmaceuticals, and plastics (2). Currently, the petrochemical industry produces much of the feedstock for these industries. Solar fuel-derived feedstock, however, in addition to being renewable, reduces harmful greenhouse gas emissions.

PHOTOSYNTHETIC BIOHYBRID SYSTEMS

Humans have developed scientific and technological capabilities so advanced that we can not only replicate some of the most complicated biological and chemical systems in nature but also surpass them in efficiency. Nevertheless, some processes, like the conversion of CO2 and other small atmospheric molecules into more complex organic molecules, have been more difficult to mimic (3).

The reduction of CO2, that is, the addition of electrons to it, is surprisingly difficult. Electrons must be transferred from a catalyst or an electron carrier to CO2, and new carbon-to-carbon bonds must form (3). Furthermore, because each process and chemical reaction in the biological world is highly specific in its reactants and products, scientists also have to replicate the high-accuracy selection of a single product (3). Attempts to reproduce these processes in the lab have often ended in tangles of chemical problems that seem to contradict one another; biological organisms, by contrast, have evolved cells that can incubate and facilitate a vast number of diverse reactions within a delicately regulated chemical environment.

As a result, researchers have developed photosynthetic biohybrid systems (PBSs) to take advantage of the biological systems that evolution has shaped so elegantly (3). By combining these systems with high-efficiency inorganic light harvesters, they are able to enhance or induce photosynthetic capabilities in organisms (3). The key challenge in this field has been smoothly integrating the biotic and abiotic components of the PBSs. Some PBSs feed electrons collected by an inorganic light harvester to the biological part of the system, though engineering and producing the nanowire arrays and intricate carbon cloths needed to do so can be costly (4). Other researchers have developed PBSs by isolating specific enzymes, such as hydrogenases, and combining them with semiconductor nanoparticles (4). However, whole-cell PBSs are favored for their self-replication and self-repair capabilities (4).

Sakimoto et al., on the other hand, have been able to engineer microorganisms that not only facilitate CO2 reduction but also synthesize their own inorganic light-harvesting materials (3). Sakimoto’s team discovered that introducing cadmium (Cd2+) ions to initially non-photosynthetic Moorella thermoacetica bacteria can induce the bio-precipitation of cadmium sulfide (CdS) nanoparticles on the cell surface (4). The growth of these semiconductor light harvesters transforms the M. thermoacetica bacteria into highly efficient photosynthetic systems, whose products are roughly 90% acetic acid and 10% biomass (4).

Since the bacteria are able to produce their own inorganic semiconductor light-harvester particles, Sakimoto et al.’s new PBS is cost-effective (4). The complex micro-fabrication techniques, high-purity reagents, and high-temperature processes needed to synthesize the semiconductor components of conventional PBSs are incredibly energy and resource intensive (4). Aside from the initial set-up of the system, Sakimoto et al.’s system requires very low maintenance, as the bacteria are able to remake the CdS particles even after the particles degrade (4).

CYBORGIAN EVOLUTION

Sakimoto and colleagues aim to experiment with other semiconductor particles and bacterial species in order to optimize the efficiency of their PBS (4). Since cadmium sulfide is highly toxic, they hope to replace these nanoparticles with other less toxic semiconductor materials such as silicon (4). As other researchers strive to develop PBSs that not only reduce CO2 but also complete other crucial biological processes such as N2 fixation, Sakimoto et al.’s discovery may signify the advent of a new cyborgian evolution (3).

 

Michelle Koh ’21 is a freshman in Holworthy Hall.

WORKS CITED

[1] Cottingham, K. Cyborg Bacteria Outperform Plants When Turning Sunlight into Useful Compounds. https://www.acs.org/content/acs/en/pressroom/newsreleases/2017/august/cyborg-bacteria-outperform-plants-when-turning-sunlight-into-useful-compounds-video.html (accessed Oct. 1, 2017).

[2] Royal Society of Chemistry. Solar Fuels and Artificial Photosynthesis: Science and Innovation to change our Future Energy Options; Royal Society of Chemistry: Cambridge, U.K., 2012; 4-11.

[3] Sakimoto, K., Acc. Chem. Res. 2017, 50, 476-481.

[4] Sakimoto, K. Science. 2016, 351(6268), 74-77.

 

Feeling Blue: How Instagram Activity Can Provide Insight Into Behavioral Health

By: Julia Canick

In a world that places a high premium on happiness, the prospect of coping with a mental illness as debilitating as depression can be frightening. Scarier still, general practitioners diagnose only about half of major depressive disorder (MDD) cases (1). In an effort to raise this unacceptably low rate of success, Andrew Reece and Christopher Danforth (of Harvard University and University of Vermont, respectively) turned their attention to an unlikely source of information: social media.

In a recent study, the two researchers combined different computational methods, including color analysis, metadata components, and face detection technology, to analyze 43,950 Instagram photos from 166 participants (2). While previous predictive screening methods had successfully analyzed online content to detect some health conditions, they had all relied on text rather than visual posts. This study was different: the analysis identified hallmarks of depression not through the content of the posts themselves but through a few key measurable qualities of what was posted.

Some of the results were not shocking: Reece and Danforth found that depressed individuals were more likely to post photos that were bluer, grayer, and darker than those posted by their non-depressed counterparts (2). Furthermore, depressed posters were more likely to use a black-and-white filter (like Inkwell) on their photos, while people not afflicted with the disorder were more likely to use filters that gave their photos a warmer tone (like Valencia) (3). This is certainly a reasonable finding; previous studies had pointed to a general perception that darker and grayer colors were associated with a negative mood (4).
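To make the color analysis concrete, here is a minimal sketch of how mean hue, saturation, and brightness might be computed for a single photo. It assumes the Pillow and NumPy libraries and a hypothetical local file name; it illustrates the general idea rather than the authors’ published pipeline.

```python
# Illustrative color-feature extraction; not the study's actual code.
from PIL import Image
import numpy as np

def color_features(path):
    """Return mean hue, saturation, and brightness of an image (0-255 scale)."""
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=float)
    mean_hue, mean_sat, mean_val = hsv.reshape(-1, 3).mean(axis=0)
    return {"hue": mean_hue, "saturation": mean_sat, "brightness": mean_val}

# In this encoding, bluer photos tend toward higher mean hue, grayer photos
# toward lower saturation, and darker photos toward lower brightness.
print(color_features("post.jpg"))  # "post.jpg" is a hypothetical file
```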

Another finding was more unforeseen: depressed Instagrammers were more likely than non-depressed ones to post photos with faces—with the caveat that they had, on average, fewer faces per photo (2). Depression is strongly associated with lowered social activity, so the higher frequency of posts that featured faces is counterintuitive (5).
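A faces-per-photo metric of this kind could, in principle, be approximated with an off-the-shelf face detector. The sketch below uses OpenCV’s bundled Haar-cascade detector on a hypothetical image file; the study’s own face-detection method is not described here, so treat this purely as an illustration of the metric.

```python
# Count detected faces in a photo with OpenCV's stock Haar cascade (illustrative).
import cv2

def count_faces(path):
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces)

print(count_faces("post.jpg"))  # "post.jpg" is a hypothetical file
```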

These findings are fascinating on their own, but the most astounding part is, perhaps, the improvement in diagnostic accuracy: the computer model correctly identified 70% of the cases of depression in the study, well above the 47.3% success rate of general practitioners (1,2). The system was even able to recognize markers of depression before the date of first diagnosis, an advance suggesting that further refinement of this algorithm may lead to better preventative measures for major depressive disorder (2).

The implications for the success of a photo analysis model like this one are staggering. The generation of information using entirely quantitative means, instead of human participation, eliminates subjective opinion, and, with it, the natural propensity for observer bias. Furthermore, the speed and automation of the diagnosis process can accelerate the search for treatment, and could give rise to future programs where doctors would, with patients’ permission, receive notifications if their patients’ Instagram posts raise red flags.

This data model isn’t perfect—for example, a major concern is that patients who are officially diagnosed with MDD may identify with the label and therefore post content in accordance with that self-image (6). This raises the potential issue of patients perpetuating stereotypes associated with their diagnoses because they tell themselves they “must act” in a manner in keeping with the general image of MDD, instead of taking steps to break the stigma accompanying their illnesses. However, the benefits of this algorithm far outweigh the possible drawbacks; Instagram garners nearly 100 million posts per day, and using these photographs as data points for analysis of mental health would be revolutionary for both diagnostic tools and therapeutic accessibility (7).

Gray days often call for gray photos—and, now, there’s a way to harness that observation to make MDD diagnosis and treatment more accurate and accessible for those who are struggling.

 

Julia Canick ’18 is a senior in Adams House concentrating in Molecular and Cellular Biology.

WORKS CITED

[1] Mitchell, A. J. et al. The Lancet. 2009, 374(9690), 609-619.

[2] Reece, A. G.; Danforth, C. M. EPJ Data Science. 2017, 6(1), 15.

[3] Brown, J. When You’re Blue, So Are Your Instagram Photos. EurekAlert! [Online], Aug. 7, 2017. https://www.eurekalert.org/pub_releases/2017-08/uov-wyb072717.php (accessed Sept. 24, 2017).

[4] Carruthers, H. R. et al. BMC Med. Res. Methodol. 2010, 10(1), 12.

[5] Bruce, M. L.; Hoff, R. A. Soc. Psychiatry Psychiatr. Epidemiol. 1994, 29(4), 165-171.

[6] Cornford, C. S. et al. Fam. Pract. 2007, 24(4), 358-364.

[7] Instagram. https://instagram-press.com (accessed Sept. 24, 2017).

A (Dis?)harmonious Union: Chimeras

By: Una Choi

Background: Animal Chimeras

Chimeras figure prominently in classical and modern mythology; creatures ranging from the Greek chimera, a monster bearing lion, goat, and serpent anatomy, to the hippogriffs of popular fantasy fiction today have captured imaginations for centuries.

Biologically, animal chimeras are organisms containing two disparate genomes. Natural chimeras commonly arise through mechanisms like fetal microchimerism, a phenomenon in which fetal cells are retained in the mother’s body for months and even years after pregnancy (1). Chimera here refers to the deliberate transplantation of human stem cells into nonhuman animal embryos. Human pluripotent stem cells (PSCs) were first successfully derived from adult tissue cells in 2007 (2); these cells can differentiate into any cell type found in the original organism. Although initial attempts to trigger the differentiation of these stem cells into therapeutic tissues involved in vitro exposure to various chemicals, the difficulty of achieving the precise environment required for successful differentiation has led to a recent trend of transplanting stem cells of one species into embryos of another.

Diabetes and Rat-Mouse Chimeras

Dr. Hiromitsu Nakauchi of Stanford utilized rat-mouse chimeras to successfully reverse diabetes in mice (2). Dr. Nakauchi rendered Pdx1, a gene associated with pancreas development, non-functional in rat embryos and injected these same embryos with mouse stem cells, forcing the rats to develop their pancreases from pure mouse cells (2). The mouse recipients originated from the same inbred strain as the donor mice, so they did not reject the transplanted organs (3). In constructing a pancreas derived almost solely from donor mouse cells, Dr. Nakauchi decreased the risk of tissue rejection, thus enhancing the likelihood of a successful transplantation of the donor-derived tissue into the donor animal.

These mouse-derived pancreases were not constructed solely from mouse cells; about 10% of their cells were rat cells, as the rat supplied the blood vessels. These blood vessels, however, were rapidly replaced once the pancreases were transplanted into the mice (3).

This successful growth and implantation suggests a possible treatment for Type 1 diabetes, an autoimmune condition associated with the destruction of pancreatic beta cells (3). When Dr. Nakauchi transplanted the resulting islets into diabetic mice, the mouse-derived islets normalized the hosts’ blood glucose levels for over 370 days without immunosuppression, revealing the therapeutic potential of cross-species implantation (3).

Similar attempts to completely knock out the genes associated with the development of a particular organ in a developing animal have proven successful for the murine pancreas, heart, and eye.

Pig-Human Chimeras

Mice and rats, however, differ drastically from humans. Consequently, several researchers are focusing on pigs as potential sites for human organ development. Pigs’ organs are similar in size to those of humans, and their metabolism also closely resembles human metabolism (1).

While researchers have successfully blocked the generation of the murine pancreas, heart, and eye, efforts to create pigs incapable of developing the organ of interest have so far proven unsuccessful. Organs like the pancreas, which stem from a single kind of progenitor cell, will be more easily constructed than complex organs like the heart.

In a 2017 paper published in Cell, Wu et al. found that naïve human PSCs, which have an unlimited self-renewal capacity, successfully engrafted into pre-implantation pig blastocysts (4). This transplantation, however, failed to contribute significantly to normal embryonic development in the pigs. The same group then injected human PSCs into cattle blastocysts; both naïve and intermediate human PSCs survived and integrated into the cattle. Similarly, when human PSCs were injected into pig blastocysts, the blastocysts retained the human cells. These human PSCs selectively incorporated into the inner cell mass (ICM), marking the first step toward successful incorporation of the donor cells into the host (4).

When the human PSCs were later injected into pig embryos, 50 of the 67 embryos exhibiting retained human PSCs were morphologically underdeveloped. In addition, this formation of interspecies chimeras was highly inefficient, revealing the persisting challenges to constructing viable pig-human chimeras (4).

Ethical Implications

The incorporation of porcine, bovine, and equine biological heart valves into human patients and the use of insulin derived from porcine pancreas are widely accepted medical tools (1). The production of a human-derived organ in a pig or other nonhuman animal, however, has been met with significant controversy.

The accidental incorporation of human cells into non-target locations in the host animal can raise ethical concerns; if human-derived cells contribute significantly to the development of the non-human brain or reproductive organs, the test animal may be considered excessively humanized. A 2013 study from the University of Rochester Medical Center reported that mice injected with human brain cells exhibited enhanced synaptic plasticity and learning (2). These human glial progenitor cells outcompeted their murine counterparts, resulting in white matter largely derived from human cells.

These concerns of unwanted integration have resulted in the prohibition of human stem cell transplantation into monkey early embryos, as the evolutionary closeness may result in an increased susceptibility of monkey brains to human cell-catalyzed alteration (2).

In response to the above concerns, the National Institutes of Health (N.I.H.) instituted a 2015 moratorium on the use of public funds to incorporate human cells into animal embryos. This ban is still in place at the time of this article’s writing. These efforts have delayed current chimera research dependent on public funds; Dr. Nakauchi’s pancreas experiment is the result of eight to nine years of work and a 2014 relocation from Tokyo to Stanford due to Japanese regulations.

Scientists have pointed to new molecular techniques to address some of the more common ethical concerns. CRISPR-Cas9, a popular gene editing technique, might be used to direct implanted human cells to target organs in the embryo, thus preventing the accidental incorporation of human cells into the brain and reproductive tissues (2). The injected human stem cells could also be modified to include ‘suicide genes’ activated upon neural differentiation, ensuring the elimination of any human-derived differentiated brain cells (2).

In addition, primate cells divide more slowly than non-primate cells; primate neural progenitor cells go through more cell divisions (5). A sow’s gestation period, for example, is around three months, a far shorter development period than that of humans (1). Transplanted human neural progenitor cells in chimeras would only be able to reach the same high number of cell divisions if they could somehow sense the shortened development window and divide more rapidly. This scenario, however, is unlikely, as previous studies of human/mouse blood stem cell xenografts suggest that the host regulates human stem cell proliferation; the likelihood of an accidental integration of human stem cells into the murine brain and a subsequent development of human cognitive capacities is slim.

Future Implications

The discovery that human-derived stem cells were utilized in the development of tissues in pigs holds promising implications for the field of organ transplants. Around 76,000 people in the United States await transplants (2). In Europe, over 60,000 people are on the organ transplant waiting list. Attempts to grow human-derived organs in pigs and other animals could thus address the organ shortage.

Because the organs generated in the developing embryos are derived from donor cells, transplantation of those organs into the donor or organisms closely resembling the donor can decrease risk of organ rejection. This reduces the need for immunosuppressant drugs commonly used with organ transplants today.

The ability to generate human-derived tissues in non-human embryos allows scientists to study human cell development outside of human embryos (5).

 

Una Choi ’19 is a sophomore in Kirkland House studying Molecular and Cellular Biology and Economics.

 

Works cited

[1] Bourret, R. et al. J. Stem Cell Res. Ther. 2016, 7, 1-7.

[2] Wade, N. New Prospects for Growing Human Replacement Organs in Animals. NYTimes [Online], Jan. 26, 2017. https://www.nytimes.com/2017/01/26/science/chimera-stemcells-organs.html?_r=0 (accessed March 11, 2017).

[3] Kilsoo, J. et al. Stem Cells Dev. 2012, 21, 2642-2655.

[4] Wu, J. et al. Cell 2017, 168, 473-486.

[5] Karpowicz, P. et al. Nature 2004, 10, 331-335.

What to do with Virtual Reality

By: Jeongmin Lee

Innovators have gone out of their way to open up a new dimension we can all experience: virtual reality. However, one of the greatest questions regarding this technology can be summed up in two words: what now? Virtual reality utilizes computer technology to immerse a user in a simulated world. It takes the user “out of the physical reality to virtually change time, place and (or) the type of interaction” (1). While this may sound like science fiction, virtual reality devices immerse players simply through goggles that cover the eyes and display the simulated world; in some models, users also hold hand-held controllers they can use to interact with objects in that virtual world. Now that this technology exists, people from multiple fields of expertise are coming together to figure out how virtual reality can be useful.

Simulations and games built in virtual reality can serve not only as idle entertainment but also as art, stories, and education. Drawing programs and basic entertainment games were released with some of the earliest virtual reality headsets. Some entertainment systems try to increase physical activity through a virtual tennis match, while others take advantage of the technology’s ability to immerse the player. Companies such as HumanSim have built detailed virtual reality simulations to train surgeons and dentists on virtual patients (2). Doctors can practice in the virtual world to hone their skills without harming physical patients. Beyond medical education, general educators are also looking to bring virtual reality into their classrooms.

Some professors are considering integrating virtual reality into their lectures. As virtual reality is known for its powerful immersion, educators see the technology as an opportunity to better engage students with course material. New technologies have often been adopted in classrooms, but their use has not always been necessary (3). For example, once touch-screen tablets appeared, their uses in class were limited, because writing down or selecting answers in an activity can be done just as easily on a regular computer or even with pencil and paper. Some classrooms even implemented smart whiteboards, but again, a regular whiteboard could accomplish the same tasks (4). The newer technology did not seem to bring significant benefits compared to past inventions. However, virtual reality may prove to be different.

Because virtual reality allows one to generate another world that follows a different set of physical laws, the technology can directly show students what a world with less gravity would look like, or help them visualize how large dinosaurs are thought to have been by letting users fly around a model while still in their seats. Dr. Der Manuelian, Director of the Harvard Semitic Museum, employs virtual reality to recreate ancient Egyptian pyramids in the classroom (5). Through this immersion, one can explore the different chambers of the pyramids and look around to see not only the colors and artifacts that might have existed in that period but also how large the tombs were and how narrow the hallways leading to them were. The same experience cannot be replicated by a mere video, as one must use considerable spatial thinking to imagine the proportions when looking at a stationary screen; through virtual reality, students can experience what they are learning.

Virtual reality has a great deal of potential. It can engage players more effectively than many other media by capitalizing on sight, and this immersion, when mastered, can convey experiences and tales more vividly than ever before. The technology can also be used selectively in classrooms, where virtual reality labs can aid those who learn best spatially. Fully surrounding a user can help them visualize certain scenarios and make theoretical mistakes without harming anyone physically, as with surgeons who can practice and err before operating on a real patient. Virtual reality involves the user in a different world more completely than any other medium has. In the hands of artists, technicians, professors, students, and general consumers, virtual reality has many uses and can fill many roles in the near future.

 

Jeongmin Lee ’19 is a sophomore in Lowell House concentrating in Chemistry.

 

Works cited

[1] Fuchs, P., et al. Virtual Reality: Concepts and Technologies; CRC Press: Boca Raton, FL, 2011; pp. 3-7.

[2] Virtual Reality Society. https://www.vrs.org.uk (accessed Feb. 4, 2017).

[3] Minshew, L.; Anderson, J. CITE Journal 2015, 15, 334-367.

[4] Brodkey, J. Larry Cuban on School Reform and Classroom Practice. https://larrycuban.wordpress.com/2010/03/20/we-dont-need-smart-boards-we-need-smart-people-jerry-brodkey/ (accessed Mar. 31, 2017).

[5] Rajan, G. S. Giza 3D: Harvard’s Journey to Ancient Egypt. YouTube [Online], Oct. 2, 2016, http://www.youtube.com/channel/UCASIa29X_S1YdFyeRHijW1A (accessed Feb. 4, 2017).

Climate Change: What Sweden’s Doing that Trump Isn’t

By: Jia Jia Zhang

Imagine living in an environment free of greenhouse-gas emissions and rubbish. Hard to envision? Not for Sweden. The country’s revolutionary recycling system works so efficiently that Sweden has been importing rubbish from other countries for several years to sustain its recycling plants (1). Even better, as of February 2, 2017, the progressive nation plans to cut fossil fuel use every four years until it reaches its goal of eliminating greenhouse-gas emissions by 2045 (2). Any remaining emissions will be offset by forests that the government intends to plant (2). This promising new law passed with a majority vote in the Swedish Parliament and takes effect in 2018 (2).

Meanwhile in America, moves to counter climate change have followed a regressive trend. President Trump and his Environmental Protection Agency head Scott Pruitt continue to express skepticism that human activity is a major factor in inducing climate change. In fact, any mentions of “climate change,” Obama’s Climate Action Plan, or climate negotiations with the United Nations were removed from the official White House website shortly after Trump’s inauguration ceremony (3). In addition, a media blackout was issued in January for the Environmental Protection Agency and the United States Department of Agriculture (3). Trump is not only denying the reality of climate change but also suppressing information from United States citizens.

Under the Obama administration, the Climate Action Plan and the 2015 Paris Agreement set the nation on a path towards reducing carbon pollution, increasing energy efficiency, and expanding upon renewable and other low-carbon energy sources (4). However, Trump has taken several steps backwards, promising to not only reduce regulations for U.S. oil and gas drilling and coal mining industries, but also to back out of the 2015 Paris agreement to cut greenhouse gases—which was agreed upon by nearly 200 countries (5).

Trump’s disbelief in climate change does not change its pervasive existence. If you have been out and about recently and noticed the remarkably warm winter weather, you can thank climate change for that. Of the 17 hottest years ever recorded, 16 have occurred since 2000 (6). Although the world’s oceans are experiencing only a small incremental increase in temperature, this seemingly insignificant shift causes a string of deleterious consequences. Coral reefs are particularly sensitive to temperature change; consequently, millions of polyp colonies are dying. Huge populations of fish rely on coral reefs for survival, so, following the food chain, a dwindling coral reef population leads to a dwindling seafood population. In coming decades, people will need to start preparing for 20 or more hurricanes each year and mega-droughts lasting up to 10 years (7). Food prices will inevitably rise and living conditions will dramatically fall.

To prevent a slew of natural disasters, we as humans, as instigators of climate change, need to mitigate the damage we have done and stop ourselves from further exacerbating the situation, just as Sweden is attempting to do. Putting environmental policies at the forefront of politics does not discount other prominent issues; rather, it prevents an increasingly threatening issue from eclipsing all others.

 

Jia Jia Zhang ’20 is a freshman in Pennypacker concentrating in Human Developmental and Regenerative Biology.

 

Works Cited

[1] Sheffield, H. Sweden’s recycling is so revolutionary, the country has run out of rubbish. http://www.independent.co.uk/environment/sweden-s-recycling-is-so-revolutionary-the-country-has-run-out-of-rubbish-a7462976.html (accessed Feb. 20, 2017).

[2] Doyle, A. Sweden sets goal to phase out greenhouse gas emissions by 2045. http://www.reuters.com/article/climatechange-sweden-idUSL5N1FN6F2 (accessed Feb. 20, 2017).

[3] Abbott, E. Donald Trump Signing Executive Order Declaring Climate Change A Threat Is Fake News. http://www.business2community.com/government-politics/donald-trump-signing-executive-order-declaring-climate-change-threat-fake-news-01769494#d4zAX5rkBaaPmBq3.97 (accessed Feb. 20, 2017).

[4] Center for Climate and Energy Solutions. President Obama’s Climate Action Plan. https://www.c2es.org/federal/obama-climate-plan-resources (accessed Feb. 20, 2017).

[5] Fortune. President Trump Prepares to Withdraw from Groundbreaking Climate Change Agreement, Transition Official Says. http://fortune.com/2017/01/30/donald-trump-paris-agreement-climate-change-withdraw/ (accessed Feb. 20, 2017).

[6] Patel, J. How 2016 Became Earth’s Hottest Year on Record. https://www.nytimes.com/interactive/2017/01/18/science/earth/2016-hottest-year-on-record.html?_r=0 (accessed Feb. 20, 2017).

[7] CNN. 11 Ways Climate Change Affects the World. http://www.cnn.com/2014/09/24/world/gallery/climate-change-impact/ (accessed Feb. 20, 2017).

Machine Learning: The Future of Healthcare

By: Puneet Gupta

The U.S. healthcare system is a mess. Both the system’s infrastructure, such as the role of insurance companies, and its clinical aspects, such as how care is provided, are lacking in multiple ways. Though improvements in the infrastructure are necessary, this article will primarily discuss and suggest changes to the clinical side of the healthcare system. A new movement to bring about change in private practices, hospitals, and other healthcare facilities revolves around one innovative field of science and technology: machine learning (ML).

An Overview

Machine learning, in simple terms, focuses on developing algorithms and software that improve based on the machine’s past experience. A program capable of machine learning can perform a certain task, or improve how it performs that task, through previous runs and without any additional changes to its software. Put most simply, machine learning is the extraction of knowledge from data.

Machine learning is split into three primary categories: supervised learning, unsupervised learning, and reinforcement learning. In supervised learning, an ML model is given data that has been labeled with a certain outcome and learns the relationship between the two (data and outcome) in order to predict the outcome for future data. In unsupervised learning, an ML model is given data that has not been labeled with an outcome, so it sorts and separates the data into groups of its own choosing, unlike supervised learning, in which the data must fit certain predefined outcomes or groups. In reinforcement learning, the model attempts to figure out the most effective way of achieving the highest ‘reward’ by choosing different sets of actions; in other words, the system is rewarded when it achieves a certain outcome and tries to determine the best way of maximizing that reward (1). Overall, machine learning models attempt to adopt principles based on how humans innately learn, and building them involves creating systems that can ‘think’ and adapt themselves.
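The toy sketch below contrasts the first two categories on synthetic data, using scikit-learn as an assumed library of convenience; reinforcement learning is omitted because it requires an interactive environment rather than a fixed dataset.

```python
# Supervised vs. unsupervised learning on toy data (illustrative only).
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

X, y = make_blobs(n_samples=200, centers=2, random_state=0)  # toy data

# Supervised: the model sees labeled outcomes y and learns to predict them.
clf = LogisticRegression().fit(X, y)
print("supervised predictions:", clf.predict(X[:5]))

# Unsupervised: the model sees only X and groups it into clusters of its choice.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("unsupervised clusters:  ", km.labels_[:5])
```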

Machine Learning in Healthcare

In earlier decades, patients walking into a healthcare setting could see stacks of papers, piles of manila folders, and clutters of pens and pencils all over. Despite all the new advances in technology, at the turn of the millennium offices and clinics were still filled with inefficient workspaces. In order to implement change, to transition to electronic health records, and to generally improve healthcare technology, the government passed the Health Information Technology for Economic and Clinical Health (HITECH) Act in 2009 (2). Though progress has been made in getting many healthcare systems to adopt new information technology (IT), there is still much room for innovation to improve all aspects of patient care, including safety, patient experience, efficiency, and effectiveness. With the overall quality of care in the U.S. lacking in comparison to that of other countries, the demand for change has increased, and more people see machine learning as the solution. ML is currently being used in healthcare, but not to its full potential and capabilities, nor is it applied to the extent that it is in other industries, such as finance, where it has brought major positive changes and a variety of benefits.

ML’s primary use in the near future will involve data analysis. Each patient comes with large amounts of data, including X-ray results, vaccinations, blood samples, vital signs, DNA sequences, current medications, and other past medical history. However, we are still unable to obtain and analyze these data efficiently and to draw reliable conclusions from them. One of the major challenges is integrating the data obtained for each patient into one system, which would allow for efficient communication between providers, enable rapid data analysis, and give providers all the information they need to treat their patients accurately. However, much of the data today is encrypted and has restricted access due to constant efforts to protect patient privacy, making this transition difficult; many medical devices are also not interoperable (3). Once a single database can be established, the benefits of ML can be reaped.

One of the primary healthcare applications for machine learning involves patient diagnosis and treatment. It is important not only in emergency medical situations but also in general primary care and specialized practice. For example, ML can be used to predict mortality and remaining length of life using physiological patient vitals and other tools, including blood test results, either in the immediate future, such as after a traumatic car accident, or in the long run, such as for cancer (3). Most significantly, ML models can help physicians diagnose patients, especially in cases involving relatively rare diseases or hard-to-predict outcomes. For example, in a recent clinical study, several machine learning models were used to analyze data from electronic health records to predict heart failure, and the results indicated that these systems performed well (4). Moreover, machine learning can be used to determine the most effective medication dosage, reducing healthcare costs for patients and providers. ML can be used not only in determining dosage but also in selecting the best medication for the patient: genetic variation among races, ethnicities, and individuals affects the effectiveness of certain drugs and people’s responses to them, such as HIV medications (3). Once more advanced ML algorithms and models are developed, they could rapidly recognize these differences and reach accurate, reliable conclusions. Some technologies are already used to interpret a variety of images, including those from magnetic resonance imaging (MRI), X-rays, and computed tomography (CT) scans (5). However, more advanced ML algorithms are needed that can effectively identify potential regions of concern on these images and then develop possible hypotheses. Even in surgery, new machine learning models need to be developed for robotic surgeries to increase the probability of successful surgical outcomes, which can potentially eradicate the need for human surgeons (6).
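As a simplified picture of what outcome prediction from record-style data can look like, the sketch below fits a logistic regression to synthetic, hypothetical ‘EHR-like’ features. The features, data, and model are invented for illustration and are not those used in the cited heart failure study.

```python
# Toy outcome-prediction example on synthetic EHR-style features (illustrative).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features: age, systolic blood pressure, a biomarker, diabetes flag.
X = np.column_stack([
    rng.normal(65, 10, n),    # age (years)
    rng.normal(130, 20, n),   # systolic blood pressure (mmHg)
    rng.lognormal(4, 1, n),   # skewed biomarker level
    rng.integers(0, 2, n),    # diabetes (0/1)
])
# Synthetic outcome loosely tied to age and the biomarker.
risk = 0.04 * (X[:, 0] - 65) + 0.002 * (X[:, 2] - 55)
y = (rng.random(n) < 1 / (1 + np.exp(-risk))).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"AUC on held-out synthetic patients: {auc:.2f}")
```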

Many issues involving erroneous and imprecise data arise in data collection, as much data is simply wrong (3). This is especially true of waveform data, where environmental factors and patient movement can affect the recorded signals. New and advanced algorithms need to be established that can distinguish real data from artifactual and poor-quality data, thereby improving the reliability of the data gathered and allowing the physician to make an accurate diagnosis. Even for very common electrocardiogram readings, physicians often reach different conclusions about the patient’s condition, and artifactual data and data with poor signal quality play a major role in this analytical disagreement (3). Physicians are often overwhelmed by the plethora of data collected, so ML algorithms are needed that can identify and streamline the most pertinent data without discarding other crucial information. Moreover, ML algorithms that allow the AI to explain the reasoning behind its proposed diagnosis or treatment plan are necessary.
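One simple way to screen waveform data before it reaches a diagnostic model is to flag windows whose amplitude looks implausible, for example segments that are flat (a saturated sensor) or extremely large (motion artifact). The sketch below shows the idea on a synthetic signal; the thresholds are arbitrary placeholders, not clinically validated values.

```python
# Flag waveform windows whose peak-to-peak amplitude is implausibly small or large.
import numpy as np

def flag_bad_segments(signal, fs=250, window_s=2.0, min_range=0.05, max_range=5.0):
    """Return one boolean per window: True if the window looks like artifact."""
    window = int(fs * window_s)
    flags = []
    for start in range(0, len(signal) - window + 1, window):
        seg = signal[start:start + window]
        peak_to_peak = seg.max() - seg.min()
        flags.append(peak_to_peak < min_range or peak_to_peak > max_range)
    return np.array(flags)

# Example: a clean periodic signal with one saturated stretch simulating artifact.
t = np.arange(0, 10, 1 / 250)
ecg_like = 0.8 * np.sin(2 * np.pi * 1.2 * t)
ecg_like[1000:1500] = 10.0  # flat, saturated segment
print(flag_bad_segments(ecg_like))  # the saturated window is flagged as bad
```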

Challenges and Controversies

Adapting artificial intelligence (AI) and machine learning into all healthcare systems is unfortunately not easy. Healthcare systems have been structured in ways that make change difficult. Many of the decision-makers in healthcare systems and policy are older, tend to have strong preferences for the typical ‘pen and paper,’ and prefer simpler systems over which they have more control. ML systems are complex and need to be integrated into healthcare systems in the simplest yet most effective form. Moreover, many healthcare facilities are not motivated or incentivized enough to spend their budgets on adequate research, staff, and other support for developing these ML models. This adoption of AI and ML is necessary not just in the United States healthcare system but all across the world. However, as the U.S. is one of the leading places for innovation and development in the health information sector, the country needs to bring about a large-scale change in its system first, despite the difficulties of installing such a system, in order to start a ripple effect.

As with the rise of most new technologies, machine learning brings a heated debate on ethics. When we train machines to ‘think for themselves,’ we give up some of our control over them in that we do not know exactly what the system has learned or what it is ‘thinking,’ which some argue puts our lives in danger. Some believe that our advancements in machine learning will reach a point at which we no longer need human physicians, which would significantly hurt the economy, the workforce, and the patient experience in clinics. Many are afraid that when they come into a doctor’s office, they will no longer have that physician-patient contact and connection but must instead confront a machine. Building and training machine learning systems requires access to large databases of patient information, raising privacy concerns for which there is still no accepted standard in regard to AI. Furthermore, advances in ML can lead to issues regarding insurance coverage; for example, some insurance companies may start demanding access to the AI that is tracking a patient’s health records to assess the patient’s overall health and set premiums accordingly. Moreover, it is possible that when future research demonstrates the success of ML and AI, hospitals and clinics might increase the fees associated with these services, leading to inequality based on income. How will we react if the AI gives us the wrong treatment or diagnosis? What if a physician’s diagnosis and an AI’s diagnosis differ? It is important to consider all of these challenges as we further develop and improve our machine learning systems.

Future of Machine Learning

Today, many major companies and startups, including Enlitic, MedAware, and Google, have launched massive projects focused on improving AI and ML and bringing them to the healthcare system, such as Google’s DeepMind Health project and IBM’s Avicenna software (7). Moreover, IBM’s Watson Health is collaborating with the Cleveland Clinic and Atrius Health on using cognitive computing in their health systems, from which experts hope to see reduced physician burnout (8). Current ML algorithms being tested and developed include k-nearest neighbors, naive and semi-naive Bayes, lookahead feature construction, backpropagation neural networks, and more (9).
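For a sense of how two of the algorithm families named above behave in practice, the short sketch below compares k-nearest neighbors and naive Bayes classifiers on scikit-learn’s built-in breast cancer dataset. This is a generic illustration, not a reproduction of the models evaluated in the cited review.

```python
# Compare two classic classifiers with cross-validation (illustrative only).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB

X, y = load_breast_cancer(return_X_y=True)
for name, model in [("k-nearest neighbors", KNeighborsClassifier(n_neighbors=5)),
                    ("naive Bayes", GaussianNB())]:
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```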

Artificial intelligence and machine learning are undoubtedly the future, as refined automation of data collection and the replacement of jobs across industries by machine learning systems appear inevitable. Scientists and researchers must focus on developing effective, efficient, and innovative algorithms while ensuring that their models do not endanger the human job market. Both Elon Musk and Stephen Hawking have warned that AI and ML are dangerous not only economically but also physically (10). Nonetheless, it is imperative that we continue to work on transforming the quality of care and the healthcare system as a whole through machine learning, a science and technology poised to revolutionize every aspect of life for decades to come. The benefits of machine learning outweigh these theoretical nightmares.

 

Puneet Gupta ’18 is a junior in Dudley House concentrating in Biology.

 

Works cited

[1] Introduction to Reinforcement Learning. http://www.cs.indiana.edu/~gasser/Salsa/rl.html (accessed Feb. 25, 2017).

[2] Rhodes, R. et al. Amer. J. of Hospice and Pall. Med. 2015, 33, 678-683.

[3] Johnson, A. E. W. et al. Proc. IEEE 2016, 104, 444-466.

[4] Wu, J. et al. Med. Care 2010, 48, S106-S113.

[5] Wang, S.; Summers, R. M. Medical Image Analysis. 2012, 16, 933–951.

[6] Nevett, J.; Waghorn, M. Robots ‘set to replace human surgeons entirely for complex operations’ potentially cutting risk of errors. Mirror, May 4, 2016. http://www.mirror.co.uk/news/world-news/robots-set-replace-human-surgeons-7897465 (accessed March 3, 2017).

[7] Simonite, T. IBM’s Automated Radiologist Can Read Images and Medical Records. MIT Technology Review, Feb. 4, 2016. https://www.technologyreview.com/s/600706/ibms-automated-radiologist-can-read-images-and-medical-records/ (accessed Feb. 27, 2017).

[8] Slabodkin, G. Caradigm takes end-to-end enterprise approach to population health. Health Data Management, March 2, 2017. https://www.healthdatamanagement.com/news/caradigm-takes-end-to-end-enterprise-approach-to-population-health (accessed March 3, 2017).

[9] Kononenko, I. Artif. Intell. Med. 2001, 23, 89-109.

[10] Cellan-Jones, R. Stephen Hawking warns artificial intelligence could end mankind. BBC News, Dec. 2, 2014. http://www.bbc.com/news/technology-30290540 (accessed March 1, 2017).

Innocent Until Proven Free? The Question of Neuroscience and Moral Responsibility

By: Kristina Madjoska

“Plead insanity!” you might have heard Detective Cohle say to some of his interrogees in the show True Detective. Even if you have not watched True Detective, chances are you have heard of the term insanity plea. The idea that moral and legal responsibility is alleviated when a person cannot control their impulses or reason through ethically meaningful situations is fundamentally embedded in our legal understanding of insanity. This logic certainly makes intuitive sense: if someone has acted under a compulsion and hasn’t been able to choose their course of action freely, then they shouldn’t be blamed for having bad intentions. After all, it was not their choice. Neuroscience has consistently provided support for this reasoning, as evidence of abnormal brain structures and neurotransmitter actions in mentally ill patients has piled up. Yet in an essential way, modern neuroscience also problematizes this logic. If increasing evidence shows that our brain biochemistries are responsible for the choices we make, then are we allowed to talk about free choice? And if free choice cannot be argued for, can we rightfully assign blame to anyone?

These questions suggest that the insanity plea should not be an exclusive privilege of the mentally ill. In a certain sense, all criminal acts result from some malfunction of our brains; simply put, we are all in some small way insane. This matters for our concepts of law and morality because, when assigning blame, we typically consider whether a person has freely chosen to act in a way that deserves this negative judgment. If it is true that we have no control over our actions, then blame might be assigned purely to those who happen to have gotten the wrong combination of genes and environmental conditions, which prompted them to act contrary to the rules. Neuroscientists have taken multiple thoughtful approaches to studying this issue. The literature on the topic is broad, yet two especially salient questions emerge: Does our brain biochemistry cause our behavior? And if so, is it still wise or rightful to regard ourselves as responsible for our actions?

The Moral Compass

Before delving into the intricacies of these two questions, it is first important to discern which parts of our brain participate in moral decision-making. Refined neuroimaging techniques like fMRI (functional magnetic resonance imaging) have allowed us to inspect in real time where brain activity is localized and what cognitive and behavioral responses it corresponds to. Two scientists from the University of California, Santa Barbara give an extensive overview of current understandings of the neurobiology of ethical reasoning (1). Synthesizing multiple findings, they posit that morality in the human brain works by a mechanism of parallel processing: several different regions of the brain work simultaneously to produce any ethical thought, value, or conclusion. Such a division of labor reflects the plurality of human moral thought. For example, a study from the National Institutes of Health has shown that social emotions, particularly disgust, play a critical role in our evaluations of morally meaningful situations. Our basal ganglia and amygdala, parts of the brain also associated with disgust elicited by stimuli like spoiled food, are significantly active when people are asked to think about statements that frequently elicit moral reprehension, like incestuous relationships or murder (2). At the same time, our emotional moral responses also depend on feelings of compassion and admiration. Observed activity in the anterior cingulate cortex, a region in the frontal part of the brain involved in empathy, showed that these feelings depend on our ability to imagine ourselves in someone else’s shoes (3). However, emotion is not the only medium through which we come to understand morality; the human capacity to reason abstractly is also a crucial aspect of this process. A specific type of abstract reasoning is our belief attribution system, which is responsible for our ability to infer other people’s intentions even when their actions do not explicitly demonstrate them. Functional belief attribution helps us, for example, see behind a dubious smile or party invitation. The right temporoparietal junction, a region on the surface of the right hemisphere, is involved in our performance of belief attribution (4).

To date, there is no convincing evidence of a centralized command integrating these multiple parallel processes. In most cases, the parallel neural mechanism manages to (at least somewhat) coherently inform our moral evaluations. Nevertheless, how effectively we act upon these evaluations also depends on how well we can control our impulses. Impulse control is the means through which we postpone momentary pleasure-seeking for long-term, objective benefits. In a significant way, impulse control affects whether we act on immediate threats or desires when they contradict our more nuanced understanding of the moral thing to do. A paper published in the International Journal of Law and Psychiatry identifies the prefrontal cortex, the part of the brain found just behind our foreheads, as the vital center for impulse control (5). Evidence from multiple studies shows that people with diminished gray matter in their prefrontal cortices behave more rashly, are more often violent, and are more likely to have trouble with the law. Our ability to evaluate hypothetical moral situations roughly correctly is largely independent of our ability to hold on to those considerations in the heat of the moment.

The Causation Effect

New and elegant neuroimaging technologies, and the studies they have propelled, have certainly contoured our understanding of how the brain thinks about and acts on right and wrong. Yet the more we learn about these relationships, the more deeply they begin to problematize the idea of moral responsibility. If our morally relevant actions are determined by our amygdalas and prefrontal cortices, then should we be held responsible for them? A meticulous way in which scientists have tried to study this question is through DBS, or deep brain stimulation (6). Although DBS is not typically used to study moral behavior, it has been extremely useful in clarifying the brain-behavior relationship. The technique involves surgically implanting a pacemaker-like device that transmits electrical signals to local brain regions to stimulate activity. Given that neurons, the cells that make up our brains, communicate through electrical signals, the device can simulate this type of communication. DBS has seen some success in the treatment of depression, Parkinson’s disease, aggressive behavior, and addiction. Experiments with DBS demonstrate that when electrical activity is altered (in this case through DBS), altered behavior follows. This evidence indicates that the brain-behavior rapport is possibly not only correlational but also causal. An older study supporting that idea found that brain activity in regions correlated with a certain behavior preceded conscious awareness of that behavior being initiated (7). This means that the participants may not have exercised conscious control over their own actions. Although definitive proof of causation is still not available, these studies open up the possibility that free will may not, in fact, be as free as we would have thought.

The Question of Responsibility

Because of ethical considerations, causation experiments have not been performed specifically for moral decision-making and criminal behavior. However, if we extend the logic of DBS studies to our moral processing system, then it seems like there might not be much that we can do to control our moral behavior either. The neural determinism suggested by neuroscientific literature invites a discussion of whether freedom of choice is necessary for assigning moral blame. Two philosophical intuitions, compatibilism and incompatibilism, argue opposing sides of the issue. On the one hand, compatibilists maintain that even if our behaviors are primarily guided by the structures of our brain, the choices we make can still be called our own, because it is our brains that have initiated them (8). Therefore, we should be held responsible for their consequences. On the other hand, incompatibilists argue that our choices must be made freely if we are to assign any sort of blame to them. Perhaps, the truth is a more nuanced version of both, and a sliding scale between what we consider insanity and health may be a better representation of it.

However, it is just as pressing to consider how neuroscience has already affected the practicalities of the justice system. Brain scans and images are increasingly finding their way to attorneys’ briefs and judges’ benches; they have been used to argue for juvenile criminal offenders (their prefrontal cortices and impulse control are underdeveloped) and for mentally ill patients, among others (1). One of the most precise insights on the issue has been provided by the writers of the University of California, Santa Barbara paper. In discussing the implications of neurological determinism, they say: “The biggest threat to our taking seriously the idea that many who commit bad, even criminal acts, are less free, less rational, less responsible, and less blameworthy than we have been thinking all along may be the following: if we take seriously that these individuals are impeded in nontrivial ways in their ability to make good choices and therefore don’t deserve to be punished as harshly as they have been up until now, then what do we do with them? One answer is that we stop using the criminal justice system solely to levy punishment on wrongdoers and use it more to prevent subsequent harm from occurring” (1).

Even though it is an already existing concern in legal systems globally, this consideration may need to make its way into legal thought more quickly and more fundamentally than it has so far.

Takeaways

Although neuroscience is a vibrant and fertile field that requires much more work to be done, the implications of its findings already widely and deeply challenge our social relationships and structures. Realizing that we do not control our own actions as much as we often assume may be a little disheartening. Hopefully, however, in studying ourselves more closely, we can begin to more thoughtfully and more compassionately respond to others. Hopefully, neuroscience can inspire us to approach anyone from a substance-abusing neighbor to a lying friend not as a blameworthy criminal, but as a human in need of our help and care. Just as importantly, to begin adjusting our legal thought to the nuance of neuroscience may help our society deal with criminal behavior in a more effective and meaningful way.

 

Kristina Madjoska ’19 is a sophomore in Lowell House concentrating in Neurobiology.

 

Works cited

[1] Funk, C. M.; Gazzaniga, M. S. Curr. Opin. Neurobiol. 2009, 19, 678-681.

[2] Borg, J. S. et al. J. Cogn. Neurosci. 2008, 20, 1529-1546.

[3] Immordino-Yang M. H. PNAS. 2009, 106, 8021-8026.

[4] Young, L. et al. PNAS. 2007, 104, 8235-8240.

[5] Penney, S. Int. J. Law Psychiatry 2012, 35, 99-103.

[6] Sharp, D.; Wasserman, D. Neuroethics 2016, 9, 173-185.

[7] Libet, B. et al. Brain 1983, 106, 623-642.

[8] McKenna, M.; Coates, D. J. Compatibilism. Stanford Encyclopedia of Philosophy. https://plato.stanford.edu/entries/compatibilism/ (accessed Feb. 19, 2017).

The New Age of Aging Research

By: Eric Sun

Aging. To some, this word symbolizes equality, wisdom, and progress; to others, this word represents weakness, disease, and death. To me, aging has taken on a mixed meaning. When I was a small child, I remember lying awake in bed and counting my heartbeats as if the thumping in my chest was also the ticking of my biological clock. I imagined that each person was given a certain number of heartbeats in a lifetime. Aging, to my young mind, was simply the slow and eventual countdown of these limited heartbeats. Although not many people will admit it, the fear of aging and death is extremely common (1). Throughout history, our fascination with mortality has contributed to the rise and spread of religions and legends. More recently, scientific research has begun to shed light on one of life’s greatest mysteries.

Aging Research in History

During the late Middle Ages, alchemy was a booming practice. The holy grail of alchemy was procurement of the fabled philosopher’s stone—a stone that had the ability to extend the life of its wielder indefinitely (1). Despite numerous efforts, there are no records of any successful attempts at synthesizing the object. The philosopher’s stone was not the only fabled anti-aging object. After the discovery of the Americas, there was growing interest in the possibility of uncovering the fountain of youth in this uncharted territory. In 1513, the conquistador Juan Ponce de León set out on a quest to find the fabled fountain, which ultimately ended in failure (1).

In the following decades, interest in anti-aging ‘research’ faded along with its associated myths. In fact, aging research stayed under the public radar from the Renaissance until the 1940s, when James Birren produced a theory involving what he called the “tertiary, secondary, and primary processes of aging” (1). Birren helped establish the field of gerontology, the study of aging, and expanded it first as a social discipline and then as a scientific one. At this time, aging research was widely considered a pseudoscience—a label that was not helped by the unscientific blood and serum transfusions championed by charlatans as anti-aging treatments. Ironically, a recent study reported that old mice that received plasma transfusions from younger mice were physiologically healthier, although this has yet to be validated in humans and was certainly unknown to the practitioners of the early 1900s (2). Under Birren’s leadership, the scientific stigma surrounding aging began to dissipate as more scientists were attracted to this young and growing field of research.

Notable Advances

In the following years, aging research underwent a series of profound, exciting breakthroughs. Perhaps the most famous discovery was that of the telomere. Telomeres are the repeating DNA sequences at the end of each chromosome. Because the DNA replication machinery cannot fully copy chromosome ends, telomeres erode with each cycle of replication and the chromosomes become progressively shorter. Although telomeres themselves do not appear to have any significant function beyond protecting other DNA sequences from degradation, once a cell exhausts its telomeres, each successive division erodes essential genes, with deleterious effects that often result in cellular death (3).

The first indication of telomeres came in 1962 through Leonard Hayflick’s discovery of a limit on somatic cell replication. Hayflick, considered by many to be the father of modern aging research, carried out a groundbreaking experiment indicating that somatic cells could only divide a finite number of times (1). At the time, it was widely accepted that cell lineages were immortal and that each body cell was capable of an indefinite number of divisions. It was only after several other scientists replicated Hayflick’s result that the so-called Hayflick limit became largely accepted by the scientific community. This limit to cellular division was typically 50-54 divisions for human somatic cells (1). In the late 1970s and early 1980s, Jack Szostak, together with Elizabeth Blackburn, established the structure and role of telomeres at the ends of chromosomes, which explained the Hayflick limit phenomenon (3). If cell replication is restricted by the length of the telomere, and the telomere is of finite length, then cell lineages must be finite as well. Szostak later shared a Nobel Prize for this work. Soon after, telomerase, the enzyme that extends telomeres on chromosomes, was discovered. In recent years, overexpression of telomerase has been linked to the unchecked proliferation and immortality of cancer cells (3). Telomeres thus serve as a switch for immortality—at least on a cellular level.
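To make the arithmetic behind this limit concrete, here is a minimal sketch in Python; the starting telomere length, loss per division, and critical threshold are illustrative values chosen only to land in the ballpark the article describes, not measured parameters.

```python
# A minimal sketch, using illustrative (not measured) numbers, of the logic
# behind the Hayflick limit: if each division trims the telomere by a fixed
# amount and division stops once a critical length is reached, the number of
# possible divisions is finite.
def hayflick_limit(initial_bp=10_000, loss_per_division_bp=150, critical_bp=2_000):
    """Count divisions until the telomere reaches the critical length."""
    telomere, divisions = initial_bp, 0
    while telomere - loss_per_division_bp >= critical_bp:
        telomere -= loss_per_division_bp
        divisions += 1
    return divisions

print(hayflick_limit())  # -> 53 divisions with these assumed parameters
```

With these assumed numbers the lineage stops after 53 divisions, comfortably inside the 50-54 range quoted above; changing the assumed loss per division shifts the limit accordingly.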

An often overlooked, but perhaps even greater breakthrough was the development of several notable theories of aging. Imagine an organism as a car. Cars, no matter how well kept or maintained, begin to lose function with time. At first, there may be a few scratches on the windshield, buildup in the exhaust pipe, and worn-out tires. These are minor issues that can be fixed relatively easily. Then the engine starts to malfunction, the wires begin to rust, and the car becomes unsalvageable. Like a car, an organism has many parts that are used daily and can break down through continuous wear and tear. This seemingly obvious idea has been revolutionary in the field of aging research. Contrary to other theories proposing that humans are genetically programmed to age, the cumulative damage theory presented aging as a random process (1). As such, it may be reasonable to conclude that aging is a byproduct of environmental effects. If that were the whole story, then centuries of medical advancement—vaccines, antibiotics, and surgery among them—should have increased the human lifespan considerably. Yet, despite significant increases in life expectancy, meaning that more humans are realizing the full extent of their maximum lifespans, the maximum human lifespan itself has stayed relatively constant (3). A more recent theory proposes that the maximum lifespan is determined genetically and that environmental factors can only expedite biological aging. Given how human survival curves level off at a similar maximum age, this theory is especially convincing (3).

As a corollary to the cumulative damage theory of aging, aging is regarded as a holistic process—one affected by a multitude of genes and environmental factors. One suspected contributor to the aging process is free radical damage (1). Free radicals are molecules that harbor a single, unpaired valence electron and induce oxidative damage in cellular machinery. Free radicals are byproducts of cellular respiration and can damage DNA. In particular, mtDNA (mitochondrial DNA) is at risk of oxidative damage due to both its proximity to free radical formation, as cellular respiration occurs in mitochondria, and its significantly lower levels of DNA repair. The free radical theory of aging has become especially popular in the health industry, where antioxidants, compounds that neutralize free radicals, have become synonymous with anti-aging treatments (3). However, the effectiveness of antioxidant consumption in slowing aging has not been validated. Other notable candidate contributors to aging include protein aggregation, cross-linking, and induced apoptosis (1).

In order to discern other contributing factors, several longitudinal studies on aging have been launched. Longitudinal studies offer one major advantage over the cross-sectional studies traditionally employed in medical research: they allow scientists to track an individual’s health as they grow older. The Baltimore Longitudinal Study of Aging (BLSA) is the most prominent of these studies; it was started in 1958 by Nathan Shock, a pioneer in the field of aging research, and has enrolled over 1,000 participants (4). Since then, several other studies have taken root, including the SardiNIA Project conducted by the National Institute on Aging, which includes 6,100 participants from the island of Sardinia off the coast of Italy (3). Armed with the powerful tools of bioinformatics, these studies have become windows into the intricacies of human aging.

Aging Research Today

Aging research has gained steady momentum in recent years. One of the most famous aging experiments was conducted in 1993 by Cynthia Kenyon, a professor at UCSF and now vice president at Calico. Kenyon discovered that mutations affecting the daf-2 and daf-16 genes could double the lifespan of C. elegans, a model organism. Her later work lengthened lifespans even further by modulating these two genes (5). The search for homologous counterparts in humans is ongoing. A recent subset of aging research has focused on life-extension treatments in more complex model organisms such as D. melanogaster, lab mice, and rhesus monkeys (1).

Recently, other molecular players have been implicated in aging, including resveratrol, sirtuins, and rapamycin. Resveratrol, a compound commonly found in red wines, activates sirtuin deacetylases, which extend the lifespan of lower organisms and may also be involved in human aging (6). Resveratrol has also been associated with cardioprotective benefits. The discovery of these mechanisms and their possible relation to aging has been led by pioneers such as David Sinclair of Harvard Medical School. Treatments involving rapamycin, an immunosuppressant, have increased the longevity of mice (7). The search for molecular factors contributing to aging is an active and promising facet of aging research.

In the past decade, the advent of computational tools for large-scale data analysis has revealed fascinating insights into aging. Computational biology and bioinformatics have expedited the search for biomarkers of aging. Traditionally, pulse wave velocity and telomere length served as the gold standards of biological age measurement, but they explain only a fraction of individual variance in aging (3). Recent research has implicated a litany of cardiovascular traits, physical and mental characteristics, and genetic mutations as potential biomarkers. In 2013, Steve Horvath, a professor at UCLA, developed a method for deriving an estimate of biological age (DNAm age) from DNA methylation patterns; the estimate is highly correlated with chronological age and appears to explain several patterns in both aging and disease (8). There is ongoing research into detecting a central aging signal that explains most physiological causes of aging.
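To illustrate the general idea behind such methylation “clocks,” the toy sketch below fits a penalized linear regression mapping synthetic methylation values to age. It is emphatically not Horvath’s published model (which has its own CpG set, coefficients, and age transformation); the data and feature counts here are invented for illustration.

```python
# A toy sketch of the general idea behind DNA-methylation age estimators:
# a penalized linear regression that maps methylation levels at many CpG
# sites to chronological age. The data are synthetic; this is not the
# published Horvath clock.
import numpy as np
from sklearn.linear_model import ElasticNetCV

rng = np.random.default_rng(0)
n_samples, n_cpgs = 300, 500

age = rng.uniform(0, 90, n_samples)                 # chronological ages
weights = np.zeros(n_cpgs)
weights[:30] = rng.normal(0, 0.003, 30)             # 30 "age-related" CpGs
beta = np.clip(0.5 + np.outer(age, weights)
               + rng.normal(0, 0.02, (n_samples, n_cpgs)), 0, 1)  # methylation values

model = ElasticNetCV(cv=5).fit(beta[:200], age[:200])   # train on 200 samples
predicted = model.predict(beta[200:])                    # "DNAm age" estimates
print(np.corrcoef(predicted, age[200:])[0, 1])           # correlation with true age
```

The point of the sketch is simply that a sparse linear combination of many methylation sites can track chronological age closely, which is the statistical backbone of epigenetic-clock research.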

Aging research has garnered considerable public spotlight in the past several years. Aubrey de Grey, a computer scientist turned biologist and founder of the SENS Research Foundation, gave an extremely well-received TED talk on a strategy he has proposed to tackle the obstacle of aging. The strategy involves partitioning the aging process into several major factors: aggregates, cellular senescence and growth, cross-linking, and mutations. By targeting medical advancements at each factor separately, the problem becomes more manageable, and the human lifespan could potentially be extended in small increments over a long series of breakthroughs (9). Other social movements such as transhumanism have highlighted the potential of anti-aging treatments in the near future. Transhumanism embraces emerging technologies and their potential for bettering the human body and quality of life—including extension of the healthy lifespan (9).

Controversies

Since the age of alchemy, aging research has been a field rife with controversy. Today, there are two major concerns with developments in aging research and rejuvenation technology. First, critics of anti-aging research are concerned with the very real possibility of overpopulation. The current age distribution of the United States is a micro-example of what an ageless population might entail. There are already concerns that the aging Baby Boomer generation may overburden the healthcare and Social Security systems. Imagine this same effect, but with continuous, cumulative additions to the old end of the age spectrum. Critics espousing this belief, however, do not take into consideration what current aging research implies about future anti-aging therapies. Nearly all testing in model organisms to date has indicated that anti-aging treatments tend to extend healthy aging. That is, the relative stages of life would simply be stretched across a longer temporal span: individuals under treatment who are chronologically 70 years old might instead be 50 years old biologically. As such, fears of a population skewed toward the frail elderly are largely unfounded. Additionally, longer healthy lifespans would entail greater productivity from an individual over their lifetime (9).

Other opponents of aging research cite religious and ethical concerns (10). After all, if we are extending our lifespans beyond their natural limit, are we not playing God? There is no simple way to address these concerns. There will always be advocates and critics of aging research, and scientists should be attentive to these ethical concerns as they continue to pursue this line of work. In the end, if an anti-aging treatment is ever developed, it would simply be an additional option extended to people—by no means an obligation.

The Path Ahead

Aging research is an exciting and growing field. Developments in our understanding of the fundamental aging process are likely to offer increased insight into related research areas such as cancer, diabetes, and Alzheimer’s disease. Aging is still a relatively underpopulated field of research and stands to benefit from the recent explosion of biotechnology and big-data-aided research (11). In the coming decades, one can expect to see greater innovation and progress in aging research. Perhaps one day, even the fabled philosopher’s stone or fountain of youth may manifest as a product of this push for greater understanding.

 

Eric Sun ‘20 is a freshman in Hollis Hall.

 

WORKS CITED

[1] Hayflick, L. How and Why We Age; Ballantine Books: New York, 1996.

[2] Scudellari, M. Nature 2015, 517, 426-429.

[3] Austad, S. Why We Age, 1st ed.; Wiley: Hoboken, NJ, 1999.

[4] Shock, N. et al. Normal Human Aging: The Baltimore Longitudinal Study of Aging; NIH-84-2450; NIH: Washington, D.C., 1984.

[5] Kenyon, C. et al. Nature 1993, 366, 461-464.

[6] Baur, J. A.; Sinclair, D. A. Nat. Rev. Drug Discov. 2006, 5, 493-506.

[7] Wilkinson, J.E., et al. Aging Cell 2012, 4, 675-682.

[8] Horvath, S. Genome Biology 2013, 14, R115.

[9] de Grey, A.; Rae, M. Ending Aging: The Rejuvenation Breakthroughs That Could Reverse Human Aging, 1st ed.; St. Martin’s Press: New York, 2007.

[10] Green, B. Radical Life Extension: An Ethical Analysis. Santa Clara University [Online], February 27, 2017. https://www.scu.edu/ethics/all-about-ethics/radical-life-extension/ (accessed Mar. 26, 2017).

[11] Arking, R. Biology of Aging: Observations and Principles, 2nd ed.; Oxford University Press: Oxford, U.K., 2006.

 

The Discovery of Metallic Hydrogen

By: Felipe Flores

The generation of metallic hydrogen by Professor Isaac Silvera and postdoctoral fellow Ranga Dias, PhD represents a crucial advance in the field of high-pressure physics. Originally theorised in 1935 by physicists Eugene Wigner and Hillard Bell Huntington, metallic hydrogen was finally produced at Harvard University in January 2017, bringing a decades-long project to fruition. Both Silvera and Dias agreed to be interviewed for this article.

What was the key to the discovery?

Generating a sample of metallic hydrogen has been an ongoing project for many years in several laboratories, and the Silvera lab has finally succeeded. An essential component of Silvera and Dias’ success was achieving the pressure required for the transition to a metallic state. While the necessary density of the metallic phase was predicted accurately in the 1930s, the pressure predicted at the time to achieve it was around 25 GPa, whereas modern predictions place the figure at 400-500 GPa (1,2). Indeed, this recent success was obtained at a pressure of 495 GPa, a pressure never before achieved in hydrogen experiments (3). Achieving such pressure had its challenges: “The diamonds we use to contain the samples tend to break or allow diffusion,” explained Dias; “the hydrogen sample can diffuse into the diamond and cause defects, which will weaken the diamond and make it break before reaching the ultra-high pressure needed” (4). The key, therefore, lies in modifying the diamond so that it can sustain the immense force necessary for the experiment’s success. Dias summarized the special technique as “adding a diffusion barrier to a very polished diamond with as little defects as possible” (4). Once the scientists achieved the ultra-high pressure and observed a phase transition, they measured the reflectivity of the sample to be consistent with that of a metal and obtained a density in agreement with theoretical predictions.
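For a sense of scale, a quick back-of-the-envelope conversion (my own, using 1 atm = 101.325 kPa) shows just how extreme that figure is:

\[
495\ \mathrm{GPa} = 4.95\times 10^{11}\ \mathrm{Pa} \approx 4.9\times 10^{6}\ \mathrm{atm},
\]

roughly five million times atmospheric pressure, and well above the pressure at the center of the Earth (roughly 360 GPa).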

Why is the discovery so exciting?

Theoretical calculations predict metallic hydrogen to be a metastable material as well as a room-temperature superconductor (5,6). Metastability means the material would remain in its metallic state even after the high pressure is released (just as diamond is a metastable form of carbon). If that is the case, metallic hydrogen could potentially be an extremely efficient way of storing energy, for instance as a rocket propellant. Superconductivity means the material could carry currents without any resistance or energy loss. Such a property has only been achieved at extremely low temperatures in other materials, while metallic hydrogen is theorized to behave this way at 17˚C, far higher than other candidates. If both properties are confirmed, we might see a revolution in electronics and transportation. For instance, magnetic levitation vehicles could become more accessible, electronics could become more efficient, and space travel could become cheaper and capable of reaching farther destinations.

What is the current status of the experiment?

Unfortunately, the sample was accidentally destroyed, a common fate in high-pressure physics, as both Professor Silvera and Dias explained (7). In order to create the metallic hydrogen, Silvera and Dias followed procedures designed to ensure that the device’s compressing diamonds would not break. Upon submitting their paper, they kept the hydrogen at low temperature and high pressure until the acceptance of their work, in case the pair was instructed to perform other tests. The crystals slowly developed defects while held under such stress during the evaluation of the paper, and when Silvera and Dias later shone a low-energy laser upon the sample, the diamonds broke and the metallic hydrogen was lost. “It was surprising that the diamonds broke with such a low-energy laser,” (7) said Professor Silvera. However, it is also uncommon to keep samples compressed for such a long time. “The accumulation of defects over all that time was probably responsible,” added Silvera (7).

The discovery has not been free of criticism and skepticism, especially regarding the reproducibility of the experiment. The Silvera lab is currently reproducing the experiment, although with a few tweaks to the procedure, such as the use of a different type of diamond. Another central criticism was the pair’s use of alumina to create a diffusion barrier around the hydrogen, as some thought the metallic nature of the sample could stem from this coating. However, Dias is not concerned about this possibility, explaining that even at pressures of around 400 GPa no such change has been observed in alumina, making it highly doubtful that the coating is the culprit. Additionally, Silvera and Dias utilised a layer of only 48 nanometres of alumina in their experiment, which, they say, is so thin it could not be the source of the metallic properties they measured.

How do you feel about the future of the experiment?

“Hopeful,” said Silvera; “optimistic,” said Dias (4,7). If their success is repeated, the scientists will transport the sample to Argonne National Laboratory, near Chicago, in order to x-ray the metallic hydrogen, examine its structure, and determine whether it is indeed the desired metallic phase and whether it is metastable, amongst other things. While the future of metallic hydrogen is yet to be determined, the confident attitudes of Silvera and Dias are encouraging for expectant scientists around the world as physics embraces its newest exciting discovery.

 

Felipe Flores ‘19 is a sophomore in Quincy House studying Human Developmental and Regenerative Biology and Physics.

 

Works Cited

[1] Wigner, E.; Huntington, H. B. J. Chem. Phys. 1935, 3, 764-770.

[2] Silvera, I.; Cole, J. J. Phys. Conf. Ser. 2010, 215, 012194.

[3] Dias, R.; Silvera, I. Science. 2017, 355, 715-718.

[4] Flores, F. Interview with Ranga Dias, PhD. Apr. 4, 2017.

[5] Ashcroft, N. W. Phys. Rev. Lett. 1968, 21, 1748-1749.

[6] Brovman, E. G et al. Sov. Phys. JETP. 1972, 35, 783-787.

[7] Flores, F. Interview with Prof. Isaac Silvera. Apr. 7, 2017.

Habitat Conversion: A Major Driver of Species Loss

By Priya Amin

What causes species loss? As biodiversity has become increasingly threatened, the call to understand the impact of human activity on species around the world has become particularly urgent. Drivers of species loss include climate change, pollution, and habitat conversion. Climate change has been linked to rising levels of carbon dioxide, which are attributable to human activity (1). Similarly, pollution is a byproduct of human expansion; undoubtedly, pollutants like fertilizer run-off and plastic waste have negatively impacted many land and marine ecosystems (2,3). However, while climate change and pollution are key drivers of species loss, both of these factors stem only indirectly from human expansion.

Habitat conversion for human development, on the other hand, directly trades habitat loss for economic profit. Therefore, as the demand for resources inevitably rises with population growth, so will the rate of habitat conversion. In addition, habitat conversion is deeply rooted in the story of human migration and expansion. Looking toward the future, scientists’ sizable estimates of habitat conversion reflect humanity’s growing appropriation of Earth’s land and resources. Economic activities that cause habitat loss include urbanization, mining, water development, and agriculture. However, a leading cause of global change is likely the growing human modification of environments for agriculture (4). To meet the growing demand for food and biofuels, it is projected that approximately one billion additional hectares of agricultural land will be developed in the next few decades (5).

As a result of agricultural land development, an ecosystem experiences a reduction in its ability to foster biodiversity (6). This is particularly crucial to the survival of biodiversity hotspots, which are characterized by tremendous habitat loss and a consequent loss of endemic plant species (7). For example, the Cerrado biome, a recognized biodiversity hotspot, is home to a variety of unique habitats, including dense woodlands, open grasslands, and dry forests; it is also the richest savannah ecosystem in the world (8). The agricultural potential and climatic suitability of the area have led to rapid human occupation. Cattle ranching and intensive farming in the Cerrado biome have caused a tremendous decline in natural habitat cover (8).

The drastic loss of spatially consistent natural cover in the Cerrado biome signals the decline of many endemic plant species, a pattern mirrored in the Atlantic rain forest biome. The forest-grassland mosaic of Rio Grande do Sul, Brazil has been largely converted for agri- and silviculture (9). Due to extensive logging practices, the area’s Araucaria broadleaf forest is now a mere fraction of its original extent, and the species Araucaria angustifolia has recently been placed on the International Union for Conservation of Nature and Natural Resources (IUCN) Red List of Endangered Species (9). According to a study by Hermann and colleagues, land conversion rates far outweigh preservation attempts in the area. Using satellite imagery, the researchers found that silviculture in the area expanded by 94% over the six-year study period and that grassland was the main target for agricultural land conversion. On a larger scale, this reflects global developments in temperate grasslands (9).

An overwhelming number of studies have examined the impact of agricultural habitat conversion on birds, which are often used as indicators of biodiversity status. Birds play vital roles in many ecosystems, ranging from pollination and seed dispersal to insect control and nutrient cycling. It has been estimated that approximately a fifth to a quarter of pre-agricultural bird numbers have been lost due to agricultural development (6). In particular, avian breeding success is impacted by agricultural land conversion. A study conducted by Cartwright et al. concluded that the formerly Critically Endangered Mauritius kestrel Falco punctatus experiences a decline in breeding success as the area of agriculture near a nest site increases (10). This may be attributable to increasing spatial variation in the availability of native prey, which is exacerbated by land conversion. In addition, loss of farmland bird populations has been observed in Europe. For example, farmland bird populations that depend on key aspects of these agro-ecosystems experienced a 40% decline between 1980 and 2000 (11). The status of bird populations is a crucial predictor of how other species in ecosystems will be impacted. Unfortunately, estimates based on current agricultural trends indicate that avian species, and biodiversity more broadly, will continue to decline (6).

Agricultural habitat conversion has caused substantial biodiversity and species loss throughout the world, and its impact on species-rich biodiversity hotspots is especially alarming. The startling decline in avian numbers is an ominous sign that many species are threatened by current trends in agriculture (6). Thus, with an increasing human population, it is imperative that special care be taken to protect encroached-upon environments and endangered species.

 

Priya Amin ’19 is a sophomore in Pforzheimer House concentrating in Integrative Biology.

 

WORKS CITED

[1] Cao, L. et al. PNAS. 2010, 107, 9513-9518.

[2] Cooper P. F. et al. Water Sci. Technol. 2010, 61, 355-63.

[3] Jambeck J. R. et al. Science 2015, 347, 768-771.

[4] Keitt T. H. Ecol. Appl. 2009, 19, 1561-1573.

[5] Oakleaf J. R. et al. PLoS ONE 2015, 10, e0138334.

[6] Gaston K. J. et al. Proc. R. Soc. B. 2003, 270, 1293-1300.

[7] Myers N. et al. Nature 2000, 403, 853-858.

[8] Diniz-Filho J. A. F. et al. Sci. Agric. 2009, 66, 764-771.

[9] Hermann J.M. et al. Ecosyst. Health Sustainability 2016, 2, 1-11.

[10] Cartwright S.J. et al. J. Appl. Ecol. 2014, 51, 1387-1395.

[11] Doxa A. et al. J. Appl. Ecol. 2010, 47, 1348-1356.

Dimensional Analysis

By: Julia Canick

Most people accept that reality has three spatial dimensions. But what if that is not true? Scientists are now considering the notion that we inhabit a holographic universe—that is, a universe in which we exist on a two-dimensional surface whose information is presented to us in three dimensions.

The Idea

The idea of a holographic universe was first conceived in the 1990s by scientists Leonard Susskind and Gerard ‘t Hooft (1). While ‘t Hooft proposed the original theory, Susskind gave it an interpretation in the context of string theory (2). The holographic principle likens the universe’s style of encoding information to that of a black hole; black holes store information in bits of area, not volume, which suggests that they need only two dimensions to hold data (3). Similarly, holographic theory suggests that the entire universe is a two-dimensional structure ‘painted’ on the cosmological horizon (4). Therefore, a mathematical description of the universe would require one fewer dimension than it may seem.
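For readers who want the equation behind the “area, not volume” claim, the standard Bekenstein-Hawking result (general physics background, not a result of the study cited below) expresses a black hole’s entropy, and hence its information capacity, in terms of its horizon area A rather than its enclosed volume:

\[
S_{\mathrm{BH}} = \frac{k_B\, c^{3} A}{4 G \hbar},
\]

so doubling the horizon area doubles the maximum information content, no matter how large the volume inside.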

Much like a hologram on a credit card, the universe could be an image of two-dimensional information, perceived in three dimensions. This idea has been studied in complex spaces with negative curvature, called anti-de Sitter spaces; new research, however, suggests that the concept can also hold in a flat spacetime—such as the one we inhabit (5).

The Math

A recent paper published in Physical Review Letters fleshes out the mathematics behind this model (6). Researchers from the University of Southampton tested predictions derived from holographic theory against deviations in the cosmic microwave background radiation left over from the Big Bang, almost 14 billion years ago. They found that the holographic theory was a good predictor of the structure of these deviations, which supports the mathematical model’s legitimacy (1).

This doesn’t mean that there isn’t a third dimension; rather, as Raphael Bousso of Stanford University describes it, “The world doesn’t appear to us like a hologram, but in terms of the information needed to describe it, it is one.” The holographic description is simply a more economical way to account for the world’s information than the one we currently employ (3).

The Impact

This idea is relatively new, but could be groundbreaking. These findings help bridge quantum mechanics and general relativity. Quantum mechanics (the study of the very small) and general relativity (the study of the cosmically large) are currently at odds when it comes to describing how the universe works at every scale. Removing a spatial dimension actually helps reconcile the theories, and could be the key to a deeper understanding of the fundamentals of the universe’s existence (1). In the holographic picture, gravity is described in three-dimensional space while the corresponding quantum theory lives on a two-dimensional boundary, so dropping a dimension is what allows the two descriptions to be matched (5). Though scientists are far from proving that we live in a hologram, the holographic principle is a significant jumping-off point for human beings attempting to define the space we inhabit. As researchers uncover more elegant explanations of the puzzling workings of the universe, the holographic theory may become increasingly relevant.

 

Julia Canick ’17 is a senior in Adams House concentrating in Molecular and Cellular Biology.

 

WORKS CITED

[1] Mortillaro, N. Are We Living in a Holographic Universe? New Study Suggests It’s Possible. CBC News [Online], Feb. 1, 2017. http://www.cbc.ca/news/technology/living-holographic-universe-1.3959758 (accessed Feb. 24, 2017).

[2] Susskind, L. J. Math. Phys. 1995, 36, 6377-6396.

[3] Minkel, J.R. The Holographic Principle. Scientific American [Online]. https://www.scientificamerican.com/article/sidebar-the-holographic-p/ (accessed Feb. 24, 2017).

[4] Holographic Universe. ScienceDaily [Online]. https://www.sciencedaily.com/terms/holographic_principle.htm (accessed Feb. 24, 2017).

[5] Vienna University of Technology. Is The Universe a Hologram? ScienceDaily [Online], Apr. 27, 2015. https://www.sciencedaily.com/releases/2015/04/150427101633.htm (accessed Feb. 24, 2017).

[6] Afshordi, N. et al. Phys. Rev. Lett. 2017, 118, 041301.

Spring 2017: Changing Reality

Our Spring 2017 issue is now available online: Changing Reality! Articles are posted individually as blog posts (the articles are linked below), a PDF version is currently displayed on our Archives page, and print issues will be available around Harvard’s campus starting Fall 2017. We hope you enjoy browsing this issue as much as we enjoyed putting it together!

 

Table of Contents:

NEWS BRIEFS

Dimensional Analysis by Julia Canick ‘17

Habitat Conversion: A Major Driver of Species Loss by Priya Amin ‘19

The Discovery of Metallic Hydrogen by Felipe Flores ‘19

FEATURE ARTICLES

A (Dis?)harmonious Union: Chimeras by Una Choi ‘19

The New Age of Aging Research by Eric Sun ‘20

Innocent Until Proven Free? The Question of Neuroscience and Moral Responsibility by Kristina Madjoska ‘19

Machine Learning: The Future of Healthcare by Puneet Gupta ‘18

COMMENTARY

Climate Change: What Sweden’s Doing That Trump Isn’t by Jia Jia Zhang ‘20

What To Do With Virtual Reality by Jeongmin Lee ‘19

OSIRIS–REx: A New Frontier after New Horizons

By: Alex Zapién

It has been over a year since the New Horizons spacecraft flew past Pluto in July 2015. Now, New Horizons continues its path into space, and the New Frontiers program responsible for it continues its progress. New Frontiers is also responsible for Juno, the space probe that successfully entered Jupiter’s orbit in July of 2016, and for OSIRIS-REx (1). The OSIRIS-REx (Origins, Spectral Interpretation, Resource Identification, and Security-Regolith Explorer) spacecraft was sent by the National Aeronautics and Space Administration (NASA) to the asteroid Bennu this fall in order to obtain and return asteroid samples for analysis back here on Earth (2).

The OSIRIS-REx spacecraft was launched on September 8, 2016 from Cape Canaveral, Florida and is scheduled to reach Bennu in 2018 (2). The mission’s objective is to retrieve, return, and analyze a carbonaceous asteroid sample in order to gain a deeper understanding of the minerals and organic material found in Bennu and other Near Earth Objects (NEOs) (2). Analysis of the collected data will also allow scientists to precisely map the asteroid’s orbit and measure the orbital deviation caused by the uneven re-emission of absorbed sunlight, a non-gravitational force known as the Yarkovsky effect (3).

Bennu was chosen out of an initial pool of 500,000 known asteroids and 7,000 NEOs. Researchers narrowed down the pool based on each object’s orbit, size, and composition (3). The candidate’s orbit had to lie between 0.8 astronomical units (AU) (about 75 million miles/120 million km) and 1.6 AU (about 150 million miles/240 million km) from the Sun, ideally with an Earth-like inclination and eccentricity to make the journey to the object more accessible. The size of the asteroid was also an important consideration: the smaller an asteroid’s diameter, the more rapidly it rotates, and asteroids with diameters less than 200 meters rotate so rapidly that they eject loose material from the surface, which would pose a serious hazard for the OSIRIS-REx spacecraft upon contact (3). The ideal asteroid would have a diameter of about 500 meters, which is almost exactly the size of Bennu. Finally, the ideal asteroid needed to have the properties of a primitive, extremely old asteroid. This meant having a carbon-rich composition that had not been significantly changed in nearly 4 billion years. The composition of an asteroid is determined from the way it reflects the Sun’s light. Asteroids that fit this description usually contain amino acids and original material from the gas and dust of the solar nebula that collapsed to form the solar system, which can tell us about the building blocks of life on Earth as well as the conditions at the solar system’s birth (1,3).
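As a rough illustration of how such criteria act as successive filters, here is a toy Python sketch with a handful of hypothetical candidates (Bennu’s row uses approximate published values for its orbit, size, and spectral type); this is not NASA’s actual catalog or selection code.

```python
# A toy illustration of applying the selection criteria described above as
# successive filters: an orbit between 0.8 and 1.6 AU, a diameter large enough
# (> 200 m) to avoid rapidly spinning bodies that shed material, and a
# carbon-rich spectral type. Candidate data other than Bennu's are invented.
candidates = [
    # name, orbit semi-major axis (AU), diameter (m), spectral class
    ("1999 RQ36 (Bennu)", 1.13, 492, "B"),   # carbonaceous, within range
    ("Example NEO A",     2.10, 900, "S"),   # stony, orbit too distant
    ("Example NEO B",     1.20, 150, "C"),   # carbonaceous but too small
    ("Example NEO C",     1.00, 600, "S"),   # accessible but not carbon-rich
]

CARBON_RICH = {"B", "C"}  # spectral classes associated with carbonaceous bodies

def passes(name, a_au, diameter_m, spec):
    """Return True if a candidate satisfies all three filters."""
    return 0.8 <= a_au <= 1.6 and diameter_m > 200 and spec in CARBON_RICH

survivors = [c[0] for c in candidates if passes(*c)]
print(survivors)  # -> ['1999 RQ36 (Bennu)']
```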

With these criteria, the initial pool was narrowed down to five candidates, and Bennu was ultimately chosen. Adding to its appeal, Bennu passes unusually close to Earth, coming within only 0.002 AU (about 186,000 miles/300,000 km) every six years (3). This proximity also gives Bennu a non-negligible probability of impacting the Earth within the next 200 years. Indeed, NASA scientists have calculated that the odds of Bennu entering an orbit (in the early to mid-2100s) that would lead to a collision with the Earth in the late 22nd century are 1 in 2,700, making it a particularly fascinating NEO (4).

Ultimately, it was Bennu’s composition, size, and accessible yet hazardous orbit that made it the ideal target (3). If successful, the OSIRIS-REx spacecraft will return a sample to Earth in 2023. The OSIRIS-REx mission will improve our understanding of both the formation of the solar system and the composition and trajectories of asteroids that could collide with Earth (2). We will be able to devise future strategies to mitigate possible Earth impacts from celestial objects, and, more importantly, we will gain a greater understanding of our planet and our solar system. There is no doubt that we will reach a new frontier in science.

Alex Zapién ‘19 is a sophomore in Cabot House, concentrating in physics.

WORKS CITED

[1] New Frontiers Program. https://discoverynewfrontiers.nasa.gov/index.cfml (accessed Sept. 26, 2016).

[2] OSIRIS-REx. https://www.nasa.gov/osiris-rex (accessed Sept. 26, 2016).

[3] OSIRIS-REx: Asteroid Sample Return Mission. http://www.asteroidmission.org/why-bennu/ (accessed Sept. 27, 2016).

[4] Wall, M. http://www.space.com/33616-asteroid-bennu-will-not-destroy-earth.html (accessed Sept. 27, 2016).

The Falcon 9 Fireball Investigation

By: Alex Zapién

On September 1, 2016, the Space Exploration Technologies Corporation (better known as SpaceX) lost one of its 70-meter (229-foot-tall) Falcon 9 rockets when it unexpectedly exploded during a simulated countdown on a launch pad in Cape Canaveral, Florida (1). The incident occurred two days before its intended launch, and eyewitness testimonies described the event as “a series of explosions that could be felt for miles” (2).

While the cause of the explosion is unknown, SpaceX Founder, CEO, and Lead Designer Elon Musk believes “an anomaly on the pad” may have occurred when the rocket was being filled with propellant (1). Industry officials say it could take several months to determine the exact cause and even longer to take remedial action (2). The rocket was unmanned, so thankfully there were no injuries; however, the aftermath of the explosion has worsened the backlog of delayed commercial launches and will likely complicate the company’s pursuit of future government contracts (2). The mishap has also raised questions about the reliability of the Falcon 9 booster, which was slated to haul cargo and astronauts to the International Space Station in the future (3).

SpaceX had been trying to go beyond its mission of building revolutionary space technology by also directly transporting Facebook’s six-ton satellite AMOS-6, which was lost in the incident (2). AMOS-6, valued at approximately $200 million, was part of a project led by Facebook, Eutelsat, and Spacecom. Not only would AMOS-6 have been the first satellite Facebook put in orbit, but it would also have provided direct internet access to people in sub-Saharan Africa (4). Following the incident, Facebook Founder and CEO Mark Zuckerberg expressed his frustration, saying he was “deeply disappointed to hear that SpaceX’s launch failure destroyed [the] satellite that would have provided connectivity to so many entrepreneurs and everyone else across the continent” (4). Despite the catastrophic result, Facebook has remained committed to connecting everyone worldwide and is currently developing other technologies to give the people of Africa the same opportunities AMOS-6 would have provided (4).

The loss of the rocket and its payload also deeply frustrated Elon Musk, spurring him to push for an investigation. On September 9, 2016, more than a week after the incident, Musk openly asked the public for help and directly appealed to the National Aeronautics and Space Administration and the Federal Aviation Administration (5). Public responses have proven helpful; they include videos of the Falcon 9 explosion, some of which appear to show an object hitting the rocket, although there is no further evidence to corroborate this. Preliminary results from the investigation have also pointed to a rupture in the cryogenic helium system (6). Overall, the case has proven rather perplexing, and beyond the video evidence there are few further leads. Musk claims that “this is turning out to be the most difficult and complex failure we [SpaceX] have ever had in 14 years.”

As of October 1, 2016, the Falcon 9 investigation has not concluded, but SpaceX has hopes for another launch, a resupply mission to the International Space Station. Dates are tentative, but SpaceX Chief Operating Officer Gwynne Shotwell mentioned the “November timeframe” as a target for returning to flight (6). Although focused on addressing the issues behind the anomaly, SpaceX remains committed to its mission of revolutionizing space technology and serving the world.

Alex Zapién ‘19 is a sophomore in Cabot House, concentrating in physics.

WORKS CITED

[1] Malik, Tariq. Scientific American. http://www.scientificamerican.com/article/spacex-falcon-9-rocket-explodes-on-launch-pad-in-florida/ (accessed Sep. 29, 2016).

[2] Pasztor, Andy. The Wall Street Journal. http://www.wsj.com/articles/spacex-rocket-test-hit-by-explosion-1472738051 (accessed Sep. 29, 2016).

[3] Chang, Kenneth et al. The New York Times Science. http://www.nytimes.com/2016/09/02/science/spacex-rocket-explosion.html?_r=0 (accessed Sep. 30, 2016).

[4] Letzter, Rafi. Business Insider: Science. http://www.businessinsider.com/spacex-falcon9-explosion-facebook-satellite-amos6-2016-9 (accessed Sep. 30, 2016).

[5] Cofield, Calla. Space. http://www.space.com/34029-elonmusk-seeks-help-solving-rocket-explosion.html (accessed Sep. 30, 2016).

[6] Mack, Erick. News Atlas. http://newatlas.com/spacex-falcon-9-explosion-tentative-launch-date/45704/ (accessed Oct. 1, 2016).

Choosing The Right Reality

By: William Bryk

The average teen in the United States spends 9 hours a day using technological media (1). That statistic might have been shocking 10 years ago, but nowadays we skim it and then quickly move on to the next trending article on BuzzFeed. The onslaught typically begins in the morning. You open your eyes, and you immediately feel the urge to check the phone that you checked just 8 hours before. It then continues into our classrooms, our dorm rooms, city streets, airport terminals, meals with family, and chats with friends. From the elderly glued to the TV to adults poking at smartphones to students juggling 20 open tabs to children forgoing friends for animated characters, the epidemic is rampant in our society. When Harvard’s Cambridge campus lost power in early October and students could not access their precious WiFi—that portal to the virtual realm—the ensuing chaos had students speaking in apocalyptic terms. The modern Homo sapiens is completely addicted to electronic media.

In the past few years, a new technology has emerged that threatens to escalate this obsession from a lifestyle to life itself. Currently, our smartphones, tablets, and TVs act as doorways from our physical reality with its atoms, colors, and senses to a simulated reality with its corresponding bits, pixels, and sensors. When watching a YouTube video on our smartphones, we stand in the multi-dimensional reality we were born into while peeking into a 2-dimensional fake reality on a screen. Virtual reality (VR) technology, as the name suggests, reverses this scenario. With a VR headset covering our eyes and ears, we are pinned in front of a 2-dimensional screen that tricks our mind into thinking we are peeking into a multi-dimensional reality. From the perspective of the headset wearer, it is a reality no less real than the physical world.

EXPLORING NEW REALITIES

Developments in the field of simulated worlds have branched into two separate technologies. The first is virtual reality, and the second is augmented reality, often referred to as mixed reality (MR). In virtual reality, the user wears a headset that presents a different computer-generated image to each eye, blocking out all external light. These images present a simulated scene from two slightly different angles to mimic the way our two eyes normally see objects. The images are wide enough to fill roughly 100 degrees of the user’s field of view; as a result, the user is almost completely immersed. As the user moves his or her head, the headset updates the two images to simulate what one would see at any given head angle (2). To film media content that is compatible with VR, you need a special camera that captures light from every direction in 3-dimensional space.
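As a rough sketch of the geometry involved (not any headset’s actual API), the snippet below derives two per-eye view matrices from a single tracked head pose, assuming a typical interpupillary distance of about 64 mm and, for simplicity, yaw-only head rotation.

```python
# A minimal sketch of how a VR renderer derives two per-eye views from one
# tracked head pose. The IPD value and yaw-only rotation are simplifying
# assumptions for illustration.
import numpy as np

IPD = 0.064  # meters; an assumed average interpupillary distance

def yaw_matrix(yaw_rad):
    """3x3 rotation about the vertical (y) axis for a given head yaw."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def eye_view_matrices(head_pos, yaw_rad, ipd=IPD):
    """Return 4x4 view matrices for the left and right eyes.

    Each eye sits half an IPD to the side of the head center, rotated by the
    current head yaw; the view matrix is the inverse of that eye's pose.
    """
    R = yaw_matrix(yaw_rad)
    views = []
    for side in (-1.0, +1.0):                      # left eye, then right eye
        offset = R @ np.array([side * ipd / 2.0, 0.0, 0.0])
        eye_pos = np.asarray(head_pos, dtype=float) + offset
        view = np.eye(4)
        view[:3, :3] = R.T                         # inverse rotation
        view[:3, 3] = -R.T @ eye_pos               # inverse translation
        views.append(view)
    return views

left, right = eye_view_matrices(head_pos=[0.0, 1.7, 0.0], yaw_rad=np.deg2rad(30))
print(np.round(left, 3))
print(np.round(right, 3))
```

Rendering the scene once with each matrix (plus a matching projection) yields the slightly offset image pair that the headset shows to the two eyes.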

Humans have long desired a device capable of visually transporting them to another world without actual movement. Early panoramic paintings in the 19th century and stereoscopic viewers like the 1930s View-Master both flirted with tricking the brain into seeing depth where there was none. But the virtual reality era really could not begin until computer technology enabled a visual experience vivid enough to trick the brain continuously. In the 1960s, as industrial computers became widespread, Ivan Sutherland built what is widely considered to be the first VR headset (3). Unfortunately, it was so heavy it needed to be hung from the ceiling, making it impractical as a consumer product. VR gained excitement in the 1990s when Nintendo and Sega introduced some of the first consumer VR headsets, but the lack of color and advanced graphics made for uncomfortable gameplay (4). It was not until Palmer Luckey hacked together a VR headset in his late teens, making use of recent advances in computational power and sensor accuracy, that the tremendous consumer potential of the technology was realized. In 2014, Facebook purchased Luckey’s company, Oculus VR, for $2 billion, and virtual reality was at once thrown into the limelight (5).

Luckey’s Oculus Rift headset and others like it—HTC Vive, Samsung’s Gear VR, PlayStation VR—have begun a revolution in the tech world. Their appeal is understandable. Instead of watching a YouTube video, you can be inside it. Instead of FaceTiming with your mom, you can have a virtual discussion in the same room (as long as you both have VR cameras). You can experience what it’s like to live on other planets, visit a glacier in southern Antarctica, and fly like a bird. With VR, you can become the most interesting person in the world in one reality despite being collapsed on a sofa with half-eaten chips lying across your stomach in another. Though there are some concerns with VR, such as headaches after long exposure, headsets will likely advance to mitigate these problems.

Flying like a bird is something I actually got the chance to try out at an exhibit called “Birdly” that premiered in Cambridge last December. You are strapped onto a wing-shaped structure with a fan blowing in your face. You put on a VR headset, and you are suddenly transported to the top of a skyscraper in NYC. Look to your right and left and you can see your two wings, with NYC in the background. As I jumped off the building and learned how to fly by flapping my wings (hands), the combined visual and sensory experience had me truly feeling like a bird, something humans have wondered about for eons. At some point during the experience, I forgot I was even a human. When they took the headset off, I felt puzzled, as though I had left my world and arrived in a different one, while all the audience saw was me putting on and then taking off a goofy-looking headset. The possibilities for VR are truly only limited by our ability to imagine and craft realities beyond our own experience.

MIXING IT UP

Mixed reality is a whole other beast. This technology is one that is not quite full virtual reality and not quite normal reality, hence the name. In mixed reality, rather than cover up your field of view with simulated images, your field of view is instead overlaid with holograms. Holograms have always been a favorite of science fiction, and have also been non-fiction for decades. What has recently caught the world by surprise, however, is the sudden leap in technology previewed in two mixed reality consumer products: Microsoft HoloLens and Magic Leap.

In both these products, the consumer will wear some sort of transparent headset, whether it is a full headset in the case of HoloLens or a currently unrevealed wearable in the case of Magic Leap. These products display detailed 3D virtual objects interacting within the user’s physical world. The objects do not simply appear without connection to the objects in the room. Incredibly, both technologies create a 3D map of the items in the user’s room, such as chairs, tables, and walls. The virtual objects then interact with those items as physical objects would. For example, you can swing around a holographic fly swatter with just your hands trying to smack a virtual spider that is crawling along the objects in your room! With HoloLens, the holographic objects are projected onto the transparent film in front of the user’s eyes. With Magic Leap, the holographic objects will be projected directly onto the user’s retina. Magic Leap claims to have achieved a breakthrough with their product that makes for a mixed reality greater than anything before it, but it is a secretive company that has yet to release a product demo beyond teaser trailers.

Watching video previews of these two technologies is all it takes to grasp how exotic this form of media will be. In a HoloLens demo, the user summons a hologram of a football game, with a 3-foot long field and little 4-inch people running around tackling each other (6). The user takes this simulated game and places it on his dining room table. He pokes and analyzes the game from all angles, creating a perspective of a football match unlike anything anyone anywhere has come close to experiencing. In a Magic Leap demo video, the viewer, presumably wearing the headset, sits in an auditorium full of middle school students. All of a sudden, a massive whale jumps out of the gym floor, splashes huge waves around the gym walls, does a little spin, and dives back into the floor (7). Had I not been aware that whales don’t normally spring out of gym floors, I would have believed that the whale was really there. That’s how real mixed reality can become, according to Magic Leap. As one Microsoft commercial puts it, when any reality we dream of can be projected onto the world, “you can change the world you see” (8).

THE NEXT FRONTIER

With technologies capable of presenting a whole new medium for creativity and design, it is not surprising that many companies are investing heavily in virtual and mixed reality. Facebook’s 2 billion dollar investment in Oculus Rift, Google’s investment of 540 million dollars in Magic Leap, and Microsoft’s massive investment in HoloLens development are just a few examples. TechCrunch forecasts that the MR/VR market could hit 150 billion dollars by 2020 (9). Many have compared the current state of virtual and mixed reality to the state of the smartphone market before the iPhone, expecting the technologies to explode in popularity once companies and consumers recognize the full range of applications.

But the societal effects of virtual and mixed reality could extend much farther than those of the smartphone. Gaming is the first industry to be disrupted. We no longer have to watch an animated character fire at our enemies in a Call of Duty battle; the user can now strap on a VR headset, take a gun controller, and physically join the fight. In the coming years, workers from all disciplines, including engineers, surgeons, designers, construction managers, artists, and educators, will likely follow the VR/MR trend. A whole generation will recognize the advantages of the technology, such as an MR engineering design that can be collectively viewed in a meeting or a VR perspective of ancient Rome for a fourth-grade classroom. In one HoloLens demo, an architectural designer walks into an empty, abandoned space needing renovation. She straps on a HoloLens and immediately imports complete holographic designs for the room, moving holographic chairs and tables around with a quick finger swipe (10).

These are some of the obvious ways VR and MR technologies will initially alter society. However, once these devices reach the hands of millions of people, there will undoubtedly be a slew of new applications, the effects of which cannot be predicted. This is exactly what happened after the first smartphones came onto the market. The full potential of smartphones was only realized once enough people owned them. It was this mass adoption that enabled a wave of innovations that completely transformed the flow and power of information. The very same thing could happen when VR and MR are adopted worldwide, except that the flow will consist not of information and apps, but of vivid experiences and perspectives.

Of course, putting a smartphone in the hands of each person did not come without a social cost. Even though these devices give us a great deal of power, they have also turned us, to some extent, into technological zombies who have far less time for basic human behavior such as face-to-face contact and self-reflection. And if teens already spend 9 hours a day on technological media, what will they do with a far more compelling VR or MR product that becomes widely adopted? Given how exciting these new devices will be, it is not difficult to imagine that much of the public could wear these headsets the entire day. Today’s conversations are often interrupted by the buzzing of a smartphone, but in a few years our conversations might be overloaded with interruptions and distractions from the lens of our headset. While some might view wearing a Magic Leap-type device daily as a ticket to the sci-fi future we’ve been waiting for, others see humanity on course for a meaningless technological wasteland.

AN EXISTENTIAL CHOICE

In the future, we might become so addicted to virtual reality that it will actually replace reality. VR technology could advance to the point that living out our lives in the virtual world would seem superior to living in our normal reality. In fact, this idea is one proposed solution to the Fermi Paradox. The Fermi Paradox asks why there is no evidence of advanced extraterrestrial civilizations colonizing the universe if there are trillions upon trillions of planets, many of which could have evolved intelligent life. One somewhat frightening solution is that when a civilization becomes advanced enough to simulate virtual realities, it chooses to plug into the simulated reality forever. These hypothetical extraterrestrials choose to forgo the reality they were born into, with all its constraining physical laws and inconvenient stellar distances, for a better one in which they can craft their own physical laws and superpowers. It seems our species might have to make a similar choice pretty soon.

Our relationship with technology has been progressing toward this moment for a century, making us more and more connected to our devices. First came the television, then the personal computer, then the smartphone, and now MR and VR. With our technology for simulating reality continuously marching forward, we could at some point have available to us any reality we can dream of at the press of a button. The question is: should we press it?

Will Bryk ‘19 is a sophomore in Leverett House.

WORKS CITED

[1] Wallace, K. Teens Spend 9 Hours a Day Using Media, Report Says. CNN [Online], Nov. 3, 2015. http://www.cnn.com/2015/11/03/health/teenstweens-media-screen-use-report/ (accessed Oct. 17, 2016).

[2] Charara, S. Explained: How Does VR Actually Work? Wareable [Online], Oct. 5, 2016. https://www.wareable.com/vr/how-does-vr-work-explained (accessed Oct. 18, 2016).

[3] History Of Virtual Reality. Virtual Reality Society [Online], Jan. 10, 2016. http://www.vrs.org.uk/virtual-reality/history.html (accessed Oct. 21, 2016).

[4] Brown, L. A Brief History of Virtual Reality. Wondershare [Online], Sept. 30, 2016. http://filmora.wondershare.com/virtual-reality/history-of-vr.html (accessed Oct. 19, 2016).

[5] Kovach, S. Facebook Buys Oculus VR For $2 Billion. Business Insider [Online], Mar. 25, 2014. http://www.businessinsider.com/facebook-tobuy-oculus-rift-for-2-billion-2014-3 (accessed Oct. 20, 2016).

[6] WMExperts. Microsoft HoloLens and the NFL Look into the Future of Football. YouTube [Online], Feb. 2, 2016. https://www.youtu.be/HvYj3_VmW6I (accessed Oct. 19, 2016).

[7] Tusa Tuc. Magic Leap Create New Incredible Hologram. WOW! Augmented Reality in HD. YouTube [Online], Oct. 26, 2015. https://www.youtu.be/vZRFcGrrsyc (accessed Oct. 19, 2016).

[8] Microsoft. Microsoft HoloLens – Transform Your World with Holograms. YouTube [Online], Jan. 21, 2015. https://www.youtu.be/aThCr0PsyuA (accessed Oct. 19, 2016).

[9] Merel, T. Augmented And Virtual Reality To Hit $150 Billion, Disrupting Mobile By 2020. TechCrunch [Online], April 6, 2015. https://techcrunch.com/2015/04/06/augmented-andvirtual-reality-to-hit-150-billionby-2020/ (accessed Oct. 20, 2016).

[10] WindowsVideos. Windows Holographic: Enabling a World of Mixed Reality (Narrated). YouTube [Online], June 1, 2016. https://www.youtu.be/2MqGrF6JaOM (accessed Oct. 19, 2016).

[11] McCormick, R. Odds Are We’re Living in a Simulation, Says Elon Musk. The Verge [Online], June 2, 2016. http://www.theverge.com/2016/6/2/11837874/elon-musksays-odds-living-in-simulation (accessed Oct. 20, 2016).

Waving Goodbye to the World’s Water and Energy Woes with Tidal Power and Desalination

By: Kristine Falck

The state of the globe today puts the world's future in question. We have a burgeoning population approaching 7.4 billion people, an 80% reliance on fossil fuels, and an imminent fresh-water shortage. Current estimates predict that by 2035 world energy consumption will increase by 50% and world water consumption by over 85% (1). Also concerning is the fact that over 90% of current energy-producing methods are classified as water-intense (1). This is alarming, especially in light of the highly interdependent relationship between water and energy. In other words, these intertwined challenges are steering the world toward crisis. There is an immediate need for a shift toward renewable energy, along with a feasible solution to looming water scarcity.

With both an impending fresh-water shortage and an energy availability predicament, harnessing the ocean's tidal power as a source of both electricity and water for desalination could prove to be of immense value in the near future. With thousands of miles of shoreline around the world constantly exposed to unharnessed wave energy, the ocean is an untapped source of clean energy. The total estimated ocean power is about 10,480,000 MW, yet only a tiny fraction of it is currently used: about 8,000 MW, or less than 0.1%, mainly in France, Canada, Australia, and the US (2). Capturing this tidal energy could significantly augment global energy production and diminish reliance on harmful fossil fuels. And since some 2.8 billion people worldwide already suffer from water scarcity, a number expected to grow to 3.5 billion in the next decade, producing clean water is a problem the world must address without delay (1).

THE FOSSIL FUEL STATUS QUO

Not only are fossil fuels driving climate change, but they are also the largest consumers in the fresh-water sector. Currently, the US derives 90% of its electricity from thermoelectric power generation plants (1). These in turn account for a staggering 45% of the country's total water withdrawals, including freshwater sources like lakes and rivers and saline sources such as oceans and estuaries (3). The majority of these plants rely on what is called "once-through" cooling technology: millions of gallons of water are withdrawn daily, then dumped back at a higher temperature into whatever body of water they came from. In addition, producing and refining oil requires enormous amounts of water; it is estimated that the US withdraws 2 billion gallons of water each day to refine nearly 800 million gallons of petroleum products like gasoline (3). Unfortunately, corn-based ethanol, touted as an eco-friendly alternative fuel, is not water-friendly either: producing a single gallon of ethanol is attributed 324 gallons of water use, far more than gasoline, which requires 3-6 gallons per gallon (4). In essence, the production of both clean water and energy is an increasingly urgent problem.
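To put the cited figures on a common footing, here is a minimal back-of-the-envelope sketch in Python; the script and its variable names are illustrative and are not drawn from the cited reports, which supply only the raw numbers.

```python
# Back-of-the-envelope comparison of water intensity per gallon of fuel,
# using only the figures quoted above (sources 3 and 4). Variable names
# and the calculation framing are illustrative, not from the cited reports.

DAILY_REFINERY_WATER_GAL = 2e9        # water withdrawn per day for refining (3)
DAILY_PETROLEUM_PRODUCTS_GAL = 8e8    # petroleum products refined per day (3)
ETHANOL_WATER_PER_GAL = 324           # gallons of water per gallon of ethanol (4)
GASOLINE_WATER_PER_GAL = (3, 6)       # gallons of water per gallon of gasoline (4)

# Implied refining water intensity: ~2.5 gallons of water per gallon of product
refining_intensity = DAILY_REFINERY_WATER_GAL / DAILY_PETROLEUM_PRODUCTS_GAL

print(f"Refining: ~{refining_intensity:.1f} gal water per gal of petroleum product")
print(f"Gasoline: {GASOLINE_WATER_PER_GAL[0]}-{GASOLINE_WATER_PER_GAL[1]} gal water per gal")
print(f"Corn ethanol: {ETHANOL_WATER_PER_GAL} gal water per gal "
      f"(~{ETHANOL_WATER_PER_GAL / GASOLINE_WATER_PER_GAL[1]:.0f}x gasoline's upper estimate)")
```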

This heavy water use is particularly concerning because fresh water comprises a mere 2% of the world's water supply. The concern has grown because of recent high-profile droughts: California experienced its fifth straight drought year, and even British Columbia, Canada, a region blessed with a natural abundance of fresh water, was confronted with a stage 3 drought, drawing attention to the severity of the issue at hand. What can we do to save our dwindling fresh-water supply?

TIDAL POWER

Capturing the ocean's tidal energy could significantly reduce our reliance on harmful fossil fuels, yet the technology is currently in use only in Perth, Australia. It is, however, capable of being installed along other highly turbulent tidal coastlines such as Alaska, Washington, California, British Columbia, Scotland, and the Chilean coast (5). The ocean is at our disposal as an untapped source of energy and will serve as an important component of our strategy for addressing the global energy crunch.

DESALINATION

Since 98% of the world's water supply is saltwater, the desalination of seawater will clearly become instrumental in fulfilling the world's water needs (9). As John Lienhard, director of the Center for Clean Water and Clean Energy at MIT, put it, "As coastal cities grow, the value of seawater desalination is going to increase rapidly, and we will see widespread adoption" (6).

DUAL PURPOSE PLANTS

Combining desalination and tidal power addresses both problems at once, and, as with everything else, it comes down to economics. Developing a single integrated facility that houses both a desalination plant and an energy generation plant could yield significant monetary benefits through integrated planning, shared and optimized infrastructure, and lower capital costs. By using the highly pressurized water already present in tidal energy plants to desalinate water through reverse osmosis, the cost of desalination would drop significantly, since nearly 60% of desalination's high operating costs come from creating that highly pressurized water (7).
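As a rough illustration of the economics, the sketch below applies the cited 60% share to a hypothetical baseline operating cost; the $1.00-per-cubic-meter baseline is an assumption made up for illustration, not a figure from the article or its sources.

```python
# Illustrative only: if ~60% of reverse-osmosis operating cost comes from
# pressurizing the feed water (7), and a tidal plant supplies that pressure
# "for free," the operating cost could fall by up to that share.
# The $1.00/m^3 baseline below is a hypothetical placeholder, not a cited figure.

PRESSURIZATION_SHARE = 0.60          # share of RO operating cost from pressurization (7)
baseline_cost_per_m3 = 1.00          # hypothetical baseline operating cost, $/m^3

integrated_cost_per_m3 = baseline_cost_per_m3 * (1 - PRESSURIZATION_SHARE)
print(f"Standalone RO (hypothetical): ${baseline_cost_per_m3:.2f}/m^3")
print(f"Tidal-integrated RO (upper-bound saving): ${integrated_cost_per_m3:.2f}/m^3")
```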

Ocean power generators, which capture the energy of waves and tidal motion, have the potential to produce a significant amount of electricity. Carnegie Wave Energy in Australia has designed a technology called CETO that is able to capture nearly 70% of transmitted tidal energy (7); a coal power plant, by comparison, is a mere 42% efficient (7). These combined plants will therefore also be extremely efficient in their energy conversion, another benefit. The leading-edge 3 MW CETO 5 technology employs three 36-foot-wide steel buoys tethered to the ocean floor (7). When a wave crashes and the buoys bob, seabed pumps are activated and water is thrust at high pressure through a subterranean pipe to a power station on land (8). The surging water spins hydroelectric turbines that drive a generator, and because the water arrives at such high pressure, it can then be desalinated by reverse osmosis, passing through membranes after leaving the generator (8). The system converts wave energy to electrical energy at over 50 percent efficiency and is capable of desalinating 50 billion liters of water each year. Other benefits of CETO's technology include the fact that the buoys can power themselves at remote locations and support a flexible suite of sensors and equipment for off-grid maritime security and monitoring (7).
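To make the conversion chain concrete, here is a minimal sketch that strings together the efficiency and output figures quoted above; the 6 MW of incident wave power is a hypothetical input chosen for illustration, not a CETO specification.

```python
# A rough sketch (not Carnegie's actual model) of the CETO-style conversion
# chain described above. The 50% wave-to-electricity efficiency, 42% coal
# comparison, and 50 billion liters/year figures come from the article;
# the incident wave power is a hypothetical input for illustration.

WAVE_TO_WIRE_EFFICIENCY = 0.50      # >50% of wave energy becomes electricity (article figure)
COAL_PLANT_EFFICIENCY = 0.42        # for comparison (article figure)
ANNUAL_DESALINATED_LITERS = 50e9    # 50 billion liters per year (article figure)

def electrical_output_mw(incident_wave_power_mw):
    """Electricity delivered for a given incident wave power, assuming the
    ~50% wave-to-electricity conversion quoted in the article."""
    return incident_wave_power_mw * WAVE_TO_WIRE_EFFICIENCY

hypothetical_wave_power_mw = 6.0    # assumed incident resource, not a CETO spec
output_mw = electrical_output_mw(hypothetical_wave_power_mw)

print(f"Electrical output: {output_mw:.1f} MW")
print(f"Coal fuel power needed for the same output: {output_mw / COAL_PLANT_EFFICIENCY:.1f} MW")
print(f"Desalinated water: {ANNUAL_DESALINATED_LITERS / 365:,.0f} liters per day")
```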

Carnegie Wave Energy's CETO 5 technology is a leader among combined tidal-desalination facilities, and it is currently operating in Perth, Australia (11). Dual-purpose facilities like this, which use tidal power to generate electricity as well as desalinate water, are poised to become important infrastructure. By converting ocean wave energy into zero-emission electricity and desalinated water, such integrated plants could become an economical solution to the world's imminent water and renewable-energy crises.

Kristine Falck ’20 is a freshman in Straus Hall.

WORKS CITED

[1] United Nations Department of Economic and Social Affairs. http://www.un.org/waterforlifedecade/water_and_energy.shtml (accessed Oct. 17, 2016).

[2] United States Geological Survey. http://water.usgs.gov/watuse/wupt.html (accessed Oct. 17, 2016).

[3] James E. McMahon and Sarah K. Price. Water and Energy Interactions. https://publications.lbl.gov/islandora/object/ir%3A158840/datastream/PDF/view (accessed Oct. 17, 2016).

[4] Environmental Protection Agency. https://www.epa.gov/sites/production/files/2015-08/documents/420r10006.pdf (accessed Oct. 17, 2016).

[5] Harnessing the Power of Waves – Hemispheres Inflight Magazine. (2015). http://www.hemispheresmagazine.com/2015/06/01/harnessing-the-power-of-waves/ (accessed Oct. 17, 2016).

[6] California Turns to the Pacific Ocean for Water – MIT Technology Review. (2014). http://www.technologyreview.com/featuredstory/533446/desalination-out-of-desperation/ (accessed Oct. 17, 2016).

[7] Carnegie Wave Energy – CETO Overview. http://www.carnegiewave.com/ceto-technology/ceto-overview.html (accessed Oct. 17, 2016).

[8] Carnegie's CETO 5 Operational. http://www.wavehub.co.uk/latest-news/carnegies-ceto-5-operational (accessed Oct. 17, 2016).

[9] Desalination Using Renewable Energy Sources. http://www.sciencedirect.com/science/article/pii/0011916494000980 (accessed Aug. 2, 2015).

[10] How The Power Of Ocean Waves Could Yield Freshwater With Zero Carbon Emissions. (2013). http://thinkprogress.org/climate/2013/08/30/2554091/ocean-waves-freshwater/ (accessed Oct. 17, 2016).

[11] Wave-Powered Desalination Pump Permitted in Gulf – CNET. http://www.cnet.com/news/wave-powered-desalination-pump-permitted-in-gulf/ (accessed Oct. 17, 2016).

Perspectives On Artificial Intelligence

By: Eric Sun

Artificial intelligence (AI) is a hot commodity in the modern world. Machines are now capable of reading and transcribing books, recognizing speech, analyzing big data, playing chess and Go at superhuman levels, and identifying objects through computer vision. Corporate giants like Google, Intel, and Amazon have poured hundreds of millions of dollars into AI research. Research centers and universities have made their own contributions to developing AI. However, some concerns still loom large: Is AI ethical? What are the dangers associated with “intelligent” machines? What kind of trajectory is the research following? I discuss these concerns alongside general artificial intelligence with two members of the Harvard University faculty: Dr. Venkatesh Murthy, a neuroscientist, and Dr. Barbara Grosz, a computer scientist.

Dr. Venkatesh Murthy is a professor of Molecular and Cellular Biology at Harvard. He specializes in neuroscience with an emphasis on information processing and adaptation in neural circuits. He has made significant contributions to the understanding of neural pathways in the olfactory system. Murthy is also interested in artificial intelligence research and teaches a freshman seminar on artificial and natural intelligence.

Dr. Barbara Grosz is the Higgins Professor of Natural Sciences at Harvard. She has made significant contributions to research in natural language processing and multi-agent systems and is currently conducting research in teamwork and collaboration. She teaches the course Intelligent Systems: Design and Ethical Challenges.

What was your path to academia like?

MURTHY: I started out as an engineering undergraduate student in India, but became interested in combining physical sciences and biology for my graduate work in the US. I learned about neural network research and AI during the end of my master’s degree in bioengineering and decided to pursue a PhD in neuroscience with some tangential work on neural networks. For my postdoctoral and early faculty work, I ended up doing purely experimental neuroscience, but recently I’ve rekindled my interest in computational neuroscience.

GROSZ: It was an unusual path. I started out as an undergraduate studying mathematics: there were no undergraduate computer science majors. I was, though, able to take a few courses in computer science, and then went to graduate school in computer science. In graduate school, I focused initially on numerical analysis and then I did some work in theoretical computer science.

Then, thanks to a part-time job at Xerox PARC and a conversation with Alan Kay, I began working in natural-language processing.

At the time, there were many people working on syntax and some on semantics. I was young, brave, and foolish and took on the challenge of building the first computational model of discourse as part of a speech processing project at SRI International. Later, I co-founded the Center on the Study of Language and Information at Stanford, and subsequently went to an academic position at Harvard. I always tell undergraduate students that you don’t need to know what you want to do in your first years of college, but can change paths many times.

What are your research interests?

MURTHY: I work in neuroscience and have an interest in artificial intelligence, but I’m not sure if I would pursue the mathematical, theoretical research in artificial intelligence. But I am very interested in understanding neural pathways in animals through neural networks and seeing if any similarities [to AI systems] exist.

GROSZ: Currently, my research focuses on modeling teamwork and collaboration in computer systems. Prior to this, I worked [for] many years on problems in dialogue processing. Dialogue participation requires more than simply understanding words and sentences; you need to understand the purposes and intentions of the people communicating with one another. This insight led to my working on modeling teamwork and collaboration. In the late 1980s and 1990s, I developed, with some colleagues, the first computational model of collaborative planning. While dialogue is between two people, teamwork often involves a larger number of people. It is much simpler to model the plans of an individual than to model the plans of a group, because teamwork and collaboration are not simply the sum of individual plans. The challenges teamwork raises include such questions as, what information has to be shared for teamwork to succeed? What compels individuals to participate in teamwork? How do you know what each component part is doing? Now, the focus of my research is on using these models of teamwork to build computer systems that improve healthcare coordination, which requires handling what we call loosely coupled teamwork. This work can be applied generally to many situations, including physician-physician networks.

How did you become interested in AI?

MURTHY: As I said, I was studying engineering but, for me, it was not very interesting… I read the books [on artificial intelligence] that everyone reads—Minsky’s Society of Mind and a few others. They presented lots of different ideas about artificial intelligence that were fascinating at that time.

GROSZ: When I was looking for a thesis topic, Alan Kay, the person who invented Smalltalk, suggested that I [try] taking a children's story and retelling it from the viewpoint of a side character. It turns out that this is a very difficult task. Instead, I did some work with Alan Kay on Smalltalk and then went to SRI International to work on speech understanding research in the 1970s. For years, I worked on dialogue research, and from there I got interested in collaboration and teamwork, which is a subject that I have pursued for some time now.

What is AI?

MURTHY: That is a very good question. For me, AI has a very different meaning than it might have for some others. I feel that if you have one machine that is very good at one intelligent task, then that is artificial intelligence. For example, if a program can recognize faces as well as or better than humans, then that would be artificial intelligence. Or if it could predict behaviors, or if it could discern sounds… Artificial intelligence can just be one very intelligent feature instead of a unit with lots of different complex functions.

GROSZ: AI was, until very recently, purely an academic field of study with two complementary goals. People in the field generally chose one of two kinds of goals: using AI and scientific approaches to model intelligent behavior in humans, or developing methods to make computer systems more "intelligent." Recently, the field has evolved beyond a purely academic discipline, and we are beginning to see more and more real-world applications through corporate efforts. It is no longer purely academic, which is great.

Which aspects of AI do you find to be the most promising?

MURTHY: I find the application of artificial intelligence to predicting behaviors in consumer markets very fascinating. Companies like Facebook, Google, and Amazon have amazing capabilities in predicting what a person may be interested in next… Even the behavior of a quirky person, or at least a person who thinks that they are quirky, may be predicted by artificial intelligence sometime in the future. I feel that this is a real possibility.

GROSZ: I think that almost every aspect of AI has promise—promise for changing our lives, promise for affecting what we can do in the world. This ranges all the way from sensory processing—sensing for security, robotics, helping people with hearing disabilities — to complex tasks. Language processing has the potential to help people around the world to communicate through translation, and enables increasing web access. Multi-agent systems methods support teamwork and help people in many ways, from planning and plan recognition to coaching victims of stroke and protecting wildlife areas from poaching. Machine learning is currently big in the news with advances especially in the area of sensory processing. It’s an exciting time for AI.

What are the most important challenges that face AI today?

MURTHY: Right now, we have a lot of data and most people are interested in performing some form of statistical or mathematical analysis of the data—clustering, finding patterns. I’m not sure if we will be able to evolve from that framework… We might become very good at analyzing data but it may take a very long time before we are able to create real intelligence that understands the underlying causes of the observed data.

GROSZ: Two major challenges. One is how to incorporate artificial intelligence into computer systems in a way that they work well with people. Contrary to science fiction, where computer systems are portrayed as exact replicas of human intelligence, machines don't need to replicate human intelligence exactly; they can complement it. We need to assess where human intelligence falls short and develop computer systems that fill in those gaps and work closely with humans. The other challenge is to build systems that are capable of explaining their decisions to humans. This is not a simple challenge. For example, it is [difficult] for systems using deep neural networks to explain their results. These issues will need to be addressed in the future.

Where do you envision AI to be in 20 years?

MURTHY: Predictions are always very difficult to make and not very accurate. I would imagine that much would be the same as today but more advanced… Machine learning would still be used in research and there would likely be more advanced pattern recognition in artificial intelligence for recognizing items (visual, auditory or other) and predicting consumer behavior.

GROSZ: There is a project at Stanford called the One Hundred Year Study on Artificial Intelligence, colloquially known as AI100, which started about two years ago. Every five years, the Standing Committee for this project, which I currently chair, brings together a group of 15-20 experts in AI along with people who are social scientists or policymakers to assess the field (in light of prior reports) and predict where the field will be in 10-15 years. Our first study panel just issued a report on AI and life in 2030, which I highly recommend. (Access this report at https://ai100.stanford.edu/).

Any closing remarks?

MURTHY: Artificial intelligence is a very interesting field… I would recommend that more people become involved or learn more about AI because it is one of the technologies that is driving our future.

GROSZ: I teach a course about intelligent systems, design, and ethical challenges. I think that anyone interested in artificial intelligence or design should also be aware of the limitations of the technology and the ethical questions and design challenges that these limitations lead to.

Eric Sun ‘20 is a freshman in Hollis Hall.

Hyperthermophiles and Cryophiles: The World’s Most Extreme Organisms

By: Priya Amin

Imagine diving into the Gulf of California and reaching a 120 °C hydrothermal vent located deep on the seafloor. Rich in hot hydrogen and carbon dioxide gas, these vents seem to spell death for any creature that dares to swim by. But if you look closely, you’ll notice an organism that not only survives but also thrives in this seemingly toxic environment. How could anything live in such adverse conditions?

HYPERTHERMOPHILES: DON’T SWEAT THE HEAT

Meet Methanopyrus kandleri, Earth’s record-holder for hot temperature growth. Capable of reproducing at 122 °C, M. kandleri is a hyperthermophile, an organism that likes intense heat (1). Hyperthermophiles comprise some of the world’s most extreme life forms and were only discovered a few decades ago in 1965 when Thomas D. Brock isolated them from hot springs at Yellowstone National Park (2). Since then, we’ve been able to learn much more about these resilient organisms.

How have hyperthermophiles adapted to live in such extreme conditions? Normally, under extraordinary heat, a cell membrane disintegrates, allowing toxic chemicals into the cell. Hyperthermophiles combat this by lining their membranes with high levels of saturated fatty acids (3). This type of structure is quite strong and stable, and it helps the cell stay intact. Another issue is that at such high temperatures, ordinary proteins denature, lose their shape, and cease to function. Hyperthermophiles have evolved hyperthermostable proteins that are compact and wound up in spiral-like helices (4). These proteins maintain their structure and function even in harsh environments. In fact, they use the high temperature to their advantage: the abundant heat energy makes chemical reactions proceed faster than usual, spurring on the processes that allow cells to proliferate and grow.

To survive, M. kandleri must use unique metabolic pathways tailored to the molecules in its environment. Interestingly, in addition to being a hyperthermophile, M. kandleri is also a methanogen, which means it gets its energy by producing methane in environments where oxygen is absent (5). Its metabolic process looks a little like this: CO2 + 4 H2 → CH4 + 2 H2O. M. kandleri is remarkably resourceful: it consumes hydrogen and carbon dioxide, making it a perfect match for deep-sea hydrothermal vents, and the process is anaerobic, requiring no oxygen at all. The result is energy in the form of ATP, a special molecule that can later be used to fuel the cell's basic functions (5). Most hyperthermophiles have similar chemical reactions that use carbon dioxide, iron, or sulfur to produce energy anaerobically. This allows them to live in a vast array of hot environments, such as deep-sea vents, hot springs, and terrestrial volcanoes.
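For readers who want to check the arithmetic, the short script below verifies that the methanogenesis reaction quoted above is atomically balanced; it is an illustrative exercise of ours, not part of the cited work.

```python
# A quick, illustrative check (ours, not from the article) that the
# methanogenesis reaction quoted above, CO2 + 4 H2 -> CH4 + 2 H2O,
# is atomically balanced.
from collections import Counter

def atoms(molecule, coefficient):
    """Scale a molecule's atom counts by its stoichiometric coefficient."""
    return Counter({atom: n * coefficient for atom, n in molecule.items()})

CO2 = {"C": 1, "O": 2}
H2 = {"H": 2}
CH4 = {"C": 1, "H": 4}
H2O = {"H": 2, "O": 1}

reactants = atoms(CO2, 1) + atoms(H2, 4)   # C:1, O:2, H:8
products = atoms(CH4, 1) + atoms(H2O, 2)   # C:1, H:8, O:2

print("Reactants:", dict(reactants))
print("Products: ", dict(products))
print("Balanced? ", reactants == products)  # True
```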

M. kandleri’s habitat is characteristic of Earth’s early conditions. Therefore, many scientists claim that the Last Universal Common Ancestor (LUCA), the most recent ancestor of all organisms on Earth, was closely related to M. kandleri (6). Extraordinarily, by studying hyperthermophiles, we’ve been able to open a window into our evolutionary past.

CRYOPHILES: PLAYING IN THE COLD

Now, after taking a dive to visit the scorching hot hydrothermal vents in the Gulf of California, why don't we cool down a bit? Let's travel roughly 3,800 miles to Ellesmere Island, in the Canadian High Arctic, to meet another extreme organism, Planococcus halocryophilus.

If you took a step into the permafrost on Ellesmere Island, you would probably get frostbite pretty quickly. The permafrost contains a great deal of salt, which keeps pockets of briny water from freezing solid even at temperatures as cold as -25 °C. This is the environment in which the cold-temperature growth record holder, P. halocryophilus, lives. It can reproduce at -16 °C, and at -25 °C it is still able to remain active (7). Discovered nearly a century before hyperthermophiles, cryophiles were first described by J. Forster in 1887, after he examined a sample of cold-preserved fish (8).

To adapt to below-freezing temperatures, cryophiles have developed unique structures and molecules. Unlike hyperthermophiles, cryophiles have high levels of polyunsaturated fatty acids, which form malleable, fluid structures that keep their cell membranes from freezing (9). They also have two special types of cold-active proteins: cold shock proteins and antifreeze proteins. Cold shock proteins 'turn on' once the temperature falls below a certain threshold. Because they are built for flexibility, they can help other proteins, such as those needed for DNA replication, function at less-than-ideal temperatures (9). Antifreeze proteins help the cell avoid the harmful effects of freezing and thawing: the cell releases them into its surroundings, where they effectively lower the freezing point of water (5). In this way, cryophiles manipulate the environment around them to survive.

While the exact metabolic pathway for P. halocryophilus is still being researched, it likely derives ATP energy from molecules in its environment in a process similar to that of M. kandleri. Given the low temperatures and thus low amount of energy in the environment, ATP’s role as an energy source becomes even more important than usual. Incredibly, the microbe can still synthesize and break down the molecules it needs at temperatures as low as -25 °C!

This cold-temperature organism offers insights into the possibility of similar microbial life elsewhere in our solar system. For instance, cryophiles are able to live in between ice crystals on Earth; similar environments are found on other celestial bodies, such as Mars or Saturn's moon Enceladus (10). Scientists are currently racing to discover the secrets these organisms hold about our world and the universe.

EXTREMOPHILES EVERYWHERE

M. kandleri and P. halocryophilus are just two examples of countless extremophiles. Thriving in harsh, adverse environments, extremophiles are like the daredevils of biology—wherever life has an opportunity to develop, extremophiles make a home. This group of organisms consists of life from two broad classifications: Archaea, like M. kandleri, and Bacteria, like P. halocryophilus. Extremophilic creatures from these domains can live in seemingly impossible environments, such as acidic pools, salty lakes, freezing brine water, or extremely hot hydrothermal vents! What makes archaea and bacteria best suited to handle extreme conditions?

The answer is quite simple: they are single-celled organisms. Archaea and Bacteria are the two great branches of life that encompass all prokaryotes. (Eukarya is the third domain, consisting of organisms such as plants, animals, and fungi.) When you think of single-celled prokaryotes, which do not have a nucleus, you might only picture a bacterium, like the E. coli that live in your gut. Scientists used to think the same way, classifying all prokaryotic organisms as bacteria. That changed in 1977, when RNA analysis revealed that many single-celled organisms actually belong to a separate domain, Archaea (11).

But what does being a prokaryote have to do with being more likely to be an extremophile? How does being single-celled allow an organism to survive harsh environments? The first reason is that these organisms reproduce much more quickly than complex, multicellular organisms such as plants and animals; they are time- and energy-efficient. This proves especially advantageous for hyperthermophiles, which need to proliferate rapidly across a large surface to capture nutrients from a deep-sea vent before it collapses. Another reason is that archaea and bacteria do not have a membrane-bound nucleus, which means they can replicate DNA and build proteins in less time. In cold environments, for example, a cell can efficiently make the antifreeze proteins that are crucial to the survival of cryophiles. It is also important to note that the ability to reproduce quickly creates opportunities for rapid adaptation to environmental change.

By studying these amazingly simple yet resourceful organisms, we have uncovered some of the most disparate forms of life. Hyperthermophiles withstand the blistering heat of hydrothermal vents, environments similar to those in which life first developed on Earth. Cryophiles survive the freezing cold of polar permafrost, environments similar to those found elsewhere in our solar system. We have much to learn from extremophiles, both as we seek to understand our evolutionary past and as we look to the future for life beyond Earth.

Priya Amin ’19 is a sophomore in Pforzheimer House concentrating in Integrative Biology.

WORKS CITED

[1] Morris, J. et al. Biology: How Life Works, 2nd ed.; Macmillan Learning: New York, 2016; p 545.

[2] Stetter, K. Extremophiles. 2006, 10, 357-62.

[3] Carablleira, N. J. Bacteriol. 1997, 179, 2766-768.

[4] Sterner, R.; Liebl, W. Crit. Rev. Biochem. Mol. Biol. 2001, 36, 39-106.

[5] Methane-Producing Archaea: Methanogens. Boundless Microbiology [Online], May 26, 2016. https://www.boundless.com/microbiology/textbooks/boundless-microbiology-textbook/microbial-evolution-phylogeny-and-diversity-8/euryarchaeota-111/methane-producing-archaea-methanogens-576-10785/ (accessed Oct. 10, 2016).

[6] Yu, Z. et al. J. Mol. Evol. 2009, 69, 386-394.

[7] Zimmer, C. Comfortable in the Cold: Life Below Freezing in an Antarctic Lake. NOVA Next [Online], June 11, 2013. http://www.pbs.org/wgbh/nova/next/nature/seeking-psychrophiles-in-antarctica/ (accessed Oct. 10, 2016).

[8] Ingraham, J. L. J. Bacteriol. 1958, 76, 75-80.

[9] Darling, D. Psychrophile. Encyclopedia of Science: The Worlds of David Darling [Online]. http://www.daviddarling.info/encyclopedia/P/psychrophile.html (accessed Oct. 13, 2016).

[10] Bacterium Planococcus Halocryophilus Offers Clues about Microbial Life on Enceladus, Mars. Science News [Online], May 27, 2013. http://www.sci-news.com/space/article01105-planococcus-halocryophilus-bacterium.html (accessed Oct. 10, 2016).

[11] Archaea. New Mexico Museum of Natural History and Science. http://treeoflife.nmnaturalhistory.org/archaea.html (accessed Oct. 9, 2016).