Traditional Medicine: A Historical Lens to Advance the Future

By: Helen Beilinson

Originally printed in Distilled

It’s an odd sensation, having honey in your nose. You lie still, waiting, staring at the ceiling while the viscous sweetness makes its way down your nasal cavity. It isn’t painful, but the strange feeling is a tad discomforting. Endure it, though, and ten minutes later the runny nose you’ve been battling for days has vanished.

Growing up, I dreaded getting sick because a runny nose meant enduring those several minutes of freshly-strained honey making its way to the back of my throat. As an adult, however, I find myself returning to this supine position every time I have a stuffy nose. Whether it is truly an ancient Russian medicinal treatment or a family remedy, I cannot say, but it is the best cure for a runny nose I have ever encountered. And, much to my mother’s delight, whenever I find myself staring at the ceiling, I remember her advice to never forget “the treatments of the days of yore, because if they worked for centuries, why wouldn’t they work today?”

Outside of my family’s runny nose prescription, honey is a product with a long history in medicine. In addition to being a popular food among humans for millennia (the earliest evidence of humans collecting honey is some 8,000 years old), honey was used extensively in ancient Egypt, China, India, Greece, and a variety of Islamic countries as a potent antibiotic, wound healer, and preservative. In the last few decades, these ancient medical anecdotes about the power of honey have been substantiated. With regard to honey’s potent bactericidal activity, it has been shown to protect against nearly sixty species of infectious bacteria, including antibiotic-resistant strains. Typical antimicrobials target specific bacterial proteins to destroy the bacteria’s cellular barriers or to inhibit necessary metabolic pathways. To keep antibiotics from annihilating complete bacterial populations, bacteria evolve modifications in their genomes that render the antibiotics ineffective. Remarkably, studies aimed at determining whether bacteria can develop resistance to honey have found none.

Honey is not the only natural product that has regained contemporary medical glory. The medical use of turmeric, a golden plant related to ginger, dates back nearly 4000 years. It was historically used in South Asia and the Caribbean to treat a variety of conditions, including pain, fatigue, breathing problems, food poisoning, and a wide range of infections and inflammations. Basic and clinical research has shown that turmeric is highly active in relieving pain, slowing the progression of cancers, promoting wound healing, minimizing inflammation, and aiding cardiovascular performance, among other physiological effects. Ancient Egyptian doctors gave poppy seeds to patients as a means of pain relief. Poppy seeds contain small quantities of both morphine and codeine, and both of these ingredients are still actively used as pain-relieving drugs today. Another notable historical remedy is the fecal microbiota transplant. Although this treatment has gained clinical popularity in the last decade, its initial use was in the 4th century in ancient China. In the past, it was used extensively to combat diarrhea, intestinal infections, and other bowel syndromes. Today, it is used predominantly to combat intestinal infections with Clostridium difficile, an opportunistic pathogen predominantly seen in hospitalized immunosuppressed patients.

Historically, natural products, or products synthesized by living organisms, have been used to cure many diseases and illnesses. The earliest evidence of the use of natural products for medicinal purposes dates back to 2600 BC Mesopotamia. On clay tablets in cuneiform, the Mesopotamians described oils from a Mediterranean cypress and members of the Commiphora genus, such as myrrh and frankincense, used to treat colds and inflammation. The Ebers Papyrus, dating to 1550 BC, is an Egyptian pharmaceutical record documenting over 700 plant-based drugs. From China, the Materia Medica from 1100 BC records 52 prescriptions, the Shennong Herbal from 100 BC describes 365 drugs, and the Tang Herbal from 659 AD describes 850 drugs. Many Greek physicians and philosophers, including Dioscorides and Theophrastus, recorded the use of hundreds of herbs, including how to collect and store them. Despite centuries of success in traditional medicine, contemporary drug discovery predominantly focuses on developing novel, synthetic, highly-specific medications and using high-throughput screens to identify active compounds in specific disease settings. It is undeniable that looking forward is critical to the advancement of medicine, but exploring natural products for specific ailments can inform our current understanding of disease states and provide us with new alternatives to synthetic drugs.

The use of natural products in human medicine is not particularly surprising, given that it is not unique to humans. Self-medication is often seen in animals through innate responses rather than learned ones. To prevent microbial growth in wood ant colonies, worker ants incorporate conifer tree resin, a potent antibiotic, into their nests. When monarch butterfly mothers carry parasite infections, they lay their eggs on milkweed, which has anti-parasitic properties, to prevent spread to their offspring. Primates have been observed ingesting plant materials that have little or no nutritional value but strong anti-parasitic properties.

Natural products have been the backbone of healing for thousands of years. Historically, herbs or plants with medically-active compounds were prescribed to patients without extensive processing, only ground or boiled. Starting in the 19th century, however, the advancement of biochemical techniques allowed for the isolation and characterization of active compounds from natural products. Identifying and isolating the active ingredient facilitated large-scale synthesis and administration to patients in a dose-dependent manner. Today, nearly half of all available drugs are derived from natural products by this means, either through direct isolation or by synthesizing the active compounds in labs. However, the identification of novel, active natural products has decreased over the last decade.

The list of natural products currently available as drugs is extensive, from morphine (derived from opium) to the anti-malarial drug quinine (derived from the cinchona plant) to the most prominent antibiotic available—penicillin (derived from a fungus). One of the best-known success stories of a natural product becoming a biomedical aid is the discovery of the widely used breast cancer drug paclitaxel. In the 1960s, the National Cancer Institute commissioned the United States Department of Agriculture to collect samples from plant species around the world and test them for anticancer activity. The bark of a single Pacific yew tree in the state of Washington, collected by botanist Arthur S. Barclay, was processed into an extract that showed high efficacy in killing tumor cells. The effect was so robust that, in 1967, the active compound—paclitaxel—was isolated; it is now a first-line therapy for patients with ovarian, breast, non-small cell lung, and pancreatic cancers.

Cardiology has also been shaped by historically used natural products. Extracts from foxglove plants were first shown to be effective in treating heart conditions in 1785 by William Withering, after he was told a long-kept secret by “an old woman in Shropshire who had sometimes made cures”. In 1930, after further investigation into the extract’s actions, digoxin was identified as the active molecule; it is highly effective in controlling heart rate and increasing cardiac contractility, and it was approved in 1998 for the treatment of heart failure. Today, digoxin is on the World Health Organization’s List of Essential Medicines, a catalog of the most effective and safe medicines needed in a health system.

Even with the success of many drugs derived from natural products, pharmaceutical companies have reduced their research investment and financial support for natural product discovery. The argument for this decision is two-fold: (1) natural product discovery and development is slow compared to the high-throughput screening of synthetic compounds, because extracts must be tested before active compounds can be isolated, and (2) it is thought that the most bioactive compounds, and those that would most benefit society, have already been discovered, reducing the need to continue the search.

As a result, the pharmaceutical industry has gravitated towards the biochemical synthesis of novel products or the modification and redevelopment of existing synthetic drugs. Drug products need to interact with their chemical targets precisely to optimize the reaction, be it inhibitory or activating. Computer visualization and biochemical techniques are used to design compounds that interact with their targets with optimal efficacy. However, the structural and chemical complexity of natural products exceeds that of synthetically made compounds, with about 40% of the chemical scaffolds found in natural products impossible to make synthetically in labs. To find naturally made compounds that interact with targets, large-scale, high-throughput screenings are done with vast libraries of collected samples to test their reactivity in particular disease settings.

Although natural product discovery is a time investment, as one must screen many samples for activity in the context of many disorders, its promise is high. The synthesis of natural molecules in living organisms comes at a high metabolic and genetic cost, such that all molecules in an organism are under strong evolutionary pressure to be bioactive or to be eliminated altogether. Evolution serves as a natural means of editing molecules to pair optimally with their targets. Drug development focuses on identifying molecules that interact optimally with specific targets in a disease in order to activate, inactivate, or otherwise modulate said target. Screening biological samples is an investment, but natural molecules that evolution has already optimized to interact with these targets are, when found, already active and typically require minimal additional modification. However, only 10% of the world’s biodiversity has been evaluated for potential medicinal purposes. The remaining 90% of products, some of which have been historically used by our ancestors for various human ailments, have not reached the benches of scientists. In addition, many currently available naturally occurring products have been tested only within the context of particular diseases—of note, the collections of terrestrial plant samples owned by the US National Cancer Institute have been screened predominantly in anticancer assays and may prove active in other disease settings.

Continuing to isolate and screen naturally occurring products is critical, as they are a crucial source of novel pharmacologically active compounds. However, a streamlined and targeted approach that maximizes the time and money invested in drug discovery is missing. Traditional medicine may hold the key to this problem. For example, gastroenterologists seeking novel treatments for patients who are not responding well to currently available drugs could turn to the plants and herbs that have been documented across the world as historical treatments for the ailment in question. Such a historically-informed list is far smaller than the vast available libraries, and it is enriched for compounds already reported to be effective against the gastrointestinal issue being treated, whereas most of a library’s contents have no such track record. Once one or more extracts are shown to be effective, they can be further investigated to isolate the active compounds within them. This targeted approach allows for the investigation and confirmation of previously used medicines. We learn what medicines to take for specific ailments from our family and doctors—why not look a bit further back to get advice from our ancestors?

As a scientist, I have been trained to think innovatively to find solutions to old problems. However, many of the ailments that require novel drugs, such as infections, wound healing, and bowel disorders, have plagued humans since prehistoric times. In fact, some argue that even our extinct relatives, the Neanderthals, may have had precise medicinal practices. The medicines of the past did not depend on scientists synthesizing novel drugs with new biochemical structures; instead, our prehistoric ancestors depended on the master craftsman of molecules—nature.

By assessing the specific disease contexts in which specific plants and herbs were used historically, one establishes a baseline of compounds to test for medical activity with modern biological and technological experiments. Seeking insight from the past is important in evolving medicine. However, natural product drug discovery should not be separated from synthetic chemistry—their marriage is important. If historians of medicine were to collaborate with biologists and synthetic chemists, the targeted testing of specific natural products in specific disease settings could accelerate drug discovery. As different as the world we live in is from that of our ancestors, we are, unfortunately, still afflicted with many of the same ailments. And although there have been many questionable medicinal treatments throughout human medical history, there are medicines that have proven to be as effective today as they were when they were originally used.

By merging history, chemistry, and medicine, the identification of historically potent medicinal practices could lead to the identification of particular bioactive molecules that could then be screened for potency in a specific disease context. This method provides a means to identify natural products for a variety of ailments in parallel with current means of drug development. If my great-great-grand-whomever was able to cure runny noses with honey better than any over-the-counter medication, who knows what other home remedies hold the keys to other medical nuisances.

No Gravity? Exhausted Immune System

By: Helen Beilinson

In 1970, astronaut Fred Haise fell ill thousands of miles from home aboard Apollo 13. The culprit? A bacterium that, on Earth, causes no symptoms in healthy individuals. The extended health screens taken before his flight indicated that Haise was in impeccable condition, and his doctors had no reason to believe that his immune system couldn’t battle the bacterium. But, as we have learned over the last fifty years, weak gravity, or microgravity, impairs the immune system. Although Haise survived and returned to Earth safely, his infection was one of the first indications that our immune systems function under different rules in space.

Strengthening weakened immune systems is not an impossible task on Earth; it has been done in settings ranging from battling infections to battling cancer. Helping the body fight infections in microgravity, however, remains difficult, because it is still unclear how microgravity changes the way the cells of our immune system function. Last year, a study published in Life Sciences in Space Research provided a basis for how future astronauts might help their bodies function at full capacity, by identifying a unique feature of immune cells in microgravity.

Dr. Jillian Bradley and her colleagues studied an immune cell type called the T cell, which is fundamental in fighting infections. When an infection occurs in the body, other immune cells sense the intrusion and signal to the T cells that they need to activate to fight it off. T cells are also capable of killing our own cells when something is wrong with them—for example, if they are infected with a virus or have become cancerous.

Bradley compared T cells grown in normal gravity and in microgravity, placing the T cells in a spinning chamber that reduces the level of gravity the cells experience. T cells need to be turned on, or activated, to handle invading bacteria or other infectious agents. When Bradley started signaling to the T cells in microgravity, they became active more quickly than T cells receiving the same signals in normal gravity.

However, something surprising happened when she looked at the T cells a few days after activation. After three days, the activated T cells had lost their muster. The longer duration in microgravity made these T cells activate more sluggishly, as if they could no longer receive the signals to turn on.

Slow T cell responses are potentially dangerous and could be the reason Haise got sick in space. T cells activate a few days after an infection starts, giving bacteria time to multiply before these heavy-hitting cells start attacking them. If activated T cells are slow to respond, the bacteria have even more opportunity to expand and cause the symptoms of being sick. Slow immune responses in space could put astronauts at risk of infection by bacteria that our immune systems dispatch rapidly on Earth, like the one that infected Haise.

When investigating why microgravity is so debilitating for T cells, Bradley found that T cells without gravity very quickly make a protein that inhibits their function. This protein is seen in T cells that enter a state of exhaustion, in which they have been told to activate so much that they lose the ability to respond quickly. Typically, this protein appears only when T cells are exposed to very extended periods of activation; in microgravity, however, the time it takes to exhaust a T cell is significantly shorter.

It may not be all bad news, however, because this mark of immune response dampening is infamous in another location where turning T cells back into full swing is already being done—tumors.

In addition to their ability to eliminate infections, T cells are important in battling cancers. However, when tumors grow large enough, they exhaust T cells, causing them to express the same inhibitory protein that microgravity does. The newest wave of cancer treatments, termed cancer immunotherapies, focuses on eliminating or suppressing the function of these inhibitory proteins, making the T cells more active and better able to eliminate cancer cells.

Could the same technology be applied in space? Scientists don’t know yet. But it could be a potential way that astronauts can utilize existing technology to prevent them from getting sick in space.

With eyes on missions to Mars and beyond, understanding how microgravity affects our bodies is critical to ensuring the health of those leaving our stratosphere. Defining the changes occurring in our immune cells gives physicians the information they need to design novel treatments for astronauts dealing with infections in space. Fascinatingly, the parallels between T cells in microgravity and T cells in tumors could provide insights into keeping bugs at bay in space.

Warm and Wet vs. Cold and Icy Early Mars

By: Adeene Denton

Was early Mars warm and wet, or cold and icy?

For planetary scientists studying our neighboring planet, this question plagues our research, because finding an answer has direct ramifications for society’s near future in space exploration and our understanding of the evolution of habitable worlds elsewhere in the galaxy.  Mars today is a hyperarid, hypothermal desert with mean annual temperatures of ~218 K (-55°C/-67°F, also known as really, really cold), where liquid water cannot survive on the surface and a flimsy atmosphere continues to gradually disappear into the vacuum of space.  It’s a harsh place where most life, including us, can’t survive without the aid of advanced insulating technology.  However, the valley networks and basin lakes that crisscross the southern highlands suggest that at some point in the past the Martian climate permitted abundant liquid water to remain present on the surface over long timescales.  Mars may be a cold, windy desert now, but it probably looked very different 3 billion years ago when these features formed.

…and that’s what scientists agree on.

What we disagree on is basically everything else about Mars’s early history.  Such vigorous disagreement within the field comes from the lack of real information we can glean from the surface about what Mars may have been like 3 billion years ago, combined with the burning need to answer such a fundamental question.  When we, dreaming of our future as a multiplanetary species, look at Mars today we think: sure, Mars is inhospitable to life now, but could it have been habitable in the past? Could it be again?  Big questions like these are what drive NASA and other space agencies to shoot for manned missions to Mars in the 2030s and 2040s. And yet, there’s so much we don’t know about a planet that’s a year’s travel away.  We can’t afford to be unsure when our astronauts’ lives are on the line.

When planetary scientists try to decode the geologic and climate history of Mars, we have several basic tools at our disposal. First, the geologic record itself – we can look at Mars through our increasingly high-resolution images from decades of Martian spacecraft, observe the way different layers of material interact, and date the surface based on relative crater densities as well as speculate on formation mechanisms for various morphological features. Second, other datasets such as mineralogy from both spacecraft and the three rovers on the surface can tell us about the composition of these features. NASA’s rovers, the now-defunct Spirit as well as the still-operating Opportunity and Curiosity, are remotely operated science labs on wheels, doing their best to get information on the mineralogy of the rocks over which they drive. The minerals involved can indicate the presence or absence of water during their formation, and leave some hints about how long the water was there. And third, we can use our understanding of the physics of terrestrial planets and our knowledge of Earth to extrapolate to Mars using numerical modeling.  Climate modelers can incorporate atmospheric compositions, wind patterns, changing amounts of solar luminosity, and many other factors to reconstruct the evolution of the Martian atmosphere, while dynamicists look at the death of the Martian magnetic field and the growth of massive volcanic fields that occurred at the same time as valley networks formed on the surface.

The current problem faced by the field is that a) these methods are yielding a variety of information about the early Martian climate, and b) that information is being interpreted in a myriad of contradictory ways.  The two ends of the early Martian climate hypothesis spectrum, the “warm and wet” and “cold and icy” climate scenarios, hinge on vastly different interpretations of the geology, mineralogy, and climate models.  This is possible because the overland flow of water (one of the few things we know occurred) can be produced through both regional precipitation in a warm and wet scenario and meltwater off a large ice sheet in a cold and icy one.  The same mineralogies, including hydrated clays, can be formed through relatively short periods of water involvement and tell us less about their climatic source than we would like.  Additionally, the climate models span a broad parameter space and have many unknown variables – when we estimate the axial tilt of Mars 3 billion years ago, it is most definitely a guess. 

As such, in many cases we’re right back where we started. There was water on Mars – but where did it come from? How long was it there? What mean annual temperatures are needed to produce liquid water? Was there an ice sheet in the south, or an ocean in the north? We have so much data to look at, and somehow it’s not quite enough. 

The growth of two robust, yet contradictory hypotheses is the direct result of this conundrum.  The warm and wet climate hypothesis posits that early Mars had a sizeable insulating atmosphere and mean annual temperatures above 273 K, which in turn permit the consistent presence of regional precipitation across the southern highlands to create the valley networks. Some adherents of this scenario believe that a global ocean existed in the smooth, younger northern lowlands at multiple points in Mars’ history.  While putative shorelines of an ocean have been mapped, the presence or absence of an ocean has yet to be definitively proven. And while Curiosity, our newest rover, may be traversing what was once a lake-filled crater, the amount of water that lake contained and the timescales over which that water was present remain unknown.

Meanwhile, recent climate models predict that the dimmer light of a younger sun, as well as an atmosphere that may have been thinner than we would like, actually prevented warmer temperatures from persisting even on an early Mars.  In this cold and icy scenario, temperatures remain below freezing most of the time, with surface water locked up in an ice sheet covering the higher altitudes of the southern highlands.  Geologically brief periods of warming enable melting at the margins of this ice sheet, which can in turn also create the broad patterns of valley networks and pool in the basin lakes.  In this scenario there can be no ocean, and the presence of liquid water is transient at best.

For those of us working in the field of Martian planetary science today, our job is to continue to test these models, canvassing the surface of Mars for clues and making our models as realistic as possible.  While we hope to solve the problem of early Martian climate history sooner rather than later, we also look forward to the possibilities that human exploratory missions can offer for better sample collection and observation of the planet with human eyes.  It’s an exciting frontier of planetary science, and we continue to boldly go!

Warm and wet (Craddock and Howard 2002)...

...and cold and icy (Forget et al., 2013, Wordsworth et al., 2013, Head and Marchant 2014) climate scenarios for early Mars. Are either of these correct? Only time will tell…

Knowledge and use of prevention measures for chikungunya virus among visitors to Virgin Islands National Park

By: Helen Beilinson

Reprinted with permission from Open Science DB. Open Science DB is a centralized database of scientific research. It is led by graduate students from Northwestern University, and scientists from leading research universities and institutes contribute summaries of research papers to the database. Each summary is reviewed to ensure accuracy and accessibility.


Pretravel health research— not something people typically think about when they’re packing their flip flops and sunscreen for a sunny vacation in the tropics. Unfortunately, mosquitos love the warm temperatures just as much as humans do. Without knowledge of proper mosquito bite prevention strategies, vacationers are put at risk of catching viruses carried by mosquitos.
Chikungunya is caused by Chikungunya virus, which is transmitted to humans by mosquitoes. Chikungunya outbreaks have been observed in countries in Africa, Asia, and Europe, and in the Indian and Pacific Oceans. But Chikungunya spread to the Americas in 2013, and by the end of 2014, about 1 million suspected and confirmed cases of Chikungunya had been reported across 43 countries in the Americas. Infected people start feeling symptoms about a week after being bitten by a Chikungunya-carrying mosquito; the most common symptoms are a high fever and joint pain. Although most patients start feeling better within a week, many experience joint pain lasting up to several months. There is no vaccine against Chikungunya, so the best way to prevent infection is to educate people about the virus and instill mosquito bite prevention practices. However, there is little information on how many travelers are aware of Chikungunya and prevention methods.
To answer this question, the Centers for Disease Control and Prevention (CDC) investigated what percentage of travelers to the U.S. Virgin Islands are aware of mosquito-borne diseases (Chikungunya and other viruses) and mosquito bite prevention measures. Visitors to Virgin Islands National Park on St. John were asked to complete a questionnaire addressing knowledge of mosquito-spread diseases and prevention measures. 446 of 783 travelers completed the survey.
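As a quick illustrative check, the response rate implied by the numbers above can be computed in a couple of lines; this is back-of-the-envelope arithmetic on the figures quoted in the text, not part of the CDC's own analysis:

```python
# Back-of-the-envelope arithmetic on the survey numbers quoted above.
completed = 446   # travelers who completed the questionnaire
approached = 783  # travelers asked to participate

response_rate = 100 * completed / approached
print(f"Response rate: {response_rate:.1f}%")  # prints "Response rate: 57.0%"
```

So roughly 57% of approached visitors responded, a fairly typical rate for an in-person intercept survey.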
According to the survey results, more than half of respondents were unaware of Chikungunya virus. Moreover, the majority reported that in the previous three days they had not worn insect-repellent-treated clothing or long-sleeved shirts and pants, or used bed nets.
Overall, this survey showed that most visitors arrive in the U.S. Virgin Islands without adequate pre-travel research and knowledge about mosquito safety. As the number of international travelers increases each year, this survey data strongly emphasizes the urgent need to develop creative ways to encourage pre-travel health research among travelers.

A New Science Podcast... By Yours Truly

Hi everyone!

Just wanted to pop in and promote a new science podcast that I have had the honor and privilege of creating with the Yale Journal of Biology and Medicine. It can be found on SoundCloud and iTunes. Please subscribe and listen in! Don't worry if you're not a scientist: we made this podcast specifically so we can talk about cool science that is happening without too much scientific jargon!

YJBM is a quarterly scientific journal edited by Yale medical, graduate, and professional students (me included!). Each of our issues is devoted to a focus topic that ranges from the aging brain to epigenetics to infectious diseases. With our podcast, we want audiences outside of those working in said focus topics to enjoy the research that is being done and appreciate the fields. We talk about the past, present, and future of the focus topic field. This month we're focused on the microbiome--we have cool facts (Did you know the first fecal transplant was performed in the 4th century? Or that people who live on farms are less likely to have allergies?), interviews with top clinicians and scientists, and random history facts that are great for cocktail hours. 

If you want to learn more or read about the articles in YJBM that we reference, check out the YJBM archives! They're open access so everyone can read them; you don't need special access!

The Evolution of Human Violence

By: Helen Beilinson

Why are humans violent?

This is a dense question that has been heavily debated for centuries. There are, very simply, two camps—nature and nurture. The former position was popularized by the seventeenth-century philosopher Thomas Hobbes, who argued that the natural state of man is one of violence and independent perpetuation, that humans are naturally, inherently violent. On the other hand, Jean-Jacques Rousseau retorted, nearly a century later, that individuals are not born violent or peaceful, but instead are molded by their environments. Outside of philosophy, social scientists have tackled this distinction by focusing on the nurture side, analyzing how sex, age, race, and socio-economic status can influence an individual’s propensity towards violence. In a recent issue of Nature, Spanish scientist José María Gómez and colleagues took to answering this question from a different angle—evolution.

Lethal conspecific violence, or violence occurring between members of the same species, is not unique to humans; it ranges from infanticide in primates and dolphins to horses and hamsters attacking their own. The prevalence of aggression throughout mammals, and its high heritability, raises the question of whether evolution has shaped human violence, with intraspecies violence acting as an adaptive strategy for survival. To address this question, the authors of this study used comparative methods from evolutionary biology to quantify the levels of violence in 1,024 mammalian species, including some that are now extinct. They assessed causes of death in over 4 million instances, defining the level of lethal violence in a species as the probability of dying from intraspecific violence compared to other causes.
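The study's headline measure is simple enough to sketch in a few lines of code. The numbers below are invented for illustration only (they are not data from the paper); the point is just that the "level of lethal violence" is the share of documented deaths attributed to members of the same species:

```python
# Toy sketch of the study's measure of lethal violence: the percentage
# of a species' recorded deaths caused by members of the same species.
# The counts below are hypothetical, invented for demonstration.
def lethal_violence_level(conspecific_deaths, total_deaths):
    """Percentage of deaths due to conspecific (same-species) violence."""
    return 100.0 * conspecific_deaths / total_deaths

# A hypothetical species with 12 of 4,000 recorded deaths caused by
# conspecifics sits right at the mammalian average the study reports (0.30%).
print(lethal_violence_level(12, 4000))
```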

Out of the analyzed species, nearly 40% had instances of conspecific lethal violence, with, on average, 0.30% of deaths within a population occurring at the hands of members of the same species. The authors calculated phylogenetic signals of related species to analyze the evolution of lethal aggression in mammals. This signal is essentially a measure of how lethally violent a particular species is in comparison to other closely related species. They found that lethal violence was entirely absent from some groups, like bats and whales, and more frequent in others, such as primates. Lethal violence occurred at similar levels in closely related species, consistent with a heritable component.

Within primates, one of the most notoriously violent groups of animals, there were differences in levels of violence, indicating violence’s evolutionary flexibility. While chimpanzees were highly violent, bonobos were tamer. This observation drove the authors to ask whether other factors could influence violence within a species. They subsequently scored the analyzed species for territoriality and social behavior, two traits that could drive aggression. They found that social, territorial animals had higher levels of lethal violence than solitary, non-territorial species.

Studying the evolution of and phylogenetic signals for lethal violence in mammals as a whole provided a basis for studying violence in humans. In addition to the animal species studied, 600 human populations were analyzed, ranging across human history from the Paleolithic era (~2 million to 10,000 years ago) to the present.

We emerged from the primate line with a long evolutionary history of higher-than-average levels of conspecific lethal violence, so it is unsurprising that at the origin of our species, human lethal violence accounted for 2% of all deaths, six times higher than the reconstructed mammalian value. Additionally, we as Homo sapiens are both social and territorial, traits associated with stronger tendencies towards lethal violence in mammals.

Over human history, estimates of lethal violence vary greatly. Although Paleolithic estimates suggest close to 2% of deaths were due to lethal aggression, estimates rose as high as 15-30% at various times throughout history, peaking about 3,000-5,000 years ago. Today, levels of lethal violence have decreased markedly. The authors argue that socio-political organization was a significant factor in these changes. They found a correlated rise in violence when humans moved from pre-societal organizations, including bands and tribes, to more modern organizations like chiefdoms and states. However, although high population density drives lethal aggression in most mammalian species, in humans, population increases were consequences of successful pacification, leading to less violence.

Although the news is packed daily with stories of human-on-human violence, today, less than 1 in 10,000 deaths (about 0.01%) are due to lethal violence. Based on the model put forth by Gómez, this translates to humans being about 200 times less violent today than would be predicted by our evolutionary past. Even a lethal violence rate of 0.01% is too high; there is a lot of social and political work to be done to lower this incidence as much as possible, ideally to zero. A violent past and phylogenetically inherited lethal violence have set up modern humans to be naturally violent creatures; nevertheless, it is clear that culture, be it social or political, can strongly influence and modulate levels of aggression in a population.
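The "200 times" figure follows directly from the two rounded percentages quoted above; as a quick sanity check (a sketch using only the rounded values from this article, not the paper's exact numbers):

```python
# Quick check of the ~200x figure using the rounded values quoted above:
# ~2% of deaths predicted for humans by our evolutionary past, versus
# ~0.01% of deaths (about 1 in 10,000) observed today.
predicted_rate = 2.0   # percent of deaths, phylogenetically predicted
observed_rate = 0.01   # percent of deaths today

fold_reduction = predicted_rate / observed_rate
print(fold_reduction)  # 200.0
```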

Jellyfish and Glowing Proteins

By: Erica Gorenberg

From the stars on the ceiling of a childhood bedroom to the key chains brought home as souvenirs, some of the most memorable trinkets from youth are those that glow in the dark. For molecular biologists, like for kids, the ability to harness fluorescence and make molecules reminiscent of those glow-in-the-dark toys was one of the most useful and exciting innovations in modern science.

Green fluorescent protein (GFP) was first observed in 1962 in a bioluminescent jellyfish, Aequorea victoria, as the molecule responsible for the animal’s ability to glow. The protein was isolated, and researchers demonstrated its ability to light up green under beams of specific colors of light. The use of GFP revolutionized biological research: in the time since GFP’s discovery and use, molecular biology has entered its golden age.

Osamu Shimomura, who first isolated the protein, Martin Chalfie, who first used GFP to track other proteins, and Roger Tsien, who discovered the properties that make GFP fluorescent and manipulated them to create a rainbow of fluorescent proteins, were awarded the 2008 Nobel Prize in Chemistry. Before their work with GFP, visualizing proteins within live cells was far more complex, and less dependable, as it relied on the insertion of fluorescent dyes into the cell, where they could bind to the proteins of interest. These dyes were unreliable because they weren’t always specific to their proteins, and because the physiology of the cell had to be disrupted to add them.

Now, through the use of DNA modification technologies, the gene for GFP can be fused to genes of other proteins, allowing proteins to be produced with the fluorescent molecule attached. This is called "tagging" the protein with GFP. As Martin Chalfie demonstrated for the first time in 1994, GFP can be used to show specific proteins and cellular structures in living organisms, providing researchers with new insights into cellular function. For example, when the GFP gene is attached to the gene for a microtubule associated protein, MAP2, one molecule of GFP is produced and fused with every molecule of MAP2. By viewing cells that produce MAP2, such as neurons as shown in Figure 1, researchers can learn how much of the protein is expressed under different conditions, simply by measuring the intensity of GFP’s fluorescence. The more MAP2 that’s produced, the brighter the GFP signal will be.
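Since fluorescence intensity scales with the amount of tagged protein, the quantification step amounts to comparing average brightness between cells or conditions. A minimal sketch of the idea (the pixel values are invented; real analyses use calibrated microscope images and background subtraction):

```python
# Sketch of GFP-based quantification: because each MAP2 molecule carries
# one GFP, a cell's mean fluorescence intensity is a proxy for how much
# MAP2 it expresses. The pixel values below are invented for illustration.
def mean_intensity(pixels):
    """Average brightness of a cell's pixels (arbitrary units)."""
    return sum(pixels) / len(pixels)

dim_cell = [10, 12, 11, 9, 13]      # little MAP2-GFP produced
bright_cell = [48, 52, 50, 47, 53]  # much more MAP2-GFP produced

# The brighter cell is inferred to express more MAP2.
print(mean_intensity(bright_cell) > mean_intensity(dim_cell))  # True
```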

Figure 1. Hippocampal neuron stained with GFP for MAP2. Actin filaments are in red and DNA is in blue. The protein tags allow researchers to see MAP2 in the neuron’s long projections, or dendrites, and see that actin localizes in clumps along the dendrites at structures called spines. The DNA stain shows the cell’s nucleus. Image via Halpain lab at UCSD.

With a GFP tag, researchers can also see where the protein is made, where it is transported, and under what conditions this changes. Previously, cells had to be fixed, killing them and freezing them in time, to view their components. One of the most compelling aspects of fused fluorescent tags is their ability to be viewed in real time within living cells. Live imaging allows researchers to manipulate cells and understand how different environmental changes affect cellular components over time. By using different colors of fluorescent proteins to label different cellular proteins and looking at where and when they overlap with one another, researchers can look within the cell to understand how different proteins might interact.

Within each tagged cell is a constellation of glowing proteins, like the glow-in-the-dark stars stuck to a childhood ceiling. These cellular constellations move and interact in breathtaking ways that scientists are only beginning to understand, thanks to the discovery of GFP.

The Nose Knows How to Keep Staph at Bay

By: Helen Beilinson

A year and a half ago, I had temperatures over 100°F, could barely concentrate, and couldn’t sleep for more than two hours without waking up covered in sweat. After three days of somewhat tolerating this out-of-body experience, I realized that this wasn’t some kind of horrible cold and went to the student health care center. I had pyelonephritis, a bacterial infection of the kidney. I spent the next week in the hospital being pumped with antibiotics. Scarily, the antibiotics I was initially given did nothing to the invasive E. coli trying to take over my body. They were resistant to the drug that was supposed to kill them. It took two days of lab tests to realize this fact, after which I was immediately switched to another antibiotic, to which my new bacterial friends were not resistant. Thankfully, I was infection-free and out of the hospital soon after.

I was fortunate in that a simple switch to another antimicrobial cleared my unwelcome squatters. In many cases, infectious bacteria are resistant to multiple drugs, termed multi-drug resistant organisms (MDRO), or, more comic book-ly, ‘super-bugs’. Bacteria evolve very quickly, modifying their genomes to select what works best for their environment or even obtaining bits of DNA from other bacteria that aid them in survival. These bits of DNA carry genes that mutate at some of the fastest rates and that are among the most frequently transferred from one bacterium to the next. They also carry genes that confer resistance to antimicrobials. The longer certain antimicrobials are used in the clinic, the more bacteria are able to acquire resistance genes, spreading resistance from bacterium to bacterium, from species to species. The need for new antimicrobials has been terrifyingly high for the last few decades because no new class of antibiotics has been discovered for the treatment of bacterial infection since 1987.
Many studies have been aimed at synthesizing new biomolecules in the lab in attempts to find a molecule never before encountered by bacteria that has antimicrobial potential. Other scientists have turned to a surprising place to find naturally occurring antimicrobials: bacteria.

Bacteria occupy what are called niches, a location that has a particular bacterium’s perfect temperature, humidity, and food. Although there are millions of niches that bacteria occupy, from lava pits to our pets’ guts, bacteria still have to compete with other bacteria for their niches. This is particularly true of locations that are not rich in nutrients—the bacteria not only have to fight for space, but also for food. To fight against invaders, bacteria use many strategies, including producing their own antimicrobials. One species of bacteria will produce molecules that harm other bacteria, while the producing species is unaffected by those molecules. These microbially-derived antimicrobial molecules have been the subjects of many scientific treasure hunts. Many niches have been explored, from the ground to the ocean (both of which have been very fruitful locations), but one recently published paper from the University of Tübingen looked for antimicrobials much closer… in their noses.

Any location on the human body that is exposed to the environment has a milieu of microbial life living in it. The nose is no exception. The nose, as well as the upper airway to which it is connected, is very nutrient poor, meaning that any bacteria that live there are in strong competition for space and food. Bacterial species from the human microbiota have been shown to produce bacteriocins, which are antimicrobial molecules. The authors of this study explored nasal commensals in attempts to identify antimicrobials capable of acting on Staphylococcus aureus, a bug that lives in the nose, respiratory tract, and on the skin. Although approximately 30% of the human population has S. aureus living in or on them, during times of immune suppression, S. aureus can cause skin and blood infections. Without antibiotic treatment, S. aureus bacteremia (bloodstream infection) has a fatality rate that ranges from 15 to 50 percent. Unfortunately, the prevalence of S. aureus in antimicrobial-filled hospitals has led the species to become highly resistant to many drugs. For example, MRSA is methicillin-resistant S. aureus, a highly difficult bacterium to treat. Many S. aureus strains are now classified as MDRO, as they are resistant to more drugs than just methicillin. Without novel antimicrobials to treat S. aureus infections, or any other MDRO infection, the fatality rates will only increase.

S. aureus belongs to the genus Staphylococcus, which has many other members, many of whom also live in the human nose. The authors of this study used a previously described collection of nasal Staphylococcus to screen the species’ ability to inhibit the growth of S. aureus. They identified one strain, called S. lugdunensis IVK28, which very strongly prevented the growth of S. aureus. In the hopes of identifying the specific molecule, or molecules, that S. lugdunensis uses as an antimicrobial against S. aureus, which could subsequently be used as clinical antimicrobials, the authors made a mutant library of S. lugdunensis. In essence, this means that they randomly modified the genome of S. lugdunensis in many different ways. If a gene critical to repressing S. aureus growth is mutated, then that mutant would be unable to prevent S. aureus growth. Then, they tested to see if any of the mutants were unable to repress S. aureus growth. When they found one mutant that did not have this antimicrobial property, they sequenced the bacterium’s genome to identify which gene was mutated. They found that a previously uncharacterized gene, which they named lugdunin, was mutated in this line of S. lugdunensis. This was the first clue that lugdunin could be a novel antimicrobial.

To test if lugdunin has antimicrobial properties outside of the context of S. lugdunensis, the authors isolated lugdunin on its own and found that it was able to independently act as an antimicrobial against S. aureus. This is important because many molecules, particularly proteins, don’t function alone, instead working in conjunction with other proteins or other biomolecules such as lipids or nucleic acids. An independently functioning molecule is much easier to work with, both in basic science characterization studies and in the clinic. Lugdunin is a strong antimicrobial, with the ability to act against various strains of drug-resistant S. aureus and drug-resistant members of the Enterococcus genus, as well as many other bacteria. Importantly, lugdunin did not cause any damage to human cells (important when trying to develop a drug for human use). When the authors used a mouse skin infection model with S. aureus, lugdunin was able to eliminate most or all of the infection, a first critical experiment in demonstrating its potential as a clinically available antimicrobial.
The question of why some people are carriers of S. aureus, while others may go their entire lives without ever having one such bacterium live in or on them, has remained largely unanswered by scientists. To explore whether the presence of S. lugdunensis affects the presence of S. aureus, 187 patients’ nasal swabs were analyzed. A third of the patients had S. aureus, a number close to the national average, whereas a tenth of the patients had S. lugdunensis. The presence of S. lugdunensis, however, strongly decreased the likelihood that a patient had S. aureus. Although it couldn’t definitively be proven in humans, as human testing is strictly frowned upon by the higher powers that be in the scientific world, this finding was a critical step in understanding the relationship between the two Staphylococcus species.

To further test this antagonistic relationship, the authors asked whether lugdunin gave S. lugdunensis the capacity to outcompete S. aureus for nasal space. They found that this is, in fact, true. When they plated both bacteria on agar plates (basically thick jello with tons of nutrients), they found that S. lugdunensis always took over the plates within 72 hours. Even when the plate started as 90% aureus and 10% lugdunensis, three days later there wasn’t an S. aureus bacterium to be found. When a mutated S. lugdunensis was used, a variant that lacked lugdunin, S. aureus was able to take over the plate with ease. These findings show that S. lugdunensis is not just a member of many people’s nasal microbiota; its ability to compete with S. aureus, thanks to its lugdunin molecule, can keep S. aureus at bay and prevent any potential infections it would cause.
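The logic of that plate experiment can be caricatured with a toy model. Everything here is invented (the growth and kill rates are not measurements from the paper); it only illustrates how a 10% minority that secretes an inhibitor can sweep a plate while a non-producing mutant cannot:

```python
# Toy model of the plate competition: both strains divide at the same
# rate, but the producer strain (standing in for S. lugdunensis) kills a
# fixed fraction of the sensitive strain (S. aureus) each step, a stand-in
# for lugdunin secretion. All rates are invented for illustration.
def compete(producer, sensitive, kill_rate, steps=72):
    """Return the sensitive strain's final fraction of the population."""
    for _ in range(steps):            # one step ~ one hour of co-culture
        producer *= 1.05              # both strains grow
        sensitive *= 1.05
        sensitive *= 1.0 - kill_rate  # the inhibitor culls the sensitive strain
        total = producer + sensitive  # renormalize: the plate is finite
        producer, sensitive = producer / total, sensitive / total
    return sensitive

# Start at 10% producer / 90% sensitive, as in the experiment above.
print(compete(0.10, 0.90, kill_rate=0.15) < 0.001)  # True: producer sweeps
print(compete(0.10, 0.90, kill_rate=0.0) > 0.5)     # True: no inhibitor, no sweep
```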

The discovery of a potent antimicrobial that can act on drug-resistant bacteria is important. Of course, there is always the risk that bacteria will develop resistance to this new antimicrobial, but when the authors of this study tested whether they could ‘force’ S. aureus to become lugdunin-resistant, they found that the rate of resistance development was minimal. Whereas S. aureus developed resistance to other drugs after even just a few days, lugdunin resistance wasn’t observed, even after a month. Lugdunin is an exciting new antimicrobial that hopefully will be able to treat MDRO-infected individuals soon. Additionally, as S. lugdunensis is a known safe nasal commensal, a fascinating potential of these findings is infection prevention, instead of treatment. Patients who are at a high risk for S. aureus infection could be colonized with S. lugdunensis to make the bacteria work for us in exchange for the delicious mucus they feast on. The presence of this S. aureus fighter would lower the risk of S. aureus colonization, even by already drug-resistant S. aureus, in the nasal cavity, lowering the chance of a life-threatening infection. Although it was a quiet field for a while, antimicrobial discovery has only been speeding up in the last few years. Exciting new discoveries are being published every few weeks, and our ability to treat infections, as well as to prevent them in the first place, is only getting better. Who knew that sharing boogers could save lives?


Early Grunter Gets The Worm

By: Helen Beilinson

Summer has finally settled upon Connecticut after a long winter. The undergrads are gone from campus, grad students are finding every excuse to go outside, outdoor seating at local breweries is constantly packed, and festivals are in full swing. Festivals in the United States are reaching far beyond the classic food and/or music type and into bizarre territories, ranging from roadkill cook-off fests to days devoted to cow cake (read: dung) throwing contests. However, one fascinating festival, biologically speaking, happens every April in Sopchoppy, Florida: the Annual Worm Gruntin’ Festival.

To win this annual worm grunting contest, all you need to do is charm as many earthworms out of the ground as possible, but you can only do so by making the ground vibrate. Using either hand tools or power equipment (although traditionalists prefer the former), one can easily cause earthworms to exit the ground by the thousands, making them very easy to collect. The most popular, and some argue most effective, way of creating the vibrations is using a wooden stake inserted into the ground that is vibrated by a flat iron slab being rubbed across the top. Although it has been known since the 1800s that beating the ground forces earthworms above ground, worm grunting as it is done today with wood and iron slabs originated in the 60s and 70s, first as a personal means to get worms for fishing, and then on a more industrial scale for selling worms. Over the years, the technique has been passed down and is still frequently used for obtaining live earthworms as bait. Animals, such as herring gulls and wood turtles, have also been observed to use ground vibration to bring worms to the surface.

Despite its long history, the reason vibrating the ground charmed worms from the ground was largely a mystery before 2008. Earthworms live, well, in the earth. Leaving the ground poses two problems for worms. The first is aboveground predators. Birds and other small animals feast on worms. Second, earthworms have to continuously be in a moist environment, as they breathe air through their skin and must stay wet in order for oxygen to be exchanged through their skin (a reason why they come above ground at night or after a rainstorm—they can move faster on the cool, wet soil, while still being able to breathe).

In his last scientific book, The Formation of Vegetable Mould through the Action of Worms, Charles Darwin, knowing of the early observations that ground vibrations stirred earthworms (although he noted that he was personally unable to replicate them), commented on the strangeness of the worms’ ascending migration and offered a hypothesis that has been the predominant theory for the last century and a half. Although we are mostly familiar with worms’ aboveground predators, they have underground predators as well, chiefly moles. One situation in which worms escape to the ground’s surface, presumably, is to escape the jaws of moles. When moles dig, the ground around them vibrates. Darwin proposed that the vibrations made by humans or other creatures mimic those caused by mole digging, inducing fear in the worms and making them surface. It has also been proposed that the vibrations mimic heavy rainfall, which also makes worms surface, presumably to avoid drowning.

One hundred and twenty-seven years after the publication of Darwin’s Worms, Kenneth C. Catania, a scientist from Vanderbilt University, produced a study proving that Darwin’s hypothesis was, indeed, correct. Catania recorded the vibrations made by the traditional technique of driving a wooden stake into the ground and rubbing a flat iron slab lengthwise across the top of the stake. Vibrations were measured at intervals away from the stake and from multiple stake locations. The magnitude of the vibrations decreased farther from the stake, as was to be expected, and the intensity of vibrations depended on the soil composition, as different stake locations produced different vibration intensities. Accordingly, the number of worms that emerged also decreased with increasing distance from the stake. A year after Catania’s original paper, a Canadian research group published a study recapitulating the results, showing that ground vibrations cause earthworms to emerge from the soil.

Catania was collecting these data in Sopchoppy, FL, home of the worm grunting contest. The only mole living in Sopchoppy is the eastern American mole. These moles eat the equivalent of their body weight every day, with a diet consisting predominantly of earthworms mixed with some vegetable matter. To test Darwin’s original hypothesis, Catania first confirmed the presence of these moles in the forest in Sopchoppy. By studying mole tunnels, Catania showed that the moles are abundant in the area and that there is a clear overlap between the populations of earthworms and moles.

Then, studies were performed to test whether earthworms responded to simulated rain or to digging moles. To do these studies, fifty earthworms were placed in a large container of soil and allowed to burrow into the ground. Once the worms entered the soil, Catania studied their movements, specifically how often the worms would exit the soil. The number of exiting worms was negligible, essentially one or zero over the course of a few hours. First, Catania studied the effects of simulated rain by placing the soil boxes under a sprinkler system. The number of exiting worms remained unchanged. However, when a mole was introduced into the bottom of the container and allowed to burrow through it, nearly half of the worms, on average 24, rapidly exited the soil. Catania even noted that many worms “crawl[ed] over the container walls”. Similar observations were made when the same experiment was conducted in a much larger area, where rain did not influence the number of worms exiting the soil, but digging moles drove them out quickly.

A human observer can hear moles digging underground from several feet away because they dig so powerfully, particularly if they are digging through a root-filled environment. To study these vibrations and sounds more carefully, Catania recorded the moles’ digging vibrations and found that their amplitude was highly similar to that of the worm grunters. One difference was that the worm grunters’ vibrations were much more consistent than the moles’. This is unsurprising, as moles do not burrow continuously, often changing directions to avoid roots or stopping to munch on a worm. To test the worms’ responses to the mole’s vibrations, Catania took the recording of the mole’s digging and modified the sound file to simulate how the digging would sound if the mole were approaching the worm, such that it progressively grew louder. This experiment was conducted in the aforementioned container of fifty earthworms. Fascinatingly, after the recording was played, an average of 16 worms exited the container, compared to the one or zero that would exit with a nonspecific recording playing, mimicking the previous results. This finding was surprising, as merely the sound of the vibrations, without any physical perturbation of the soil itself, was a strong enough force to drive many worms to the surface. The difference in the number of worms leaving the soil (a difference of about ten worms between the live-mole and mole-recording experiments) is probably due to the worms being able to detect compression of the soil. Without the mole digging, there is no change in compression. With only one of the two (or more) “predator approaching” signals, some worms probably don’t feel enough fear to leave the soil.

This study was conducted with one species of earthworm, Diplocardia mississippiensis, and one species of mole, Scalopus aquaticus. It is unknown whether this phenomenon occurs across various worm and mole species or whether the effect is specific to this predator-prey pair. However, detection of approaching predators is not uncommon in the animal kingdom. In the discussion section of this paper, Catania makes an interesting comparison between the vibration-avoiding mechanism of the worms and the ultrasound-avoiding mechanism used by flying insects preyed on by bats. As bats use ultrasound for echolocation, their prey have evolved the means to sense ultrasound and to fly away from it to avoid predation.

I highly recommend checking out Dr. Catania’s other work. It is unique and fascinating, including a recent study exploring how electric eels attack predators that are not in the water.


Sexism is in the AIRE

By: Jenna Pappalardo

Autoimmune disorders comprise a varied set of diseases in which a person’s immune system abnormally targets its own normal body tissue rather than an invading pathogen or other threat. Development of these diseases is a result of genetic and environmental factors synergizing to induce immune responses that would normally be prevented by a series of checkpoints in the immune system. These diseases disproportionately target the sexes, with estimates suggesting 78% of those affected by autoimmune diseases are women, but why this divide exists is unknown. When considering the biological differences between males and females, one obvious answer would be hormones—but how would they affect these processes? A recent study provides a new clue into how hormones could affect a barrier to autoimmunity that, when functioning normally, results in the death of self-reactive cells.

To understand this new link to hormones, I’ll pause to explain the very cool process that should normally happen in the immune system to prevent autoimmunity. T cells are part of the adaptive immune system that mounts specific and robust responses against pathogens. These cells work in various ways, including facilitating immune responses and directly killing infected cells, which makes them great defenders against invaders, but has severe consequences when they are aimed at their host’s tissues. Adaptive cells gain specificity for what they recognize through essentially random genetic recombination and mutations, so some cells develop to recognize a normal host protein and have the potential to attack it if they aren’t eliminated. T cells develop in a small organ by the heart called the thymus (hence their name), where they undergo stringent selection to delete any T cells that are targeted against normal tissue. But wait—I thought when I first learned of this process—how can their exposure only to the thymus ensure that T cells reactive to the brain or eye or pancreas are also killed before they’re released into the body? The elegant solution to this is the autoimmune regulator, or AIRE. Similar to how differential expression of genes allows the same genome to give rise to hundreds of kinds of cells, AIRE allows cells in the thymus to express proteins from other tissues throughout the body. AIRE induces the expression of proteins from all over the body (proteins that are normally expressed in the brain, kidney, liver, etc.) in the thymic stromal cells. T cells are then tested to see if they react against these proteins that represent other tissues and are killed if they recognize them too strongly. This process eliminates T cells that would become activated by, and subsequently mount an immune response against, a person’s own proteins (i.e., autoimmunity-triggering T cells).

The authors of the previously mentioned study decided to investigate how AIRE expression differs in males and females and found that females have lower expression of AIRE, as well as of the representative proteins (or tissue-specific antigens, TSAs) that it regulates. Experiments in castrated male mice further suggested that hormonal differences may be responsible. For a more direct assessment of how hormones affect AIRE expression, the authors introduced either estrogen or DHT (basically activated testosterone) to human thymic epithelial cells in culture. Adding estrogen caused a decrease both in AIRE expression and in an AIRE-dependent TSA, while testosterone actually caused a slight increase in both. If cells were given both estrogen and DHT, the effect of estrogen won out, with an overall reduction in AIRE and its TSA. TSAs that do not depend on AIRE, which were also measured in this study, were not affected by estrogen. These results were mirrored in experiments where human thymus fragments were transplanted into mice. When estrogen was administered, there was lower AIRE and AIRE-dependent TSA expression compared to mice not receiving estrogen. The impact of estrogen was further confirmed by removing the estrogen receptor from cells in the thymus to prevent estrogen from acting on them, which restored AIRE and AIRE-dependent TSA expression to levels comparable to males.

The authors hypothesized that estrogen might be affecting AIRE expression by altering the areas of DNA that are available for transcription. DNA methylation can hide regions of DNA from transcription, which prevents the genes in those regions from being expressed as protein. They found that adding estrogen increased the number of methylated sites in cultured human thymic epithelial cells, while DHT did not significantly change the level of methylation. This suggests that estrogen may affect how much AIRE is expressed by causing an increase in DNA methylation, turning off the gene that encodes it. These results were tied into susceptibility to autoimmunity using experimental autoimmune thyroiditis (EAT), an autoimmunity model in mice. In this model, an adaptive immune response is inappropriately launched against thyroglobulin, a protein expressed in the thyroid. As female mice are more prone to develop EAT, the next step was to assess whether lower AIRE expression contributes to EAT susceptibility. Sure enough, there was more pronounced autoimmunity in males when AIRE expression was lowered to mimic levels in females by preventing the protein from being synthesized.

There are still many open questions about the interaction of genetics and environment in autoimmune susceptibility, but this study provides new insight into how sex contributes to that balance by proposing an estrogen-mediated process that could allow more autoreactive T cells to escape. These factors form such a complex interplay that at a recent dinner, a visiting speaker and prominent immunologist suggested that whoever untangles the mechanisms accounting for the sex disparity in autoimmunity deserves the Nobel Prize (and joked that he isn’t smart enough to tackle that challenge). Sex doesn’t just affect susceptibility to autoimmunity, but also its severity and immunological characteristics, making it a potential avenue for developing new preventatives and therapeutics.

Treating Depression with Drugs

By: Helen Beilinson

[I would like to note that Because Science does not endorse the recreational use of drugs, psychedelic or otherwise. Please see your doctor before taking any new medications or changing your current regimen. All of the drugs mentioned below are illegal in the United States and were tested in experimental settings designed to ensure the safety of the volunteers.]

Repurposing drugs designed for one therapy to treat another illness has been in practice for years. The anti-nausea drug thalidomide was found to be efficacious in treating leprosy and multiple myeloma, and many therapies originally designed to fight tumors are currently being studied for efficacy against autoimmunity. Although many of these studies fly under the media radar, an ever-growing body of drug-repurposing studies has raised a fair amount of controversy, because the drugs being recycled are illegal psychedelics whose physiological and psychological effects are highly understudied. That being said, no new effective and widely used treatments for depression have been developed since the 1970s, and these studies hold important information for treating this disorder.

Last week, a group at Imperial College London published a study in The Lancet Psychiatry exploring the effects of psilocybin on depression. Psilocybin is the active, hallucinogenic molecule found in many mushrooms, including magic mushrooms. Psilocybin is an alkaloid, a class of nitrogen-containing organic compounds found predominantly in plants; the class also includes morphine, a pain-relieving drug, and atropine, the poison found in deadly nightshade, which in small doses acts as a muscle relaxant, dilating pupils and increasing heart rate. Psilocybin is metabolized by the body to form psilocin, which stimulates serotonin receptors. Serotonin works in many ways in many places in the body. It is believed to be critical in regulating mood, appetite, and sleep. Serotonin is a neurotransmitter, a molecule used by neurons to communicate with each other, relaying information from one end of the body to another. Many currently available antidepressants, as well as treatments for other mood-related disorders, act to increase the amount of serotonin, which subsequently increases its signaling, lifting mood. Psilocin acts much like serotonin in that it triggers the same receptors as serotonin, stimulating the same chemical signaling that serotonin does. The biochemistry of psilocybin provides insight into its strong potential in treating depression.

Although there is evidence that magic mushrooms have been used for religious, spiritual, and recreational purposes since 9000 BCE, they, as well as other psychedelics, only entered the academic and medical field in the late 1950s. Backlash against the hippie culture of the ’60s and ’70s, however, halted research on hallucinogens. The last decade has brought back studies of these drugs and their effects on various human ailments, triggering molecular studies to elucidate the molecules responsible for hallucinations and changes in mood.

In the aforementioned study, twelve clinically depressed patients who were unresponsive to other treatments were given two doses of psilocybin. The first was a low dose; the second, administered a week later, was a higher dose. The patients were then followed for the next three months and their mean depression severity scores were noted. Before treatment, all patients had scores reflecting severe depression. After the second dose of psilocybin, scores on average dropped to the range of mild depression and stayed in that range three months after the treatments. Five of the twelve patients were in complete remission after three months, and all patients saw a notable improvement. The study also noted that all patients experienced side effects (including anxiety, confusion, and headaches), which in all cases were mild; most symptoms passed within two hours of treatment.

These results are very exciting for the field, as such success in an initial study, particularly for psychiatric disorders, is rare. Interestingly, this isn’t the first time psychedelic drugs have been used as a basis for depression therapy.

In March of 2015, researchers from Brazil published the first clinical trial exploring the potential therapeutic benefit of ayahuasca. Ayahuasca is a botanical hallucinogen used by indigenous groups of the Amazon for ritual and medicinal purposes. The ayahuasca beverage contains two ingredients. The first is a monoamine oxidase inhibitor (MAOI), which inhibits the breakdown of specific neurotransmitters, molecules used by neurons to communicate with each other, such that their effectiveness is increased. The second is dimethyltryptamine, or DMT, a psychedelic compound. Traditionally, the ayahuascan MAOI is from the bark of Banisteriopsis caapi, a jungle vine, and the DMT is from Psychotria viridis, a shrub common in the northwest of the Amazon. These plants are boiled together and concentrated over several hours. Interestingly, other MAOIs have been used for years in treating depression and Parkinson’s disease. For example, many MAOIs prevent serotonin degradation, increasing its signaling capacity. However, available MAOIs are not routinely used, as they carry a significant risk of interacting with over-the-counter medications and other prescription medicines and require strict dietary restrictions, as they can cause high blood pressure.

In the study, six volunteers diagnosed with recurrent major depression were given ayahuasca prepared by members of the Santo Daime community in Brazil. Patients’ moods were analyzed for two weeks prior to drug administration, as well as at multiple intervals afterward. Three weeks after drinking the ayahuasca, almost all patients had reduced depressive symptoms. Not all patients saw dramatic decreases, and in some, moods fluctuated over the course of the three weeks, from above their initial scores to below the scores seen at the three-week mark. Although some patients experienced the vomiting that is known to occur after consumption of ayahuasca, no other adverse side effects were noted. It is important to note that the sample size used in this study was very small, but the results are intriguing.

Magic mushrooms and ayahuasca have only recently entered the medical sphere as potential depression treatments. Ketamine, a club drug commonly referred to as “special K,” is an anesthetic used to treat chronic pain; it carries the potential for addiction and abuse and can cause severe confusion or hallucinations. It has long been known that ketamine acts as an antidepressant at a surprisingly rapid rate compared to other antidepression treatments. Unlike currently available treatments, which require several weeks or months to take effect, ketamine has been found to suppress depressive symptoms after a single dose, within hours of drug administration and lasting about a week. It is not approved by the FDA as a depression treatment, however, due to its side effects, which include blurred or double vision, jerky movements and muscle tremors, and vomiting, in addition to its addictiveness. The side effects of ketamine are dangerous, but its benefits have prompted its use as a last resort in patients with depression that has not responded to other treatments; it has been used to treat suicidal patients in emergency rooms, and ketamine clinics have begun to appear to administer the drug off-label.

As with most molecules, ketamine (chemically, (R,S)-ketamine, an N-methyl-D-aspartate (NMDA) receptor antagonist) is metabolized, or broken down into multiple components, by various enzymes once it has been ingested. The components into which ketamine is broken down have different effects on the organism, which partially explains the broad set of reactions one can experience after taking ketamine. NMDA receptors are found on nerve cells, and signaling through these receptors is important for synaptic plasticity: the ability of synapses (the structures through which neurons release and capture neurotransmitters, chemical signals such as serotonin, allowing neurons to communicate) to get stronger or weaker, changing the speed at which neurons can communicate. NMDA receptor antagonists, such as ketamine, block signaling through these receptors. This produces anesthetic effects (as pain is felt through neurons), as well as hallucinogenic effects, due to signaling that is offset from baseline. Because the signaling balance is complex, and ketamine can be broken down into so many different components, a study published early this month attempted to elucidate whether distinct chemicals in ketamine are responsible for suppressing depression versus inducing side effects. This study was done to understand whether the molecules involved in the former could be isolated for depression treatment without the negative side effects.

This paper showed that one of the molecules into which ketamine is degraded, (2S,6S;2R,6R)-hydroxynorketamine (HNK), is responsible for the drug’s antidepressant effects. After multiple biochemical assays to study the degradation patterns of ketamine, the group assayed the physical, psychological, and behavioral effects in mice of treatment with ketamine as a whole versus treatment with its degraded forms, such as HNK. They found that although ketamine treatment suppressed depressive symptoms, it additionally induced motor incoordination, hyperactive locomotor activity, and other side effects similar to those seen in humans. In comparison, HNK treatment alone similarly suppressed depressive symptoms, but it did not induce the noted side effects.

This study was conducted in mice, meaning the work must still undergo multiple rounds of testing before it reaches the potential for human treatment. However, the importance of the study lies in the fact that the antidepressant component of ketamine was separated from the rest of the drug. This molecule, in portions of the study I did not discuss, also elucidated previously unknown nuances of neuronal signaling, uncovering potential targets for depression treatment. The study of recreational drugs, particularly with human volunteers, is, indeed, controversial. However, research is a step-wise progression. In order to unravel the mechanism by which these drugs affect mood, and subsequently how we can take advantage of these pathways pharmacologically to treat depression, research must start at the top, demonstrating that these drugs truly have an effect on mood. From there, detailed biochemical work can chemically tease apart the drugs, eventually leading to the discovery of particular molecules that have beneficial effects without negative side effects, as was done with ketamine. By understanding how, biochemically, recreational drugs affect mood, both in positive and negative ways, scientists can develop novel drugs that target the former without inducing the latter.

Two Cells Enter, One Cell Leaves

By: Charles Frye

In addition to constructing a miniature model of the world inside your skull for you to inhabit, the brain is also tasked with generating sequences of actions in the real world – breathe in, breathe out; lather, rinse, repeat; stop, drop, and roll.  The brain performs these actions using the skeletal muscles, more commonly known as the muscles. When you feel the desire to take a step forward, reach for an object, or scratch an itch, the motor cortex must determine how to tug on these big bundles of springs in order to swing the bones to which they are attached in precisely the correct fashion to produce the desired movement. To gain an appreciation for just how hard this is, check out this compendium of robot fail gifs. Walking isn’t so easy after all!

These commands rely on a well-made interface between the nervous system and the muscles. Each muscle fiber needs to be matched to exactly one neuron, and all of the motor neurons need to be matched to at least one muscle fiber.  To complicate matters further, the neurons in question are born inside the spinal cord, while the muscle cells are born far away. In one final twist of complexity, large collections of individual muscle cells combine, assembling themselves, Voltron-style, into a single, more powerful unit called a muscle fiber, which has many nuclei and many mitochondria.

So how are we to ensure that our motor neurons and our muscle fibers are well matched?  One modest proposal is to generate far more neurons than you need, and any that don't manage to find a muscle fiber can just be killed. In order to ensure that this diktat is followed, nature adopts a strategy straight out of Saw II: motor neurons are, from the moment they are born, searching frantically for the antidote to a poison that will kill them when a timer runs out. They are, like Biggie Smalls, born ready to die.  The antidote is released by muscle fibers, but it is only released in small quantities and to directly-connected neurons. 

So, the motor neurons rush out from the spinal cord, making a mad dash for the nearest muscle fiber. Some cells find a partner and begin to form connections, also called neuromuscular junctions, but others are not so lucky. 

These unlucky cells are drawn to the “scent” of the antidote as it diffuses away from these immature connections – the table scraps, if you will. In a desperate attempt to survive, these cells become locked in a duel to the death with the original tenants -- whoever can make a stronger connection faster will choke the other one out. When all is said and done, only about half of all the motor neurons will survive to become functional. 

If you enjoyed this, check out more explanations of the foundational concepts of neuroscience at Charles' website!

Friends & Foes: Immunology, Neurology, and Schizophrenia

By: Helen Beilinson

At the end of the nineteenth century, Ilya Metchnikoff discovered phagocytes, a subset of cells that ingests and digests foreign particles and cells. This Nobel-winning finding spearheaded the study of immunobiology. The twentieth century brought innumerable basic biological discoveries in how the immune system works—from how it battles and eliminates unwanted invaders to what causes its functions to go awry and induce autoimmunity. The last decade has brought yet another layer into immunology research. Studies of immunology and of other organ systems have become integrated to understand how these systems work together and influence each other. Each system is not isolated from the rest of the body; they function in unison, often with overlapping functions, to ensure the health of the whole body.

Phagocytes play a crucial role in immune responses as they work to remove invading pathogens before they are able to harm the host. They are also vital in eliminating debris that is formed during the development and day-to-day maintenance of an organism. Multicellular organisms often have to eliminate unwanted cells, and do so using a type of programmed cell death, termed apoptosis. Swift removal of dying and dead cells, also called apoptotic cells, is necessary for the maintenance of the health and homeostasis of the organism. As opposed to living cells, apoptotic cells display “eat me” signals on their surface as a label for phagocytes to distinguish which cells they should be eliminating. Numerous “eat me” signals have been identified; they come in many forms, from changes to the sugars attached to surface proteins to the exposure of new proteins or lipids (also known as fats). The signals can be derived from the apoptotic cell itself or be attached to the cell after the induction of apoptosis.

One system that plays a notable role in tagging cells for elimination is called complement. The complement system is made up of numerous proteins with a multitude of functions that strengthen the immune response against pathogens. One of its roles is to deposit specific proteins on bacterial cells, marking them as foreign for enhanced uptake and elimination by phagocytes. Although complement has traditionally been thought of as combating infectious agents, there has been an increased appreciation for its role in the removal of apoptotic cells. Over the course of apoptosis, the composition of a cell’s outer membrane changes such that it gains the capacity to bind complement proteins, marking the cell for uptake by phagocytes.

For many years, it was believed that certain organs are sites of immune privilege—free of inflammation and immune cells, including phagocytes. Too much uncontrolled inflammation can cause permanent damage to the tissues surrounding the inflamed site. Immune privilege was believed to be an evolutionary adaptation that added an extra layer of protection to critical organs, such as the brain, to prevent organ failure. Recently, the converging studies of neurological and immunological research have brought to light the intricate relationship between these two organ systems, revealing that the brain is not, in fact, a site of immune privilege. Although neuroimmunological research is still in its adolescence, it has shown that the immune system plays a heavy role in the development, regulation, and maintenance of the nervous system, particularly of the brain.

Between birth and the onset of puberty, neurons undergo a process called synaptic pruning: the targeted elimination of the structures that allow neurons to communicate with each other using electrical and chemical signals. Targeted pruning and apoptosis eliminate imperfect neuronal connections and those unnecessary for an adult organism, allowing for the maturation of neuronal circuitry. In complete opposition to the idea that the brain is immune privileged, both of these processes rely on brain-specific phagocytes, called microglia, to eliminate the unwanted synapses and dying cells.

Apoptotic neurons are marked, for the most part, by the classic “eat me” signals traditionally associated with dying cells, mostly through processes driven by the cell itself. The “eat me” signals of synapses were a bit more surprising. A finding made nearly a decade ago showed that complement proteins are deposited on synapses during synaptic pruning, targeting them for elimination by microglia. This finding was unexpected, as it was among the first to show the importance of the complement system in neuronal development. It also emphasized the extent of the complex relationship between the nervous and immune systems. The cells of the immune system provide an invaluable service in the proper maturation of the brain; however, growing research in neuroimmunology has revealed an unfortunate side effect of having immune cells so heavily involved in the nervous system.

Scientific and anecdotal evidence has shown for centuries that the immune system loses its strength throughout aging, not only working less effectively, but also working in a less targeted manner, increasing the chance of immunopathology, or damage done to an organism by its own immune system. Immunopathology is caused when the immune cells of an organism begin to attack ‘self’ cells and molecules. Many aging-associated diseases are now believed to be driven, at least to some extent, by the loss of control of the immune system—including neurodegenerative diseases. For example, Alzheimer’s and Parkinson’s diseases have both been linked to increased and mistargeted neuroinflammation. Both have also been associated with elevation of complement proteins and inappropriate loss of mature synapses, as well as the loss of proper function of microglial cells, the phagocytic cells of the brain. Biomedical research has begun to explore how to target neuroinflammation in patients, in an attempt to target the source of the disease, as opposed to current medications, which predominantly work to alleviate symptoms.

Fascinatingly, psychiatric diseases, diagnosed in significantly younger patients than most neurodegenerative diseases, have been increasingly linked to increased neuroinflammation as well. Schizophrenia is a serious psychotic disorder affecting a patient’s cognition, behavior, and perception. Its age of onset is, on average, 18 in men and 25 in women, much younger than that of most aging-associated neurodegenerative diseases. Although schizophrenia is strongly heritable, the specific genes involved in the disease, and the mechanisms by which they act, have long been only speculative and correlative. In 2011, a Scandinavian study linked complement control-related genes to the heritability of schizophrenia. These genes are involved in regulating the level of complement activity. The study found that schizophrenic patients were more likely to carry variants of these genes that are unable to control the level of complement proteins, such that those patients would have increased levels of complement proteins in their brains. This research, however, was correlative, looking only at the genetics of the patients.

A paper published a few months ago, however, sought to determine whether this correlation, along with similar findings from other labs, had a biological basis. The authors looked at the presence of complement proteins in human patients with schizophrenia. They first confirmed other groups’ finding that increased complement activity correlates with schizophrenia. Further, they found that the genetic correlation also manifested as increased complement protein expression in the brains of schizophrenic patients, with human complement proteins localizing specifically to neuronal synapses and neurons. In mice, they found that the same complement proteins that were highly elevated in their human patients were responsible for synaptic pruning and neural development. Schizophrenia, as well as other psychiatric diseases, is incredibly difficult to replicate in mice, making it difficult to definitively prove that complement-mediated synaptic pruning and neuron elimination by microglia is the major mechanism driving the disease. However, the evidence for this has only been increasing.

Millions of years of evolution have driven our nervous and immune systems to be dependent on each other. Unfortunately, as regulated as these systems are, imperfections in their regulation can lead to many diseases. Neuroimmunology is a quickly expanding field working to explore the relationship between these two systems and to find new and innovative ways to treat not only neurodegenerative diseases, but also psychiatric diseases, both of which have been surprisingly linked to a loss of immune regulation.

A Friendship Threatening Our Honey Supply

By: Helen Beilinson

The Araña Caves in Valencia, Spain are famous for the rock art left by prehistoric people. Aside from more traditional images featuring human figures hunting with bows and knives, there is a portrait of a human gathering honey from a beehive high in a tree, surrounded by a swarm of honeybees. Estimated at 8,000 years old, it is the oldest known depiction of humans consuming honey. Millennia later, we are still eating honey, although our methods for obtaining it have become much simpler and safer. The last three decades, however, have been harsh for the apiculture (beekeeping) industry, with our honey supplies diminishing frightfully rapidly. The problem lies in honeybee populations being threatened; fortunately, research is underway to understand why honeybee deaths are so high and how they can be stopped.

Honey is a sweet, thick liquid food made by various species of bees foraging nectar from various species of flowers. Distinct kinds of honey, differing in taste, viscosity, and other properties, arise from different combinations of bee species feasting on different flowers. After collecting nectar from flowers, honeybees convert it to honey by regurgitating the nectar and allowing the water within it to evaporate while it is stored in the wax honeycombs that the bees build within their hives. Although it is incredibly sweet and delicious to humans and many other animals, its acidity, lack of water (thanks to the evaporation process by which it is made), and content of hydrogen peroxide mean that most microorganisms cannot live in honey. In fact, when the burial chambers of Egyptian royals were discovered, the pots of honey buried with them (to ensure a sweet transition into the afterlife) were entirely unspoiled, and just as delicious, after thousands of years.

Aside from being a delectable addition to tea, Greek yogurt, and Nutella sandwiches, honey has medicinal applications, thanks again to its biochemical properties. In 220 BCE during the Qin dynasty, a Chinese medicine book was published praising the ability of honey to cure indigestion. Folk healers in Mali use it topically to treat measles, and my dad used to put honey in my nose when I was a kid because according to Russian folk medicine, if you let honey flow through your nose to your mouth, you can get rid of a stuffy nose. I cannot speak to honey’s curative abilities in indigestion and against measles, but I can say that for at least a day after honey being put in my nose, I didn’t need to blow my nose even once.

Since the 1980s, the honeybee population has been drastically declining, nearly halving in that time. Not only does this pose a threat to the apiculture industry, it also means that any foods pollinated by bees are threatened. According to the United States Department of Agriculture, one in three foods directly or indirectly benefits from honeybee pollination. The loss of honeybees has been linked to various causes, particularly infection. Bees have robust and interesting immune systems, but bee populations are increasingly being infected with emerging pathogens that cause them to die more quickly. Colony Collapse Disorder (CCD) has also been connected to the loss of honeybees. In this mysterious phenomenon, worker bees, which physically collect pollen and nectar and make honey, leave their hives and queen bees behind, in essence rendering the hive nonfunctional. It is not known what exactly causes CCD, but many believe that when worker bees get infected, they leave their hives to die alone, reducing the risk of making their queen bee sick.

One of the biggest threats to the beekeeping community is the parasitic mite Varroa destructor. This mite reproduces in honeybee colonies, sucking the circulating fluids of adult bees for food. If a mite is infected with a microorganism that is present in its saliva, that microorganism can spread to the honeybee. Recently, a group of scientists published their discovery of the mechanism by which a virus takes advantage of this means of transmission.

Deformed wing virus (DMV) causes wing and abdominal deformities, as well as cognitive impairment, in its bee hosts. Infected bees not only have a drastically reduced lifespan, they are also thrown out of their hives in an attempt to prevent the spread of the disease to other individuals. Because of this innate mechanism bees have for eliminating sick individuals from their hives, DMV is not exceptionally good at spreading. In fact, only about one in ten colonies is affected by DMV, and those colonies that are infected tend to eliminate the virus quite readily. Unfortunately, DMV can replicate not only within honeybees but also quite readily within the mite V. destructor. The mite acts as a species in which the viral population can be concentrated, making viral spread much faster and more efficient. When mites are also infected with DMV, the frequency of the virus in colonies increases from 10 to 100 percent. This relationship is arguably the single greatest inducer of CCD. Although the relationship between DMV and mites was previously known, the details of how these two species work together to aid each other’s replication were not well understood.

It was known that DMV suppresses the immune system of honeybees. To understand how the virus affects the bees, the authors of the aforementioned study assessed how bee larvae respond to different levels of virus infection, without the mite present. They found that with increasing levels of virus, the larvae had lower melanization and encapsulation indexes. Melanization is the process by which melanin, the dark pigment in skin, is deposited, and encapsulation is the process by which larvae surround foreign objects, such as pathogens, to neutralize them. These processes are linked: when foreign objects appear in the larva, they are encapsulated, and the capsules are subsequently coated with melanin (melanization) and other toxic molecules to mark them for elimination. The genes responsible for these processes are immune response genes, controlled by a factor called NF-κB.

The authors found that in honeybees with more virus particles, there was a greater effect on the expression of their immune genes: the more infected the bees were, the less NF-κB they expressed. Less NF-κB means fewer immune genes being expressed, leading to weaker immune responses, such as melanization and encapsulation. The authors observed that these responses were indeed increasingly dampened with more viral particles.

From the observation that the dampening of the immune response was proportional to virus presence, the authors hypothesized that mites would replicate better on honeybees with more virus and worse on honeybees with less virus. To test this, the scientists first infected larvae with DMV. After some stages of development, they placed a single mite on each bee. Once the honeybees were able to grow independently, the scientists counted the mites on each bee. The number of mites on an individual bee correlated with the amount of virus in that bee: if a honeybee had lots of virus, it was covered in mites, while any lucky honeybee with little or no virus had practically no mites living on it.

The close relationship between the Varroa mite and DMV has been a major cause of CCD in honeybees around the world. Many current treatments and prevention techniques have been aimed at eliminating the mite from bee colonies. However, this study has shown that reducing the viral load in a bee population could directly reduce the mite burden as well. Studying the basic biology of this complex relationship has shown that the current methods of treating honeybees may not be the best way to tackle the problem, highlighting the importance of basic science. Without the virus suppressing the immune system of the bees, the mites are less able to feed on their honeybee hosts. Targeting DMV will not only help the honeybees combat the mites, but will also maintain the strength of their immune systems to fight off any other pathogens that enter their colonies, keeping honey a staple in many dishes around the world.

The Parasite Manipulation Hypothesis: How To Get Where You Need To Be

By: Helen Beilinson

In 1990, after students proposed a project asking whether frogs can hop in zero gravity, six Japanese tree frogs went to space. This question, along with many others, was answered in the "frog in space" (FRIS) experiment of the early 1990s. Two decades later, the mating calls of male Japanese tree frogs inspired an algorithm for creating efficient wireless networks. Recently, these frogs, and their mating calls, made the news again when a group from Korea showed that when these male frogs are infected with the fungus Batrachochytrium dendrobatidis, their mating calls become 'sexier'.

B. dendrobatidis infects various amphibian species, including the Japanese tree frog. The fungus causes a wide range of changes in the bodies of its hosts, including electrolyte and fluid imbalance, leading to heart failure, and rapid death of immune cells. While some amphibians are susceptible to B. dendrobatidis and will die when infected, others, including the Japanese tree frog, are not. The Japanese tree frog is tolerant of the infection: rather than destroying the pathogen (as occurs in resistant hosts), the tolerant frog lets the pathogen remain within it without suffering the significant damage seen in susceptible hosts. Interestingly, even though infected male Japanese tree frogs show no detectable changes other than very slight weight gain and lethargy, their mating calls change.

After collecting and analyzing mating calls from male Japanese tree frogs, the authors found that frogs infected with B. dendrobatidis had calls that made them more attractive to females. The scientists analyzed the calls for the number of pulses per note, the repetition rate of the pulses, the number of notes, and the duration of the calls. The infected males' calls were faster and longer, traits female frogs are known to find more attractive. The fungus and the tree frogs have evolved a relationship that presumably increases the fungus' ability to spread: the more females its host attracts with its sultrier call, the more new hosts the fungus can reach.

The manipulation of host behavior by fungi and other parasites to facilitate transmission to new hosts is not a new idea. The 'parasite manipulation hypothesis', first proposed in the early twentieth century, describes this phenomenon, in which parasites alter the behavior of their host to increase the probability that the host interacts with a new potential host. A well-known example of such a parasite is Toxoplasma gondii, a protozoan that infects a broad spectrum of warm-blooded animals.

T. gondii is a protozoan (a unicellular eukaryotic organism) whose life cycle has two components. The first is asexual, in which it replicates by fission; this can happen in almost any warm-blooded species. The second is sexual, in which two individual T. gondii 'mate' to form genetically distinct progeny; this can occur only in the intestinal cells of feline species. Famously, mice infected with T. gondii lose their innate aversion to the odor of cat urine, making them more likely to be caught, and eaten, by cats. This behavior change is thought to make it easier for T. gondii to spread to cats, its preferred host and the only one in which it can replicate sexually (sexual reproduction is preferred because it increases the genetic diversity of the species). Humans can also host T. gondii; in fact, it is one of the most common parasites in the Western world, with nearly half the population infected. Fortunately, the infection does not seem to cause disease (toxoplasmosis) unless the infected person is immunocompromised (as are infants, AIDS patients, and patients on chemotherapy). However, some intriguing correlational studies show that infected men no longer find the smell of cat urine unpleasant.

Humans are not a good intermediate host for T. gondii because we no longer have natural feline predators. Chimpanzees, however, have one known feline predator: the leopard. When scientists studied the influence of T. gondii infection on chimpanzee behavior, they found results similar to those noted for years in mice: infected chimps lost their innate aversion to leopard urine. Presumably, the protozoan induces this change to increase the probability that its chimpanzee host is preyed upon by leopards, so that the protozoan can replicate in the leopard. Interestingly, when the scientists compared the chimps' responses to another feline's urine, they found that the effect of T. gondii was specific to leopard urine, not lion urine. This result indicates that the loss of urine aversion induced by T. gondii in chimps is specific to felines living near their hosts. Similarly, in the study mentioned above, the infected men who did not find cat urine unpleasant still found tiger urine to have an irksome smell. The studies of T. gondii-infected chimps and humans were correlative, but they provide compelling evidence for the parasite manipulation hypothesis.

B. dendrobatidis and T. gondii are far from the only parasites able to manipulate the behavior of their hosts. Tapeworm infection in stickleback fish, native to cold saltwater regions, and malaria infection in female great tits, a common bird species in Europe, Central Asia, and North Africa, make these animals bolder in exploring new territories, leaving them more vulnerable to predation. In humans, parasite manipulation may be of little concern, as we are no longer prey to other animals, but it is a prominent force in the animal world. Not only does this effect point to the incredibly intricate relationships formed between host and parasite, it also shows the importance of the innate behaviors that keep animals away from potentially dangerous situations.

Grapefruits & Drug Metabolism

By: Rose Al Abosy

Every time I am prescribed a medication, I read through the information packet. The last medication I picked up at Walgreens had one line that stuck out: "Avoid eating grapefruit or drinking grapefruit juice...these can affect the amount of [the drug] in the blood." I was immediately reminded of a brunch I had one Saturday morning last year with a close friend. I wanted a glass of fresh-squeezed juice, either orange or grapefruit, and my friend told me a quick little anecdote about grapefruit juice. They never had it in the house because everyone in his family had high blood pressure and "you can't drink grapefruit juice on blood pressure meds."

I just took his word for it at the time, but reading the warning in the information packet for my own meds made me investigate. It turns out it comes down to drug metabolism. Cells lining the lumen of the gut absorb drugs and pass them to the liver, from which they enter the blood and circulate systemically. Each stage of this process, from absorption to re-circulation, can rely on specific transporters. The cells that absorb materials from the gut also carry enzymes that begin metabolizing the drug. As a result, the amount of drug that circulates is less than what you originally took orally. Doctors take this reduction into account when prescribing how much medication a patient should take: drug dosages are calculated based on how much active drug eventually ends up in the blood, factoring in how strongly the body degrades the drug along the way.

One important family of enzymes involved in this process is the cytochrome P450 family (CYPs), one member of which is CYP3A4. This enzyme is found in the liver and in the epithelial cells of the small intestine, where it metabolizes many commonly prescribed drugs, such as statins (which treat high cholesterol but can also be used for high blood pressure).

So what does this have to do with grapefruit? CYP3A4 is inhibited by molecules called furanocoumarins, such as bergamottin and dihydroxybergamottin; with CYP3A4 inhibited, the drug undergoes less processing in the gut and liver. It can take up to 72 hours to regain full enzyme activity after inhibition, so consuming furanocoumarins can impair drug processing for well over a day. Of course, these furanocoumarins just happen to be found in grapefruit. That means drinking grapefruit juice while taking a drug metabolized by CYP3A4 can lead to a higher dose of the drug in your blood than your doctor intended, which in some cases can be fatal. And remember those transporters that help move the drug into the cells of your body? They are also affected by grapefruit juice, which can block their activity or downregulate their expression, leading to a lower dose of the drug in your blood. Thus, grapefruit can greatly affect the final concentration of a medication in our blood: blocking the drug's natural breakdown in the gut raises the final concentration, while blocking its transporters lowers it.
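To put rough numbers on the first of those two effects, here is a minimal back-of-the-envelope sketch in Python. The survival fractions are entirely invented for illustration (this is not pharmacokinetics or clinical guidance); the point is only how much the circulating dose changes when gut CYP3A4 is inhibited.

```python
# Invented numbers, illustration only: how inhibiting gut CYP3A4
# changes the amount of an oral drug that reaches the blood.

def circulating_dose(oral_dose_mg, gut_survival, liver_survival):
    """Drug (mg) left after first-pass metabolism in the gut and liver."""
    return oral_dose_mg * gut_survival * liver_survival

# Suppose gut CYP3A4 normally destroys 70% of a 40 mg dose, and the
# liver then destroys half of what remains.
normal = circulating_dose(40, gut_survival=0.3, liver_survival=0.5)

# With gut CYP3A4 inhibited by furanocoumarins, suppose 90% of the
# dose now survives the gut instead.
with_grapefruit = circulating_dose(40, gut_survival=0.9, liver_survival=0.5)

print(normal, with_grapefruit)  # the same pill delivers three times the drug
```

The numbers are arbitrary, but the shape of the effect is real: the same prescription can deliver a very different effective dose once first-pass metabolism is blocked.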

Even abundant and naturally found chemicals like the furanocoumarins in grapefruit can modulate drug metabolism, causing a higher or lower concentration of the drug than expected. For your own health, verify whether your medications are affected by grapefruit consumption, and always talk to your doctor if you have any concerns.

The reason our bodies have these systems in place is to make sure nothing dangerous gets in. Because the drugs we take are foreign to our systems, our bodies automatically begin degrading them. Of course, these degradation processes take time and can only handle so many molecules at once, which means we can override them to ensure drug delivery. Drug development can also take advantage of these pathways, designing drugs that are inactive until the cells lining the gut cleave them into their active form.

Curing Coughs with Chocolate

By: Helen Beilinson 

Occasional coughing is completely normal. It's one mechanism our bodies use to remove foreign objects or accumulated secretions from our lungs and throat. However, when a cough becomes chronic (defined as lasting at least two months in adults or one month in children), it can be a sign of something more serious. Chronic coughing is not just annoying and uncomfortable; it can cause exhaustion by keeping you up at night, lightheadedness, and even rib fractures in severe cases. Chronic coughs can be caused by a variety of things; the most prominent sources are tobacco smoking, asthma, acid reflux (also known as gastroesophageal reflux disease, or GERD), and postnasal drip. However, various respiratory infections, damage from past infections or chronic inflammation (such as chronic bronchitis), and blood pressure drugs have also been linked to chronic coughing. Of course, the best way to treat a chronic cough is to attack it at its source. But sometimes patients just want to subdue the cough, find some relief, and get a good night's sleep.

The most common method of treating coughing is with cough suppressants, or antitussives, containing codeine, a mild, plant-derived opiate. Codeine is a fantastic cough suppressant, but large doses are unhealthy due to side effects that include drowsiness, vomiting, constipation, and addiction. Although this article may have started on a slightly sour note, I come bearing good news: a substance in cocoa and chocolate has been shown to suppress coughing more effectively than codeine, without the unwanted side effects. Which means if you have a persistent cough, eat chocolate!

In the 1970s, asthma was being treated with theophylline, a synthetic methylxanthine. The authors of a study published in 2004 wanted to explore whether a naturally occurring substance very close in structure to theophylline had the same antitussive properties. That substance was theobromine, the bitter alkaloid of the cocoa plant, also found in tea leaves and the kola nut.

To first test whether theobromine has cough-suppressive properties, the authors turned to a cough model in guinea pigs. To give the guinea pigs coughs, the scientists microinjected small amounts of citric acid into their larynxes. The larynx is the hollow tube that forms an air passage from the mouth to the lungs and holds the vocal cords. Citric acid treatment gives guinea pigs a cough that lasts about 24 hours. When the guinea pigs were treated with theobromine, their coughs were suppressed for four hours at a time.

Once they had this preliminary evidence that theobromine acts as an antitussive, the authors examined whether it could also inhibit induced coughs in human subjects. Volunteers were first given tablets containing theobromine, codeine, or a placebo. They then inhaled capsaicin, the active component of chili peppers, which induces coughing. To gauge the effectiveness of each treatment, the scientists measured the amount of capsaicin needed to induce coughing in volunteers who had taken each of the three types of pills; the more capsaicin needed, the more effective the medicine is as a cough suppressant. Surprisingly, the volunteers who took theobromine required about one-third more capsaicin to start coughing than the volunteers who took codeine, meaning that theobromine was more effective at suppressing the cough reflex than codeine.

Since this discovery in 2004, more reports and clinical trials have explored theobromine as an alternative cough suppressant to codeine. One study, presented at the British Thoracic Society's winter meeting in 2012, found that of 300 patients with persistent coughs at 13 hospitals who were given theobromine, 60% found great relief. The mechanism by which theobromine works has also since been shown: it blocks the sensory nerves that trigger the cough reflex.

The coldest weekend of this winter is upon us here on the east coast of the United States, and I have a feeling another wave of colds is coming for everyone. Thankfully, I have scientific evidence that hot cocoa and chocolate bars can keep me feeling better... or at least suppress my coughing.

Timing Decomposition with Microbes

By: Helen Beilinson

The last decade has seen many new discoveries that have revolutionized science. Arguably, one of the most influential of these advances has been the appreciation of the impact of microorganisms on human health. In particular, the important roles played by the bacteria, viruses, and other bugs that live in or on us (collectively called the microbiota) continue to be enumerated. Numerous elegant studies have characterized how a person's microbiota changes over the course of a regular year, during the progression of an infection, or even in the environment of space. Recently, a group from the University of California, San Diego explored the changes in bacterial composition during a different phase of life: corpse decomposition.

It might sound a bit gruesome, but the decay of once-living things is critical for the cycling of nutrients on Earth. The completion of this task requires an extensive arsenal of microbial and biochemical activity. Previous studies had shown that decomposition occurs in a somewhat predictable, stepwise fashion. It was also known that bacteria and other microorganisms are critical for this natural process to occur properly. However, the details of the process were not well understood. The authors of this study wanted to know whether the environment an organism inhabits dictates the microbial decomposers, whether these microbes come from the host or the environment, and how a decomposing organism changes the environment around it.

To answer these questions, the authors characterized the communities of microorganisms in decaying mice and humans in various environments. Using human cadavers might sound a bit grisly, but it's important. Mice are good models of various human diseases and are great tools for studying many aspects of biology and organismal biochemistry. However, humans and mice are still two different organisms, and human subjects were required to verify that the findings in mice matched what occurs in humans. The use of human cadavers also matters for the implications and potential applications of this study, as it may yield the newest tool in forensic science... but I'm getting ahead of myself.

To identify the families that make up the microbial communities in their samples, the authors used a technique called 16S rRNA sequencing. It takes advantage of the fact that certain genes are very similar in closely related bacteria and grow more different the more distantly related the bacteria are. By sequencing this gene across all the microbes in a sample, the authors could group them into families and compare how similar or different the microbial populations were between experimental groups.
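As a toy illustration of that grouping logic (not the authors' actual pipeline, and with invented sequences far shorter than real 16S reads), a sketch might score two sequences as belonging to the same family when they are more than 90% identical:

```python
# Toy sketch of 16S-style grouping: more similar sequences are assumed
# to come from more closely related microbes. Sequences are invented.

def identity(a, b):
    """Fraction of positions at which two equal-length sequences match."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

reads = {
    "microbe_A": "ACGTACGTACGTACGT",
    "microbe_B": "ACGTACGTACGAACGT",  # one mismatch vs. microbe_A
    "microbe_C": "TTGCAAGCTAGCTTGA",  # unrelated sequence
}

# Call two reads "same family" if they are more than 90% identical.
ref = reads["microbe_A"]
for name, seq in reads.items():
    print(name, round(identity(ref, seq), 2), identity(ref, seq) > 0.9)
```

Real analyses use full-length reads, sequence alignment, and curated reference databases, but the underlying idea is this kind of similarity threshold.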

An exciting preliminary observation was that the previously described stages of decomposition went hand in hand with a precise and dynamic succession of microbial communities. The microbes present on day 1 differ from those that emerge on day 4, which in turn differ from those on day 10. At every stage between day 1 and day 71, the microbial communities were unique. Perhaps surprisingly, when the authors changed the location where a mouse specimen was decomposing, there was no effect on the microbial decomposers: a community from a mouse decomposing in a desert on day 7 was almost identical to one from a mouse decomposing in a forest on day 7. Seasons also did not significantly affect the microbial populations. The same results were obtained with human cadavers.
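That day-by-day predictability is what makes a "microbial clock" possible: an unknown sample can be dated by finding the reference day whose community it most resembles. Here is a hedged sketch of the idea, with invented community profiles rather than the study's real data:

```python
# Toy "microbial clock" (illustration only; profiles are invented
# relative abundances of three microbial families).

reference = {  # day of decomposition -> community profile
    1:  (0.7, 0.2, 0.1),
    4:  (0.4, 0.4, 0.2),
    10: (0.1, 0.3, 0.6),
}

def distance(p, q):
    """Euclidean distance between two community profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

def estimate_day(sample):
    """Date a sample by the reference day whose profile it most resembles."""
    return min(reference, key=lambda day: distance(reference[day], sample))

print(estimate_day((0.15, 0.30, 0.55)))  # closest to the day-10 profile
```

The study's actual models are far richer, but nearest-profile matching captures why a predictable succession of microbes can serve as a timestamp.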

Based on the latter piece of information, one might expect that if the microbial decomposers are more or less the same in different environments, these decomposers would come from within the host itself. However, the authors found that the soil is their primary source, even when the soil type and environment differ. It's important to note that 16S rRNA sequencing is not the best technique for identifying specific microbes; it is mostly used to identify families of closely related microbes. Microbes within a family tend to have similar functions, meaning they can carry out similar reactions. Together, these data imply that because specific microbes carry out specific reactions, and because the microbes change predictably over the course of decomposition, the biochemical changes they carry out should be trackable in sequential steps during decomposition.

To further explore this question, the authors examined the biochemical reactions taking place in the abdomens of the decaying specimens. They found that, indeed, specific reactions can be detected at each step of decomposition. The biochemical reactions that take place correlate almost perfectly with the presence of the particular microbes that can carry them out. Interestingly, the authors also showed that the soil around a decomposing organism has the same post-death dating properties: the products of the biochemical reactions occurring in the organism seep into the surrounding soil, changing its chemical properties. The products produced, such as nitrate and ammonium, are used by plants to grow. Although it is a tad ghastly to think about, mammalian decomposition is important for the cycle of life on Earth: the products of decomposition allow plants to grow, which feed living mammals.

Although now a fairly routine technology, sequencing the microbiota of an organism has been an incredibly powerful tool in the biomedical sciences. This study has shown that another, perhaps surprising, field may benefit from it as well: forensic science. Although technologies exist to help forensic scientists determine when a person died, they are often not very precise. This paper's methods were able to estimate time of death with one-to-two-day accuracy based on the microbes found in a body and the biochemical reactions occurring within it. The microbes around us clearly have a constant influence on our lives: from our birth to our death, microorganisms make us who we are.

Sex > Food for Male C. elegans

By: Helen Beilinson

Caenorhabditis elegans, or simply C. elegans, are small nematodes (worms) that are among the most popular organisms used to study animal biology. There are many reasons for this: they replicate quickly, and they are easy and inexpensive to care for. But the most fascinating fact is that each individual worm has a set number of cells, each with a specific position and function. Each cell can be followed from its conception to its final location. In the 1980s, Sir John Sulston was one of the first scientists to track each of the worm's cells throughout development, creating one of the first maps of cell lineage. Since then, many researchers have followed up on Sulston's studies, leading to the belief that the C. elegans cell lineage map was complete. So it came as a big surprise to the field when a group in London discovered two new neuronal cells in male worms.

Like humans, C. elegans are sexually dimorphic, but they are not divided into males and females. Instead, the species consists of hermaphrodites, which can self-fertilize, and males, which can only fertilize the hermaphrodites. Males and hermaphrodites have different reproductive behaviors reflecting these reproductive patterns. Male worms need to learn how to optimally locate mating partners, which they accomplish through a process called sexual conditioning. It was previously known that males are attracted to hermaphrodites by sensing their pheromones or by directly sensing them with their tails. A recent study in the journal Nature identified two previously undescribed male-specific neurons that are necessary for sexual maturation.

This finding came as a surprise. Because they are self-perpetuating, hermaphroditic worms are easy to maintain and are consequently more widely studied than male worms. When studying males, most scientists have focused on physically obvious attributes, such as the worms' tails, not their brains. However, when these authors looked more closely at male worms' brains, which had been thought to contain 383 neurons, they found that they contained 385. They called the new neurons mystery cells of the male, or MCMs. To identify the function of the MCMs, the authors explored the other cells they interact with and found that the MCMs are part of a loop of neurons that regulates mating behavior. Specifically, the MCMs are necessary for a male-specific switch at puberty, after which males respond differently to chemical signals following sexual conditioning. Sexual conditioning causes males to suppress environmental cues that indicate the presence or absence of food in favor of sex. While hermaphrodites will always migrate toward areas with good food and away from dangerous areas without food, sexually conditioned males will leave areas with good food, or enter areas with bad food, if potential mates are in those locations. In effect, males prioritize sex over food.

To test this hypothesis, the authors set up the following experiment. C. elegans tend to avoid salt-rich environments, because high salt usually indicates food scarcity. The authors placed potential mates in a salt-rich location and placed either hermaphrodites or males outside it. They found that hermaphrodites, before and after sexual conditioning, always avoided the salty location. Males, however, avoided it before sexual conditioning but entered it afterward. When the authors removed the MCMs from sexually conditioned males, the males no longer entered the salt-rich area. The authors concluded that male C. elegans suppress their knowledge of the risk of food scarcity for the benefit of potentially mating.

This phenomenon makes sense. Hermaphrodites are capable of self-fertilization, so they do not need any other worms around to procreate; what matters is prioritizing their own health so they can be good parents. Males, on the other hand, absolutely need a partner to reproduce, and a male's health is not as critical to producing viable offspring as that of its partner, the hermaphrodite. Thus, males can risk putting sex before food.

When the authors traced the origin of the MCMs during C. elegans development, they found that the cells arise from glia. Glial cells reside next to neurons and provide structural and functional support to the neurons they are associated with. During sexual maturation, however, some of the male worms' glial cells begin expressing neuronal proteins and develop into MCMs. Hermaphrodites lack the glial precursors of the MCMs, so these cells are male-specific from the beginning of the worms' lives. This is the first case found in non-vertebrates of neurons developing from glial cells.

The discovery of these new neurons links developmental and anatomical differences between males and hermaphrodites to their sex-specific behaviors. It's fascinating that the behavioral patterns of these worms are quite literally hard-wired in their brains, rather than something they learn and apply to a situation. These findings are also a testament to how many discoveries are happenstance, often coming from re-observing something that's right under your nose.