No Gravity? Exhausted Immune System

By: Helen Beilinson

In 1970, astronaut Fred Haise fell ill thousands of miles from home aboard Apollo 13. The culprit? A bacterium that, on Earth, causes no symptoms in healthy individuals. The extended health screens taken before his flight indicated that Haise was in impeccable health, and his doctors had no reason to believe his immune system couldn’t battle the bacterium. But, as we have learned over the last fifty years, weak gravity, or microgravity, impairs the immune system. Although Haise survived and returned to Earth safely, his infection was one of the first indications that our immune systems function under different rules in space.

Strengthening weakened immune systems is not an impossible task on Earth; it has been done in contexts ranging from battling infections to battling cancer. Helping the body fight infections in microgravity, however, remains difficult, because it is still unclear how microgravity changes the way the cells of our immune system function. Last year, a study published in Life Sciences in Space Research provided a basis for helping future astronauts’ bodies function at full capacity by identifying a unique feature of immune cells in microgravity.

Dr. Jillian Bradley and her colleagues studied an immune cell type called T cells, which are fundamental in fighting infections. When an infection occurs in the body, other immune cells sense the intrusion and signal to the T cells that they need to activate to fight off the infection. T cells are also capable of killing our own cells that aren’t ideal—for example, if they are infected with a virus or if they are cancer cells.

Bradley compared T cells grown in normal gravity and in microgravity, produced by placing the T cells in a spinning chamber that drops the level of gravity the cells experience. T cells need to be turned on, or activated, to handle invading bacteria or other infectious agents. When Bradley started the activating signals in microgravity, the T cells became active more quickly than T cells receiving the same signals in normal gravity.

However, something surprising happened when she looked at the T cells a few days after activation. By day three, the activated T cells had lost their vigor. The longer they spent in microgravity, the more sluggishly they activated, as if they could no longer receive the signals telling them to turn on.

Slow T cell responses are potentially dangerous and could be the reason Haise got sick in space. T cells are activated a few days after an infection starts, giving bacteria time to multiply before these heavy-hitting cells start attacking them. If activated T cells are slow to respond, the bacteria have even more opportunity to expand and cause symptoms of being sick. Slow immune responses in space could put astronauts at risk for infections by bacteria that our immune systems handle rapidly on Earth, like the one that infected Haise.

When investigating why microgravity is so debilitating for T cells, Bradley found that T cells without gravity very quickly make a protein that inhibits their function. This protein is seen in T cells that enter a state of exhaustion, in which they have been told to activate so much that they lose the ability to respond quickly. Typically, the protein appears only after very extended periods of activation; in microgravity, however, the amount of time it takes to exhaust a T cell is significantly shorter.

It may not be all bad news, however, because this mark of a dampened immune response is infamous in another setting where coaxing T cells back into full swing is already being done—tumors.

In addition to their ability to eliminate infections, T cells are important in battling cancers. However, when tumors grow large enough, they exhaust T cells, causing them to express the same inhibitory protein that microgravity does. The newest wave of cancer treatments, termed cancer immunotherapies, is focused on eliminating or suppressing the function of these inhibitory proteins, making T cells more active and better able to eliminate cancer cells.

Could the same technology be applied in space? Scientists don’t know yet. But it could be one way for astronauts to use existing technology to keep from getting sick in space.

With eyes on missions to Mars and beyond, understanding how microgravity affects our bodies is critical to ensuring the health of those leaving our stratosphere, and defining the changes occurring in our immune cells gives physicians the information they need to design novel treatments for astronauts dealing with infections in space. Fascinatingly, the parallels between T cells in microgravity and in tumors could provide insights into keeping bugs at bay in space.

Warm and Wet vs. Cold and Icy Early Mars

By: Adeene Denton

Was early Mars warm and wet, or cold and icy?

For planetary scientists studying our sister planet, this question plagues our research, because finding an answer has direct ramifications for society’s near future in space exploration and our understanding of the evolution of habitable worlds elsewhere in the galaxy.  Mars today is a hyperarid, hypothermal desert with mean annual temperatures of ~218 Kelvin (-55°C/-67°F, also known as really, really cold), where liquid water cannot survive on the surface and a flimsy atmosphere continues to gradually disappear into the vacuum of space.  It’s a harsh place where most life, including us, can’t survive without the aid of advanced insulating technology.  However, the valley networks and basin lakes that crisscross the southern highlands suggest that at some point in the past the Martian climate permitted abundant liquid water to remain present on the surface over long timescales.  Mars may be a cold, windy desert now, but it probably looked very different 3 billion years ago when these features formed.

…and that’s what scientists agree on.

What we disagree on is basically everything else about Mars’s early history.  Such vigorous disagreement within the field comes from the lack of real information we can glean from the surface about what Mars may have been like 3 billion years ago, combined with the burning need to answer such a fundamental question.  When we, dreaming of our future as a multiplanetary species, look at Mars today we think: sure, Mars is inhospitable to life now, but could it have been habitable in the past? Could it be again?  Big questions like these are what drive NASA and other space agencies to shoot for manned missions to Mars in the 2030s and 2040s. And yet, there’s so much we don’t know about a planet that’s a year’s travel away.  We can’t afford to be unsure when our astronauts’ lives are on the line.

When planetary scientists try to decode the geologic and climate history of Mars, we have several basic tools at our disposal. First, the geologic record itself – we can look at Mars through our increasingly high-resolution images from decades of Martian spacecraft, observe the way different layers of material interact, and date the surface based on relative crater densities as well as speculate on formation mechanisms for various morphological features. Second, other datasets such as mineralogy from both spacecraft and the three rovers on the surface can tell us about the composition of these features. NASA’s rovers, the now-defunct Spirit as well as the still-operating Opportunity and Curiosity, are remotely operated science labs on wheels, doing their best to get information on the mineralogy of the rocks over which they drive. The minerals involved can indicate the presence or absence of water during their formation, and leave some hints about how long the water was there. And third, we can use our understanding of the physics of terrestrial planets and our knowledge of Earth to extrapolate to Mars using numerical modeling.  Climate modelers can incorporate atmospheric compositions, wind patterns, changing amounts of solar luminosity, and many other factors to reconstruct the evolution of the Martian atmosphere, while dynamicists look at the death of the Martian magnetic field and the growth of massive volcanic fields that occurred at the same time as valley networks formed on the surface.

The current problem faced by the field is that a) these methods are yielding a variety of information about the early Martian climate, and b) that information is being interpreted in a myriad of contradictory ways.  The two ends of the early Martian climate hypothesis spectrum, the “warm and wet” and “cold and icy” climate scenarios, hinge on vastly different interpretations of the geology, mineralogy, and climate models.  This is possible because the overland flow of water (one of the few things we know occurred) can be produced through both regional precipitation in a warm and wet scenario and meltwater off a large ice sheet in a cold and icy one.  The same mineralogies, including hydrated clays, can be formed through relatively short periods of water involvement and tell us less about their climatic source than we would like.  Additionally, the climate models span a broad parameter space and have many unknown variables – when we estimate the axial tilt of Mars 3 billion years ago, it is most definitely a guess. 

As such, in many cases we’re right back where we started. There was water on Mars – but where did it come from? How long was it there? What mean annual temperatures are needed to produce liquid water? Was there an ice sheet in the south, or an ocean in the north? We have so much data to look at, and somehow it’s not quite enough. 

The growth of two robust, yet contradictory hypotheses is the direct result of this conundrum.  The warm and wet climate hypothesis posits that early Mars had a sizeable insulating atmosphere and mean annual temperatures above 273 K, which in turn permits the consistent presence of regional precipitation across the southern highlands to create the valley networks. Some adherents of this scenario believe that a global ocean existed in the smooth, younger northern lowlands at multiple points in Mars’ history, despite the cold conditions.  While putative shorelines of an ocean have been mapped, the presence or absence of an ocean has yet to be definitively proved. And while Curiosity itself, our newest rover, may be traversing what was once a lake-filled crater, the amount of water that lake contained and the timescales over which that water was present remain unknown.

Meanwhile, climate scientists have recently predicted that the dimmer light of a younger sun, together with an atmosphere that may have been thinner than we would like, actually prevents warmer temperatures from persisting even on an early Mars.  In this cold and icy scenario, temperatures remain below freezing most of the time, with surface water locked up in an ice sheet covering the higher altitudes of the southern highlands.  Geologically brief periods of warming enable melting at the margins of this ice sheet, which can in turn also carve the broad patterns of valley networks and pool in the basin lakes.  In this scenario there can be no ocean, and the presence of liquid water is transient at best.

For those of us working in the field of Martian planetary science today, our job is to continue to test these models, canvassing the surface of Mars for clues and making our models as realistic as possible.  While we hope to solve the problem of early Martian climate history sooner rather than later, we also look forward to the possibilities that human exploratory missions can offer for better sample collection and observation of the planet with human eyes.  It’s an exciting frontier of planetary science, and we continue to boldly go!

 Warm and wet (Craddock and Howard 2002)...

...and cold and icy (Forget et al., 2013; Wordsworth et al., 2013; Head and Marchant 2014) climate scenarios for early Mars. Is either of these correct? Only time will tell…


Knowledge and use of prevention measures for chikungunya virus among visitors to Virgin Islands National Park

By: Helen Beilinson

Reprinted with permission from Open Science DB. Open Science DB is a centralized database of scientific research. It is led by graduate students from Northwestern University, and scientists from leading research universities/institutes contribute summaries of research papers to the database. Each summary is reviewed to ensure accuracy and accessibility.


Pretravel health research—not something people typically think about when they’re packing their flip flops and sunscreen for a sunny vacation in the tropics. Unfortunately, mosquitoes love the warm temperatures just as much as humans do. Without knowledge of proper mosquito bite prevention strategies, vacationers are put at risk of catching viruses carried by mosquitoes.
Chikungunya is caused by chikungunya virus, which is transmitted to humans by mosquitoes. Chikungunya outbreaks have been observed in countries in Africa, Asia, and Europe, and in the Indian and Pacific Oceans. But chikungunya spread to the Americas in 2013, and by the end of 2014, about 1 million suspected and confirmed cases had been reported across 43 countries in the Americas. Infected people start feeling symptoms about a week after being bitten by a chikungunya-carrying mosquito, the most common of which are a high fever and joint pain. Although most patients start feeling better within a week, many experience prolonged joint pain lasting up to several months. There is no vaccine against chikungunya, so the best way to prevent infection is making sure that people are educated about the virus and practice mosquito bite prevention. However, there is little information on how many travelers are aware of chikungunya and its prevention methods.
To answer this question, the Centers for Disease Control and Prevention (CDC) investigated what percentage of travelers to the U.S. Virgin Islands are aware of mosquito-borne diseases (chikungunya and other viruses) and mosquito bite prevention measures. Visitors to Virgin Islands National Park on St. John were asked to complete a questionnaire addressing knowledge of mosquito-spread diseases and prevention measures; 446 of 783 travelers completed the survey.
According to the survey results, more than half of respondents were unaware of chikungunya virus. Moreover, the majority reported that, over the previous three days, they had not worn insect repellent-treated clothing or long-sleeved shirts and pants, or used bed nets.
Overall, this survey showed that most visitors arrive in the U.S. Virgin Islands without adequate pre-travel research and knowledge about mosquito safety. As the number of international travelers increases each year, these survey data strongly emphasize the urgent need to develop creative ways to encourage pre-travel health research among travelers.

A New Science Podcast... By Yours Truly

Hi everyone!

Just wanted to pop in and promote a new science podcast that I have had the honor and privilege of creating with the Yale Journal of Biology and Medicine. It can be found on SoundCloud and iTunes. Please subscribe and listen in! Don't worry if you're not a scientist- we made this podcast specifically so we can talk about cool science that is happening without too much scientific jargon!

YJBM is a quarterly scientific journal edited by Yale medical, graduate, and professional students (me included!). Each of our issues is devoted to a focus topic that ranges from the aging brain to epigenetics to infectious diseases. With our podcast, we want audiences outside of those working in said focus topics to enjoy the research that is being done and appreciate the fields. We talk about the past, present, and future of the focus topic field. This month we're focused on the microbiome--we have cool facts (Did you know the first fecal transplant was performed in the 4th century? Or that people who live on farms are less likely to have allergies?), interviews with top clinicians and scientists, and random history facts that are great for cocktail hours. 

If you want to learn more or read about the articles in YJBM that we reference, check out the YJBM archives! They're open access so everyone can read them- you don't need special access!

The Evolution of Human Violence

By: Helen Beilinson

Why are humans violent?

This is a dense question that has been heavily debated for centuries. There are, very simply, two camps—nature and nurture. The former was popularized by the seventeenth-century philosopher Thomas Hobbes, who argued that the natural state of man is one of violence and self-preservation, that humans are naturally, inherently violent. On the other hand, Jean-Jacques Rousseau retorted, nearly a century later, that individuals are not born violent or peaceful, but instead are molded by their environments. Outside of philosophy, social scientists have tackled this distinction by focusing on the nurture side, analyzing how sex, age, race, and socio-economic status can influence an individual’s propensity towards violence. In a recent issue of Nature, Spanish scientist José María Gómez and colleagues took to answering this question from a different, unique angle—evolution.

Lethal conspecific violence, or violence occurring between members of the same species, is not unique to humans; it ranges from infanticide in primates and dolphins to horses and hamsters attacking their own. The prevalence of aggression throughout mammals, and its high heritability, raises the question of whether evolution has shaped human violence, with intraspecies violence serving as an adaptive strategy for survival. To address this question, the authors of this study used comparative methods from evolutionary biology to quantify the levels of violence in 1,024 mammalian species, including some that are now extinct. They assessed causes of death in over 4 million instances, defining the level of lethal violence in a species as the probability of dying from intraspecific violence rather than from other causes.
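
To make that metric concrete, here is a minimal sketch in Python using made-up death counts rather than the study's actual data; the numbers are chosen only so the result lands on the 0.30% mammalian average reported in the next paragraph:

    def lethal_violence_level(conspecific_deaths, total_deaths):
        """Probability that a recorded death was caused by a member of the same species."""
        return conspecific_deaths / total_deaths

    # Hypothetical counts for illustration only (not the paper's data).
    total_deaths_recorded = 4000   # all deaths tallied for an imaginary species
    deaths_by_own_species = 12     # killings by members of that same species

    level = lethal_violence_level(deaths_by_own_species, total_deaths_recorded)
    print(f"Lethal violence level: {level:.2%}")   # prints "Lethal violence level: 0.30%"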

Out of the analyzed species, nearly 40% had instances of conspecific lethal violence, with, on average, 0.30% of deaths within a population occurring at the hands of members of the same species. The authors also calculated the phylogenetic signal of lethal aggression to analyze its evolution in mammals. This signal is essentially a measure of how closely a species’ level of lethal violence tracks that of its close relatives. They found that lethal violence was entirely absent from some groups, like bats and whales, and more frequent in others, such as primates. Lethal violence occurred at similar levels in closely related species, speaking to its heritable aspect.

Within primates, one of the most notoriously violent groups of animals, there were differences in levels of violence, indicating violence’s evolutionary flexibility. While chimpanzees were highly violent, bonobos were tamer. This observation drove the authors to ask whether other factors could influence violence within a species. The authors subsequently scored the analyzed species for territoriality and social behavior, two traits that could drive aggression. They found that social, territorial animals had higher levels of lethal violence than solitary, non-territorial species.

 Studying the evolution of and phylogenetic signals for lethal violence in mammals as a whole provided a basis for studying the violence in humans. In addition to the animal species studied, 600 human populations were analyzed, ranging in time across human history, from the Paleolithic era (~2 million to 10,000 years ago) to the present.

We emerged from the primate line, with its long evolutionary history of higher-than-average levels of conspecific lethal violence, so it is unsurprising that at the origin of our species, lethal violence accounted for 2% of all human deaths, six times higher than the reconstructed mammalian value. Additionally, we Homo sapiens are both social and territorial, traits associated with stronger tendencies towards lethal violence in mammals.

Over human history, estimates of lethal violence vary greatly. Although Paleolithic estimates stayed close to 2% of deaths due to lethal aggression, estimates rose as high as 15-30% at various times throughout history, peaking about 3,000-5,000 years ago. Today, levels of lethal violence have decreased markedly. The authors argue that socio-political organization was a significant factor in these changes. They found a correlated rise in violence when humans moved from pre-societal organizations, including bands and tribes, to more complex organizations like chiefdoms and states. However, although high population densities drive lethal aggression in most mammalian species, in humans, population increases were consequences of successful pacification, leading to less violence.

Although the news is packed daily with stories of human-on-human violence, today less than 1 in 10,000 deaths (about 0.01%) is due to lethal violence. Based on the model put forth by Gómez and colleagues, this makes humans about 200 times less violent today than our evolutionary past would predict. Even a lethal violence rate of 0.01% is too high; there is a lot of social and political work to be done to lower this incidence as much as possible, ideally to zero. A violent past and phylogenetically inherited lethal violence have set up modern humans to be naturally violent creatures; nevertheless, it is clear that culture, be it social or political, can strongly influence and modulate levels of aggression in a population.

Jellyfish and Glowing Proteins

By: Erica Gorenberg

From the stars on the ceiling of a childhood bedroom to the key chains brought home as souvenirs, some of the most memorable trinkets from youth are those that glow in the dark. For molecular biologists, like for kids, the ability to harness fluorescence and make molecules reminiscent of those glow-in-the-dark toys was one of the most useful and exciting innovations in modern science.

Green fluorescent protein (GFP) was first observed in 1962 in a bioluminescent jellyfish, Aequorea victoria, as the molecule responsible for the animal’s ability to glow. The protein was isolated, and researchers demonstrated its ability to light up green under beams of specific colors of light. The use of GFP revolutionized biological research–in the time since GFP’s discovery and use, molecular biology has entered its golden age.

Osamu Shimomura, who first isolated the protein, Martin Chalfie, who first used GFP to track other proteins, and Roger Tsien, who discovered the properties that make GFP fluorescent and manipulated them to create a rainbow of fluorescent proteins, were awarded the 2008 Nobel Prize in Chemistry. Before their work with GFP, visualizing proteins within live cells was far more complex, and less dependable, as it relied on the insertion of fluorescent dyes into the cell where they could bind to the proteins of interest. These dyes were unreliable because they weren’t always specific to their proteins, and because the physiology of the cell had to be disrupted to add them.

Now, through the use of DNA modification technologies, the gene for GFP can be fused to genes of other proteins, allowing proteins to be produced with the fluorescent molecule attached. This is called "tagging" the protein with GFP. As Martin Chalfie demonstrated for the first time in 1994, GFP can be used to show specific proteins and cellular structures in living organisms, providing researchers with new insights into cellular function. For example, when the GFP gene is attached to the gene for a microtubule-associated protein, MAP2, one molecule of GFP is produced and fused with every molecule of MAP2. By viewing cells that produce MAP2, such as neurons as shown in Figure 1, researchers can learn how much of the protein is expressed under different conditions, simply by measuring the intensity of GFP’s fluorescence. The more MAP2 that’s produced, the brighter the GFP signal will be.
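
As a rough sketch of how that kind of quantification can work in analysis code (not the protocol of any particular study), the comparison boils down to averaging background-corrected pixel intensities in the GFP channel. The images below are random stand-ins rather than real micrographs:

    import numpy as np

    def mean_gfp_intensity(image, background=0.0):
        """Average background-subtracted pixel intensity of a GFP-channel image."""
        corrected = np.clip(image.astype(float) - background, 0.0, None)
        return corrected.mean()

    # Hypothetical arrays standing in for GFP-channel images of neurons grown
    # under two conditions; real data would come from a microscope.
    control_img = np.random.poisson(lam=50, size=(512, 512))
    treated_img = np.random.poisson(lam=80, size=(512, 512))

    ratio = mean_gfp_intensity(treated_img) / mean_gfp_intensity(control_img)
    print(f"Treated vs. control GFP signal: {ratio:.2f}x")  # brighter signal ~ more MAP2-GFP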

Figure 1. Hippocampal neuron stained with GFP for MAP2. Actin filaments are in red and DNA is in blue. The protein tags allow researchers to see MAP2 in the neuron’s long projections, or dendrites, and see that actin localizes in clumps along the dendrites at structures called spines. The DNA stain shows the cell’s nucleus. Image via Halpain lab at UCSD.

With a GFP tag, researchers can also see where the protein is made, where it is transported, and under what conditions this changes. Previously cells had to be fixed, killing them and freezing them in time to view their components. One of the most compelling aspects of fused fluorescent tags is their ability to be viewed in real time within living cells. Live imaging allows researchers to manipulate the cells and understand how different environmental changes can affect cellular components over time. By using different colors of fluorescent proteins to label different cellular proteins and looking at where and when they overlap with one another, researchers can look within the cell to understand how different proteins might interact.

Within each tagged cell is a constellation of glowing proteins, like the glow-in-the-dark stars stuck to a childhood ceiling. These cellular constellations move and interact in breathtaking ways that scientists are only beginning to understand, thanks to the discovery of GFP.

The Nose Knows How to Keep Staph at Bay

By: Helen Beilinson

A year and a half ago, I had temperatures over 100°F, could barely concentrate, and couldn’t sleep for more than two hours without waking up covered in sweat. After three days of somewhat tolerating this out of body experience, I realized that this wasn’t some kind of horrible cold and went to the student health care center. I had pyelonephritis, a bacterial infection of the kidney. I spent the next week in the hospital being pumped with antibiotics. Scarily, the antibiotics I was initially given did nothing to the invasive E. coli trying to take over my body. They were resistant to the drug that was supposed to kill them. It took two days of lab tests to realize this fact, after which I was immediately switched to another antibiotic, to which my new bacterial friends were not resistant. Thankfully, I was infection free and out of the hospital soon after.

I was fortunate in that a simple switch to another antimicrobial cleared my unwelcome squatters. In many cases, infectious bacteria are resistant to multiple drugs, termed multi-drug resistant organisms (MDRO), or, more comic book-ly, ‘super-bugs’. Bacteria evolve very quickly, modifying their genomes to select what works best for their environment or even obtaining bits of DNA from other bacteria that would aid them in survival. These bits of DNA carry genes that mutate at some of the fastest rates and are among the most frequently transferred from one bacterium to the next, and they often encode resistance to antimicrobials. The longer certain antimicrobials are used in the clinic, the more bacteria are able to acquire resistance genes, spreading resistance from bacterium to bacterium, from species to species. The need for new antimicrobials has been terrifyingly high for the last few decades because there has not been a new class of antibiotics discovered for the treatment of bacterial infection since 1987.
Many studies have been aimed at synthesizing new biomolecules in the lab in attempts to find a molecule never before encountered by bacteria that has antimicrobial potential. Other scientists have turned to a surprising place to find naturally occurring antimicrobials: bacteria.

Bacteria occupy what are called niches: locations with a particular bacterium’s perfect temperature, humidity, and food. Although there are millions of niches that bacteria occupy, from lava pits to our pets’ guts, bacteria still have to compete with other bacteria for their niches. This is particularly true of locations that are not rich in nutrients—the bacteria not only have to fight for space, but also for food. To fight against invaders, bacteria use many strategies, including producing their own antimicrobials. One species of bacteria will produce molecules that harm other bacteria, while the producing species is unaffected by those molecules. These microbially-derived antimicrobial molecules have been the subjects of many scientific treasure hunts. Many niches have been explored, from the ground to the ocean (both of which have been very fruitful locations), but one recently published paper from the University of Tübingen looked for antimicrobials much closer to home… in our noses.

Any location on the human body that is exposed to the environment hosts a community of microbial life. The nose is no exception. The nose, as well as the upper airway to which it is connected, is very nutrient poor, meaning that any bacteria that live there are in strong competition for space and food. Bacterial species from the human microbiota have been shown to produce bacteriocins, which are antimicrobial molecules. The authors of this study explored nasal commensals in attempts to identify antimicrobials capable of acting on Staphylococcus aureus, a bug that lives in the nose, respiratory tract, and on the skin. Although approximately 30% of the human population has S. aureus living in or on them, during times of immune suppression, S. aureus can cause skin and blood infections. Without antibiotic treatment, S. aureus bacteremia (bloodstream infection) has a fatality rate that ranges from 15 to 50 percent. Unfortunately, the prevalence of S. aureus in antimicrobial-filled hospitals has led the species to become highly resistant to many drugs. For example, MRSA is methicillin-resistant S. aureus, a highly difficult bacterium to treat. Many S. aureus strains are now classified as MDRO, as they are resistant to more drugs than just methicillin. Without novel antimicrobials to treat S. aureus infections, or any other MDRO infection, the fatality rates will only increase.

S. aureus belongs to the genus Staphylococcus and has many genus-mates, many of which also live in the human nose. The authors of this study used a previously described collection of nasal Staphylococcus isolates to screen the species for the ability to inhibit the growth of S. aureus. They identified one strain, called S. lugdunensis IVK28, which very strongly prevented the growth of S. aureus. In the hopes of identifying the specific molecule, or molecules, that S. lugdunensis uses as an antimicrobial against S. aureus, and which could subsequently be used as a clinical antimicrobial, the authors made a mutant library of S. lugdunensis. In essence, this means that they randomly modified the genome of S. lugdunensis in many different ways. If a mutation disrupts a gene critical to repressing S. aureus growth, then that mutant should be unable to prevent S. aureus growth. The authors therefore tested whether any of the mutants had lost the ability to repress S. aureus. When they found one mutant that lacked this antimicrobial property, they sequenced that bacterium’s genome to identify which gene was mutated. The mutation fell in a previously uncharacterized gene, whose antimicrobial product they named lugdunin. This was the first clue that lugdunin could be a novel antimicrobial.
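
The logic of that screen can be sketched in a few lines of code. The library below is hypothetical and wildly simplified (a real screen covers thousands of clones and identifies the disrupted gene by sequencing, not a lookup), but it shows the reasoning:

    # Hypothetical, simplified representation of the mutant screen: each randomly
    # mutagenized S. lugdunensis clone is scored for whether it still inhibits
    # S. aureus growth in a co-culture test.
    mutant_library = {
        "mutant_001": {"inhibits_s_aureus": True,  "disrupted_gene": "unrelated_gene_A"},
        "mutant_002": {"inhibits_s_aureus": False, "disrupted_gene": "lugdunin_gene"},
        "mutant_003": {"inhibits_s_aureus": True,  "disrupted_gene": "unrelated_gene_B"},
    }

    # Loss-of-function hits: clones that can no longer suppress S. aureus.
    hits = [name for name, result in mutant_library.items()
            if not result["inhibits_s_aureus"]]

    for name in hits:
        print(name, "-> candidate antimicrobial gene:", mutant_library[name]["disrupted_gene"])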

To test whether lugdunin has antimicrobial properties outside of the context of S. lugdunensis, the authors isolated lugdunin on its own and found that it was able to act independently against S. aureus. This is important because many molecules, particularly proteins, don’t function alone; they work in conjunction with other proteins or other biomolecules such as lipids or nucleic acids. An independently functioning molecule is much easier to work with, both in basic characterization studies and in the clinic. Lugdunin is a strong antimicrobial, with the ability to act against various strains of drug-resistant S. aureus and drug-resistant members of the Enterococcus genus, as well as many other bacteria. Notably, lugdunin did not damage human cells, a prerequisite for any drug intended for human use. And in a mouse model of S. aureus skin infection, lugdunin eliminated most or all of the infection, a first critical experiment demonstrating its potential as a clinically available antimicrobial.
The question of why some people are carriers of S. aureus, while others may go their entire lives without having one such bacterium ever live in or on them, has remained largely unanswered by scientists. To explore whether the presence of S. lugdunensis affects the presence of S. aureus, 187 patients’ nasal swabs were analyzed. A third of the patients carried S. aureus, a number close to the national average, whereas a tenth of the patients carried S. lugdunensis. The presence of S. lugdunensis, however, strongly decreased the likelihood that a patient also carried S. aureus. Although this couldn’t definitively be proved in humans, as human testing is strictly frowned upon by the higher powers that be in the scientific world, this finding was a critical step in understanding the relationship between the two Staphylococcus species.

To further test this antagonistic relationship, the authors asked whether lugdunin gives S. lugdunensis the capacity to outcompete S. aureus for nasal space. They found that this is, in fact, the case. When they plated both bacteria on agar plates (basically thick jello with tons of nutrients), S. lugdunensis always took over the plates within 72 hours. Even when the plate started as 90% aureus and 10% lugdunensis, three days later there wasn’t an S. aureus bacterium to be found. When a mutated S. lugdunensis lacking lugdunin was used instead, S. aureus was able to take over the plate with ease. These findings show that S. lugdunensis is not just a member of many people’s nasal microbiota; its ability to compete with S. aureus, thanks to its lugdunin molecule, can keep aureus at bay and prevent the infections it might otherwise cause.

The discovery of a potent antimicrobial that can act on drug-resistant bacteria is important. Of course, there is always the risk that bacteria will develop resistance to this new antimicrobial, but when the authors of this study tested whether they could ‘force’ S. aureus to become lugdunin-resistant, they found that the rate of resistance development was minimal. Whereas S. aureus developed resistance to other drugs after even just a few days, lugdunin resistance wasn’t observed, even after a month. Lugdunin is an exciting new antimicrobial that hopefully will be used to treat MDRO-infected individuals soon. Additionally, as S. lugdunensis is a known safe nasal commensal, a fascinating potential application of these findings is infection prevention, instead of treatment. Patients who are at high risk for S. aureus infection could be colonized with S. lugdunensis, putting the bacteria to work for us in exchange for the delicious mucus they feast on. The presence of this S. aureus fighter would lower the risk of S. aureus, even already drug-resistant S. aureus, taking up residence in the nasal cavity, lowering the chance of a life-threatening infection. Although it was a quiet field for a while, antimicrobial discovery has only been speeding up in the last few years. Exciting new discoveries are being published every few weeks, and our ability to treat infections, as well as to prevent them in the first place, is only getting better. Who knew that sharing boogers could save lives?


Early Grunter Gets The Worm

By: Helen Beilinson

Summer has finally settled upon Connecticut after a long winter. The undergrads are gone from campus, grad students are finding every excuse to go outside, outdoor seating at local breweries is constantly packed, and festivals are in full swing. Festivals in the United States are reaching far beyond the classic food and/or music type and into bizarre territories, ranging from roadkill cook-off fests to days devoted to cow cake (read: dung) throwing contests. However, one fascinating festival, biologically speaking, happens every April in Sopchoppy, Florida: the Annual Worm Gruntin’ Festival.

To win this annual worm grunting contest, all you need to do is charm the greatest number of earthworms out of the ground, and you can only do so by making the ground vibrate. Using either hand tools or power equipment (although traditionalists prefer the former), one can easily cause earthworms to exit the ground by the thousands, making them very easy to collect. The most popular, and some argue most effective, way of creating the vibrations is to drive a wooden stake into the ground and vibrate it by rubbing a flat iron slab across the top. Although it has been known since the 1800s that beating the ground forces earthworms above ground, worm grunting as it is done today with wooden stakes and iron slabs originated in the 60’s and 70’s, first as a personal means of getting worms for fishing, and then on a more industrial scale for selling worms. Over the years, the technique has been passed down and is still frequently used for obtaining live earthworms as bait. Animals, such as herring gulls and wood turtles, have also been observed using ground vibration to bring worms to the surface.

Despite its long history, the reason vibrating the ground charms worms out of the ground was largely a mystery before 2008. Earthworms live, well, in the earth. Leaving the ground poses two problems for worms. The first is aboveground predators: birds and other small animals feast on worms. Second, earthworms have to stay in a moist environment, as they breathe through their skin and must remain wet for oxygen to be exchanged (a reason why they come above ground at night or after a rainstorm—they can move faster on the cool, wet soil while still being able to breathe).

In his last scientific book, The Formation of Vegetable Mould through the Action of Worms, Charles Darwin, knowing of the early observations that ground vibrations stirred earthworms (although he noted that he was personally unable to replicate them), commented on the strangeness of the worms’ upward migration and offered a hypothesis that has, for the last century and a half, been the predominant explanation for the worms’ movement. Although we are mostly familiar with worms’ aboveground predators, they have underground predators as well – chiefly, moles. One of the reasons worms escape to the ground’s surface, presumably, is to escape the jaws of moles. When moles dig, the ground around them vibrates. Darwin proposed that the vibrations made by humans or other creatures mimic the vibrations caused by mole digging, inducing fear in the worms and making them surface. It has also been proposed that the vibrations mimic heavy rainfall, which also makes worms surface, supposedly to avoid drowning.

One hundred and twenty-seven years after the publication of Darwin’s Worms, Kenneth C. Catania, a scientist at Vanderbilt University, produced a study proving that Darwin’s hypothesis was, indeed, correct. Catania recorded the vibrations made by the traditional technique of driving a wooden stake into the ground and rubbing a flat iron slab lengthwise across the top of the stake. Vibrations were measured at intervals away from the stake and from multiple stake locations. The magnitude of the vibrations decreased farther from the stake, as was to be expected, and the intensity of the vibrations depended on soil composition, as different stake locations produced different vibration intensities. Accordingly, the number of worms that emerged also decreased with increasing distance from the stake. A year after Catania’s original paper, a Canadian research group published a study recapitulating the results and confirming that ground vibrations cause earthworms to emerge from the soil.

Catania was collecting these data in Sopchoppy, FL, home of the worm grunting contest. The only mole living in Sopchoppy is the eastern American mole. These moles eat the equivalent of their body weight every day, with a diet consisting predominantly of earthworms mixed with some vegetable matter. To test Darwin’s original hypothesis, Catania first confirmed the presence of these moles in the forest around Sopchoppy. By studying mole tunnels, Catania showed that the moles are abundant in the area and that there is a clear overlap between the earthworm and mole populations.

Then, studies were performed to test whether earthworms responded to simulated rain or to digging moles. For these studies, fifty earthworms were placed in a large container of soil and allowed to burrow into the ground. Once the worms entered the soil, Catania tracked their movements, specifically how often the worms would exit the soil. The number of exiting worms was negligible, essentially one or zero over the course of a few hours. First, Catania studied the effects of simulated rain by placing the soil boxes under a sprinkler system. The number of exiting worms remained unchanged. However, when a mole was introduced into the bottom of the container and allowed to burrow through it, nearly half of the worms, 24 on average, rapidly exited the soil. Catania even noted that many worms “crawl[ed] over the container walls”. Similar observations were made when the same experiment was conducted in a much larger area, where rain did not influence the number of worms exiting the soil, but digging moles drove them out quickly.

A human observer can hear moles digging underground from several feet away because they dig so powerfully, particularly if they are digging through a root-filled environment. To study these vibrations and sounds more carefully, Catania recorded the moles’ digging vibrations and found that their amplitude was highly similar to that of the worm grunters. One difference was that the worm grunter’s vibrations were much more consistent over time than the moles’. This is unsurprising, as moles do not burrow continuously, often changing directions to avoid roots or stopping to munch on a worm. To test the worms’ responses to the mole’s vibrations, Catania took the recording of the mole’s digging and modified the sound file to simulate how the digging would sound if the mole were approaching the worm, such that it progressively grew louder. This experiment was conducted in the aforementioned container of fifty earthworms. Fascinatingly, when the recording was played, an average of 16 worms exited the container, compared to the one or zero that would exit with a nonspecific recording playing, mimicking the previous results. This finding was surprising because merely the sound of the vibrations, without any physical perturbation of the soil itself, was a strong enough force to drive many worms to the surface. The difference in the number of worms leaving the soil (a difference of about ten worms between the real-mole and mole-sound experiments) is probably due to the worms being able to detect compression of the soil. Without the mole digging, there is no change in compression. With only one of the two (or more) “predator approaching” signals, some worms probably don’t feel enough fear to leave the soil.

This study was conducted with one species of earthworm, Diplocardia mississippiensis, and one species of mole, Scalopus aquaticus. It is unknown whether this phenomenon holds across other worm and mole species or whether the effect is specific to this predator-prey pair. However, detection of approaching predators is not uncommon in the animal kingdom. In the discussion section of the paper, Catania makes an interesting comparison between the worms’ vibration-avoiding mechanism and the ultrasound-avoiding mechanism used by flying insects preyed on by bats. As bats use ultrasound for echolocation, their prey have evolved the means to sense ultrasound and to flee from it to avoid predation.

I highly recommend checking out Dr. Catania’s other work. It is unique and fascinating, including a recent study exploring how electric eels attack predators that are not in the water.


Sexism is in the AIRE

By: Jenna Pappalardo

Autoimmune disorders comprise a varied set of diseases in which a person’s immune system abnormally targets its own normal body tissue rather than an invading pathogen or other threat. Development of these diseases is a result of genetic and environmental factors synergizing to induce immune responses that would normally be prevented by a series of checkpoints in the immune system. These diseases disproportionately affect one sex, with estimates suggesting 78% of those affected by autoimmune diseases are women, but why this divide exists is unknown. When considering the biological differences between males and females, one obvious answer would be hormones—but how would they affect these processes? A recent study provides a new clue into how hormones could affect a barrier to autoimmunity that, when functioning normally, results in the death of self-reactive cells.

To understand this new link to hormones, I’ll pause to explain the very cool process that should normally happen in the immune system to prevent autoimmunity. T cells are part of the adaptive immune system that mounts specific and robust responses against pathogens. These cells work in various ways, including facilitating immune responses and directly killing infected cells, which makes them great defenders against invaders but has severe consequences when they are aimed at their host’s tissues. Adaptive cells gain specificity for what they recognize through essentially random genetic recombination and mutations, so some cells develop to recognize a normal host protein and have the potential to attack it if they aren’t eliminated. T cells develop in a small organ by the heart called the thymus (hence their name), where they undergo stringent selection to delete any T cells that are targeted against normal tissue. But wait—I thought when I first learned of this process—how can exposure only to the thymus ensure that T cells reactive to the brain or eye or pancreas are also killed before they’re released into the body? The elegant solution to this is the autoimmune regulator, or AIRE. Similar to how differential gene expression allows the same genome to give rise to hundreds of kinds of cells, AIRE allows cells in the thymus to express proteins from other tissues throughout the body. AIRE induces the expression of proteins from all over the body (proteins that are normally expressed in the brain, kidney, liver, etc.) in the thymic stromal cells. T cells are then tested to see if they react against these proteins that represent other tissues, and they are killed if they recognize them too strongly. This process eliminates T cells that would be activated by, and subsequently mount an immune response against, a person’s own proteins (i.e., autoimmunity-triggering T cells).

The authors of the previously mentioned study decided to investigate how AIRE expression differs in males and females and found that females have lower expression of AIRE, as well as of the representative proteins (or tissue-specific antigens, TSAs) that it regulates. Recapitulating lowered estrogen levels in castrated male mice suggested that hormonal differences may be responsible. For a more direct assessment of how hormones affect AIRE expression, the authors introduced either estrogen or DHT (basically activated testosterone) to human thymic epithelial cells in culture. Adding estrogen caused a decrease in both AIRE expression and an AIRE-dependent TSA, while testosterone actually caused a slight increase in both. If cells were given both estrogen and DHT, the effect of estrogen won out, with an overall reduction in AIRE and its TSA. TSAs that do not depend on AIRE were also measured in this study and were not affected by estrogen. These results were mirrored in experiments where human thymus fragments were transplanted into mice: when estrogen was administered, there was lower AIRE and AIRE-dependent TSA expression compared to mice not receiving estrogen. The impact of estrogen was further confirmed by removing the estrogen receptor from cells in the thymus to prevent estrogen from acting on them, which restored AIRE and AIRE-dependent TSA expression to levels comparable to males.

The authors hypothesized that estrogen might be affecting AIRE expression by altering which areas of DNA are available for transcription. DNA methylation can hide regions of DNA from transcription, which prevents those regions from being made into the proteins they encode. They found that adding estrogen increased the number of methylated sites in cultured human thymic epithelial cells, while DHT did not significantly change the level of methylation. This suggests that estrogen may affect how much AIRE is expressed by causing an increase in DNA methylation, turning off the gene that encodes it. These results were tied to susceptibility to autoimmunity using experimental autoimmune thyroiditis (EAT), an autoimmunity model in mice. In this model, an adaptive immune response is inappropriately launched against thyroglobulin, a protein expressed in the thyroid. As female mice are more prone to develop EAT, the next step was assessing whether lower AIRE expression contributes to EAT susceptibility. Sure enough, autoimmunity was more pronounced in males when AIRE expression was lowered to mimic female levels by preventing the protein from being synthesized.

There are still many open questions about the interaction of genetics and environment in autoimmune susceptibility, but this study provides new insight into how sex contributes to that balance by proposing an estrogen-mediated process that could allow more autoreactive T cells to escape. These factors form such a complex interplay that, at a recent dinner, a visiting speaker and prominent immunologist suggested that whoever untangles the mechanisms accounting for the sex disparity in autoimmunity deserves the Nobel Prize (and joked that he isn’t smart enough to tackle that challenge). Sex doesn’t just affect susceptibility to autoimmunity, but also its severity and immunological characteristics, making it a potential avenue for developing new preventatives and therapeutics.

Treating Depression with Drugs

By: Helen Beilinson

[I would like to note that Because Science does not endorse the recreational use of drugs, psychedelic or otherwise. Please see your doctor before taking any new medications or changing your current regimen. All of the drugs mentioned below are illegal in the United States and were tested in experimental settings to ensure the safety of the volunteers.]

Repurposing drugs fashioned for one therapy to treat another illness has been in practice for years. The anti-nausea drug thalidomide was found to be efficacious in treating leprosy and multiple myeloma, and many therapies originally designed to fight tumors are currently being studied for efficacy against autoimmunity. Although many of these studies fly under the media radar, an ever-growing group of drug-repurposing studies has raised a fair amount of controversy because the drugs being recycled are illegal, psychedelic drugs. The drugs they explore are felonious, and their physiological and psychological effects are highly understudied. That being said, no new, effective, and widely used treatments for depression have been developed since the 1970’s, and these studies hold important information for treating this disorder.

Last week, a group at Imperial College London published a study in The Lancet Psychiatry exploring the effects of psilocybin on depression. Psilocybin is the active, hallucinogenic molecule found in many toadstools, including magic mushrooms. Psilocybin is an alkaloid, a class of nitrogen-containing organic compounds found predominantly in plants. This class also includes morphine, a pain-relieving drug, and atropine, the poison found in deadly nightshade, which in small doses acts as a muscle relaxant, dilating pupils and increasing heart rate. Psilocybin is metabolized by the body to form psilocin, which stimulates a serotonin receptor. Serotonin works in many ways in many places in the body; it is believed to be critical in mood regulation, appetite, and sleep. Serotonin is a neurotransmitter, a molecule used by neurons to communicate with each other, relaying information from one end of the body to another. Many currently available antidepressants, as well as treatments for other mood-related disorders, act to increase the amount of serotonin, which subsequently increases its signaling, lifting mood. Psilocin acts kind of like serotonin in that it triggers the same receptor, stimulating the same chemical signaling that serotonin does. The biochemistry of psilocybin provides insight into its strong potential for treating depression.

 Although there is evidence that magic mushrooms have been used for religious, spiritual, and recreational purposes since 9000 BCE, they, as well as other psychedelics, only entered the academic and medical field in the late 1950’s. Backlash against the hippie culture of the 60’s and 70’s, however, halted research of hallucinogens. The last decade has brought back studies of these drugs and their effects on various human ailments, triggering molecular studies to elucidate the molecules responsible for hallucinations and changes in mood. 

In the aforementioned study, twelve clinically depressed patients who were unresponsive to other treatments were given two doses of psilocybin. The first was a low dose and the second, administered a week later, was a higher dose. The patients were then followed for the next three months and their mean depression severity scores were recorded. Before treatment, all patients had scores reflecting severe depression. After the second dose of psilocybin, scores dropped, on average, into the range of mild depression and stayed in that range three months after the treatments. Five of the twelve patients were in complete remission after three months, and all patients saw a notable improvement. The study also noted that all patients experienced side effects (including anxiety, confusion, and headaches), which in all cases were mild, and most symptoms passed within two hours of treatment.

 These results are very exciting for the field, as such success in an initial study, particularly for psychiatric disorders, is rare. Interestingly, this isn’t the first time psychedelic drugs have been used as a basis for depression therapy.

In March of 2015, researchers from Brazil published the first clinical trial exploring the potential therapeutic benefit of ayahuasca. Ayahuasca is a botanical hallucinogen used by indigenous groups of the Amazon for ritual and medicinal purposes. The ayahuasca beverage contains two ingredients. The first is a monoamine oxidase inhibitor (MAOI), which inhibits the breakdown of specific neurotransmitters, molecules used by neurons to communicate with each other, such that their effectiveness is increased. The second is dimethyltryptamine, or DMT, a psychedelic compound. Traditionally, the ayahuascan MAOI comes from the bark of Banisteriopsis caapi, a jungle vine, and the DMT from Psychotria viridis, a shrub common in the northwest of the Amazon. These plants are boiled together and concentrated over several hours. Interestingly, other MAOIs have been used for years in treating depression and Parkinson’s disease. For example, many MAOIs prevent serotonin degradation, increasing its signaling capacity. However, available MAOIs are not routinely used, as they carry a significant risk of interacting with over-the-counter medications and other prescription medicines and require strict dietary restrictions, because they can cause high blood pressure.

In the study, six patient volunteers diagnosed with recurrent major depression were given ayahuasca prepared by members of the Santo Daime community in Brazil. Patients’ moods were analyzed for two weeks prior to drug administration, as well as at multiple intervals after drug administration. Three weeks after drinking the ayahuasca, almost all patients had reduced depressive symptoms. Not all patients saw dramatic decreases, and in some, moods fluctuated over the course of the three weeks, at times above their initial scores and at times below the scores seen at the study’s conclusion. Although some patients experienced the vomiting that is known to occur after consumption of ayahuasca, no other adverse side effects were noted. It is important to note that the sample size used in this study was very small, but the results are interesting.

Magic mushrooms and ayahuasca have only recently entered the medical sphere as potential depression treatments; ketamine has a longer history there. Ketamine, a club drug commonly referred to as “special K,” is an anesthetic used to treat chronic pain, and it carries the potential for addiction and abuse; it can also cause severe confusion or hallucinations. It has long been known that ketamine acts as an antidepressant at a surprisingly rapid rate compared to other antidepression treatments. Unlike currently available treatments, which require several weeks or months to take effect, ketamine has been found to suppress depressive symptoms after a single dose, within hours of administration and lasting about a week. It is not approved by the FDA as a depression treatment, however, due to its side effects, which include blurred or double vision, jerky movements such as muscle tremors, and vomiting, in addition to its addictiveness. These side effects are dangerous, but ketamine’s benefits have prompted its use as a last resort in patients whose depression has not responded to other treatments; it has been used to treat suicidal patients in emergency rooms, and ketamine clinics have begun to appear to administer the drug off-label.

Like most molecules, ketamine, known chemically as (R,S)-ketamine and pharmacologically as an N-methyl-D-aspartate (NMDA) receptor antagonist, is metabolized, or broken down into multiple components, by various enzymes once it enters the body. The components into which ketamine is broken down have different effects on the organism, which partially explains the broad set of reactions one can experience after taking ketamine. NMDA receptors are found on nerve cells, and signaling through these receptors is important for synaptic plasticity: the ability of synapses, the structures that release and capture neurotransmitters (chemical signals such as serotonin) and so allow neurons to communicate, to get stronger or weaker, changing the speed at which neurons can communicate. NMDA receptor antagonists such as ketamine block signaling through these receptors. This produces anesthetic effects (as pain is felt through neurons), as well as hallucinogenic effects, because signaling is pushed away from its baseline. Because this signaling balance is complex, and because ketamine can be broken down into so many different components, a study published early this month attempted to elucidate whether distinct chemicals derived from ketamine are responsible for depression suppression versus side effect induction. The goal was to understand whether the molecules involved in the former could be isolated for depression treatment without the negative side effects.

This paper showed that one of the molecules into which ketamine is degraded, specifically (2S,6S;2R,6R)-hydroxynorketamine (HNK), is responsible for the drug’s antidepressant effects. After multiple biochemical assays to study the degradation patterns of ketamine, the group assayed the physical, psychological, and behavioral effects of treatment with ketamine as a whole versus treatment with its degraded forms, such as HNK, in mice. They found that although ketamine treatment suppressed depressive symptoms, it additionally induced motor incoordination, hyperactive locomotion, and other side effects similar to those seen in humans. In comparison, HNK treatment alone similarly suppressed depressive symptoms, but it did not induce the noted side effects.

This study was conducted in mice, meaning it still needs to undergo multiple rounds of testing before it reaches the potential for human treatment. However, its importance lies in the fact that the antidepressant molecule of ketamine was separated from the whole of the drug. This molecule, in portions of the study I did not cover here, also revealed nuances of neuronal signaling that were previously unknown, uncovering potential treatment targets for depression. The study of recreational drugs, particularly with human volunteers, is, indeed, controversial. However, research is a step-wise progression. In order to unravel the mechanism by which these drugs affect mood, and subsequently how we can take advantage of these pathways pharmacologically to treat depression, research must start at the top, establishing that these drugs truly have an effect on mood. From there, detailed biochemical work can chemically tease apart the drugs, eventually leading to the discovery of particular molecules that have beneficial effects without negative side effects, as was done with ketamine. By understanding how, biochemically, recreational drugs affect mood, in both positive and negative ways, scientists can develop novel drugs that target the former without inducing the latter. 

Two Cells Enter, One Cell Leaves

By: Charles Frye

In addition to constructing a miniature model of the world inside your skull for you to inhabit, the brain is also tasked with generating sequences of actions in the real world – breathe in, breathe out; lather, rinse, repeat; stop, drop, and roll.  The brain performs these actions using the skeletal muscles, more commonly known simply as muscles. When you feel the desire to take a step forward, reach for an object, or scratch an itch, the motor cortex must determine how to tug on these big bundles of springs in order to swing the bones to which they are attached in precisely the correct fashion to produce the desired movement. To gain an appreciation for just how hard this is, check out this compendium of robot fail gifs. Walking isn’t so easy after all!

These commands rely on a well-made interface between the nervous system and the muscles. Each muscle fiber needs to be matched to exactly one neuron, and all of the motor neurons need to be matched to at least one muscle fiber.  To complicate matters further, the neurons in question are born inside the spinal cord, while the muscle cells are born far away. In one final twist of complexity, large collections of individual muscle cells combine together, assembling themselves, Voltron-style, into a single, more powerful unit called a muscle fiber, which has many nuclei and many mitochondria.

So how are we to ensure that our motor neurons and our muscle fibers are well matched?  One modest proposal is to generate far more neurons than you need; any that don't manage to find a muscle fiber can just be killed. In order to ensure that this diktat is followed, nature adopts a strategy straight out of Saw II: motor neurons are, from the moment they are born, searching frantically for the antidote to a poison that will kill them when a timer runs out. They are, like Biggie Smalls, born ready to die.  The antidote is released by muscle fibers, but it is only released in small quantities and only to directly-connected neurons. 

So, the motor neurons rush out from the spinal cord, making a mad dash for the nearest muscle fiber. Some cells find a partner and begin to form connections, also called neuromuscular junctions, but others are not so lucky. 

These unlucky cells are drawn to the “scent” of the antidote as it diffuses away from these immature connections – the table scraps, if you will. In a desperate attempt to survive, these cells become locked in a duel to the death with the original tenants -- whoever can make a stronger connection faster will choke the other one out. When all is said and done, only about half of all the motor neurons will survive to become functional. 




If you enjoyed this, check out more explanations of the foundational concepts of neuroscience at Charles' website!

Friends & Foes: Immunology, Neurology, and Schizophrenia

By: Helen Beilinson

At the end of the nineteenth century, Ilya Metchnikoff discovered phagocytes, a subset of cells that ingests and digests foreign particles and cells. This Nobel-winning finding spearheaded the study of immunobiology. The twentieth century brought innumerable basic biological discoveries in how the immune system works— from how it battles and eliminates unwanted invaders to what causes its functions to go awry and induce autoimmunity. The last decade has brought yet another layer into immunology research. Advances in studies of immunology and studies of other organ systems have become integrated to understand how these systems work together and influence each other. Each system is not isolated from the rest of the body; they function in unison, often with overlapping functions, to ensure the health of the whole body.

Phagocytes play a crucial role in immune responses as they work to remove invading pathogens before they are able to harm the host. They are also vital in eliminating debris that is formed during the development and day-to-day maintenance of an organism. Multicellular organisms often have to eliminate unwanted cells, and do so using a type of programmed cell death termed apoptosis. Swift removal of dying and dead cells, also called apoptotic cells, is necessary for maintaining the health and homeostasis of the organism. As opposed to living cells, apoptotic cells display “eat me” signals on their surface as a label that tells phagocytes which cells they should be eliminating. Numerous “eat me” signals have been identified, and they come in many forms, from changes to the sugars attached to surface proteins to the exposure of new proteins or lipids (also known as fats). The signals can be derived from the apoptotic cell itself or be attached to the cell after the induction of apoptosis.

One system that plays a notable role in tagging cells for elimination is called complement. The complement system is made up of numerous proteins with a multitude of functions that strengthen the immune response against pathogens. One of its roles is to deposit specific proteins on bacterial cells, marking them as foreign for enhanced uptake and elimination by phagocytes. Although complement has traditionally been thought to act in combating infectious agents, there has been an increased appreciation for its role in the removal of apoptotic cells. Throughout the course of apoptosis, the composition of a cell’s outer membrane changes, such that the cell gains the capacity to bind complement proteins, marking it for uptake by phagocytes.

For many years, it was believed that certain organs are sites of immune privilege: free of inflammation and immune cells, including phagocytes. Too much uncontrolled inflammation can cause permanent damage to the tissues surrounding the inflamed location. Immune privilege was believed to be an evolutionary adaptation that added an extra layer of protection to critical sites, such as the brain, to prevent organ failure. Recently, the converged studies of neurological and immunological research have brought to light the intricate relationship between these two organ systems, revealing that the brain is not, in fact, a site of immune privilege. Although neuroimmunological research is still in its adolescence, it has shown that the immune system plays a heavy role in the development, regulation, and maintenance of the nervous system, particularly of the brain.

Between birth and the onset of puberty, neurons undergo a process called synaptic pruning, the targeted elimination of the structures that allow neurons to communicate with each other using electrical and chemical signals. Targeted pruning and apoptosis eliminate imperfect neuronal connections and those unnecessary for an adult organism, allowing for the maturation of neuronal circuitry. In complete opposition to the idea that the brain is immune privileged, both of these processes rely on brain-specific phagocytes, called microglia, to eliminate the unwanted synapses and dying cells.

Apoptotic neurons are marked, for the most part, by the classic “eat me” signals traditionally associated with dying cells, mostly through processes driven by the dying cell itself. The “eat me” signals of synapses were a bit more surprising. A finding made nearly a decade ago showed that complement proteins are deposited on synapses during synaptic pruning, targeting them for elimination by the microglia. This finding was unexpected, as it was one of the first papers showing the importance of the complement system in neuronal development. It also emphasized the extent of the complex relationship between the nervous and immune systems. The cells of the immune system provide an invaluable service in the proper maturation of the brain; however, growing research in neuroimmunology has revealed an unfortunate side effect of having immune cells so heavily involved in the nervous system.

Scientific and anecdotal evidence has shown for centuries that the immune system loses its strength throughout aging, not only working less effectively, but also working in a less targeted manner, increasing the chance of immunopathology, or damage done to an organism by its own immune system. Immunopathology is caused when the immune cells of an organism begin to attack ‘self’ cells and molecules. Many aging-associated diseases are now believed to be driven, at least to some extent, by the loss of control of the immune system—including neurodegenerative diseases. For example, Alzheimer’s and Parkinson’s diseases have both been linked to increased and mistargeted neuroinflammation. Both have also been associated with elevation of complement proteins and inappropriate loss of mature synapses, as well as the loss of proper function of microglial cells, the phagocytic cells of the brain. Biomedical research has begun to explore how to target neuroinflammation in patients, in an attempt to target the source of the disease, as opposed to current medications, which predominantly work to alleviate symptoms.

Fascinatingly, psychiatric diseases, diagnosed in significantly younger patients than most neurodegenerative diseases, have been increasingly linked to increased neuroinflammation as well. Schizophrenia is a serious psychotic disorder affecting a patient’s cognition, behavior, and perception. Its age of onset is, on average, 18 in men and 25 in women, much younger than for most neurodegenerative diseases associated with aging. Although schizophrenia is strongly heritable, the specific genes involved in the disease, and the mechanisms by which they act, have long been only speculative and correlative. In 2011, a Scandinavian study linked complement control-related genes to the heritability of schizophrenia. These genes are involved in regulating the level of complement activity. The study found that schizophrenic patients were more likely to have variants of these genes that were unable to control the level of complement proteins, such that those patients would have increased levels of complement proteins in their brains. This research, however, was correlative, looking only at the genetics of the patients.

A paper published a few months ago, however, sought to find whether this correlation, and similar correlations found by other labs, had a biological basis. The authors looked at the presence of complement proteins in human patients with schizophrenia. They first confirmed other groups’ findings that there is a correlation between increased complement activity and schizophrenia. Further, they found that the genetic correlation also manifested as an increase in complement protein expression in the brains of schizophrenic patients. Human complement proteins localized specifically to neuronal synapses and neurons. In mice, they found that the same complement proteins that were highly elevated in their human patients were responsible for synaptic pruning and neural development. Schizophrenia, as well as other psychiatric diseases, is an incredibly difficult disease to replicate in mice, making it difficult to definitively prove that complement-mediated synaptic pruning and neuron elimination by microglia is the major mechanism driving disease. However, the evidence for this has only been increasing.

Millions of years of evolution have driven our nervous and immune systems to be dependent on each other. Unfortunately, as regulated as these systems are, imperfections in their regulation can lead to many diseases. Neuroimmunology is a quickly expanding field working to explore the relationship between these two systems and to find new and innovative ways to treat not only neurodegenerative diseases, but also psychiatric diseases, both of which have been surprisingly linked to a loss of immune regulation. 

A Friendship Threatening Our Honey Supply

By: Helen Beilinson

The Araña Caves in Valencia, Spain are famous for the rock art left by prehistoric people. Aside from more traditional images featuring human figures hunting with bows and knives, there is a portrait of a human gathering honey from a beehive high in a tree, surrounded by a swarm of honeybees. Estimated at 8,000 years old, it is the oldest known depiction of humans consuming honey. Millennia later, we are still eating honey, although our methods for obtaining it have become much simpler and safer. However, the last three decades have been harsh for the apiculture (beekeeping) industry, with our honey supplies diminishing frightfully rapidly. The problem lies in threatened honeybee populations, but fortunately, research is underway to understand why honeybee deaths are so high and how they can be stopped.

Honey is a sweet, thick liquid food made by various species of bees foraging nectar from various species of flowers. Distinct kinds of honey, differing in taste, viscosity, and other properties, arise from varying combinations of bee species feasting on different flowers. After collecting nectar from flowers, honeybees convert it to honey by regurgitating the nectar and allowing the liquid within it to evaporate while it is stored in the wax honeycombs that the bees build within their hives. Although it is incredibly sweet and delicious to humans and many other animals, its acidity, its lack of water (thanks to the evaporation process by which it is made), and the hydrogen peroxide it contains mean that most microorganisms cannot live in honey. In fact, when the burial chambers of Egyptian royals were discovered, the pots of honey buried with them (to ensure a sweet transition into the afterlife) were entirely unspoiled, and just as delicious, after thousands of years.

Aside from being a delectable addition to tea, Greek yogurt, and Nutella sandwiches, honey has medicinal applications, thanks again to its biochemical properties. In 220 BCE during the Qin dynasty, a Chinese medicine book was published praising the ability of honey to cure indigestion. Folk healers in Mali use it topically to treat measles, and my dad used to put honey in my nose when I was a kid because according to Russian folk medicine, if you let honey flow through your nose to your mouth, you can get rid of a stuffy nose. I cannot speak to honey’s curative abilities in indigestion and against measles, but I can say that for at least a day after honey being put in my nose, I didn’t need to blow my nose even once.

Since the 1980’s, the honeybee population has been drastically declining, nearly halving over that time. Not only does this pose a threat to the apiculture industry, it also means that foods pollinated by bees are threatened as well. According to the United States Department of Agriculture, one in three foods directly or indirectly benefits from honeybee pollination. The loss of honeybees has been linked to various causes, particularly infection. Bees have robust and fascinating immune systems, but bee populations are increasingly infected with newly emerging pathogens that cause them to die more quickly. Additionally, Colony Collapse Disorder (CCD) has also been connected to the loss of honeybees. This is a mysterious phenomenon in which worker bees, which physically collect pollen and nectar and make honey, leave their hives and queen bees behind. In essence, this renders the hive nonfunctional. It is not known what exactly causes CCD, but many believe that when worker bees get infected, they leave their hives to die on their own, reducing the risk of making their queen sick.

One of the biggest threats to the beekeeping community is the parasitic mite Varroa destructor. This mite reproduces in honeybee colonies and feeds on the circulating fluids of adult bees. If a mite is infected with a microorganism that is present in its saliva, that microorganism can spread to the honeybee as the mite feeds. Recently, a group of scientists published their discovery of the mechanism by which a virus takes advantage of this means of transmission.

Deformed wing virus (DWV) causes wing and abdominal deformities, as well as cognitive impairment, in its bee hosts. Infected bees not only have a drastically reduced lifespan, but they are also thrown out of their hives in an attempt to prevent the spread of the disease to other individuals. Because bees have this innate mechanism for eliminating sick individuals from their hives, DWV is not especially good at spreading on its own. In fact, only about one in ten colonies is affected by DWV, and infected colonies tend to eliminate the virus quite readily. Unfortunately, DWV can replicate not only within honeybees but also quite readily within the mite, V. destructor. The mite acts as a species in which the viral population can be concentrated, and it makes viral spread much faster and more efficient. When mites are also infected with DWV, the frequency of the virus in colonies increases from 10 to 100 percent. This relationship is arguably the single greatest inducer of CCD. Although the relationship between DWV and the mites was previously known, the details of how these two species work together to aid each other’s replication were not well understood.

It was known that DWV suppresses the immune system of honeybees. To understand how the virus affects the bees, the authors of the aforementioned study assessed how bee larvae respond to different levels of virus infection, without the presence of the mite. They found that with increasing levels of virus, the larvae had lower melanization and encapsulation indexes. Melanization is the process by which melanin, the dark pigment found in skin, is deposited, and encapsulation is the process by which larvae enclose foreign objects, such as pathogens, in order to neutralize them. These processes are linked: when foreign objects appear in a larva, they are encapsulated, and the capsules are subsequently coated with melanin (melanization) and other toxic molecules to mark them for elimination. The genes responsible for these processes are immune genes of the honeybee, controlled by a factor called NF-κB.

The authors found that in honeybees with more virus particles, there was a greater effect on the expression of their immune genes: the more infected the bees are, the less NF-κB they express. Less NF-κB means fewer immune genes being expressed, leading to weaker immune responses, such as melanization and encapsulation. The authors observed that these responses are indeed increasingly dampened with more viral particles.

From the observation that the dampening of the immune response was proportional to the amount of virus present, the authors hypothesized that mites would replicate better on honeybees with more virus and worse on honeybees with less virus. To test this, the scientists first infected larvae with DWV. After some stages of development, they placed a single mite on each bee. Once the honeybees had developed further, the authors assessed how many mites were on each bee. The number of mites on an individual bee correlated with the amount of virus in that bee: a honeybee carrying lots of virus was covered in mites, while any bee lucky enough to carry only a few viral particles, or none at all, had practically no mites living on it.

The close relationship between the Varroa mite and DWV has been a major cause of CCD in honeybees around the world. Many current treatments and prevention techniques against this disease have been targeted at eliminating the mite from bee colonies. However, this study has shown that reducing the viral load in a bee population could directly reduce the mite burden as well. Studying the basic biology of this complex relationship has shown that the current methods of treating honeybees may not be the best way to tackle the problem, highlighting the importance of basic science. Without the virus suppressing the immune system of the bees, the mites are less able to feed on their honeybee hosts. Not only would targeting DWV help the honeybees combat the mites, but it would also maintain the strength of their immune systems to fight off any other pathogens that enter their colonies, keeping honey a staple in many dishes around the world. 

The Parasite Manipulation Hypothesis: How To Get Where You Need To Be

By: Helen Beilinson

           In 1990, after students proposed a project asking whether frogs can hop in zero gravity, six Japanese tree frogs went to space. This question, as well as many others, was answered in the “frog in space” experiment (FRIS) of the early 1990’s. Two decades later, the mating calls of male Japanese tree frogs were the inspiration for an algorithm to create efficient wireless networks. Recently, these frogs, and their mating calls, have made it into the news again when a group from Korea showed that when these male frogs are infected with the fungus Batrachochytrium dendrobatidis, their mating calls become ‘sexier’.

            B. dendrobatidis infects various amphibian species, including the Japanese tree frog. This fungus causes a wide range of changes in the bodies of its hosts, including electrolyte and fluid imbalances that lead to heart failure, as well as the rapid death of immune cells. While some amphibians are susceptible to B. dendrobatidis and will die when infected, others, including the Japanese tree frog, are not. The Japanese tree frog is tolerant of the infection: rather than destroying the pathogen (as resistant hosts do), it allows the pathogen to remain in its body, but the pathogen does not cause significant damage (as it does in susceptible hosts). Interestingly, even though infected male Japanese tree frogs show no detectable changes other than very slight weight gain and lethargy, their mating calls change.

            After collecting and analyzing mating calls from male Japanese tree frogs, the authors found that those frogs infected with B. dendrobatidis had calls that made them more attractive to females. The scientists analyzed the calls for number of pulses per note, the repetition rate of the pulses, the number of notes, and the duration of the calls. The infected males’ calls were faster and longer, traits female frogs are known to find more attractive. The fungus and the tree frogs have evolved a relationship that presumably increases the fungus’ ability to spread, as the more females their host interacts with as a result of their more sultry call, the more new hosts the fungus can spread to.

            The manipulation of host behavior by fungi and other parasites in order to facilitate transmission to new hosts is not a new idea. The ‘parasite manipulation hypothesis’, first proposed in the early twentieth century, describes this phenomenon, in which parasites purposefully alter the behavior of their host to increase the probability that they interact with a new potential host. A well-known example of such a parasite is Toxoplasma gondii, a protozoan that infects a broad spectrum of warm-blooded animals.

            T. gondii is a protozoan (a unicellular eukaryotic organism) whose life cycle has two components. The first is asexual, in which it replicates by fission, and can happen in almost all warm-blooded species. The second is sexual, in which two individual T. gondii ‘mate’ to form genetically distinct progeny, and can only occur in the intestinal cells of feline species. Famously, mice infected with T. gondii no longer have an innate aversion to cat urine odor, making them more likely to be caught, and eaten, by cats. It is thought that this behavior change makes it easier for T. gondii to spread to cats, their preferred host and the only host in which they can sexually replicate (sexual reproduction is preferred because it increases the genetic diversity of the species). Humans can also be a host for T. gondii; in fact, it’s one of the most common parasites in the western world, with nearly half of the population being infected. Fortunately, the infection does not seem to induce disease (toxoplasmosis) unless the infected person is immunocompromised (like infants, AIDS patients, and patients on chemotherapy). However, there are some interesting correlation studies showing that infected men no longer find the smell of cat urine unpleasant.

            Humans are not a good intermediate host for T. gondii, because we no longer have natural feline predators. Chimpanzees, however, have one known feline predator… the leopard. When scientists studied the influence of T. gondii infection on chimpanzee behavior, they found results similar to those noted for years in mice: infected chimps lost their innate aversion to leopard urine. Presumably, the protozoan induces this change to increase the probability that its chimpanzee host is preyed upon by leopards, such that the protozoan can replicate in the leopard. Interestingly, when the scientists compared the chimps’ attraction or aversion to another feline’s urine against leopard urine, they found that T. gondii only affected the chimps’ attraction to leopard urine, not lion urine. This result indicates that the loss of aversion induced by T. gondii in chimps is specific to the urine of felines residing in proximity to their hosts. Additionally, in the previously mentioned study, infected men who no longer found cat urine unpleasant still found tiger urine to have an irksome smell. The studies done in T. gondii-infected chimps and humans were correlative, but they do provide stimulating evidence for the parasite manipulation hypothesis.

            B. dendrobatidis and T. gondii are nowhere near the only parasites able to manipulate the behavior of their hosts. A tapeworm infection in stickleback fish, native to cold saltwater regions, and malaria infection in female great tits, a common bird species in Europe, Central Asia, and North Africa, make these species bolder in exploring new territories, rendering them more susceptible to predation. In humans, parasite manipulation may not be of much concern, as we are no longer prey to other animals, but it is a widespread effect in the animal world. Not only does this effect point to the incredibly intricate relationships formed between host and parasite, but it also shows the importance of the innate behaviors that keep animals away from potentially dangerous situations.

Grapefruits & Drug Metabolism

By: Rose Al Abosy

Every time I am prescribed a medication, I read through the information packet. The last medication I picked up at Walgreens had one line that stuck out: “Avoid eating grapefruit or drinking grapefruit juice...these can affect the amount of [the drug] in the blood.” I was immediately reminded of brunch I had one Saturday morning last year with a close friend. I wanted a glass of fresh squeezed juice, either orange or grapefruit, and my friend told me a quick little anecdote about grapefruit juice. They never had it in the house because everyone in his family had high blood pressure and “you can’t drink grapefruit juice on blood pressure meds.”

I just took his word for it at the time, but reading it in the information packet for my own meds made me investigate. Turns out, it comes down to drug metabolism. Cells that line the lumen of the gut have to absorb drugs and pass them to the liver so they can enter the blood and be circulated systemically. This process can rely on specific transporters at each stage, from absorption to re-circulation. The cells that absorb materials from the gut have enzymes that begin the metabolism of the drug. As a result, the amount of drug that circulates is less than what you originally took orally. Doctors take this reduction into account when prescribing the amount of medication a patient should take: drug dosage is always based on how much active drug eventually ends up in the blood, which depends on how strongly the body degrades the drug along the way.

One important family of enzymes involved in this process is the cytochrome P450 family (CYPs), one of which is CYP3A4. This enzyme is found in the liver and in the epithelial cells of the small intestine, where it metabolizes many commonly prescribed drugs, including statins (used for high cholesterol) and certain blood pressure medications.

So what does this have to do with grapefruit? CYP3A4 is inhibited by molecules called furanocoumarins, like bergamottin and dihydroxybergamottin, and with CYP3A4 inhibited, a drug undergoes less processing in the gut and liver. It can take up to 72 hours to regain full enzyme activity after inhibition, so consuming furanocoumarins can inhibit drug processing for well over a day. Of course, these furanocoumarins just happen to be found in grapefruits. That means drinking grapefruit juice along with taking a drug that is metabolized by CYP3A4 can lead to a higher dose of the drug in your blood than intended by your doctor, which in some cases can be fatal. And remember those transporters that aid in moving the drug into the cells of your body? Those transporters are also affected by grapefruit juice, either blocked in their activity or downregulated in their expression, which can lead to a lower dose of the drug in your blood. Thus, grapefruit can greatly affect the final concentration of our medications in the blood: either by blocking a drug's natural breakdown in the gut, increasing its final concentration, or by blocking the drug's transporters, decreasing it.
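
To make that arithmetic concrete, here is a toy calculation. It is only a sketch: the dose and bioavailability numbers below are invented for illustration, not taken from any drug label or from the studies above. The idea is simply that the amount of drug reaching the blood is roughly the oral dose multiplied by the fraction that survives gut and liver metabolism, so anything that blocks CYP3A4 raises that fraction, and with it the exposure.

    # Toy illustration of how CYP3A4 inhibition changes drug exposure.
    # All numbers are hypothetical, chosen only to show the arithmetic.
    oral_dose_mg = 20.0               # amount swallowed
    fraction_reaching_blood = 0.25    # assumed bioavailability with CYP3A4 active
    fraction_with_grapefruit = 0.60   # assumed bioavailability with CYP3A4 inhibited

    print(oral_dose_mg * fraction_reaching_blood)    # ~5 mg in circulation: the exposure the doctor planned for
    print(oral_dose_mg * fraction_with_grapefruit)   # ~12 mg in circulation: well above the intended exposure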

Even abundant and naturally found chemicals like the furanocoumarins in grapefruit can modulate drug metabolism, causing a higher or lower concentration of the drug than expected. For your own health, verify whether your medications are affected by grapefruit consumption, and always talk to your doctor if you have any concerns.

The reason our bodies have these systems in place is to make sure nothing dangerous enters our bodies. Because the medical drugs we take are foreign to our systems, our bodies automatically begin the process of degrading them. Of course, these degrading processes take time and can only handle so many molecules at once, which means we can work around them to ensure drug delivery. Drug development can also take advantage of these pathways by designing drugs that are inactive until cells in the gut lining cleave them into their active form.

Curing Coughs with Chocolate

By: Helen Beilinson 

   Occasional coughing is completely normal. It’s one mechanism our bodies use to remove foreign objects or accumulated secretions from our lungs and throat. However, when a cough becomes chronic (defined as lasting at least two months in adults or one month in children), it could be the sign of something more serious. Chronic coughing is not just annoying and uncomfortable; it can cause exhaustion by keeping you up at night, lightheadedness, and even rib fractures in severe cases. Chronic coughs can be caused by a variety of things; the most prominent sources are tobacco smoking, asthma, acid reflux (also known as gastroesophageal reflux disease, or GERD), and postnasal drip. However, various respiratory infections, damage caused by past infections or chronic inflammation, such as chronic bronchitis, and blood pressure drugs have also been linked to chronic coughing. Of course, the best way to treat chronic coughing is to battle the cough at its source and eliminate whatever is causing it. But sometimes patients just want to subdue the cough, find some relief, and get a good night’s sleep.

   The most common method of treating coughing is by using cough suppressants, or antitussives, containing codeine, a mild, plant-derived opiate. Codeine is a fantastic cough suppressant, but using it in large doses is unhealthy due to the side effects, which include drowsiness, vomiting, constipation, and addiction. Although this article may have started on a slightly sour note, I come bearing good news: a substance in cocoa and chocolate has been shown to suppress coughing more effectively than codeine, without the unwanted side effects. Which means if you have a persistent cough—eat chocolate!

   In the 1970’s, asthma was being treated with theophylline, a synthetic methylxanthine. Authors of a study published in 2004 wanted to explore whether a naturally occurring substance very close in structure to theophylline had the same antitussive properties. This substance was theobromine: the bitter alkaloid of the cocoa plant, also found in tea leaves and the kola nut.

   To first test whether theobromine has cough-suppressing properties, the authors turned to a cough model in guinea pigs. To give the guinea pigs coughs, the scientists microinjected small amounts of citric acid into their larynxes. The larynx is the hollow tube that forms an air passage from the mouth to the lungs and holds the vocal cords. Citric acid treatment gives guinea pigs a cough that lasts about 24 hours. When the guinea pigs were treated with theobromine, their coughs were suppressed for 4 hours at a time.

   Once they had this preliminary evidence that theobromine acted as an antitussive, the authors examined whether it could also inhibit induced coughs in human subjects. The volunteers were first given tablets with theobromine, codeine, or a placebo. Then, they inhaled capsaicin, the active component of chili peppers, which induces coughing. To measure the effectiveness of each treatment, the scientists measured the amount of capsaicin needed to induce coughing in the volunteers who had taken each of the three types of pills. The more capsaicin needed to induce coughing, the more effective the medicine is as a cough suppressant. Surprisingly, the volunteers who took theobromine required about one-third more capsaicin to start coughing than the volunteers who took codeine, meaning that theobromine was more effective at suppressing the cough reflex than codeine.

   Since this discovery in 2004, there have been more reports and clinical trials exploring theobromine as an alternative cough suppressant to codeine. One study, presented at the British Thoracic Society’s winter meeting in 2012, found that of 300 patients with persistent coughs at 13 hospitals who were given theobromine, 60% experienced great relief. The mechanism by which theobromine works has also since been shown: it blocks the sensory nerves that trigger coughing, preventing them from inducing the cough reflex.

   The coldest weekend of this winter is upon us here on the east coast of the United States and I have a feeling another wave of colds is upon everyone. Thankfully, I have scientific evidence that hot cocoa and chocolate bars can keep me feeling better... or at least suppress my coughing.

Timing Decomposition with Microbes

By: Helen Beilinson

The last decade has seen many new discoveries that have revolutionized science. Arguably, one of the most influential of these advances has been the appreciation of the impact of microorganisms on human health. In particular, the important roles played by the bacteria, viruses, and other bugs (collectively called the microbiota) that live in or on us, continue to be enumerated. Numerous elegant studies have characterized changes that happen in a person’s microbiota throughout the course of a regular year, during the progression of an infection, or even how a space environment can affect the composition of bacteria in our gut. Recently, a group from the University of California, San Diego explored the changes in bacterial composition during a different phase of life—corpse decomposition.

It might sound a bit gruesome, but the decay of once living things is critical for the cycling of nutrients on earth. The completion of this task requires an extensive arsenal of microbial and biochemical activity. Previous studies had shown that decomposition occurs in a somewhat predictable, stepwise fashion. It was also known that bacteria and other microorganisms are critical for this natural process to occur properly. However, the details of this process were not well understood. The authors of this specific study wanted to know if the environment an organism inhabits dictates the microbial decomposers, whether these microbes come from the host or the environment, and how a decomposed organism changes the environment around it.

To answer these questions, the authors determined the composition of the communities of microorganisms in decaying mice and humans in various environments. Using human cadavers might sound a bit grisly, but it’s important. Mice are good models of various human diseases and are great tools to study many aspects of biology and organismal biochemistry. However, humans and mice are still two different organisms, and human subjects were required in this study to verify that the findings in mice matched what occurs in humans. The use of human cadavers is important for the implications and potential applications of this study, as it may be the newest tool in forensic science... but I’m getting ahead of myself.

To identify the families that make up the microbial communities in their samples, the authors utilized a precise technique called 16S rRNA sequencing. This technique takes advantage of the fact that certain genes, such as the one encoding the 16S ribosomal RNA, are very similar in closely related bacteria and grow more different the less related the bacteria are. By sequencing this gene across all the microbes in a sample, the authors were able to group them into families and compare how similar or different the microbial populations are between experimental groups.
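
As a rough illustration of what “comparing microbial populations” can mean in practice, the sketch below computes one commonly used community-comparison measure, Bray-Curtis dissimilarity, on two made-up family-count tables. This is not the authors’ actual pipeline or metric; the family names and counts are invented, and the point is only to show the kind of calculation such comparisons rest on.

    # Minimal sketch: comparing two microbial communities after 16S rRNA reads
    # have been grouped into bacterial families. Names and counts are invented.
    def bray_curtis(sample_a, sample_b):
        """Return 0 for identical family profiles, 1 for no shared families."""
        families = set(sample_a) | set(sample_b)
        shared = sum(min(sample_a.get(f, 0), sample_b.get(f, 0)) for f in families)
        total = sum(sample_a.values()) + sum(sample_b.values())
        return 1 - 2 * shared / total

    desert_day7 = {"Moraxellaceae": 300, "Rhizobiaceae": 120, "Bacillaceae": 80}
    forest_day7 = {"Moraxellaceae": 280, "Rhizobiaceae": 110, "Bacillaceae": 95}

    print(bray_curtis(desert_day7, forest_day7))  # small value, i.e. very similar communities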

An exciting preliminary piece of evidence these authors observed is that the previously described stages of decomposition went hand in hand with a very precise and dynamic microbial community. The microbes present on day 1 are different from those that emerge on day 4, which again are different from those on day 10. At every stage, between day 1 and day 71, the microbial communities were unique. Perhaps surprisingly, when the authors changed the location where their mouse specimens were decomposing, there was no effect on the microbial decomposers! A microbial community from a mouse decomposing in a desert environment on day 7 was almost identical to that from a mouse decomposing in a forest on day 7. Seasons also did not significantly impact the microbial populations. The same results were obtained using human cadavers.

Based on the latter piece of information, one might expect that if the microbial decomposers are more or less the same in different environments, these decomposers would come from within the host itself. However, the authors found that the soil is the primary source of the microbes, even when the soil type and environment differ. It’s important to note that 16S rRNA sequencing is not the best technique for identifying specific microbes; it is mostly used to identify families of closely related microbes. Families of microbes tend to have similar functions, meaning they can carry out similar reactions. Together, these data imply that because specific microbes carry out specific reactions, and because the microbes change predictably over the course of decomposition, the biochemical changes they carry out should be trackable in sequential steps during decomposition.

To further explore this question, the authors examined the biochemical reactions taking place in the abdomen of the decaying specimens. They found that, indeed, throughout the process of decomposition there are specific reactions that can be detected at each step. The biochemical reactions that take place correlate almost perfectly with the presence of the particular microbes that can carry them out. Interestingly, the authors were also able to show that the soil around a decomposing organism has similar post-death dating properties, in that the products of the biochemical reactions occurring in the organism seep into the soil surrounding it, changing its chemical properties. The products produced, such as nitrate and ammonium, are used by plants to grow. Although it is a tad ghastly to think about, mammalian decomposition is important for the cycle of life on Earth: the products of decomposition allow plants to grow, which feed living mammals.

Although now a fairly simple technology, sequencing the microbiota of an organism has been an incredibly powerful tool in the biomedical sciences. This study has shown that another, perhaps surprising, field that may benefit from this technology is forensic science. Although there are technologies in place to help forensic scientists identify when a person has died, these are often not very precise. This paper’s methods were able to estimate time of death to within one to two days, based on the microbes found in the body and the biochemical reactions occurring there. The microbes around us clearly have a constant influence on our lives: from our birth to our death, microorganisms make us who we are.
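
To give a sense of how such a “microbial clock” could work, here is a hypothetical sketch: a regression model is trained on samples whose days since death are known, using family-level abundances as features, and is then asked to estimate the age of a new sample. The families, numbers, and the choice of a random-forest model are assumptions made purely for illustration, not the study’s actual data or code.

    # Hypothetical "microbial clock": estimate days since death from the
    # relative abundances of bacterial families. All data below are invented.
    from sklearn.ensemble import RandomForestRegressor

    # Each row is one sample; columns are relative abundances of three made-up families.
    training_samples = [
        [0.70, 0.20, 0.10],   # collected on day 1
        [0.40, 0.45, 0.15],   # day 4
        [0.15, 0.55, 0.30],   # day 10
        [0.05, 0.35, 0.60],   # day 30
    ]
    days_since_death = [1, 4, 10, 30]

    clock = RandomForestRegressor(n_estimators=100, random_state=0)
    clock.fit(training_samples, days_since_death)

    new_sample = [[0.30, 0.50, 0.20]]   # abundances measured in an unknown sample
    print(clock.predict(new_sample))    # estimated days since death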

Sex > Food for Male C. elegans

By: Helen Beilinson

Caenorhabditis elegans, or simply C. elegans, are small nematodes (worms) that are one of the most popular organisms used to study animal biology. There are many reasons for this: they reproduce quickly and they are easy and inexpensive to care for. But the most fascinating fact is that each individual worm has a set number of cells, each with a specific position and function. Each of these cells can be followed from its conception to its final location. In the 1980’s, Sir John Sulston was one of the first scientists to track each of the worm’s cells throughout development to create one of the first maps of cell lineage. Since then, many researchers have continued to follow up on Sulston’s studies, leading to the belief that C. elegans’ cell lineage map had been completed. So, it came as a big surprise to the field when a group out of London discovered two new neuronal cells in the male worms.

Although C. elegans, like humans, come in two sexes, they are not divided into males and females. Instead, they are made up of hermaphrodites that can self-fertilize and males that can only fertilize the hermaphrodites. Males and hermaphrodites have different reproductive behaviors that reflect their reproductive patterns. Male worms need to learn how to optimally locate mating partners, which they accomplish through a process called sexual conditioning. It was previously known that males are attracted to hermaphrodites by sensing their pheromones or by directly sensing them with their tails. A recent study in the journal Nature identified two previously undescribed male-specific neurons that are necessary for sexual maturation.

This finding came as a surprise. Because they are self-perpetuating, hermaphroditic worms are easy to maintain and are consequently more widely studied than male worms. When studying males, most scientists have focused on physically obvious attributes, such as the worms’ tails, not their brains. However, when these authors looked more closely at male worms’ brains, which had been thought to contain 383 neurons, they found that they contained 385. They called these neurons the mystery cells of the male, or MCMs. To identify the function of the MCMs, the authors explored the other cells that the MCMs interact with. They found that these cells are a component in a loop of interactions between neurons that regulates behavior based on mating experience. Specifically, the MCMs are necessary for a male-specific switch at puberty, after which the worms respond to chemical signals differently following sexual conditioning. This sexual conditioning makes the males suppress cues from the environment that indicate the presence or absence of food in favor of sex. While hermaphrodites will always migrate towards areas with good food and away from dangerous areas without food, sexual conditioning causes males to leave areas with good food, or to move towards areas with bad food, if there are potential mates in those locations. In effect, males prioritize sex over food.

To test this hypothesis, the authors set up the following experiment. C. elegans tend to avoid salt-rich environments, because high salt is usually an indication of food scarcity. The authors placed potential mates into a salt-rich environment and placed either hermaphrodites or males outside of this salt-rich location. They found that hermaphrodites, before and after sexual conditioning, always avoided the salty location. The males, however, avoided the salty location before sexual conditioning but entered the salt-rich area after sexual conditioning. When the authors removed the MCMs from sexually conditioned males, those males no longer entered the salt-rich area. The authors conclude that male C. elegans suppress their knowledge of the risk of food scarcity for the benefit of potentially mating.

This phenomenon makes sense. Hermaphrodites are capable of self-fertilization, so in order to procreate, they don’t need any other worms around, and they need to prioritize their own health to be good parents. Males, on the other hand, absolutely need another partner to reproduce. The health of a male is not as critical to producing viable offspring as that of its partner, the hermaphrodite. Thus, males can risk putting sex before food.

When the authors tried to find the origin of the MCMs during C. elegans development, they found that these neurons arise from glial cells. Glia are cells that reside next to neurons and provide structural and functional support to the neurons with which they are associated. During sexual maturation, however, some of the male worms’ glial cells begin expressing neuronal proteins and develop into MCMs. Hermaphrodites do not have the glial precursors of the MCMs, so these cells are male-specific from the beginning of the male worms’ lives. This is the first known case in non-vertebrates of neurons developing from glial cells.

The discovery of these new neurons links developmental and anatomical differences between males and hermaphrodites to their sex-specific behaviors. It’s fascinating that the behavioral patterns of these worms are quite literally hard-wired in their brains, as opposed to something they have learned and apply to a situation. These findings are also a testament to how many new discoveries are happenstance and often come from re-observing something that’s right under your nose. 

Ovarian Transplants in Cancer Patients & Their Implications: Are we challenging nature too much?

By: Helen Beilinson

Cancer results from the accumulation of mutations within normal cells in our bodies that lead to their abnormal and uncontrolled growth. These cells replicate very rapidly and amass to form tumors. Two of the most common treatments for cancer, chemotherapy and radiation therapy, function to eliminate cancer cells. Chemotherapy works by delivering chemical substances (such as anti-cancer drugs) into the patient, where they act as cytotoxic agents, killing cells that divide very rapidly. Chemotherapy is unfortunately not specific for cancer cells, just dividing cells, so it kills healthy cells as well. An infamous side effect of chemotherapy is hair loss, or alopecia, which happens due to the cytotoxic effect of chemotherapy on hair follicles, a rapidly dividing cell type. Radiation therapy uses ionizing radiation, which takes advantage of high-energy rays, to kill cancer cells. Radiation can be targeted to a particular area within the body, as opposed to chemotherapy, which is predominantly administered into the blood stream. However, it still leads to the death of healthy, noncancerous cells surrounding the tumor location.

Although cancer is predominantly known as a disease associated with age, youth doesn’t protect entirely from cancer. Teenagers and young adults can still be diagnosed with a variety of cancers. An unfortunate side effect for female cancer survivors is that chemotherapy and radiation therapy often result in infertility, leaving these women unable to have children.

In an article this week in Human Reproduction, authors explored whether they could restore fertility in women who had survived cancer. To do this, before beginning cancer treatment, doctors removed whole or partial ovaries from patients who had decided they would want to have children after treatment. They then cryopreserved the ovaries, freezing them at subzero temperatures for long-term preservation. After successful treatment of the women’s cancer, the surgeons transplanted the cryopreserved ovarian tissue back into their patients.

The doctors found that of the 32 women who chose to try to become pregnant after transplantation, 10 (about 31%) were able to conceive one or more children. Doctors estimate that women who do not undergo ovary transplants have at most a 5% chance of conceiving after cancer treatments. An improvement of roughly 26 percentage points, about a sixfold increase, is not too shabby.

Though it involves two additional surgeries, the treatment is very safe and has provided a lot of comfort to women diagnosed with cancers early in life. As Claus Yding Andersen, a reproductive physiologist who was involved in this study, said in an interview with Capital Public Radio, “Obviously, the thing that interests [patients] the most is to survive the cancers, but immediately after that they would say they are really interested in maintaining their fertility.” This advancement in transplantation medicine has provided cancer survivors with the ability to continue their life plans after the jolting reality of cancer.

This study, however, raises many moral questions. In 1970, the average age at which a woman had her first child was 21.4. Nearly half a century later, the average age is 25.2. As women are having children later in their lives due to a variety of social, political, and economic reasons, many have considered freezing their eggs as a way to retain their fertility until a time when they are ready to have children. In light of the success of cryopreserving the ovaries of cancer patients, physicians have begun asking whether the procedure should be available to women who are not cancer patients, giving them the chance to preserve their ovaries until a time when they are ready to have children.

Due to how egg cells develop in women, which I will not go into detail about here, eggs that are released from the ovaries earlier in life tend to be healthier and carry fewer mutations than those released later in life. It is also believed that the uterus is not as affected by age as other parts of the reproductive system. Thus, in theory, if a woman freezes her eggs and undergoes in vitro fertilization later in life, she is likely to have a healthier pregnancy and a healthier child than if she conceived without in vitro fertilization at that later age. In theory, this idea is also applicable to transplanted cryopreserved ovaries. However, many other problems deserve consideration. For example, due to decreased estrogen production later in life, the mother may be less able to produce milk to feed her child.


Of course, advances in medicine are always incredible, especially when we are able to protect and conserve such a complex system as pregnancy. However, there may be unknown consequences to having children later in life, especially by more medically aided means. Evolution has shaped the way our bodies work for millions of years. Evolution functions not only to advance traits that are helpful to a particular organism, but also to maintain a balance between all the systems within that organism. Medicine has changed how our bodies interact with the outside world (with the treatment of infectious diseases) and how our bodies handle changes within us, such as cancer or pregnancy. Medicine is able to target specific problems or concerns of patients; however, targeting one problem can offset known and unknown factors, leading to unforeseen consequences. There is still a lot to be learned about how shifting the age at which organisms have children can affect the offspring. Although medical advances have been incredibly helpful in some situations, such as allowing women who have lost their fertility due to cancer treatment to mother children, they also raise moral and ethical questions that should be considered before such treatments are made available to everyone.