Ovarian Transplants in Cancer Patients & Their Implications: Are we challenging nature too much?

By: Helen Beilinson

Cancer results from the accumulation of mutations within normal cells in our bodies that cause abnormal and uncontrolled growth. These cells replicate very rapidly and amass to form tumors. Two of the most common treatments for cancer, chemotherapy and radiation therapy, work by eliminating cancer cells. Chemotherapy delivers chemical substances (such as anti-cancer drugs) into the patient, where they act as cytotoxic agents, killing cells that divide very rapidly. Chemotherapy is unfortunately not specific for cancer cells, just dividing cells, so it kills healthy cells as well. An infamous side effect of chemotherapy is hair loss, or alopecia, which happens due to the cytotoxic effect of chemotherapy on hair follicles, a rapidly dividing cell type. Radiation therapy uses ionizing radiation, which takes advantage of high-energy rays, to kill cancer cells. Radiation can be targeted to a particular area within the body, as opposed to chemotherapy, which is predominantly administered into the bloodstream. However, it still leads to the death of healthy, noncancerous cells surrounding the tumor site.

Although cancer is predominantly known as a disease associated with age, youth doesn’t protect entirely from cancer. Teenagers and young adults can still be diagnosed with a variety of cancers. An unfortunate side effect in female cancer survivors is that chemotherapy and radiation therapy can often result in infertility, rendering the women unable to have children.

In an article this week in Human Reproduction, authors explored whether they could restore fertility in women who had survived cancer. To do this, before beginning cancer treatment, doctors removed all or part of an ovary from patients who had decided they would want to have children after treatment. They then cryopreserved the ovarian tissue, freezing it at subzero temperatures for long-term preservation. After successful treatment of the women’s cancers, the surgeons transplanted the cryopreserved ovarian tissue back into their patients.

The doctors found that of the 32 women who chose to try to become pregnant after transplantation, 10 (31%) were able to conceive one or more children. Doctors estimate that women who do not undergo ovary transplants have at most a 5% chance of conceiving after cancer treatments. A jump of 26 percentage points, roughly a six-fold increase, is not too shabby.
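As a quick sanity check on those numbers, here is a back-of-the-envelope sketch using only the 10-of-32 and 5% figures quoted above:

```python
# Back-of-the-envelope check on the fertility figures quoted above:
# 10 of 32 women conceived after transplantation, vs. an estimated
# 5% maximum chance of conceiving without a transplant.
conceived, total = 10, 32
transplant_rate = conceived / total        # 0.3125, i.e. ~31%
baseline_rate = 0.05                       # estimated ceiling without transplant

point_gain = (transplant_rate - baseline_rate) * 100   # percentage points
fold_change = transplant_rate / baseline_rate          # relative increase

print(f"{transplant_rate:.0%} vs {baseline_rate:.0%}: "
      f"+{point_gain:.0f} percentage points, ~{fold_change:.2f}x the baseline")
```

The distinction matters: 31% minus 5% is a gain of about 26 percentage points, not "a 25% increase" — and relative to the 5% baseline it is more than a six-fold improvement.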

Though it involves two additional surgeries, the treatment is very safe and has provided a lot of comfort to women diagnosed with cancers early in life. As Claus Yding Andersen, a reproductive physiologist who was involved in this study, said in an interview with Capital Public Radio, “Obviously, the thing that interests [patients] the most is to survive the cancers, but immediately after that they would say they are really interested in maintaining their fertility.” This advancement in transplantation medicine has provided cancer survivors with the ability to continue their life plans after the jolting reality of cancer.

This study, however, raises many moral questions. In 1970, the average age at which a woman had her first child was 21.4. Nearly half a century later, the average age is 25.2. As women have children later in their lives for a variety of social, political, and economic reasons, many have considered freezing their eggs as a way to retain their fertility until a time when they are ready to have children. In light of the success of cryopreserving the ovaries of cancer patients, physicians have begun asking whether the procedure should be available to women who are not cancer patients, giving them the chance to preserve their ovaries until they are ready to have children.

Due to how egg cells develop in women, which I will not detail here, eggs released from ovaries earlier in life tend to be healthier and carry fewer mutations than those released later in life. It is also believed that the uterus is not as affected by age as other reproductive organs. Thus, in theory, if a woman freezes her eggs and undergoes in vitro fertilization later in life, she is very likely to have a healthy pregnancy and a healthier child than if she chose to have children without in vitro fertilization. In theory, this idea also applies to transplanted cryopreserved ovaries. However, many other problems deserve consideration. For example, due to decreased estrogen production later in life, the mother will be less able to produce milk to feed her child.


Of course, advances in medicine are always incredible—especially when we are able to protect and conserve a system as complex as pregnancy. However, there may be unknown consequences to having children later in life, especially by more medically aided means. Evolution has shaped the way our bodies work for millions of years. Evolution functions not only to advance traits that are helpful to a particular organism, but also to maintain a balance between all systems within that organism. Medicine has changed how our bodies interact with the outside world (with treatment of infectious diseases) and how our bodies handle changes within us, such as cancer or pregnancy. Medicine is able to target specific problems or concerns of patients; however, targeting one problem can offset known and unknown factors, leading to unforeseen consequences. There is still a lot to be learned about how shifting the age at which organisms have children can affect the offspring. Although medical advances have been incredibly helpful in some situations, such as allowing women who have lost their fertility due to cancer treatment to mother children, they also raise moral and ethical questions that should be considered before such treatments become available to everyone.


What makes mad cows mad: The story of prions

By: Erica Gorenberg

In the years following the first human case of “Mad Cow Disease” or variant Creutzfeldt-Jakob Disease (vCJD), world governments introduced measures meant to prevent the infection of additional animals and to protect humans from the continued spread of the disease.

Diseases like mad cow, or bovine spongiform encephalopathy (BSE) in cows, had been documented in animals and humans throughout the world long before the 2003 outbreak. In humans, Creutzfeldt-Jakob Disease (CJD) was first described in 1920, and Kuru, “the laughing sickness,” was discovered in the Fore tribe of Papua New Guinea in the 1950s. In sheep, the equivalent disease is known as scrapie, because as the disease progresses, the sheep scrape themselves against anything they can find, causing severe injuries. Although these diseases had been studied for many years, it wasn’t until the 1980s that researchers understood that, unlike previously known infectious agents like bacteria or viruses, these diseases were caused by an infectious protein, also known as a prion.

Each of the thousands of proteins made in a cell has a specific sequence of amino acid building blocks that dictates how it should fold in order to function properly. Most cells in the human body make PrP, the protein that can cause CJD and the other prion diseases mentioned above, but in unaffected individuals it is harmless. In contrast to the normal form of PrP, its prion variant, PrPSc, has a conformation that is harmful to the cell and that can take the normally-folded version and convert it into the infectious misfolded version. Basically, prions are the bad kids that your parents didn’t want you to hang out with in high school.

As if prion proteins weren’t already causing enough damage, PrPSc clumps together, inhibiting the normal function of the cells. When too much protein aggregation occurs, cells activate a suicide pathway, known as apoptosis, in order to prevent the spread of harmful materials by breaking them down. Under normal circumstances, misfolded proteins are broken down, but prion aggregates are resistant to the cell’s normal protein breakdown system, the proteasome. In prion disease, more and more cells die, leaving brain tissue porous and spongy and contributing to the symptoms of the disease. In humans, CJD and Kuru manifest first with dysfunctions in muscle coordination and progress rapidly to include personality changes, memory impairment, dementia and eventually death.

Prion proteins usually infect their hosts through consumption or contact with contaminated material. Only in rare cases do sporadic genetic mutations in the PrP gene lead to heritable prion disease. It seems BSE spread to cows because the protein in their feed came from scrapie-infected sheep. When humans consumed infected cow meat, the prion proteins of the cows were similar enough to pass along PrP misfolding to their human counterparts, creating vCJD.

The prion hypothesis has been controversial since its proposal, but more and more research stands to support the idea of infectious proteins. Now, researchers are able to purify PrP and study animal models that are helping them to understand how this protein may first spontaneously misfold to cause the diseases. Many questions remain unanswered, and a cure for prion disease has yet to be found, but research in this field continues. To understand prion disease, we must learn if PrP, even in its prion form, may exist to aid the cell in some way and whether diseases like Alzheimer’s or depression may be caused by prion-like proteins.



Phopivirus: Hepatitis A Virus' Newly Discovered Cousin

By: Helen Beilinson

As Zuri has written about before, seals may have been the original carriers of the bacterium that causes tuberculosis, Mycobacterium tuberculosis. Seals may get another bad hit to their name with a new paper showing that harbor seals have a unique virus called phopivirus, which is closely related to the human virus hepatitis A virus (HAV). The discovery of this virus provides a broader picture of the viral diversity within the HAV family.

HAV, as the name suggests, is the causative agent of hepatitis A. HAV infects hepatocytes and Kupffer cells, causing major liver inflammation, leading to nausea, jaundice, and dark, amber-colored urine. Hepatitis A has been evolving with humans for a while now, but it’s not clear when and from where the virus first emerged in the human population.

In a somewhat strange incident in 2011, many harbor seals (Phoca vitulina vitulina) were dying of pneumonia along the coast of New England. To identify the pathogen responsible, scientists isolated lungs, livers, spleens, and oral mucosa (by taking a cheek swab) from three harbor seals killed by this infectious agent. By isolating the genetic material from these samples, the scientists were able to identify viruses that had infected the tissues. Not only did the samples reveal that the seals’ pneumonia was probably caused by an influenza virus infection (the strain, H3N8, had been circulating around the United States at the time), but further studies showed that a previously unidentified virus was also present in these tissues. By comparing the genome of this new virus to known viruses, the authors were able to characterize it, phopivirus, as HAV’s closely related cousin.

The authors continued to characterize the virus in the hopes that it would provide more information about the natural history of HAV. There are a few ways in which such a closely related virus could have emerged in the seal population. First, there could have been a transfer of HAV from humans to seals. Second, there could have been a zoonotic transmission of phopivirus from seals to humans. Or, third, the two viruses may have evolved from a common ancestral virus that infected humans and seals independently; within each species, the virus then evolved into the HAV and phopivirus we see today.

Like all pathogens, viruses evolve in conjunction with their hosts in an attempt to survive within the host for as long as possible and to avoid the host’s attempts to eliminate them. When a virus has been in a host for many generations, it evolves to function well within that particular host. When the virus jumps to a new host, it must adapt fairly quickly, such that after the jump occurs, the virus undergoes a very rapid burst of adaptive evolution. To identify whether phopivirus shows evidence of such adaptive selection, the authors looked at two genes within the viral genome (VP1 and 3D) and applied an algorithm that compares the genes’ sequences in phopivirus to other viral strains to evaluate whether rapid diversification of the genes had occurred. In addition to this analysis, the authors looked at various tissue samples from 29 other harbor seals and six gray seals. In eleven of the harbor seals and one of the gray seals, the authors not only found phopivirus, but found that the VP1 and 3D genes were nearly identical, which fits the conservative nature of VP1 and 3D (meaning that these genes tend not to mutate much once the virus has established itself within a new host population). From these results, the authors conclude that seals are the natural host of phopivirus and that the virus has been evolving in seals for a long period of time rather than having been introduced recently, arguing against the hypotheses that HAV was transmitted from humans to seals or that phopivirus was zoonotically transmitted to humans.
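The paper’s actual selection analysis is more sophisticated (it compares rates of protein-changing versus silent substitutions), but the raw quantity underneath it — how similar two aligned gene sequences are across isolates — is easy to sketch. The sequences and variable names below are invented for illustration, not taken from the study:

```python
# Minimal sketch: pairwise nucleotide identity between two aligned gene
# sequences, the kind of raw comparison that underlies selection analyses.
# The sequences here are short, made-up stand-ins for real VP1 alignments.

def percent_identity(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned positions that match (sequences must be equal length)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to the same length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Hypothetical VP1 fragments from two seal isolates: nearly identical,
# consistent with a gene that changes little within an established host.
vp1_seal_1 = "ATGGCTACCGTTAAC"
vp1_seal_2 = "ATGGCTACCGTCAAC"   # one substitution

print(f"{percent_identity(vp1_seal_1, vp1_seal_2):.1%}")  # one mismatch in 15 positions
```

High identity across many isolates, as the authors found for VP1 and 3D, is what you would expect from a virus long established in its host rather than one fresh from a recent host jump.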

Phopivirus is the closest related virus to HAV ever discovered and, in fact, is the first known liver-infecting virus found outside of primates. The authors of this study argue that the two viruses have very similar origins, and that differences within their genomes are due to the evolutionary constraints on the viruses within their distinct hosts. Based on these data, however, transmission of either virus from human to seal (or vice versa) cannot be entirely ruled out. A previous hypothesis on the origin of HAV was that an Old World nonhuman primate zoonotically transmitted the virus to humans. This hypothesis also cannot be discarded in light of these new data. More studies will have to be done to further understand the history of the HAV family, but seals have proved that the HAV family is much more diverse than previously thought.

Roses with Different Names Don’t All Smell as Sweet

By: Helen Beilinson

Juliet famously declared her love for Romeo arguing that for her, no meaning lies behind his Montague name, and that she loves him regardless because “a rose by any other name would smell just as sweet”. Although the backstory of this phrase is quite romantic, the science behind it somewhat dims the romance. In fact, roses with different (scientific) names don’t all smell just as sweet. Of note, you might have noticed that most roses you can buy have essentially no scent. This phenomenon has been mostly attributed to the fact that rose cultivars have been selected for their color and longevity once cut.

Color, longevity, and scent are all traits controlled by genes. Such genes can either act independently (such that their inheritance is unconnected to the inheritance of other traits) or be linked (such that particular traits are inherited together). It’s still unknown why these three traits have been selected for in such a pattern, but the authors of a recent Science report have made strides in understanding the gene that controls the scent trait, in the hopes that it can be genetically returned to the under-scented cultivars sold in florists’ shops.

Just as humans sometimes use flowers to attract their mates, flowers use scents to attract pollinators, which are necessary intermediates for plant sexual reproduction (for obvious mobility reasons). There is an entire field devoted to studying flower (and other) scents, called aroma chemistry. Aroma chemistry is incredibly complex. Two important take-away points from the field, however, are that, first, most floral compounds are aromatic, meaning they contain planar ring components, and volatile, meaning they easily evaporate, because they need to get from the plant to whomever is smelling them. Second, every floral scent is unique to its particular flower species and is made up of multiple such aromatic compounds. Most rose scents are different mixtures of two kinds of aromatic compounds: monoterpene alcohols and 2-phenylethanol. The biochemical pathway of 2-phenylethanol synthesis is known, but that of monoterpene synthesis in plants was not. Reinstituting scents in roses that have lost them requires knowing which genes need to be replaced, which is why this group’s goal was to identify the responsible enzyme (or enzymes) in roses.

To find the enzyme, the authors compared the genes expressed in two different rose cultivars: Papa Meilland (PM), which produces a heavy rose scent and thus the largest amount of aromatic compounds, and Rouge Meilland (RM), which produces minimal scent. They wanted to identify a candidate gene that was highly expressed in the heavily scented rose (PM) and minimally expressed in the non-scented rose. They identified a candidate gene they named RhNUDX1. RhNUDX1 is a Nudix hydrolase, a family of proteins that use water molecules to break their substrate (the molecule for which they are specific) in two. They found that RhNUDX1 is expressed exclusively in petals of the PM cultivar, which is where the aromatic compounds are expected to be made, and is most highly expressed during the third stage of growth, a period of maximum scent production. In the RM cultivar, RhNUDX1 expression is minimal in all parts of the plant.

The authors then wanted to verify that the gene’s expression correlated with scent production in various other roses, to make sure the effect wasn’t specific to the two initially studied cultivars. To do this, they surveyed 10 different cultivars with different scent potencies. They found that scent intensity directly correlated with RhNUDX1 expression, providing further evidence that this was indeed the gene they were looking for.

To directly test whether RhNUDX1 levels influence scent and monoterpene production, the authors manipulated RhNUDX1 expression in another heavily scented cultivar, Old Blush (OB). They found that monoterpene production was impaired when RhNUDX1 expression was reduced, whereas the levels of other aromatic compounds were unaffected. Unfortunately, the authors did not transfer the RhNUDX1 gene into a rose cultivar with a low level of scent to confirm that the presence of the gene is sufficient for monoterpene synthesis. However, the correlative studies, in addition to showing that RhNUDX1 levels in the OB cultivar are linked to monoterpene levels, show fairly conclusively that this gene is heavily involved in scent production in roses, specifically in monoterpene synthesis.

Although monoterpenes are common aromatic compounds amongst plants, the pathway involving RhNUDX1 in roses is novel. This discovery adds to an increasing line of evidence that shows that although many plants produce similar, or even exactly the same, scent compounds, they independently evolved the proteins needed to do this, pointing at the great importance of having a scent in plant life. This discovery may also mean that readily available roses will soon smell much sweeter, and that we could potentially manipulate how our roses smell.

Sex against parasites

Image acquired from Flickr under a Creative Commons 2.0 license.

Sex may seem like all fun and games, but evolutionarily speaking, sexual reproduction has perplexed biologists for decades. It’s a question of math—why have a population in which only 50% of people can reproduce? In other words, why do men exist? Other than killing bugs and lifting heavy things that you could probably lift yourself, men, and sexual reproduction, confers an important evolutionary advantage: protection from pathogens.

The generation time of a human, another animal, or even a plant is far greater than that of a bacterium. Think years, versus hours (or even minutes). Bacteria and other pathogens also acquire mutations at a much higher rate per generation than humans. Although mutating doesn’t sound like a benefit, it actually allows bacteria to evolve, as they can hit upon mutations that better suit the particular environment in which they find themselves.
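To put that generation gap in rough numbers — the doubling time and human generation time below are order-of-magnitude assumptions for illustration, not measured values:

```python
# Rough illustration of the generation gap described above.
# Both inputs are order-of-magnitude assumptions, not measured values.
hours_per_year = 24 * 365

bacterial_generation_hours = 1    # optimistic, lab-style doubling time
human_generation_years = 25       # rough human generation time

bacterial_gens_per_year = hours_per_year / bacterial_generation_hours
# Bacterial generations elapsing within a single human generation:
gens_per_human_gen = bacterial_gens_per_year * human_generation_years

print(f"~{bacterial_gens_per_year:,.0f} bacterial generations per year")
print(f"~{gens_per_human_gen:,.0f} bacterial generations per human generation")
```

Even with these crude assumptions, a bacterium gets on the order of hundreds of thousands of chances to mutate for every one chance humans get through reproduction — which is exactly the asymmetry that makes genetic mixing via sex so valuable.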

With bacteria acquiring new mutations so often, and evolving so rapidly, how are we humans supposed to keep up? This is where sex comes in. While we aren’t able to reproduce every hour, sexual reproduction allows us, as a species, to be constantly mixing our genetic material. Asexual reproduction, as occurs in bacteria, involves a single organism making an almost exact copy of itself. Any mutations that arise are random, and useful ones are just lucky. Sexual reproduction, on the other hand, always involves mixing information of two parents, so each generation is an opportunity for the acquisition of lots of new traits.

The idea that sexual reproduction might provide protection from pathogens is not a new one. This theory has its roots in a set of ideas known as the Red Queen Hypothesis. In Lewis Carroll’s Through the Looking Glass, the Red Queen says: “Now, here, you see, it takes all the running you can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!” In evolutionary biology, this translates to the idea that pathogens (e.g. viruses, bacteria, fungi, and parasites) and their hosts are engaged in a constant race against one another where the pathogens want to remain in their hosts and hosts want to eliminate them. Fortunately for the pathogens, they’re able to reproduce and evolve more rapidly than complex multicellular organisms like us. Each new generation, which occurs on the scale of hours, is an opportunity for a species of bacteria to acquire new mutations that could fortuitously render it less susceptible to attack from an animal’s immune system.

Multicellular organisms cannot mutate themselves on a per infection basis, so we depend on other mechanisms of battling quickly mutating bugs. The genetic variation that we, as a species, get from sexual reproduction is particularly important for the ability of our immune system to fight pathogens. In fact, the most variable set of genes in the human genome encode proteins that determine what kinds of pathogens an individual is best at fighting. This variation affords our species widespread protection from pathogens in general—even if one person is particularly susceptible to a certain viral infection, for example, the likelihood of everyone being susceptible to this virus is made extremely low by our extraordinary genetic diversity. This diversity is afforded by sexual selection that allows humans to acquire new traits with every generation.

There’s no guarantee that newly acquired traits will be useful, and many of them can be neutral, like eye color, or detrimental, like genetic diseases. Over evolutionary time, however, sexual reproduction is hypothesized to give organisms a leg up in the arms race with pathogens. So in addition to allowing you to make babies and enjoy yourself at the same time, sex may also play an important role in protecting species from extinction.

Changing Intrinsic Social Biases While You Sleep

picture by Moyan Brenn on Flickr

As we all know, a good night’s sleep is necessary to maintain normal function and to prepare our bodies for the demands of day-to-day life. Without proper sleep, we are more likely to feel groggy or depressed, be more susceptible to becoming sick, and are more likely to develop chronic diseases, such as obesity. Outside of these more disease-preventing functions of sleep, it has also been shown that sleeping promotes learning and recollection of events. In particular, sleep plays an important role in our ability to consolidate our memories. Neuronal traces of memories are reactivated during sleep in order to strengthen these memories and provide them with long-term stability. It’s kind of like our brains replay while we sleep what we saw, smelled, heard, etc. while we were awake in order to instill it in our memory.  

Rapid eye movement (REM) sleep is the most beneficial phase of sleep for memory consolidation; it’s also the phase where you experience the most dreams. During REM sleep, exposure to odors associated with a particular experience can enhance the reactivation and consolidation of specific memories. For example, if you were studying wearing a particular perfume, smelling that perfume during REM sleep reminds your brain of what you were studying that day, as the smell and the facts are associated. After waking up, trying to remember those particular facts becomes easier, particularly if you smell the particular perfume. Scientists have recently shown that a similar phenomenon exists for sounds. One can imagine that by using either olfactory or auditory triggers while we sleep, we can learn new things, or, as a recent paper in the journal Science explored, relearn things in a different way.

Gender biases, particularly the association of women with art and men with science, are a form of memory, as are racial biases that associate people of color with negative words over positive ones. The authors of the aforementioned paper asked whether one’s intrinsic gender and racial biases could be altered using auditory cues during REM sleep.

To do this, pictures of men and women of different racial groups were shown next to either science or art words, as seen in the figure below (from Science). Participants were asked to choose counterbias pairs: men with art terms and women with science terms. When these pairs appeared, participants pressed a button, and a correct counterbias association caused the program to produce a sound. Thus, when they saw a picture of a woman with a science word and reacted in a timely manner, a sound was made. When they saw a picture of a woman with an art word, no sound was made.

After this initial “training,” participants were invited to take a 90-minute nap. Once participants entered REM sleep (which can be identified by the rapid eye movements that occur during this phase), the authors played the sound associated with counterbias pairs to half of the participants. Participants took bias exams both after they woke from their naps and a week later, to see whether they had gained or lost social biases. In both cases, participants were more likely to correctly associate counterbias pairs, such that they were more likely to match women with science-related words or people of color with positive words. This significant change was observed only in participants who were exposed to the sound while sleeping; those who weren’t exposed didn’t show any changes in their biases.

So what does this all mean? First, it’s pretty cool that sound alone can change how a person thinks. This, of course, has Brave New World caution tape all over it. Manipulation of human thinking is an ethical issue that needs to be taken into account, in research and otherwise. This study also raises the question of whether or not their experiments are truly a test of social bias, or just adept picture/word matching. Many questions remain, but the study does open a lot of new doors. For example, auditory therapy could potentially be used in the treatment of posttraumatic stress disorder (PTSD). The benefits of such therapies could outweigh its costs, but ethical considerations must always be taken into account.

Ask a Scientist: How do you become a researcher?

By: Kylia Goodner

This week’s question is for the dreamers who want to be on the forefront of knowledge creation.  If you like topics that other people think are gross or boring (like bugs, or physics), are full of skepticism, and like proving ideas wrong, then scientific research might be for you! Although a science career might seem completely out of reach for most people, it isn’t! There are multiple levels within scientific research – and people from every educational level can be a part of the scientific process!

If you are more organizationally minded, you might want to consider becoming a research or laboratory technician. Typically, these positions require a bachelor’s or master’s degree in a related field (biology, chemistry, environmental studies, physics, etc.) and at least one semester of laboratory experience. You can get this kind of experience in college by seeking out specific undergraduate research programs or by simply asking around your school to see if anyone is taking students. Laboratory technicians are extremely important, as they are in charge of managing every-day laboratory tasks (like purchasing supplies and making solutions) while also keeping a few side research projects going. Most of the time they still publish papers and advance scientific knowledge, but at a slower rate than other laboratory members, because they have other, extremely important laboratory responsibilities.

If you’re not financially minded, and the thought of being in charge of laboratory supplies makes you cringe, you might be more inclined to perform bench or field work. This work typically requires at least a Ph.D. in a relevant field. Luckily, many scientific Ph.D. programs cover your tuition and provide you with a (modest) living stipend. So, although you will take the extra educational time to complete a 4-6 year Ph.D. program, at least you have some income rather than student loans! After completing a Ph.D., you will likely need to complete a postdoctoral position as well. These positions can take between 2-5 years, but again, you’re being (moderately) paid. During the 5-10 years of your Ph.D. and postdoc, you’ll be doing hands-on research. Day-to-day activities during this time can vary greatly, but typically you focus on one or two main research projects and publish multiple scientific papers. If you want to continue doing hands-on work after completing your training, you can get a job as a research scientist. Research scientists are typically hired in larger labs to help advance the research at a fast pace (as they’re already heavily trained and aren’t as slow as the trainees). However, these positions are disappearing quickly as scientific funding decreases.

When people think of a scientist, they are mainly picturing a professor or principal investigator (PI), who runs his or her own laboratory and is typically associated with a research university. These positions require the same training as the research scientist, but may require multiple postdoc experiences in order to obtain more diverse training. The day-to-day life of a PI is very different from that of a research scientist. PIs spend much of their time writing about and presenting their research in order to obtain funding for their laboratory. They are also heavily involved in teaching and in mentoring students and postdocs at many levels. But PIs are the main force behind the current research being done in the United States and are responsible for training the next generation of scientists.

Scientific research is an ever growing and exciting field! And although the amount of training can be daunting to some, the ability to discover a piece of knowledge that no one has ever known before is incredibly thrilling. If discovery is what awakens your mind and ignites your passion then becoming a researcher is an extremely fulfilling career path!  





T-Rex’s Weird Looking Vegetarian Cousin

By: Helen Beilinson

Eleven years ago, seven-year-old Diego Suarez found dinosaur bones while hiking with his parents in the Toqui Formation in southern Chile. Fortunately, his parents, Manuel Suarez and Rita de la Cruz, are geologists. They instantly identified these bones as fossils and continued to search for the rest of this beast. Little did the family know that they had unearthed a previously unknown species—T-Rex’s funny looking, vegetarian cousin. A study published this week in Nature describes this interesting beast, named Chilesaurus diegosuarezi (as the name suggests, this dinosaur was named after the lucky kid who discovered it).

Velociraptors and T-Rexes are some of the best-known meat-eating dinosaurs. They both, along with many of their cousins, belong to the group of dinosaurs known as theropods. Theropods are bipeds, meaning they walk on two legs. They first appeared 230 million years ago, and although the ‘dinosaurs’ have since gone extinct, modern birds are living dinosaurs that evolved from the theropod family.

For a long time, it was thought that theropods were strictly carnivores. The last decade, however, has brought forth new data showing that they consumed a variety of different diets. Of course, as these organisms are no longer around, it’s difficult to know for sure what they ate, but paleontologists have many tools at their disposal for understanding dinosaur diets. First, instead of looking at what went into the dinosaurs, they can study what came out of them by looking at their fossilized poop, called coprolites. Meat-eating dinosaurs tend to have crushed-up bones in their poop, whereas coprolites from vegetarian dinosaurs contain more traces of plants. Sometimes, paleontologists are even able to find another animal's bones inside the stomach of a bigger predatory dinosaur! Second, a dinosaur’s teeth give many clues about its diet. Sharp teeth, as you can imagine, are good for killing prey and biting through skin. Large, flat teeth, or leaf teeth, are better for chewing up plants. Skeletons also give scientists an idea of the animal’s body type, which hints at how it hunted, and thus, what it ate. For example, if you compare the body of an elephant to that of a panther, you might be able to predict which one would be better at chasing after prey. Based on all of these sources of evidence, paleontologists have been able to discern that most theropods were carnivores, except for a few anomalies, like C. diegosuarezi.

The 3-meter long C. diegosuarezi was found to have flat teeth and a horny beak, characteristic of a vegetarian dinosaur. Although most theropods have sharp teeth characteristic of carnivores, it isn’t entirely surprising that they would have a plant-eating cousin. When in competition with other organisms, it’s good to have something that sets you apart; this makes it easier for you to survive. When you are surrounded by meat eaters, eating plants means less competition for your food source. This leads to many forms of convergent evolution, where you see the same type of feature (such as short, flat leaf teeth) in unrelated organisms, which can give the appearance that they are related. This is why it took so long for this dinosaur to be announced, even though it was discovered over a decade ago—its features were convergently evolved with other vegetarian dinosaurs. Importantly, the discovery of C. diegosuarezi shows that vegetarianism in theropods appeared much earlier than previously thought.

You might be thinking: if this dinosaur’s teeth were only slightly different from those of other theropods, why did it take so long to classify it? That’s because C. diegosuarezi is special not just because of its teeth; it has several other non-theropod features in addition to its leaf teeth.

Like other theropods, C. diegosuarezi ran on its two hind legs with its shorter forearms (about half the size of its hind legs) in the air, much like a raptor. Its hands have two stump-like fingers with short claws at the end, much like those of other theropods such as the T-Rex. Unlike the T-Rex, however, which had a characteristic massive head with a large mouth and a thick neck, C. diegosuarezi had a longer, slimmer neck and a small, rounded head. These physical features provide even more evidence that it was a vegetarian.

The femur (a large bone in the leg) of C. diegosuarezi is somewhat different from the classic femurs of theropods, looking a lot more like those of another group of dinosaurs called sauropodomorphs. These dinosaurs were long-necked, bipedal herbivores, like C. diegosuarezi, but usually a bit larger. This is another example of convergent evolution contained within C. diegosuarezi’s body. C. diegosuarezi’s pelvic girdle, which connects the lower limbs to the spine and upper body, is also distinct from that of other theropods.

I can honestly say I’m not a paleontologist, so I cannot give you the details of the new dinosaur with much confidence, but I can say this: it is always fascinating to me when such ‘platypus’ animals come about (as Martín Ezcurra at the University of Birmingham calls them). These are animals that at first glance look like they’re made up of different parts of different animals (like the duck-like bill, otter-like body, and beaver-like tail of the platypus), but actually they’re merely very demonstrative examples of convergent evolution. If a feature works for one animal, its independent evolution in separate families of organisms is much stronger proof of how highly adapted it is for a particular function or environment. And, hey, they might look slightly different, but if it works, it works.

Monsters of Med School: enucleation of red blood cells

By: Vicky Koski-Karell

Ask a Scientist is on vacation this week, but will return next week! In the meantime, here's one of Vicky's monsters of med school: enucleation of red blood cells. 

DNA encodes all of the information necessary for life, and each cell in our body stores its DNA in its nucleus, a specialized compartment within the cell that protects the DNA from damage. Every cell in our body contains a nucleus, except for red blood cells. Interestingly, red blood cells develop with a nucleus in the bone marrow (where all blood cells develop), and then lose their nucleus as they mature and enter the bloodstream. This process is called red blood cell (or erythrocyte) enucleation.

The Extinction of Tasmanian Devils: Sometimes It's Better to be Different

By: Helen Beilinson

Australia has an incredibly unique list of animal inhabitants. From massive pythons to flying foxes (the largest bat species in the world) to ridiculous spiders and centipedes to some of the largest, smallest, and most poisonous jellyfish, Australians definitely have more interesting backyard fauna than I do here in New Haven (although the black squirrels are pretty cute).

Aside from its slightly more terrifying creatures, Australia is home to a huge number of marsupials. Marsupials are mammals, meaning they feed their young with milk, like humans do. Unlike humans, however, mother marsupials do not carry their young in their uteri until birth. Instead, after a certain time developing in utero (meaning, in the uterus), marsupial young will climb into a special pouch on their mothers’ bellies to continue developing. These pouches contain the mother’s nipples, to feed the young, and offer protection while the baby marsupials continue growing. Some of the best-known marsupials are kangaroos, koalas, and the happiest animal on Earth, quokkas. One marsupial that’s predominantly known more for its cartoon depiction is the Tasmanian Devil, which is currently the largest carnivorous marsupial. Unfortunately, their population is at high risk of extinction. Extinction of species is nothing new; it happens all of the time. Extinction can be caused by high predation, changes in food or climate, high rates of disease or infection, or a slew of other reasons. The devils’ pathway to extinction is particularly interesting because their population is threatened by a rare type of disease—a transmissible cancer.

That’s right, the devils are transmitting cancer to each other, like humans can spread a cold.

Before the 1400s, Tasmanian Devils populated the entirety of Australia. However, due to heavy predation by dingoes and indigenous people, the devils were isolated to the Australian island of Tasmania. Since then, major population crashes have continued to affect the devil population. From 1830 to 1930, locals made efforts to exterminate the devils because they preyed on their livestock. In 1909 and 1950, there were smaller epidemics of infectious disease that hurt the devil population. In 1941, however, laws were enacted to protect the devil population because half a decade prior, another carnivorous marsupial, the Tasmanian Tiger, went extinct. These laws aided the devil population drastically, until about half a century later.

In 1996, the first case of Devil Facial Tumor Disease (DFTD) was documented in the Tasmanian Devils. This cancer, as the name suggests, causes large facial tumors on the devils. These facial tumors eventually cause the devils to die of thirst and starvation, as they are unable to eat, drink, or see. The fascinating thing about these tumors, as I previously alluded to, is that they are passed from devil to devil through biting. This transmissibility is incredibly rare in cancer. Usually, cancerous cells develop within one individual and cannot be passed from one person to another. Curiously, the same phenomenon helps to explain why cancer cannot be spread in most species and why it is spread through the devil population.

Each of our cells carries markers (proteins that cover the exterior of the cell so that other cells can “see” them) on its surface to tell other cells that they are part of the same organism. They mark our cells as “self,” as opposed to other cells, which either have “nonself” markers or no “self” markers at all. One self marker is the Major Histocompatibility Complex (MHC) molecule. These molecules are critical to our immune responses because they hold motifs (think of them as protein patterns) that allow other immune cells to recognize what is infecting the body. Because MHC molecules need to bind so many different kinds of proteins (they have to be able to display features of all the bacteria, viruses, fungi, etc. that invade our bodies), having many different forms is a good thing for your immune system: it lets you display a greater variety of foreign proteins, and thus react to a greater variety of invaders. Mammalian immune systems take this into account. MHC genes are some of the most diverse, or polymorphic, genes out there, meaning that many, many forms of them exist throughout the population (don’t worry, they all still work great!). Because you have two copies of DNA (one from your mother, one from your father), you get two copies of these MHC genes. The more different your mom’s and dad’s MHC genes are, the greater the variety of foreign proteins you can display on your MHC. This positive aspect of MHC genetic variety can be better appreciated when you see a population where the variety doesn’t exist. This loops us back to the Tasmanian Devils.

Due to the massive population downsizing and the isolation of a small population of devils, they have a very limited variety of these MHC molecules. This observation is one of the major reasons why DFTD is so rampant. As I mentioned before, MHC molecules are self markers. The variety we see in these molecules allows for a much finer definition of what is “self” compared to nonself. For example, if your self signal were a string of just two letters, there would be only 26² (676) possible markers, so out of the 7.1 billion people on Earth, your body would see roughly 10.5 MILLION people’s cells as your own cells. If the signal were ten letters long, there would be 26¹⁰ (about 140,000,000,000,000) possibilities, so your body would recognize only your own cells as your cells.
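The back-of-the-envelope numbers above can be sketched in a few lines of Python. This is a toy illustration of the combinatorics only, not a model of real MHC genetics; the 26-letter alphabet and the world-population figure are just the article's analogy, and the "uniformly random markers" assumption is mine:

```python
# Toy combinatorics behind the "self marker" analogy above.
# Markers are imagined as strings over a 26-letter alphabet;
# people are assumed to receive markers uniformly at random.
WORLD_POPULATION = 7_100_000_000

def expected_lookalikes(marker_length: int, alphabet_size: int = 26) -> float:
    """Expected number of people whose marker matches yours by chance."""
    possible_markers = alphabet_size ** marker_length
    return WORLD_POPULATION / possible_markers

print(expected_lookalikes(2))   # 676 possible markers -> ~10.5 million "self" lookalikes
print(expected_lookalikes(10))  # ~1.4e14 possible markers -> essentially zero lookalikes
```

With only two letters, millions of strangers would register as “self”; with ten, a chance match is vanishingly unlikely. That is the intuition for why marker diversity protects a population, and why its loss left the devils vulnerable.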

When rejection occurs after an organ transplant, it is because the recipient’s body recognizes the MHC on the donor’s organ, sees it as nonself, and attacks it. Although this isn’t good for a transplant, it keeps a lot of problems at bay. Unfortunately, due to their lack of MHC gene variety, Tasmanian Devils do not have this ability. The cancer cells of DFTD can be found on the teeth and facial lesions of infected devils. When an infected devil bites another devil, these cells are transferred into the wounds of the uninfected devil. If there were enough variety in the MHC genes of the devils, the newly infected devil would recognize the cancerous cells as nonself and eliminate them, preventing the development of a facial tumor. However, because there is so little variety in the devils’ genes, the newly infected animals do not recognize the cancerous cells as nonself, and instead see them as self. The immune system does not normally attack self cells, so the cancerous cells stay and develop into tumors. This perpetuates the cycle, spreading the cancer throughout the population.

There are currently two other known transmissible cancers. One is a venereal tumor affecting dogs that has been spreading around the world for the past 11,000 years. The second was only recently confirmed as a transmissible cancer: a soft-shell clam leukemia that has spread throughout the east coast of North America.

The extinction of the Tasmanian Devil is being driven by this transmissible cancer that the devils are unable to eliminate. However, without the initial downsizing of their population due to human predation, it’s highly probable that the population would have retained enough diversity in their MHC genes that this cancer would never have been able to take hold.

Ask a Scientist: How do touch-screen gloves work? Why do some gloves work on touch screens and not others?

By: Kylia Goodner

As this year’s bitter winter transitions into spring, many of us have already traded our gloves, hats, and scarves for shorts! But, we all still retain the memories of one of winter’s major frustrations: touch screens. Most electronics contain touchscreens that are unresponsive when your fingers are covered by cloth. This, of course, is unless you have special touchscreen gloves. But how do these frustration-reducing winter gloves work?

You may not realize it, but the human body has the ability to store an electric charge, which means that it can be considered to be a “capacitor”. The touchscreens on most modern electronics are known as “capacitive touchscreens,” which just means that they contain sensors that can detect anything that has an electric charge – including your body. So, when your finger touches the screen, it forms a circuit, or connection, between the electrical field within your body and the screen itself.  This connection “tells” the phone what app to open or text to send based on where your finger’s electrical charge touched the screen.

For a capacitive touchscreen to work, you must be able to transmit your body’s electricity to the screen. When you put on gloves, the cloth acts as a barrier between your electrical charge and the touchscreen.  Special touchscreen gloves overcome this by using a conductive wire in the fingertips. These gloves have a metal wire interspersed between the cloth fibers, which allows the electricity in your fingertip to travel through the metal wire in the glove’s fingertips to reach and act on the touchscreen.

Although I greatly appreciate the convenience of a pair of touchscreen gloves, I do not believe that this is the most interesting use of the technology. E-health is an up-and-coming field of research that uses technology similar to touchscreen gloves. These wearable devices contain sensors that can detect bodily changes in disease symptoms, like heart rate and blood sugar levels, and send these updates directly to your doctor. This wearable technology has the potential to drastically help rural and other populations without continuous access to medical facilities, or those with severe chronic diseases. The technology is in its beginning stages, but within a few years, like your body’s electricity, it will be right at your fingertips.


Man's Best Friend

Can dogs affect our oxytocin levels through Skype too?


If you’re a dog person like me, you probably feel a special bond with your dog, and may even consider them as a member of your family. Though we’re unable to communicate with our dogs in the same way we communicate with other humans, many dog owners would likely agree that they feel an emotional connection to their pets. Because of these close relationships, scientists have long been interested in the evolution of domesticated dogs. Last week, the journal Science featured a number of research articles addressing longstanding questions about domesticated dogs. One of these articles investigated a possible mechanism by which we form emotional connections with our dogs, through a hormone called oxytocin.

Oxytocin is produced in the brain and acts as a powerful modulator of neuronal processes in mammals. It is specifically important in social behavior, playing crucial roles in the bonding between mothers and infants, as well as between sexual partners in species that exhibit lifelong mating behavior. The role of oxytocin in maternal/infant bonding has been well-studied in humans, and acts through a positive feedback loop. Positive feedback loops are extremely common in biology, and describe a system in which signal A promotes signal B, and signal B reciprocally promotes signal A.

Specific interactions, such as eye contact, between a mother and her infant increase oxytocin levels in the mother, and the increase in maternal oxytocin causes a corresponding increase in the infant’s oxytocin levels, which then amplify oxytocin levels in the mother. While this phenomenon is well-described between humans in an intraspecies manner, the authors of this study asked whether oxytocin-mediated bonding could be observed in an interspecies manner, specifically between humans and their pet dogs. This pairing is a great model in which to observe such interspecies interactions because humans have anecdotally described forming one-on-one relationships with their pet dogs.
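The loop described above—signal A promoting signal B, and signal B in turn promoting signal A—can be sketched as a toy simulation. This is purely illustrative: the gain value, units, and update rule are invented for the sketch, and real oxytocin dynamics are far more complex:

```python
# Toy positive feedback loop: each partner's "oxytocin level"
# rises in proportion to the other's level at the previous step.
def feedback_loop(mother: float = 1.0, infant: float = 1.0,
                  gain: float = 0.5, steps: int = 5) -> tuple[float, float]:
    for _ in range(steps):
        # Tuple assignment: both updates use the previous step's levels.
        mother, infant = mother + gain * infant, infant + gain * mother
    return mother, infant

print(feedback_loop())  # -> (7.59375, 7.59375): both levels amplify each other
```

The point of the sketch is simply that neither level rises alone: each increase feeds the other, which is what makes the loop "positive."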

To study this, the investigators looked at pairs of dogs and their owners, as well as pairs of hand-raised wolves and their owners. Wolves are the closest living relative of domesticated dogs and share many common biological features. However, because dogs, unlike wolves, have cohabitated with humans for many generations, the authors hypothesized that interspecies bonding behavior with humans would have evolved in dogs, but not in wolves.

In their first experiment, the authors observed interactions between the wolf/owner and dog/owner pairs. They specifically focused on eye contact, or gazing, as this interaction has been well-documented to stimulate oxytocin feedback loops in human interactions. They measured oxytocin levels in the urine of the dogs, wolves, or owners before and after their interactions, and also measured the duration of gazing between the animals and their owners. They found that the longer the duration of eye contact between dogs and their owners, the higher the levels of oxytocin in both the humans’ and the dogs’ urine. Interestingly, this correlation was not observed in the wolf/human pairs, supporting the idea that interspecies oxytocin-mediated bonding has evolved specifically in dogs as a function of their close evolutionary relationship with humans.

While this finding showed that gazing behavior was related to oxytocin levels in dogs and humans, it did not directly address the issue of the positive feedback loop. In other words, they showed that dog and human oxytocin levels were both elevated following extended eye contact, but not that dog oxytocin levels directly affect their owners’ oxytocin levels. To answer this question, they administered oxytocin or saline (salt water) to dogs, and then allowed them to enter a room in which their owner and two unfamiliar human volunteers were present. The investigators observed the interactions between the dogs and the humans, and measured oxytocin levels in the dogs and the humans throughout the experiment.

They found that female dogs who had received oxytocin engaged in more eye contact with their owners than the dogs who had received saline, and that the owners of the oxytocin-treated female dogs had significantly increased levels of oxytocin following the interaction, even though the owners had not been given oxytocin themselves. This experiment demonstrated that in female dogs, oxytocin stimulates dog/owner gazing behavior, which results in elevated oxytocin levels in the owner.

These results were not observed in male dogs, for reasons that remain unclear. Some evidence suggests that in humans, females are more sensitive to the effects of oxytocin than males. Additionally, in a small rodent called the prairie vole, oxytocin may be related to male aggression. Thus, the authors hypothesize that in their experiments, male dogs may have been exhibiting an aggressive response to the strangers in the room, limiting their interactions with their owners. In all species studied, however, the differing role of oxytocin between the sexes remains largely unknown.

Nevertheless, the results from this study indicate that the bonds formed between dogs and their owners are mediated by oxytocin, the same hormone that contributes to maternal/infant bonding and to bonding between lifelong sexual partners. Our feelings of affection for our dogs seem to be driven by a bona fide neurological mechanism, in addition to how cute they are and how much fun they are to play with.

Ask a Scientist: Do wearable copper and/or magnetic gadgets reduce pain and inflammation?

By: Kylia Goodner

We all experience pain, and whether it’s due to an intense workout or just a side effect of aging, we would all like to kick it to the curb. One folklore remedy for pain has recently re-emerged into the public sphere: the claim that wearing copper or magnetic jewelry can help reduce pain in as little as 5 minutes. Proponents of this remedy suggest that magnetic jewelry forms a magnetic field near your body that interferes with your nervous system in order to reduce pain. The copper jewelry is presumed to work by releasing tiny amounts of copper onto your skin, which are then absorbed by your body and used to re-grow joint cartilage. Let’s delve into the science to see if these claims are true.

One study examined the effect of magnetic therapy by placing magnets or dummies (which resembled magnets, but weren’t actually magnetic) near an incision site directly after surgery. The doctors then observed whether the patients wearing the magnets needed less pain medication than those wearing the dummies. After two hours, the patients required the same amount of pain medication, regardless of whether they had been wearing magnets or dummies. The doctors concluded that the magnets had no effect on pain after two hours. But two hours is a relatively short time, and even though many of these companies claim the jewelry begins to work after 5 minutes, what is their effect after weeks of wear?

Two studies examining pain reduction in arthritis patients after wearing a magnetic or dummy bracelet for up to twenty weeks found no difference in the amount of pain experienced by patients in the two groups. I found only one study that reported a small effect on pain reduction in groups wearing strong magnets. However, these patients were aware that they were wearing the magnetic bracelets because their bracelets kept sticking to their keys. This study was therefore unable to rule out the possibility that the small reduction in pain described by the participants was due to a placebo effect.

Copper bracelets appear to be equally as ineffective as the magnetic jewelry. One study looking at reduction of pain due to osteoarthritis found no difference in the amount of pain or inflammation between patients wearing a copper bracelet and patients wearing a dummy. This was also true in an additional study looking at rheumatoid arthritis, in which copper bracelets made no difference in pain experienced by patients. However, it appeared that the copper bracelets actually caused pain in some patients due to a mild skin irritation.

Overall, science has concluded that neither magnetic gadgets nor copper jewelry has an effect on pain reduction. Luckily, scientists have found something that will reduce pain, and it is easily purchased at your local grocery store. Numerous studies have found that fish oil reduces pain in people with rheumatoid arthritis. Fish oil can also help reduce inflammation, which itself results in pain reduction. So, although the folklore jewelry won’t do much to help kick your pain to the curb, hope is not lost, as certain dietary changes and supplements will!

Where did life on Earth come from?

By: Zuri Sullivan

This fundamental question fascinates and frustrates scientists and non-scientists alike, and scientists across many fields have spent centuries trying to answer it. In biology, for example, we address this question through the study of evolution. This particular branch of biology allows scientists to draw inferences about past organisms through examining certain characteristics of current organisms. By comparing and contrasting the species that exist today, and investigating their relationships to one another over evolutionary time, biologists can make predictions about what some of Earth’s earliest life forms may have looked like.

These predictions are made possible through our understanding of natural selection, which is the process by which random variations that make an organism more likely to survive and reproduce are passed on to subsequent generations, gradually becoming more frequent in the population. In other words, natural selection is “survival of the fittest.” Through this process, advantageous variation in very simple systems slowly gave rise to more complex ones. From single-celled organisms like bacteria slowly emerged more complicated single-celled organisms, like yeast. From this class of organisms, called single-celled eukaryotes, emerged simple multicellular organisms, of which sea sponges are a modern example. Gradually, over hundreds of millions of years, increasing layers of complexity were built upon one another, giving rise to the diverse array of highly sophisticated organisms (including ourselves) that we observe today. This doesn’t necessarily mean that simpler life forms haven’t been able to survive over all of these millions of years (in fact, the vast majority of living organisms today are unicellular). Rather, evolutionary biology tells us that the common ancestor of all extant organisms was a single-celled organism that could have resembled some of the bacteria we see today.

The insights we gain from evolutionary biology are extremely powerful, but the question of the origin of the original life form upon which all this sophistication was built remains elusive. However, a recent study published in Nature Chemistry, led by John Sutherland of the UK Medical Research Council, provides important clues as to how this original life form could have emerged. Now you may be wondering—if we’re talking about the origins of life, and biology is the study of life, then why were chemists investigating this question? In order to understand how life began, it is necessary that we examine the individual building blocks that are needed for life, and organic chemistry provides the tools necessary to study these building blocks.

So what are these most fundamental building blocks for life? They’re called macromolecules, and include nucleic acids (like DNA or RNA), proteins, lipids (or fats), and carbohydrates. Each of these macromolecules is made of even smaller building blocks: nucleic acids are made of nucleosides, proteins of amino acids, fats from fatty acids, and carbohydrates from monosaccharides (simple sugars). The names aren’t important, but the fact that life is built upon macromolecules, which are built from small precursor molecules, transforms our question about the origin of life from the realm of biology to the realm of chemistry. Instead of asking, “where did life on Earth come from?” the more fundamental question is “how were the building blocks of life first assembled?”

Chemists have been asking this question experimentally since the 1800s, and have made a number of important discoveries. Chemists have figured out ways that amino acids, complex sugars, and certain nucleosides could be synthesized from the simplest possible building blocks that are believed to have been on Earth before life emerged. Scientists interested in these questions often refer to the hypothetical mixture of pre-life molecules and water as the “primordial soup.” The issue with these studies, however, has been that the complex reactions needed to produce each macromolecule were incompatible with the reactions needed to synthesize the others. In other words, no one has been able to create a set of conditions under which all of life’s building blocks could be synthesized.

This is the problem that the Sutherland lab set out to address—is there a set of conditions under which all of these macromolecule precursors could have been synthesized? Using three simple molecules that could have existed on Earth before life began, the group showed how the combination of water and ultraviolet radiation from sunlight could have produced a set of chemical reactions that gives rise to building blocks for the carbohydrates, lipids, proteins, and nucleic acids that we know today. As it was put in a commentary that covered this study, the Sutherland group uncovered “a primordial soup that cooks itself.”

As is always the case in science, this study led to more questions than it did answers. One caveat to their complex synthesis reaction is that certain molecules needed to be added at particular times in the reaction. Returning to the soup analogy, the recipe would have relied on a cook standing over the pot and slowly adding certain ingredients at the right moment. The authors of the study put forth an additional hypothesis to address this, suggesting that rainfall could have introduced these molecules at the right moment in the synthesis reaction. Seems plausible, but I’m not a chemist.

Ask a Scientist: Are humans the only animals that communicate by written or verbal language, and if so, why?

By: Kylia Goodner

Before we delve into this intriguing question, we first need to understand the difference between communication and language. Communication, by definition, is the transmission of a signal between a sender and a receiver. This signal could be language, but it could also be smells, movements, or postures. Obviously, all animals communicate, but do they do it through language? Defining language is actually a complex and highly debated area of research, but most scientists would agree that language is a structured form of communication, containing words and grammar that can be combined in infinite ways to create new meanings. So a chimpanzee can communicate by having a specific yelp that means “predator” or “safe,” but unless that chimp can combine those yelps into a new sentence meaning “No predator here! It’s safe,” we cannot claim that the chimp has language.

Due to the difficulty of studying animal communication in the wild, most research has focused on the ability of animals to learn human languages. This has been done in the past by teaching dolphins, apes, and even parrots a sound or symbol representing a specific object or action. The scientists then reorder these symbols into new combinations and assess whether the animal can understand and perform the task. For example, scientists can teach dolphins the words for “Frisbee,” “left,” and “fetch,” then re-order them into the sentence “Fetch left Frisbee,” and the dolphins are able to understand and perform that task. This suggests that dolphins (as well as apes, and even animals like parrots and prairie dogs) have the ability to comprehend a structured language. Unfortunately, very little progress has been made in determining whether animals in the wild have a structured language.

As for written language, there is no evidence to suggest that animals possess this truly unique facet of human nature. This isn’t terribly surprising, as humans did not develop a written language that was not based on pictures until around 3200 BCE, which is 200,000 years after modern humans evolved.  This suggests that in order to develop a written language, a species needs an extremely long period of using complex spoken language. This long period of spoken language is, to our knowledge, unique to humans. Further, some external factor must drive the creation of a written language, because writing is a skill that takes time to create and learn, and animals aren’t going to create it just for fun. For humans, this drive was the development of agriculture. Humans had to keep track of the seasons, their crops, and food allotments for their citizens. All of this is extremely hard to remember, and can get confused if only conveyed through verbal language. Therefore, humans needed to develop a written language, but, as far as we know, this pressure does not currently exist in animal populations.

So, we aren’t the only animals to communicate, and we may not be the only animals to have language, but we are the only ones with the ability to write it down. Moving forward, identifying different animal communication systems in the wild is a major focus of scientists in this field. Unfortunately, though, the basis of what these languages could entail is unknown, making it extremely difficult for scientists to “translate” potential animal languages into a human form. But difficulty never stopped a scientist before, so keep your eyes peeled for the “Elephant to English pocket dictionary”!

The Anthropocene: Humankind in the Geological Record

By: Chris Kelly

The Anthropocene?

If the concept of geologic time seems mind-boggling to you, you are not alone.  One of the foremost questions considered by geologists is how to divide the immensity of time on Earth into various chunks, defined by common characteristics.  Generally, these characteristics are based on geological indicators of past environments on Earth: its chemistry, the types of life that existed "way back then," or tectonic changes (think back to your junior high plate diagrams). For example, the Pennsylvanian subperiod, roughly 300 million years ago, refers to the large amount of coal that was formed in the swamps of ancient Pennsylvania, USA during this time.  Other time periods have been named for intense climatic changes, the extinction of certain species, or the emergence of others.  In 2000, two scientists, Paul Crutzen and Eugene Stoermer, suggested that our current geological time period (the Holocene Epoch) has ended, and we are now living in the "Anthro-pocene."  "Anthro" refers to humankind, while the suffix "-cene" denotes a division of very recent time, to geologists at least.  They claim that given the profound human-induced transformation of the planet, we have entered a new geological epoch marked by our own influence.

Human Impacts, Past and Present:  Scientific and Social Considerations of the Anthropocene

Many modern environmental narratives perpetuate the notion that until the very recent past, humans lived in harmony with the rest of the natural world, with minimal environmental effects.  This, in short, is a farce.  Modern humans have existed for roughly 200,000 years. Between 50,000 and 10,000 years ago, Australia, North America, and South America all lost between 70 and 90 percent of their largest mammal species.  Although it is possible that past climate change drove these extinctions, scientists increasingly attribute this extinction event to "anthro-pogenic" (human-driven) causes.  Following these extinctions, over the last 11,000 years, mass agriculture developed incrementally, both independently and through cultural cross-pollination, throughout human societies.  This led to significant conversion of land from forests to farms, and a release of greenhouse gases (which contribute to modern climate change) from the felled trees to the atmosphere.  More recently, after Europeans first made contact with indigenous peoples of present-day North and South America, smallpox and other diseases swept across the two continents, killing roughly 50 million people.  That number bears repeating: diseases killed about 90 percent of native peoples in the Americas between 1492 and 1650, six times the number of Jewish and non-Jewish victims that perished in the Holocaust.  Afterward, land that had been cultivated by indigenous peoples reforested, which shows up in geological ice cores as a decline in carbon dioxide.   For those of us living in the United States, the myth of a primeval, unpopulated continent of vast forests is just that: a myth.


Figure and statistics crafted from Lewis, Simon L., and Mark A. Maslin. "Defining the Anthropocene." Nature 519, no. 7542 (2015): 171-180.  For full facts, statistics, and other examples of preindustrial human impacts, please see the original source.  

While these impacts were tremendous, they pale in comparison to modern humanity's capacity to effect environmental change.  At the close of the 14th century, there were approximately 380 million people on Earth.  At the time of this blog entry, the population is nearly 7.4 billion, an expansion of nearly twentyfold over roughly 600 years.   Humans are now the dominant agents of soil erosion, modern extinction, and deforestation, and we have changed Earth's atmosphere so profoundly that present levels of greenhouse gases are unprecedented over the last 800,000 years, and likely much longer (3-5 million years).  Human and Earth historians are presently debating how to conceptualize such gargantuan transformations to the earth system:
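The growth figures above are easy to sanity-check with back-of-the-envelope arithmetic. This sketch uses only the post's own numbers (roughly 380 million people around 1400, roughly 7.4 billion around 2015); the implied average growth rate is my own derived illustration, not a figure from the sources:

```python
# Populations and dates as given in the text above.
start_pop = 380e6          # world population, roughly, around 1400
end_pop   = 7.4e9          # world population at the time of the post (~2015)
years     = 2015 - 1400    # 615 years

# Total fold change, and the constant annual growth rate that would
# produce it if compounded over the whole interval.
fold_change = end_pop / start_pop
annual_rate = (end_pop / start_pop) ** (1 / years) - 1

print(f"fold change: {fold_change:.1f}x")
print(f"implied average growth: {annual_rate:.2%} per year")
```

The striking part is that real growth was nowhere near constant: most of the expansion is packed into the post-1950 "Great Acceleration" discussed below, which is exactly why that period is a candidate start date.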

1) What led to this transformation?

2) How do we define a time period in Earth history dominated by humans in such a wholesale manner?  Just as geologists term recent Epochs in Earth history the Pliocene, Pleistocene, and Holocene, can we definitively say that there is an "Anthropocene"?

3)  What are the social ramifications of declaring a new time period to mirror this human agency? 


1) To the first question, many narratives of this exponential takeoff in population and environmental impact point to a start date that coincides with either the Industrial Revolution or its subsequent globalization.  The technologies behind this movement are commonly traced to England in 1784, with James Watt's patent on an improved steam engine, and they have spread across the world, if unevenly, ever since.  Indeed, although they were employed by a variety of countries prior to 1950 (mostly Europe, its settler colonial states, and Japan), it is only in the last 65 years that population, plastics, fertilizer consumption, large dams, water use, paper production, and deforestation in the tropics have exploded.  Still, other scientists argue that this most recent "Great Acceleration" is just the latest episode in a long history of human environmental impacts, in line with the large mammal extinctions and the advent of agriculture in prehistory.

2) A recent paper in the journal Nature explores the possible start dates, and their respective geological evidence, for an Anthropocene Epoch.  Although it is easy to point to individual historical events, like Watt's steam engine, or archeological evidence for the development of mass agriculture, geologists are searching for slightly different criteria:  distinct, worldwide, synchronous evidence preserved in records the earth itself keeps, such as sediments, ice sheets, and tree and coral rings, among others.  Scientists term such a record that preserves clear and discrete evidence of global change a "golden spike."   For example, a layer of the element iridium (rare on Earth, but more plentiful in meteorites) in Tunisian rocks was the result of the large impact event that corresponds to the end of the era of dinosaurs (the Mesozoic).  More recently, fluctuations in hydrogen isotopes preserved in Greenland ice sheets herald the Holocene, the Epoch in which we presently live. In addition to the golden spikes themselves, the existence of a distinct epoch should be supported by accompanying evidence that suggests global environmental change, like the lack of dinosaur fossils and multiple corroborations of Holocene climate change in sediment, coral, and tree ring records.

The authors of the aforementioned Nature study suggest two events that have a clear signature in Earth history that could be used to demarcate the geological age of humans.  First, the cataclysmic die-off of indigenous peoples in the Americas is likely preserved in the 1610 carbon dioxide minimum in an Antarctic ice core.  Accompanying evidence could include pollen changes recorded in American lakes, Arctic sea-ice extent, and the coolest temperatures of the associated Little Ice Age.  Second, nuclear weapons testing in the 1950s and 1960s produced measurable levels of radioactive materials; particularly notable is the spike in the radioactive Carbon-14 isotope preserved in ice cores, lake sediments, tree rings, and other Earth recorders.  The authors specifically propose that the 1964 peak in Carbon-14 recorded in a Polish pine tree be the golden spike.  Other evidence could include genetically modified crop pollen, the detection of molecules unique to plastics and refrigerator production in marine sediments, among other hallmarks of the Great Acceleration of population and environmental impact since 1950. 

Ultimately, for the Anthropocene to be officially pronounced, it will take a recommendation by the 'Anthropocene' Working Group, a supermajority when put to vote at the International Commission on Stratigraphy, and ratification at the International Geological Congress.  The earliest opportunity for all this is at the 35th convening in 2016 in Cape Town, South Africa.  Regardless of the outcome, however, the Anthropocene has taken off in the scientific and public consciousness.  

This has tremendous ramifications for us in how we see ourselves and our environmental impact. 

3) Although, viewed impartially, the actual start date for unprecedented human impact on the earth system may seem arbitrary, I would submit that the dialogue tells us just as much about ourselves and our time as does the actual scientific evidence presented.  As this Nature study points out, differences in describing the present geologic age oftentimes broke down along Cold War lines in the 20th century.  The authors write, "The East-West differences in usage may have been due to differing political ideologies:  an orthodox Marxist view of the inevitability of global collective human agency transforming the world politically and economically requires only a modest conceptual leap to collective human agency as a driver of environmental transformation."  If political affiliations could affect scientific nomenclature then, they certainly could today.  Depending on whether scientists agree on either 1610 or 1964 as the start of the Anthropocene, different stories are privileged.  As the authors describe astutely, 1610 "implies that colonialism, global trade, and coal brought about the Anthropocene," whereas 1964 marks a technological watershed arrived at by a powerful elite of the few wealthiest nation-states on Earth.  As of yet, nuclear weaponry has thankfully not been the most potent factor impacting the environment.  We can only hope this peace endures.

The authors argue that the utmost prudence will be required to make sure that contemporary culture has a minimal effect on scientific definitions of the Anthropocene.  Just this week, a different cohort of scientists argued in a Science Perspectives piece that the formal designation of the Anthropocene is a bad idea partly for this reason, and because it will necessarily fail to recognize the long history of human environmental impact beginning with our global emigration(s) from Africa. 

Whether you find the nuances of the debate intriguing or appalling, I leave a plethora of more philosophical points unanswered.  We, collectively, are likely driving the sixth great extinction event in Earth history.  This level of global transformation has been wrought before by both living and non-living agents, from the blue-green algae that allowed our atmosphere to oxygenate 2.3 billion years ago, to the extraterrestrial object that likely drove the dinosaurs extinct and gave rise to the age of mammals 65 million years ago. But never before has a sentient species like ourselves caused such wholesale upheaval.  Addressing, mitigating, or simply grappling with that kind of power could characterize the psyche of many future generations.

I would be most remiss not to acknowledge that the impetus for this blog entry was provided by a collaborative, forthcoming social history of the Anthropocene by myself and lead authors Nancy Jacobs and Danielle Johnstone at Brown University.  However, I alone accept responsibility for any errors presented in this post.  

Ask a Scientist: Is the process of waist training unhealthy or harmful to your body?

By: Kylia Goodner

This one’s for the ladies (or really anyone trying to get an hour-glass figure)!  Everyone from Jessica Alba to Kim Kardashian is wrapping up and reporting great success. But how does waist training work? And are you causing more bodily harm than good?

If you’re like me and have never heard of waist training before, let me give you a brief overview of what it actually involves. The process of waist training involves putting extreme amounts of pressure on your waist through the use of some type of binding that will “train” the waist to form an hour-glass shape. There are typically two types of binding: corsets and wraps.

Corsets rose to prominence in the 1800s as an undergarment meant to give women an extremely tiny waist. Unfortunately, there have been no recent studies of corset use, so the following information is taken from observations and research done before 1980.  An effectively tied corset can exert 85 pounds of pressure per square inch. This can decrease the size of the abdomen by 6 inches, but it has extremely harmful side effects if worn long-term. Corsets have been shown to decrease lung capacity by 20%, and they can cause the muscles of the waist to deteriorate, making it impossible to sit or stand without the support of the corset. Other medical problems, including hernias and uterine damage, have also been associated with corsets. Of course, all of these side effects come from extreme long-term use, but in order to obtain the hour-glass figure, a corset must be worn continually.

Body wraps, on the other hand, are slightly less intense, but equally as under-studied as corsets. Body wraps work by wrapping the torso (or other bodily regions) in a plastic or cloth wrap. These wraps supposedly work by shaping your body while causing you to sweat off the extra weight. To date, I was not able to identify even one study looking at the effectiveness or dangers of body wraps. Therefore, it is important to note that science does not support the claims made by companies that produce body wraps. Further, multiple reports in 1980s issues of FDA Consumer* warned against believing these companies’ claims, because their products had not been approved by the FDA. These reports explained that the wraps do not dissolve fat, because “fat is not broken down by perspiration”. Instead, you are only losing water through sweat.

In reality, the topic of waist training is extremely under-researched. But what we do know from past observations is that corsets will create an hour-glass figure, albeit at a very high cost that has little to do with dropping pounds. Body wraps, meanwhile, are less intense, but also less effective at creating this shape.  Overall, if you want to drop weight and gain that hour-glass figure, science currently supports eating healthy and exercising rather than following the current celebrity fad.

*The FDA Consumer reports are only available as hard copies. For information pertaining to weight-loss fads and body wraps, please check out the 1981, 1982, and 1985 editions.