Musseling through climate change

Our planet’s climate is ancient — an estimated 3.5 billion years old, in fact. Understanding how Earth’s climate has changed over that time is important for predicting and coping with climate change today and in the future. But because it is hard to know exactly what happened a couple of billion years ago, climate scientists use mathematically constructed models that take into account past abiotic, or non-living, factors like carbon dioxide levels and ocean chemistry to predict climate patterns in the future. Ultimately, the goal is to predict how biotic factors — living things like us — will be affected. 

These mathematical models are a work in progress. Often they are built from field studies spanning only one to two years, and many do not factor in the biological mechanisms for plasticity that allow organisms to adapt to changing environmental conditions. These gaps were the impetus for a study conducted by Dr. Luca Telesca and colleagues, recently published in Global Change Biology. Their work investigated shell shape and body structure in archival specimens (read: preserved in ethanol, not fossilized) of the blue mussel (Mytilus edulis) collected roughly every decade between 1904 and 2016 along 15 kilometers of Belgium’s coast (Fig. 1). Measurements of the mussels themselves were coupled with extensive long-term datasets of coastline environmental conditions over the past century, all obtained from collections at the Royal Belgian Institute of Natural Sciences.

Fig. 1: Study location, along 15 km of Belgian coast between the cities of Ostend and Nieuwpoort (starred). Image source: Google Maps.

The blue mussel is not your typical specimen in an archival collection. Common animals often aren’t considered worth preserving for the historical record. However, it’s precisely because they are common that species like the blue mussel make great barometers for environments gone by. The blue mussel in particular is a “calcifying foundation species,” so named for its ability to sequester and store calcium and carbon from the surrounding water (see Fig. 2) and for its habitat on shallow marine floors. This calcifying ability, or biomineralization, is the process by which living organisms convert non-living substances into inorganic mineral derivatives. It is an astoundingly ubiquitous process: all six taxonomic kingdoms, from single-celled organisms in Archaea to mammals like us — we’re in the kingdom Animalia — contain organisms capable of biomineralization. The bones in our bodies are an example, the result of binding calcium phosphate from our diets into a different, crystallized form of calcium called hydroxyapatite. Furthermore, because biomineralization is an easy-to-measure, direct interaction between biotic and abiotic factors, it is an ideal subject of study for climate scientists. 

Fig. 2: A typical blue mussel shell and cross-section. After calcium carbonate crystals are absorbed from the surrounding water, they become layered with secreted structural proteins from the mussel’s body tissue, or mantle. These layers of calcium carbonate and secreted proteins form the mussel’s shell, the thickness of which can vary depending on how much calcium carbonate is absorbed. Image created using Biorender.com.

One of the most pressing concerns presented by rapid climate change today is ocean acidification: an increase in oceanic carbonic acid resulting from elevated levels of carbon dioxide in the atmosphere. Excess carbonic acid increases the acidity of ocean water, which can dissolve shells, and decreases the availability of calcium carbonate, the mineral that mussels and other ocean biomineralizers use to build their shells in the first place. Ocean acidification has had a negative impact on many species; one notable example is coral in the Great Barrier Reef. Given these known effects of climate change and ocean acidification on many ocean calcifiers, the authors predicted that they would observe a steady decrease in shell size between 1904 and 2016. 

Instead, to their surprise, they observed a marked increase in blue mussel shell size since 1904. The team’s results hold a number of implications for predictive climate change modeling. First, the findings signify that archival collections of organisms from the past can and should be used to influence our current predictions about what’s to come in any given biome, 10, 20, or 100 years from now. Second, and more hopefully, the findings suggest that mussel populations somehow acclimated to shifting environmental conditions along the Belgian coast over the past century. The authors speculate that this could be because rising ocean temperatures could actually increase calcification, combating dissolution induced by acidic conditions, or that rising water temperatures may have increased the availability of a specific food source. Altogether, the potential for compensatory mechanisms in this study population of blue mussels points to the same potential in other species for coping with rapid environmental change over the next century. As we continue to update predictive models with data from the past and study and protect the populations most vulnerable to rapid climate change, we may find ways to help them mussel through yet. 

 

Dr. Telesca is a postdoctoral research scientist affiliated with Columbia University’s Earth Institute and the Lamont-Doherty Earth Observatory.

Maternal Stress and the Developing Brain

As humans, we all experience stress. It is a normal, and sometimes even beneficial, part of life. A small amount of stress can help motivate someone to prepare for a job interview or study for an important exam. There are times, however, when stressors become overwhelming and even detrimental to health. Scientists, from medical researchers to psychologists, have studied stress for decades and documented some of these negative impacts on the brain. During a person’s foundational early years, the presence or absence of stress can play a crucial role in development. For instance, extensive research shows that living in poverty is extraordinarily stressful for families and can negatively influence children’s brain development. The impacts of stress resulting from situations such as growing up in poverty warrant further investigation, especially considering that in 2020, one in six children in the U.S. was living in poverty.

Researchers can use various methods to assess how factors like stress impact the brains of growing children. Developmental scientists can use a tool called EEG, short for electroencephalography, to study the brain. EEG measures electrical activity in the brain by recording the communication between brain cells. It is an ideal neuroimaging method for studying infant brain development since it allows infants to be awake, moving, and even sitting on their caregiver’s lap during recording. Besides being infant-friendly, EEG is a useful tool for looking at brain development, given that there is a known pattern of how brain activity changes across the first few years of life.

Specifically, when using EEG to look at brain development, scientists typically see two different patterns. Broadly, infants show a mix of different types of brain activity that we call low-frequency and high-frequency power. Low-frequency power (e.g., theta) tends to be higher when the brain is at rest, while high-frequency power (e.g., alpha, beta, and gamma) tends to support more complex thinking like reasoning or language. As infants grow, scientists see low-frequency power decrease and high-frequency power increase. Importantly, we can use EEG to assess how factors like stress affect this balance of low-frequency and high-frequency power in the developing brain.
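To make the low- versus high-frequency distinction concrete, here is a minimal Python sketch of how relative band power can be computed from a recorded signal. The band edges and the synthetic "infant-like" signal are illustrative assumptions for this sketch, not the pipeline used in the study.

```python
import numpy as np

def band_power(signal, fs, band):
    """Sum the FFT power spectrum within a frequency band (illustrative only)."""
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum()

def relative_power(signal, fs):
    # Approximate conventional band edges in Hz (assumed for this sketch)
    bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
    powers = {name: band_power(signal, fs, b) for name, b in bands.items()}
    total = sum(powers.values())
    return {name: p / total for name, p in powers.items()}

# Synthetic 10-second "recording" sampled at 250 Hz: a strong 6 Hz (theta)
# rhythm plus a weaker 10 Hz (alpha) rhythm, standing in for an infant EEG.
fs = 250
t = np.arange(0, 10, 1 / fs)
infant_like = 2.0 * np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
rel = relative_power(infant_like, fs)
```

For this theta-dominant signal, `rel["theta"]` comes out larger than `rel["alpha"]`; the developmental shift described above would appear as that ratio tilting toward the high-frequency bands with age.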

Figure 1. A one-month-old infant with an EEG cap. Courtesy of the Neurocognition, Early Experience and Development Lab.

Research shows that children growing up in chronically stressful environments often show alterations in the typical pattern of brain activity development. To further understand the mechanisms underlying this pattern of development, scientists have begun to study which biological and environmental factors may be at play. For instance, researchers can examine the role of caregiver stress, socioeconomic status, home environment, and neighborhood factors, just to name a few.

A recent paper by Dr. Sonya V. Troller-Renfree and colleagues examined maternal stress by measuring the amount of the stress hormone cortisol found in hair. This measure assesses chronic stress, providing researchers with the mother’s average cortisol level over the preceding three months. Dr. Troller-Renfree’s research group hypothesized that infants of mothers with higher levels of the stress hormone would show differences in their brain activity compared to infants of mothers with lower levels. Specifically, the researchers predicted that infants of more chronically stressed mothers would exhibit proportionally more low-frequency power and proportionally less high-frequency power than infants of physiologically less-stressed mothers.

Indeed, their results showed that infants of mothers with higher levels of hair cortisol demonstrated higher levels of low-frequency (theta) activity and lower levels of high-frequency (alpha and gamma) brain activity. This finding is consistent with previous research showing that stress and adversity impact early neural development. Importantly, Dr. Troller-Renfree’s team sampled a diverse pool of participants (both in terms of socioeconomic status and race), bolstering the generalizability of their findings.

So what are the implications of these alterations? Research suggests that similar patterns of neural activity are associated with negative outcomes later in a child’s life, including delays in language development and psychiatric problems. Nevertheless, this does not mean that a child will undoubtedly experience these issues. Additionally, these patterns, while associated with negative outcomes in some areas, may be adaptive in other circumstances. Furthermore, the mechanisms by which a mother’s stress impacts her developing child remain unclear. How exactly does a mother’s stress level impact the brain of her child?

Based on previous research by other scientists, Dr. Troller-Renfree posits a few mechanisms that must be further explored. For example, it is possible that stress impacts crucial mother-child interactions. It could be that stress hormones are passed from mother to baby in utero or through breastmilk. Moreover, it is also possible that environmental factors impact stress and brain development.

It is crucial that developmental scientists continue studying these mechanisms so that targeted intervention programs can be formed for families facing stress. Indeed, the esteemed pediatrician and researcher Dr. Jack Shonkoff of the Center on the Developing Child said in an episode of The Brain Architects Podcast: “In fact, one of the cardinal principles of the science of early childhood development is that if we want to create the best kind of environment for learning and healthy development for young children, we have to make sure that the adults who care for them are having their needs met as well.” As a society, we must recognize how detrimental stress can be to the developing child and invest in finding effective ways to alleviate caregiver stress.

Dr. Sonya V. Troller-Renfree is a Goldberg Postdoctoral Fellow in the Neurocognition, Early Experience and Development Lab at Teachers College, Columbia University. Her research focuses on the effects of early adversity and poverty on cognitive and neural development. She intends to continue examining these questions as part of her new, federally funded Pathway to Independence Award (K99/R00). You can stay up-to-date on her research findings on Twitter at @STRscience or on her website: www.sonyatrollerrenfree.com.

New gene-switch technology to study multiple sclerosis

Our genetic blueprint consists of tens of thousands of genes, with new genes still being discovered and added to the growing list. Our genes provide DNA instructions to the protein-making machinery in our bodies. These instructions can influence our health and dictate whether we will get debilitating diseases. Have you ever wondered how scientists work out which genes are responsible for what? For example, does gene A control our hair colour, or does gene B dictate whether we will develop an autoimmune disease such as multiple sclerosis? The answer lies in DNA recombination technology, which allows scientists to delete, invert, or replace DNA instructions. The technology, called Cre-lox recombination, relies on an enzyme called Cre recombinase, which can bind, cut, and recombine DNA at specific sites inserted in pairs in the DNA. The Cre-binding site is called the loxP sequence: a 34-nucleotide DNA sequence made up of two inverted repeats separated by a spacer. Cre enzymes recognize these loxP sequences and edit the intervening stretch of DNA, resulting in gene deletion or inversion.
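The deletion-versus-inversion logic depends only on the relative orientation of the two loxP sites. A toy Python sketch on symbolic tokens (purely illustrative; real loxP sites are 34-bp DNA elements and Cre acts on chromosomes, not strings) captures the two outcomes:

```python
# Toy model: a stretch of DNA as a list of tokens, with loxP sites marked
# by their orientation.
LOX_FWD, LOX_REV = ">lox>", "<lox<"

def cre_recombine(dna):
    """Apply Cre to the first pair of loxP sites found:
    same orientation     -> delete the flanked segment (one site remains);
    opposite orientation -> invert the flanked segment (both sites remain)."""
    sites = [i for i, tok in enumerate(dna) if tok in (LOX_FWD, LOX_REV)]
    if len(sites) < 2:
        return list(dna)                      # nothing for Cre to act on
    i, j = sites[0], sites[1]
    if dna[i] == dna[j]:                      # same orientation: excision
        return dna[:i] + [dna[i]] + dna[j + 1:]
    flip = {LOX_FWD: LOX_REV, LOX_REV: LOX_FWD}
    # Opposite orientation: reverse the inner segment and flip each token
    segment = [flip.get(tok, tok[::-1]) for tok in reversed(dna[i + 1:j])]
    return dna[:i + 1] + segment + dna[j:]

# Same-orientation sites: the flanked gene is excised
deleted = cre_recombine(["promoter", LOX_FWD, "il6", LOX_FWD, "polyA"])
# Opposite-orientation sites: an inverted (inactive) gene is flipped back on
activated = cre_recombine(["promoter", LOX_FWD, "il6"[::-1], LOX_REV, "polyA"])
```

The second case, flipping an intentionally inverted gene back into reading orientation, is the principle behind the double-floxed inverted systems discussed in the study below.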

In a recent research article, Dr. Olaya Fernandez Gayol and colleagues use an advanced version of Cre-lox technology called DIO (Double-floxed Inverted Open reading frame) to understand the role of the interleukin-6 (IL-6) gene in multiple sclerosis (MS). MS is a chronic disease of the brain and spinal cord in which our immune system eats away the myelin sheath around nerves, disrupting communication between the brain and the body. IL-6 is a proinflammatory cytokine known to promote MS. Fernandez Gayol et al. use an experimental mouse model of MS that acutely develops brain inflammation, or encephalitis (encephalo- “the brain” + -itis “inflammation”), within 3 weeks of disease induction. This model is referred to as EAE (Experimental Autoimmune Encephalomyelitis) and closely mimics human MS.

Scientists have conventionally studied the role of IL-6 in EAE mice by irreversibly deleting the IL-6 gene in one cell type. However, the results were confounded by compensatory expression of IL-6 from other cell types. Fernandez Gayol et al. circumvent this problem by wiping out IL-6 from all cells and then restoring IL-6 expression specifically in microglial cells. It is akin to entering a dark room and turning on a light switch in one corner to clearly see what’s lying there. 

Figure 1. Cartoon depicting the genetic strategy used by Fernandez Gayol et al. to recover IL-6 gene expression exclusively in microglial cells in the mouse brain. Created with Biorender.com.

Olaya and the team use the cutting-edge DIO method to wipe out IL-6 by introducing an inverted form of the IL6 gene, which makes the gene non-functional (Figure 1A). This inverted form does not produce IL-6 protein, and mice carrying it (referred to as IL6-DIO-KO) are healthy (Figure 1A). As shown in Figure 1B, Cre-mediated recombination flips the IL6 gene into the correct orientation to make it active. This flipping occurs exclusively in microglial cells and only upon treatment of the mice with the drug tamoxifen (TAM). Mice in which IL-6 expression is active (referred to as IL6-DIO-ON) develop EAE disease (Figure 1B).

The team carefully optimized the duration of tamoxifen treatment. Just 5 days of TAM did not flip the IL6 gene, so they extended the treatment to 11 days and found the IL6 gene turned on in all IL6-DIO-ON mice. Olaya says validation is important when creating new mouse models: “We used EAE to validate the mouse because it was a model readily available in our lab and IL6KO [deficient] mice happen to be completely resistant to the disease.” The finding that IL6-DIO-ON mice, with the IL6 gene active exclusively in microglia, develop EAE indicates that IL-6 made in the brain promotes disease in this mouse model. 

Compared with more traditional methods of generating gene mutations, which require extensive mouse breeding or continuous drug treatment, the strategy presented by Olaya and colleagues is labour- and cost-effective. Their findings showed that in the absence of IL-6, EAE does not develop in mice; turning on the IL-6 gene like a gene switch using DIO technology, on the other hand, causes mice to develop the disease. Overall, this technology is highly customizable for understanding the role of different genes in specific cell types in a disease context. It paves the way to a deeper insight into the different molecular players involved in disease.

 

Dr. Olaya Fernandez Gayol is a postdoctoral research scientist in the Department of Pediatrics and co-president of the Columbia University Postdoc Society (CUPS). She also manages the CUPS Press office, which provides postdocs with a platform to publicize their science while improving their science communication skills. 

Transcription factors and cellular fixer-uppers

Self-renewing stem cells are capable of developing into specialized cell types, making them ideal candidates for studying human development and as potential treatment modalities for a range of diseases. There are three types of stem cells: embryonic stem cells, adult stem cells, and induced pluripotent stem cells. As the name suggests, embryonic stem cells are found in the embryo at very early stages of development, while adult stem cells are found in specific tissues after development. However, the use of human embryonic stem cells in research is heavily restricted for ethical, religious, and political reasons. This limitation spurred the development of cell reprogramming techniques that convert differentiated cells, such as skin cells, back to an embryonic-stem-cell-like state through a process called induced pluripotency. The resulting induced pluripotent stem cells (iPSCs) are functionally equivalent to natural human embryonic stem cells and can be differentiated into any desired cell type using a mixture of biological molecules.

Cell reprogramming techniques can be likened to fixer-uppers. Imagine trying to remodel a building for a different purpose – converting an office building into a residential one for instance. Though the building material can be reused, with the aid of experts, there would be some structural changes and remodeling necessary to make it a home. Similarly, cellular reprogramming is the technique by which one cell type can be converted to another cell type in the lab with the help of certain gene expression regulators called transcription factors (Fig. 1). The process of inducing pluripotency has been studied extensively and the overexpression of four transcription factors – OCT4, SOX2, KLF4, cMYC (collectively referred to as “OSKM”) – has been shown to induce pluripotency in mouse skin cells.

Many studies have tried to identify other transcription factors with the potential to induce pluripotency or to replace OSKM in an effort to enhance the efficiency of iPSC generation. Of these four transcription factors, SOX2, KLF4 and cMYC have been successfully replaced by members of their protein family to induce pluripotency. However, replacing OCT4 with structurally similar and evolutionarily related factors failed to show similar reprogramming capabilities. This could indicate the presence of special molecular features on OCT4 that give it the ability to reprogram cells. However, these special features and the molecular mechanisms that enable OCT4 to induce pluripotency remain to be identified.

Fig.1. Depiction of pluripotency induction in differentiated cells. Transcription factors regulate the process of converting a mature cell into an induced pluripotent stem cell which can then be directed to differentiate into any desired cell type. Illustration created with BioRender.com

In the current study, Dr. Malik and colleagues hypothesized that the ability of a transcription factor to reconfigure chromatin (the complex of DNA, RNA, and protein found inside the nucleus of eukaryotic cells) is one of the features that distinguishes a reprogramming-competent transcription factor from a non-competent one (Fig. 2). To test this hypothesis, they studied the well-established OCT4-SOX2 relationship from initiation to maintenance of pluripotency, comparing DNA accessibility, DNA binding, and transcriptional control by OCT4, OCT6, and an OCT4 mutant that does not interact with SOX2 (OCT4defSOX2) during early, mid, and late phases of cell reprogramming. What makes this study particularly interesting is that a previous study by the same group showed that OCT4 naturally interacts with SOX2 to induce pluripotency, whereas OCT6 can only induce pluripotency when mutated to enhance its interaction with SOX2. Dr. Malik’s current study focuses on the mechanisms by which these transcription factors interact with chromatin and in turn bind to the transcription factor binding sites on the genes involved in processes from the initiation to the maintenance of induced pluripotency.

Fig. 2. Depiction of chromatin remodeling by competent vs non-competent transcription factors. Opening up the chromatin by competent transcription factors and making transcription factor binding sites accessible is required to induce pluripotency. Failure to do so by non-competent transcription factors results in a failure to induce pluripotency. Illustration created with BioRender.com.

From this study, the researchers found that OCT4, OCT6, and OCT4defSOX2 have unique transcription factor binding sites on pluripotency-related genes, which could explain why substituting OCT4 with related transcription factors does not activate these genes. The results challenge previously established roles for OCT4 in driving pluripotency. Dr. Malik and colleagues identified distinct modes of chromatin interaction and roles for SOX2 and OCT4 during the initiation, progression, and maintenance of pluripotency. They found SOX2 to be a better facilitator of chromatin opening and initiator of pluripotency than OCT4. Once cells have been initiated towards pluripotency, OCT4-SOX2 binding is required to see the process through; once the cells are pluripotent, it becomes less essential. The most important role of OCT4, they found, is to maintain cells in a pluripotent state, as opposed to its previously investigated role as an initiator of pluripotency. 

The results from this study contribute new insights to a rapidly progressing field. Identifying the roles of key factors during the stages of reprogramming adds vital pieces of information to the big puzzle of cellular reprogramming. These pieces could considerably enhance the use of stem cells as potential therapeutic candidates for a number of diseases.

Dr. Vikas Malik is a Postdoctoral Research Fellow in Dr. Jianlong Wang’s lab in the Department of Medicine at Columbia University Medical Center and is a member of CUPS and the Outreach and Communications Committee.

 

 

No more lazybones

Contrary to what many people think, bone is a highly dynamic tissue that is constantly being broken down and reformed in order to maintain a healthy and strong skeleton. This process of bone remodeling is enabled by specialized bone cells called osteoclasts and osteoblasts: osteoclasts produce enzymes to degrade old and damaged bone, which osteoblasts replace with new bone. However, these cells do more than simply break down and rebuild your bones. Recent advances in bone biology have shown that bone cells also have an important endocrine function, meaning that they release hormones into the circulation to affect other tissues and organs in the body. For example, the bone-derived hormone osteocalcin was shown to promote muscle function in a mouse model. Dr. Subrata Chowdhury from the Karsenty lab of the Department of Genetics and Development at CUMC followed up on this remarkable finding and investigated the regulation of osteocalcin in animal models as well as humans, as recently published in the Journal of Clinical Investigation.

Dr. Chowdhury and colleagues found that circulating osteocalcin levels are increased after a 12-week exercise program in humans, and that this effect requires the signaling molecule, or “cytokine”, interleukin-6 (IL-6). The latter was shown by inhibiting IL-6, which completely blocked the induction of osteocalcin by exercise. They continued by using a mouse model to show that IL-6 is actually derived from the muscle itself, and that its production is necessary for maximal exercise capacity. In other words, mice that could not produce IL-6 in their muscles were not able to run as far on a treadmill as compared to mice that were able to produce IL-6.

They further investigated the interplay between IL-6 and osteocalcin in mice, and found that IL-6 stimulates osteoblasts in the bone tissue to produce RANKL, a protein that is necessary for osteoclast differentiation. As a result, more active osteoclasts are formed within the tissue. These osteoclasts produce high amounts of osteocalcin, which signal back to the muscle to promote the uptake and breakdown of glucose and fatty acids by muscle cells. In addition, osteocalcin stimulates the muscle to produce more IL-6, thereby generating a positive feedback loop between muscle and bone (see Figure below). The end result of this loop is a muscle tissue which can utilize more nutrients from the circulation, and is therefore more functional during exercise.
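The logic of this loop can be sketched numerically. The following toy simulation, with made-up gains and a saturation cap standing in for biological limits, only illustrates how mutual stimulation with saturation settles at an elevated steady state rather than growing without bound; it is not a model from the paper.

```python
# Purely illustrative dynamics of the muscle-bone positive feedback loop.
# Parameters (gain, cap, starting levels) are assumptions for this sketch.
def simulate_loop(steps, gain=0.5, cap=10.0):
    il6, osteocalcin = 1.0, 1.0
    history = []
    for _ in range(steps):
        # Muscle IL-6 drives osteoblast RANKL -> osteoclasts -> osteocalcin,
        # and osteocalcin feeds back to boost muscle IL-6; both saturate at `cap`.
        osteocalcin = min(cap, osteocalcin + gain * il6)
        il6 = min(cap, il6 + gain * osteocalcin)
        history.append((il6, osteocalcin))
    return history

history = simulate_loop(20)
```

Both quantities rise step by step and then plateau at the cap, mirroring how a positive feedback loop amplifies a signal during exercise without running away indefinitely.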

Exercise capacity, also referred to as fitness, is a strong predictor of chronic disease and mortality. The research by Dr. Chowdhury and colleagues has shown that exercise capacity can be improved by stimulating the IL-6-osteocalcin axis. Although their findings are very convincing, according to Dr. Chowdhury the scientific community initially reacted with disbelief. IL-6 is classically known as an inflammatory cytokine, and is one of the components of the detrimental “cytokine storm” that occurs during, for example, a COVID-19 infection. However, while the high levels of IL-6 under pro-inflammatory conditions are damaging for the body, low sustained levels of IL-6 may actually be beneficial. Follow-up studies are now being performed with low doses of long-acting IL-6 analogues, to study their potential to safely and effectively promote exercise capacity and improve health.

Dr. Chowdhury showed us the importance of being led not by scientific biases, but by our observations. And who would have guessed that our skeleton does not weigh us down, but actually makes us run faster?

Figure adapted from Chowdhury, JCI 2020, and created with BioRender.com.

Cracking early construction steps of the blood brain barrier

Figure 1. Early demonstration of blood-brain barrier phenomenon in developing brain.

In physiology, we often use the terms “central” and “periphery” to distinguish the brain from the rest of the organism. This is not a trivial dichotomy: in late 19th-century experiments, dye injected into the bloodstream of mice spread everywhere in the organism except the brain (Fig. 1). In fact, a structure aptly named the blood-brain barrier surrounds the brain and has two functions: protecting it from peripheral pathogens or toxins present in the blood, and allowing nutrients to cross over to provide energy to neurons and glial cells (Fig. 2).

 

Neurodegenerative diseases, ischemic strokes, and other conditions such as multiple sclerosis often involve a disruption of the blood-brain barrier. Understanding how the barrier forms is therefore important for investigating cures for these disorders. In their current paper, Cottarelli and colleagues focus on the genetic determinants involved in the maturation and function of the blood-brain barrier.

Figure 2. Blood brain barrier anatomy. From Anatomy and Physiology of the Blood–Brain Barriers, J. Abbott

The formation of a complex multicellular structure from stem cells requires the regulation of cell proliferation, migration, and differentiation. These processes rely on a few key molecular signaling pathways (Fgf, Hedgehog, Wnt, TGF-beta, Notch). Wnt/β-catenin is a highly evolutionarily conserved pathway that relays information from cell-surface receptors to the nucleus; mutations in this pathway lead to abnormal development or cancer. While we know that this signaling pathway is involved in the establishment of the blood-brain barrier, the detailed molecular mechanisms remained to be elucidated. Dr. Cottarelli’s work identifies a new partner of the Wnt/β-catenin pathway necessary for blood-brain barrier development: the protein Fgfbp1, secreted by the endothelial cells of the brain and released into the basement membrane during the first weeks of age in mice. Using fluorescence microscopy techniques, Dr. Cottarelli highlighted a molecular pathway in which blood-brain barrier maturation is enabled through the deposition of collagen, a well-known component of connective tissues, in the vascular basement membrane. She shows that removal of the Fgfbp1 gene from blood vessels leads to decreased Wnt/β-catenin signaling, abnormal vascularization, delays in the establishment of the blood-brain barrier, and abnormal cell interactions at the level of the neurovascular unit. The paper also identifies a molecular mechanism linking Fgfbp1 and collagen IV in the basement membrane through regulation of the gene Plvap (Fig. 3).

 

Figure 3. Proposed model for the role of Fgfbp1 in BBB maturation.

Future studies will investigate how Fgfbp1 is involved in complex neurovascular diseases.

Azzurra Cottarelli is a postdoc in Dr. Agalliu’s lab in the Department of Neurology. Her new paper in Development highlights her expertise in the formation of the blood-brain barrier.

 

Mating induces transgenerational silencing in worms

Just imagine if, apart from looks, one could also inherit their parents’ skills, memories, knowledge, and ideas. Sounds amazing, right? Passing down such characteristics would require transgenerational epigenetic inheritance. The literal meaning of epigenetics is “above” or “on top of” genetics: external modifications to a cell that can turn a gene on or off without any change to its DNA sequence. The transmission of these epigenetic marks from parents to offspring is called transgenerational epigenetic inheritance (TEI). Lifestyle factors such as diet, smoking, physical activity, alcohol consumption, or even night-shift work can be major contributors to epigenetic modifications. Although the occurrence of epigenetic inheritance in humans is still controversial, it has been observed in plants, worms, mice, and flies. A recent preprint by Dr. Sindhuja Devanapally and colleagues focuses on TEI and silencing in worms, reporting features that provide a barrier against TEI.

Caenorhabditis elegans (C. elegans) is a transparent, small (1 mm) worm that lives in temperate soil environments, has a rapid life cycle (3 days), and can be easily grown in a petri dish while munching on bacteria as its food source. Most of these worms are hermaphrodites (with both male and female sex organs), while a few are males. The worms may look alike to the naked eye, but they differ from each other in developmental timing, lifespan, and behavior, differences that can be epigenetically inherited rather than hard-wired in their genomes. For instance, some Pseudomonas bacterial strains are toxic food for the worms. Yet mother worms unlucky enough to eat the poisonous bacteria can “teach” their newborn offspring not to make the same mistake, epigenetically transferring the pathogen-avoidance experience to their progeny.

RNA interference (RNAi) by double-stranded RNA (dsRNA) is a technique in which RNA molecules inhibit gene expression or translation by neutralizing targeted mRNA molecules; it has been shown to contribute to transgenerational epigenetic inheritance. The Jose lab previously showed that dsRNA expressed within the neurons of worms can enter the germline and cause transgenerational silencing. However, some worm descendants maintain the epigenetic gene silencing inherited from their ancestors for the long term, while others lose silencing quickly. The mechanisms that perpetuate silencing, and those that reverse it, both remain unclear.


Figure 1: Transgenerational silencing of a gene is observed in descendants (no green GFP expression) for up to several generations when the parents (green GFP expression), but not their offspring, were fed RNAi (RNA interference, in which RNA molecules inhibit gene expression or translation by neutralizing targeted mRNA molecules). Illustration created with BioRender.com.

In this preprint, the authors fed parent worms double-stranded RNA (dsRNA) targeting a green fluorescent protein (GFP)-encoding gene expressed in the worm germline and monitored the maintenance of gene silencing in their unfed descendants (Fig. 1). While GFP expression was turned off in the initial generations, it almost always came back in later generations, except in one peculiar case. The authors discovered that GFP, when expressed as part of a rare recombinant two-gene operon named T (containing the GFP and mCherry fluorescent genes), showed permanent RNA-based silencing after RNAi. They reported that such silencing can also be triggered without dsRNA, simply by mating father worms expressing T with mother worms (hermaphrodites) lacking T expression. Because this kind of inducible permanent silencing had never been reported before, the authors named the phenomenon mating-induced silencing. Mating-induced silencing of T could be maintained for more than 300 generations without selection beyond the second generation, making this the first study to report persistent silencing without external triggers. As the authors note, this contrasts dramatically with germline-expressed genes that can be silenced for only a few generations by RNAi or by the trans effects of mating-induced silencing. Follow-up experiments confirmed that maternal T can provide a protective signal that prevents paternal T silencing, suggesting that the germline has evolved to prevent permanent silencing, potentially to avoid lasting responses to temporary changes in the environment.

According to the concept of germline immortality, germline cells, unlike somatic cells, are well protected from the environment and can be passed on indefinitely across generations. However, Devanapally and colleagues report that the expression of genes within the germline (not all, but rare examples like T) can be changed for hundreds of generations without any external trigger. This highlights how worms have evolved fascinating epigenetic mechanisms that can accelerate adaptation while keeping the DNA sequence unchanged. Yet the sheer infrequency of such permanent changes shows how impenetrable the germline is, and how readily it reverts to ancestral gene-expression states. Thus, this study points to an organism’s ability to preserve persistent gene-expression programs, and with them, the species.

Whether such rare examples of transgenerational epigenetic inheritance also occur in mammals, especially humans, is still up for debate. Epigenetic modifications must occur in sperm or egg cells in order to pass to the next generation, yet most such modifications in sperm and eggs are erased upon fertilization, resetting the slate so that the next generation starts from scratch and makes its own epigenetic modifications. However, some epigenetic modifications are believed to escape this erasure and be passed on to the progeny. A 2013 study in Nature Neuroscience reported that mouse parents exposed to smell-fear conditioning (a smell followed by an electric shock) could pass their trauma to the next generation. Although rare, this opens up the possibility that parents could indeed pass on their experiences, skills, or even fears to the next generation. It will be fascinating to identify the mechanisms by which environmental information is transgenerationally inherited in humans.

Dr. Sindhuja Devanapally is a Postdoctoral Research Scientist in the Department of Biochemistry and Molecular Biophysics, and co-chair of the Networking and Community Building committee of CUPS.

 

 

Plasticity inception in a nutshell

Have you ever realized that you remember experiences associated with strong emotions more vividly? For example, you probably remember what you ate at your (or a close friend’s) wedding, but not last Tuesday. However, these persistent memories are not always pleasant. People exposed to actual or threatened death, serious injury, or sexual violence can develop Post-Traumatic Stress Disorder (PTSD), which involves recurring memories or dreams of the traumatic event, bodily reactions to reminders and active avoidance of those reminders. Treatment for PTSD combines psychotherapy and medication, and it aims at enabling the person to understand their trauma and detach the triggers from the responses.

The area of your brain responsible for forming such emotional memories is called the amygdala (from the Greek word for almond, due to its shape; Fig. 1). It can modify the way it will respond to similar stimuli in the future, and it can affect how other brain areas, like the medial prefrontal cortex or the hippocampus, respond as well. This ability to change and adapt is called plasticity, and it can start with something as “simple” as a synaptic connection becoming stronger or weaker. There are higher levels of plasticity, though: if changes alter the potential response of a region to a future challenge, this plasticity of plasticity is called metaplasticity.

Human and rodent brain with highlighted amygdala, medial prefrontal cortex and hippocampus.
Fig. 1. Depiction of a human and a rodent brain. Highlighted areas are responsible for establishing emotional memories, fear conditioning and extinction. Modified from Sokolowski and Corbin 2012.

In the recent review “Intra-Amygdala Metaplasticity Modulation of Fear Extinction Learning”, CUIMC postdoc Dr. Rinki Saha and colleagues provide a comprehensive account of recent literature on metaplasticity in the amygdala in the context of fear conditioning, and how it may lead to plasticity in other connected brain regions.

Fear conditioning is a classic rodent paradigm in neuroscience research that allows scientists to study the mechanisms that create associations between neutral stimuli and unpleasant stimuli. The general experimental layout is as follows: first, a neutral stimulus (a light or a tone, for example) is consistently paired to precede an aversive stimulus (like an electric foot shock). After this exposure, animals learn that the neutral stimulus (called the conditioned stimulus) predicts the aversive one (called the unconditioned stimulus), and they develop a fear response, which they perform right after the neutral stimulus (like freezing in place). The experiment can continue to study how animals learn to dissociate the two once the stimuli stop being paired. For this second part, called fear extinction learning, the neutral stimulus is presented by itself (without pairing it to the aversive one), and researchers measure the time it takes the animal to stop performing the fear response.

In order to study the amygdala’s role in fear extinction, scientists can inject different drugs into it with very fine syringes (in a procedure called stereotaxic surgery, Fig. 2). By either activating or inhibiting different signaling pathways, they can elucidate what roles those molecules play in the fear extinction process. In addition, experiences like stress and trauma can interfere with this extinction learning, as evidenced in people who suffer from PTSD and in rodent models exposed to different stressful situations, both acute and chronic.

Depiction of a stereotaxic surgery in a rodent. Detail of injection in the amygdala.
Fig. 2. Depiction of a stereotaxic surgery in a rodent. The anesthetized animal is fixed on the frame of the stereotaxic instrument, which has very accurate rulers for the three dimensions. A very fine syringe is introduced through the skull into the brain to administer the drug or virus in a very precise way.
Made with BioRender.

This paradigm has been used by many to study metaplasticity, in which the change is not a modification of the baseline response but of the response to a subsequent plasticity-inducing stimulation. For example, Dr. Saha herself showed that it is possible to alter fear extinction learning by injecting into a subregion of the amygdala a virus that disrupts inhibitory synapses. Importantly, this happened without modifying the initial fear conditioning or the anxiety level of the animals. Her team also showed that those alterations of inhibitory synapses in the amygdala led to independent changes in the medial prefrontal cortex, hindering its intrinsic plasticity. The same intervention increased resilience to acute trauma and improved performance on a task dependent on another brain region, the hippocampus. Hence, a very targeted intervention in the amygdala can cause an array of effects across multiple brain areas.

This body of research has tremendous implications for our understanding of the brain and how to treat its diseases. In a very pragmatic sense, it should serve as a cautionary tale for researchers to consider the potential for “undesired” plasticity in more than one place in response to a given intervention. More importantly, it opens up potential therapeutic strategies for trauma- and stress-related disorders like PTSD: changes in one small region can lead to widespread effects through its connections to other brain areas. Hopefully, we are a little bit closer to tricking the brain into equating those traumatic memories with what you ate last Tuesday.

 

Dr. Rinki Saha is a Postdoctoral Research Fellow in the Department of Psychiatry researching stress, and one of CUPS’ social media managers.

Science communication vaccine: a key weapon against coronavirus misinfodemics 

The CUPS blog provides a space for postdocs to share their perspectives and express their opinions. We welcome your submissions – please email [email protected]

Rinki Saha, a postdoctoral fellow in the Department of Developmental Neuroscience, shares a personal narrative and offers advice for scientists to combat misinformation during the COVID-19 pandemic.

‘We realize that this is an unprecedented time, and there are a lot of unknowns. We’re still working to make sense of the COVID-19 outbreak and how, as a company, we can best support our customers and employees during this time…’ 

By now, almost all of us have scrolled through dozens of these kinds of emails. We are all probably so psychologically numb that these words no longer leave a mark on our minds.

I still remember how, in late January, during a lunch break, we had fun reading a meme about people refusing to drink Corona beer after hearing stories about the virus outbreak in Wuhan, China. We were all surprised that China could build hospitals in just a few days under a devastating health emergency. The whole world had no clue that this dreadful virus had already been in action since at least December 2019. Later we watched this demon called coronavirus extend its paw from Europe to the USA, its spreading trajectory shifting every day. The panic engulfed us to the point that we would spend whole days staring at the numbers of coronavirus infections and deaths growing on websites worldwide. On March 11th, the World Health Organization declared the COVID-19 outbreak a pandemic. Even that declaration probably did not reveal the full, deadly extent of this virus.

This pandemic has shown us how a microscopic organism can take over the whole world in a matter of days. From health crisis to economic breakdown, the COVID-19 outbreak has become the darkest patch in our society. The most heartbreaking thing for me as a scientist is to see the flood of misinformation flowing around social media, creating perpetual confusion and chaos among the general public. Preventing the spread of COVID-19 is straightforward: basic hand hygiene and social distancing reduce transmission. Of course, continuing social distancing over a long timespan takes a considerable toll on people’s mental health. In the first stage of the pandemic, I suspect, everyone psychologically tended to believe that stopping this devil required some more dramatic preventive measure. The moment people realized that a vaccine and medicine were not immediately available, they started to look for easy fixes. Suddenly, magic cures for COVID-19 circulated on WhatsApp and other social media platforms, ranging from lemon-ginger cocktails to cow urine or even disinfectant. Suddenly, people with zero scientific expertise, adding the disclaimer ‘although I am not a doctor’, started to claim that blah blah blah (read: hydroxychloroquine) can cure coronavirus patients. Arguments still rage on social media that COVID-19 is no deadlier than the flu. This is a relatively easy claim to address, because we have the statistics to show the transmission rate of COVID-19, and we can point out that weekly deaths from COVID-19 are actually several times higher than those from the influenza virus.

The most unsettling propaganda of the current situation is the conspiracy theory that the coronavirus was created in a laboratory as a biological weapon to destroy the world. I have spent countless hours explaining to my family and loved ones that there is not a single piece of evidence, at this moment, to prove that this is the case. Apart from the health crisis and the economic crunch, the COVID-19 pandemic has generated “misinfodemics”. As scientists, it is our duty to help non-scientists understand the whole situation surrounding the coronavirus pandemic. Most of the time, the language scientific journals use to describe the newest discoveries is beyond the general public’s understanding, although a few journals provide a separate section narrating a study’s significance to make it more digestible for non-scientists. Scientific jargon can make it very difficult to spot the subtle difference between information and pseudoscience. This is where science communicators become the tool that helps distinguish evidence-based science from misinformation. Right now, science communication is direly needed to restore balance in society.

We as scientists have to explain our work without unnecessary jargon, so that whenever news breaks that lab X has already developed a vaccine against COVID-19, people start questioning it rather than building false hopes.

Science communicators can pitch in and explain the several difficult stages of vaccine development, and that success at an initial stage does not necessarily lead to final vaccine production. Research at its very core is challenging; we can fail at any point in our experimental ventures. In the current scenario, science communicators have to explain the different models used in research, from cell culture to mouse to macaque to human. A drug that reduces the harmful effects of COVID-19 in cell culture will not necessarily work the same way in human clinical trials. Science communicators could also use evidence-based information to explain which guidelines are right and which are just misinformation. More interactive sessions with science communicators would probably be most useful, and more community-based science events need to be organized to make the general public aware of recent scientific trends and advancements.

Science communication could act as a ‘vaccine’ itself in the fight against this coronavirus “misinfodemics”. How can that happen? Science communicators can equip laypeople with the right information, exactly the way we get our vaccine boosters. Immunity against some viruses even requires multiple booster doses; in the same manner, science communicators can administer precise doses of scientific information to the public. Once vaccinated, whenever our body encounters a virus, our immune system responds by producing antibodies. I speculate that a layperson vaccinated with proper ‘science communication’ will begin to ask the right question at the right moment. Appropriate science communication can even help a layperson recognize the patterns in news stories that contain misinformation. Whenever a news article claims that a cure for COVID-19 is available according to a ‘research study’, I want to see the day when a layperson asks to see that specific ‘research study’ for verification.

The job of science communicators will not be easy at all, because just a few months ago this virus was non-existent on this planet, and we are still learning new information about it every day. But with the willpower of science communicators, the truth behind the science will always prevail in the fight against misinformation.

Disclaimer: The opinion of the author does not necessarily reflect the opinion of CUPS.

Wildfires and air pollution: beyond deadly fires


Author: Alex Karambelas, Postdoctoral Research Scientist, Lamont-Doherty Earth Observatory of Columbia University.

Bio: Alex is an interdisciplinary air pollution scientist, working with air quality modelers, energy experts, epidemiologists, and environmental scientists to determine source contributions to health-damaging air pollution. In her work, she uses chemical transport computer models, designing various emissions scenarios to identify mitigation strategies to curb future air pollution and premature mortality. Her background is in atmospheric sciences, and she earned her Ph.D. in Environment and Resources at the University of Wisconsin—Madison.


Imagine a field filled with tall sunflowers, their yellow faces smiling in your direction. The sky is a bright, crisp blue, the minimal clouds are fluffy and pearly white. You take a look around you and breathe in one big deep breath, the air feeling cool and refreshing, even a bit rejuvenating.

Now imagine you’re stuck in traffic on a crowded highway. It’s a beautiful spring day, so your windows are down. Just as you’re about to take a big breath in, the semi-truck to the left of you belches thick black smoke from its exhaust pipe. The taste is sour and unpleasant, and you roll your windows back up to turn on your air conditioning.

The black smoke is an example of air pollution: gases and aerosols suspended in the air that are harmful to human health and the environment. Sometimes referred to as smog, air pollution includes surface-level ozone (O3) [1], formed from reactions of pollutants emitted directly from sources such as cars and power plants, and fine particulate matter (PM2.5), particles a fraction of the width of a single human hair that are both directly emitted (released) and formed through chemical reactions in the atmosphere. In New York City, we can sometimes see the summer haze when we look out over the city: a thin, discolored layer muting the skyline. Across the globe, millions of people die prematurely and millions more suffer disabilities each year from breathing O3 and PM2.5 air pollution over extended periods. In my own research, I seek to identify the sources of air pollution that cause the greatest health damage, designing future emissions scenarios to try to reduce the future health burden of air pollution.

Many different emissions sources lead to air pollution, and sources and pollutant concentrations vary from city to city and region to region. Most air pollution is man-made, arising from the (incomplete) combustion of products like fossil fuels and woody biomass that meet our energy needs. Some biomass burning is also man-made: agricultural burning in India, for instance, occurs because farmers burn their crop waste. There are natural sources, too, like windblown dust, sea-salt spray, and gases released from plants and trees during growing phases or under stress such as drought (also called “biogenic sources”). Researchers like me who study air pollution tend to treat seasonal sources such as wildfires, including the 2018 Camp Fire in California, as “natural,” even if a given fire was started by a careless person with a lit match or a hot car.

Wildfires are a unique source of air pollution: they are isolated events, but they can release considerable amounts of gases and aerosols, including that same black smoke. We don’t often think of wildfires as contributing to health-damaging air pollution, focusing instead on the direct, catastrophic destruction they cause. Wildfires occur seasonally under hot, dry conditions in wooded areas all across the world, including the western United States. They can be very strong in magnitude, burning or smoldering for days or weeks, and can cover a large area of “fuel,” i.e., dry woody biomass. In the western U.S., wildfire season traditionally runs from late spring through summer, when brush is often dry and easily ignited by a lightning strike or a spark from a semi-truck undercarriage. Around this time we tend to see dozens of news articles from local and national sources covering the devastation caused by wildfires, often for weeks on end.

Wildfires can lead to dramatic increases in local and regional air pollution, releasing aerosol- and gas-phase air pollutants that can chemically react to yield enhanced O3 and PM2.5 concentrations. Near-term health impacts, such as increased hospital admissions for asthma attacks or other respiratory ailments, may be the first sign of elevated pollution from a wildfire event. Pollution enhancements such as those from wildfires can exacerbate pre-existing health conditions, increase hospital admissions, and reduce economic productivity. People are susceptible to adverse effects of air pollution at different rates: children and the elderly are much more likely to experience lung irritation even at moderate exposure. Outdoor workers may have to limit their time outdoors, reducing productivity, or be harmed in the course of their workday. Beyond structural and health damages, wildfires carry other negative economic implications. For instance, during the 2018 wildfire season, local business owners in Seattle faced an economic burden when they had to cancel various outdoor tourist outings because nearby wildfires were affecting visibility and human health. Similarly, during the worst seasonal biomass-burning events in northwestern India, Delhi will often ground flights due to reduced visibility, whether from biomass burning in upwind regions or from events exacerbated by stagnant winds.

During a wildfire event, concentrations of O3 and PM2.5 in the atmosphere downwind of burn sites may exceed U.S. Environmental Protection Agency (EPA) air quality standards (exposure limits deemed unhealthy for humans). We can measure this enhancement with surface observations, noting changes hour by hour and comparing across air-pollution monitor locations. Data from EPA monitor sites are aggregated into an Air Quality Index (AQI) warning system, visible on airnow.gov, which you can use to track all sorts of pollution episodes, such as the O3 air pollution event during the recent heatwave in New York City. Surface monitors form a sort of constellation of air-pollution measurements that helps us understand changes in concentrations over time and space; however, there is a lot of empty space between monitors where we have to make inferences about air pollution.
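To give a flavor of how a monitor reading becomes an AQI number: the index is a piecewise-linear interpolation between EPA-defined concentration breakpoints. The sketch below is only an illustration, not EPA’s official software; the breakpoints shown are the historical PM2.5 (24-hour average) table, which EPA revises periodically, so check airnow.gov for current values.

```python
# Historical EPA PM2.5 breakpoints (24-h average, ug/m3) mapped to AQI bands.
# Each tuple is (C_low, C_high, AQI_low, AQI_high).
PM25_BREAKPOINTS = [
    (0.0,    12.0,    0,  50),   # Good
    (12.1,   35.4,   51, 100),   # Moderate
    (35.5,   55.4,  101, 150),   # Unhealthy for sensitive groups
    (55.5,  150.4,  151, 200),   # Unhealthy
    (150.5, 250.4,  201, 300),   # Very unhealthy
]

def pm25_aqi(conc):
    """Map a 24-h average PM2.5 concentration to an AQI value by
    linear interpolation within the matching breakpoint band."""
    for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
        if conc <= c_hi:
            return round((i_hi - i_lo) / (c_hi - c_lo) * (conc - c_lo) + i_lo)
    return 500  # off-scale: hazardous

print(pm25_aqi(10.0))   # a clean day
print(pm25_aqi(150.0))  # heavy wildfire smoke
```

A clean-day concentration of 10 ug/m3 lands in the “Good” band, while a wildfire-smoke value of 150 ug/m3 lands near the top of the “Unhealthy” band, which is exactly the kind of jump monitors record downwind of a burn.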

We can fill in this empty space and assess the amount of air pollution coming from wildfires, or other sources, using complex chemical transport computer models made up of hundreds of chemical equations in four dimensions. Computer models are also how we get our daily and weekly weather forecasts, whose data are often used in forecasting air pollution. In my own research, I use such models to understand various energy-sector contributions to regional air pollution, such as biomass burning in India, and ways to reduce pollution and improve air quality into the future. We can test “What If?” scenarios in which certain sources or pollutants are reduced or removed entirely from the system, to understand the contributions of emissions and chemistry to air pollution. Models help researchers understand the space and time between observations, filling in gaps to clarify the sources and chemistry of air pollution, and helping us identify what might be missing when model output is compared to observed values.
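The “What If?” logic can be caricatured with a one-box model: a well-mixed airshed whose pollutant concentration rises with emissions and decays at a first-order removal rate, with scenarios differing only in their emissions. A real chemical transport model solves hundreds of coupled equations on a 3-D grid; the emission rate, airshed volume, and removal rate below are invented purely for illustration.

```python
# Toy one-box "What If?" sketch: dC/dt = E/V - k*C
# (emissions in, first-order removal out). All parameter values are
# made up for illustration; this is not a real chemical transport model.

def simulate(emission_rate, hours, volume=1e9, k=0.1, dt=1.0):
    """Return the pollutant concentration (ug/m3) after `hours` of
    constant emissions, via simple forward-Euler time stepping.

    emission_rate: ug emitted per hour into the box (assumed value)
    volume: airshed volume in m3 (assumed value)
    k: removal rate per hour, deposition plus export (assumed value)
    """
    c = 0.0
    for _ in range(int(hours / dt)):
        c += dt * (emission_rate / volume - k * c)
    return c

baseline = simulate(emission_rate=5e9, hours=240)        # all sources on
no_fires = simulate(emission_rate=5e9 * 0.4, hours=240)  # fires removed
print(baseline, no_fires)  # the difference is the fires' contribution
```

Running the two scenarios and subtracting them is, in miniature, how source contributions are attributed: everything is held fixed except the emissions of the source in question.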

We can use models at a variety of scales, from urban to global. The bottom layer of this NASA image from the Earth Observatory blog shows light pollution, indicative of human population, observed from space; it is overlaid with model data on different types of aerosols, including sea salt, dust, and black carbon, and their respective sources. In the image, you’ll notice “plumes” of air pollution blown across continents and off coastlines. Air pollution is often localized to urban centers and downwind areas, but pollution, including from wildfires, can become lofted into the air and transported downwind, sometimes over very long distances. Even here on the East Coast, at Columbia University, we can experience wildfire pollution plumes from Canada and occasionally even from the Pacific Northwest. Aside from models, we can track the transport of air pollution, including from wildfires, using satellites. To air-quality scientists, long-range transport is nearly as important as locally emitted pollution in understanding which sources contribute to ambient air pollution.

Wildfire air pollution is a small component of the total air-pollution story, which features many diverse sources across the globe, but the short-term air-quality and health implications of wildfire pollution can be considerable. In Southeast Asia, modeled seasonal biomass-burning events coupled with meteorology are estimated to have contributed to more than 100,000 premature deaths from air pollution across Indonesia, Malaysia, and Singapore (Koplitz et al., 2016). Similarly, fall agricultural waste burning in northwestern India contributes between 7 and 78% of Delhi’s air pollution, even though the burning occurs hundreds of kilometers away (Cusworth et al., 2018), leading to a near-doubling of PM2.5 during waste-burning episodes (Liu et al., 2018) and potentially contributing to thousands of deaths. In the western U.S., over 100 deaths occurred during the California wine-country wildfires of October 2017.

Is there a way to reduce the air-pollution deaths associated with wildfires? Check airnow.gov for forecasts, and tweet “#AirAirAir [place name]” on Twitter for current air-pollution levels. Wearing a facemask and staying indoors during events if you live in directly downwind areas, and avoiding travel to wildfire-active regions during and shortly after wildfire events, will greatly reduce your air-pollution exposure. Calling family and friends in the vicinity of wildfire pollution to suggest these steps is a good idea, too. Save hiking trips in dry-prone regions for (slightly) wetter seasons if possible, and always make sure a campfire is fully extinguished.

You can also help reduce air pollution, and mitigate the coming intensification of wildfires, by reducing your carbon footprint and thereby the greenhouse gas (GHG) emissions entering the atmosphere. For instance, we can expand affordable public transportation with electric fleet vehicles to reduce the number of traditional gasoline passenger cars, or affix pollution “scrubbers” to power-plant stacks to remove PM2.5 precursors through adsorption. The exacerbation of drought and high temperatures by climate change will likely increase wildfire extent and strength in the coming decades, putting millions of people worldwide at risk of losing their homes or their lives. Many sources contribute to air pollution, some more manageable than others, but when it comes to wildfires, we can all take steps to reduce our impact and protect ourselves and our loved ones.

 

To follow Alex: 

 


Footnotes:

[1] Although the same chemical compound as stratospheric ozone, surface-level ozone does not serve a positive purpose and is harmful to humans, animals, plants, and buildings.

References:
Koplitz, Shannon N, Loretta J Mickley, Miriam E Marlier, Jonathan J Buonocore, Patrick S Kim, Tianjia Liu, Melissa P Sulprizio, et al. “Public Health Impacts of the Severe Haze in Equatorial Asia in September–October 2015: Demonstration of a New Framework for Informing Fire Management Strategies to Reduce Downwind Smoke Exposure.” Environmental Research Letters 11, no. 9 (2016): 094023. https://doi.org/10.1088/1748-9326/11/9/094023.
Cusworth, Daniel H, Loretta J Mickley, Melissa P Sulprizio, Tianjia Liu, Miriam E Marlier, Ruth S DeFries, Sarath K Guttikunda, and Pawan Gupta. “Quantifying the Influence of Agricultural Fires in Northwest India on Urban Air Pollution in Delhi, India.” Environmental Research Letters 13, no. 4 (April 1, 2018): 044018. https://doi.org/10.1088/1748-9326/aab303.
Liu, Tianjia, Miriam E. Marlier, Ruth S. DeFries, Daniel M. Westervelt, Karen R. Xia, Arlene M. Fiore, Loretta J. Mickley, Daniel H. Cusworth, and George Milly. “Seasonal Impact of Regional Outdoor Biomass Burning on Air Pollution in Three Indian Cities: Delhi, Bengaluru, and Pune.” Atmospheric Environment 172 (2018): 83–92. https://doi.org/10.1016/j.atmosenv.2017.10.024.