A pain in the foot: moves to prevent injury in dancers

Dancing can be one of life’s greatest pleasures. But for folks who consistently engage in intensive forms of dance, such as ballet, it can also lead to injury. One common injury among dancers and other athletes is flexor hallucis longus tendinopathy.

The flexor hallucis longus tendon (FHL), as seen in Figure 1, helps stabilize a person when they’re on their toes, and its main action is flexing the big toe. It stretches all the way from the calf muscle, through the ankle, down to the big toe. When athletes engage in repetitive movements that recruit the foot and ankle in this manner, like jumping up and pushing off the big toe, strain of the FHL tendon can occur. FHL tendinopathy is painful and can leave dancers and gymnasts out of commission from their passion and profession.

Figure 1. The posterior view of the FHL in the right leg, taken from Sports Injury Bulletin.

Luckily, researchers study overuse conditions like this one. Dr. Hai-Jung Steffi Shih and her colleagues recently published a study where they had 17 female dancers (9 with FHL tendinopathy and 8 without) perform a specific ballet move called saut du chat. The dancers wore a full-body marker set that allowed the researchers to capture the fine-grained positions and movements of the dancers’ bodies throughout the ballet move. 

When performing a movement like the saut du chat, the body tends to place a heavy load on one particular joint in the foot called the metatarsophalangeal joint (MTP). Repeating a movement like this over and over again (as dancers often do) can contribute to overuse of the FHL. Researchers can measure something called stiffness in an athlete’s musculoskeletal system to assess the potential for injury. Scientists think that greater stiffness may transmit more impact to a person’s bones, while reduced stiffness may lead to soft tissue injury. To better understand stiffness, it can help to think about parts of the lower body, such as the ankle, knee, and hip joints, as springs. They compress, store energy, and then release – like when you squat and jump. Researchers can examine the stiffness of these joints, specifically joint torsional stiffness, or how much a joint resists bending.
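
To make that concrete, here is a minimal Python sketch (not the study’s code, and with made-up numbers) that estimates torsional stiffness as the slope of the moment-angle curve during a loading phase:

```python
import numpy as np

# Made-up joint data over a single ground-contact (loading) phase.
angle = np.deg2rad(np.linspace(5.0, 30.0, 50))   # joint flexion angle (rad)
moment = np.linspace(10.0, 85.0, 50)             # joint moment (N*m)

# Torsional stiffness = change in joint moment / change in joint angle,
# estimated here as the slope of a straight-line fit (N*m per rad).
stiffness = np.polyfit(angle, moment, 1)[0]
print(f"Estimated torsional stiffness: {stiffness:.1f} N*m/rad")
```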

Additionally, researchers who study movement can also measure how a dancer’s body makes contact with the ground to determine whether certain kinetic factors might be significantly associated with injury. For instance, the angle at which a dancer’s lower limb makes contact with the ground has been found to differ between injured and uninjured groups of dancers. If researchers can pinpoint which angles are associated with injury, they can collaborate with medical professionals and teachers to craft targeted interventions on how a dancer should properly move their body.

In the current study, the researchers posited that lower extremity joint torsional stiffness, measured when participants made contact with the ground, would be altered in dancers with tendinopathy compared to uninjured dancers. They also expected that dancers with tendinopathy would demonstrate a different lower limb posture at ground contact, reflected in kinetic factors that differ between injured and uninjured dancers.

Using the marker set that the dancers wore, the researchers measured torsional stiffness by dividing the change in joint moment (the rotational force at a joint) by the change in joint angle over time. This data was gathered as the dancers flexed and extended their lower extremity joints during the saut du chat. Moreover, the team measured the angle of the lower limb at ground contact during the takeoff step.

Dr. Shih and her colleagues found that the dancers with tendinopathy demonstrated less joint torsional stiffness at the metatarsophalangeal (MTP), ankle, and knee joints during the takeoff step of the dance move. To reiterate, research suggests that a lack of joint stiffness is not good because it allows for excessive joint motion and could lead to soft tissue injury. Additionally, the injured dancers took longer to reach peak force when pushing off the ground, and their peak force was also lower than in the uninjured dancers. Finally, the angle at which dancers first contacted the ground during that takeoff step was smaller (i.e., their foot was further in front of their pelvis) in participants with FHL tendinopathy compared to those without injury.

How can these findings help dancers and those who provide movement guidance to them? Knowing the particular biomechanical changes that precede tendinopathy can inform targeted interventions aimed at improving how dancers move their feet and legs when performing certain moves. For example, teachers can offer cues and guidance on how a dancer should position their pelvis and feet in an effort to prevent injury.

This study by Dr. Shih and her colleagues is the first to demonstrate the differences in movement profiles between dancers with and without FHL tendinopathy and could go a long way to informing interventions that could prolong dancers’ careers.

Dr. Hai-Jung Steffi Shih is currently a postdoctoral research fellow in the Neurorehabilitation Research Lab at Teachers College, Columbia University. She received her PhD in Biokinesiology and Physical Therapy at the University of Southern California, where this research was conducted. Steffi’s research aims to further understand musculoskeletal pain and movement disorders, and to improve intervention strategies using a multidisciplinary approach. Outside of academia, Steffi is an avid traveler who has been to more than 35 countries. She loves to dance, enjoys playing music, and is aspiring to become an excellent dog owner one day. You can email her at [email protected] and connect with her on Twitter @HiSteffiPT.

The hungry algorithm: machine learning to ease the “what’s for dinner?” decision

When Dr. Jaan Altosaar heard that food deprivation increases stem cell regeneration and immune system activity in rats, he did what many would not dare: he decided to try it himself and fasted for five days. Thoughts of food started to take over his mind and, with what can only be described as a superhuman ability to think with low blood sugar, he went on a scientific tangent and channeled them into tackling the complicated task of improving food recommendation systems, which led to him publishing a research article about it.

Dr. Altosaar wanted help in making decisions, because choosing is hard. When faced with an excessive number of options, we fall victim to decision fatigue and tend to prefer familiar things. Companies know this, and many have developed personalized recommendations for many facets of our lives: Facebook’s posts on your timeline, potential partners on dating apps, or suggested products on Amazon. But Jaan had a clear favorite: Spotify’s Discover Weekly algorithm. The music app gathers information on the co-occurrence of artists in playlists and compares its representation of you as a listener to the couple billion playlists it has at its disposal, suggesting songs you might enjoy. Since Dr. Altosaar’s problem was similar, he framed it as feeding the algorithm a user’s favorite recipes (“playlists”), which are made of a list of ingredients (“songs”). Would the algorithm then cook up suggestions of complementary meals based on the ingredients in them?

A meal consumed by a user (hamburger) is made up of ingredients (bread, lettuce, tomato, cheese, meat). This information is given to the machine learning algorithm, which uses learnt information about that user to recommend a meal they are likely to eat.

Meal recommendation in an app is challenging on several fronts. First, a food tracking app might record eating the same meal in many different ways or with unique variations (such as a sandwich with homemade hot sauce or omitting pickles). This means that any specific meal is typically only logged by a small number of users. Further, the database of all possible meals a user might track is enormous, and each meal only contains a few ingredients. 

In traditional recommender systems such as those used by Netflix, solving this problem might mean first translating the data into a large matrix where users are rows and items (e.g. movies or meals) are columns. The values in the matrix are ones or zeros depending on whether the user consumed the item or not. Modern versions of recommender systems, including the one in Dr. Altosaar’s paper, also incorporate item attributes (ingredients, availability, popularity) and use them as additional information to better tailor recommendations. An outstanding issue, however, is striking a balance between flexibility, to account for the fact that we are not all like Joey Tribbiani and might not like custard, jam and beef all together (even if we like them separately), and scalability, since an increasing number of attributes takes a toll on computing time. Additionally, these machine learning algorithms are not always trained the same way they are later evaluated for performance.

A sparse matrix representing whether a user “u” consumed an item “m” (coded with a one). If the user did not consume the item, there is a zero. Note that most entries in the matrix are zeros, so the matrix carries little actual information (hence “sparse”).
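
For intuition, here is a toy version of such a matrix in Python (made-up data), using SciPy’s sparse format so that only the ones are actually stored:

```python
import numpy as np
from scipy.sparse import csr_matrix

# Rows are users, columns are meals, 1 = "this user logged this meal".
rows = np.array([0, 0, 1, 2])      # user indices
cols = np.array([0, 3, 1, 3])      # meal indices
vals = np.ones(4, dtype=np.int8)

interactions = csr_matrix((vals, (rows, cols)), shape=(3, 5))
print(interactions.toarray())
# [[1 0 0 1 0]
#  [0 1 0 0 0]
#  [0 0 0 1 0]]
# Most entries are zero, so storing only the ones saves enormous amounts
# of memory -- the reason such matrices are called "sparse".
```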

The new type of model Dr. Altosaar and colleagues propose, RankFromSets, frames the problem as binary classification. This means that it learns to assign a zero to meals unlikely to be consumed by a user, and a one to those that are likely to be consumed. When tasked with giving a user a set of potential meals (say five), it strives to maximize the number of meals the user will actually eat from those five recommended to them. To leverage the power of incorporating the meal’s ingredients, the algorithm uses a technique from natural language processing to learn embeddings. These compress data while preserving the information relevant to the problem at hand; in this case, learning patterns useful for predicting which ingredients tip the balance for someone to consume a meal. This allows for a numerical representation of each meal based on its constituent foods, and the patterns in how those foods are consumed across all users.

The RankFromSets classification model incorporates several components. There are embeddings for representing user preferences, alongside the embeddings corresponding to a meal a user might consume. The classifier is spiced up with additional user-independent information about the meal’s popularity and its availability. These components are used by the model to learn the probability that a particular meal will be consumed by a user. Potential meals a user might enjoy – or that might be healthier options – are then ranked, and the top meals are recommended to the user. For example, if you have had avocados in every one of your meals, they are in season, and all those Millennials are logging their avocado toast, you are very likely to receive recommendations that include avocados in the future.
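
To make the moving parts concrete, here is a heavily simplified Python sketch of this kind of scoring function. The dimensions and random vectors are placeholders: in the real model the embeddings are learned from data, and a per-ingredient bias term stands in here for the paper’s popularity and availability components.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, n_users, n_ingredients = 16, 100, 500
user_emb = rng.normal(size=(n_users, dim))        # learned user preferences
ingr_emb = rng.normal(size=(n_ingredients, dim))  # learned ingredient vectors
ingr_bias = rng.normal(size=n_ingredients)        # popularity-style terms

def score(user: int, meal: list[int]) -> float:
    """Higher score = this user is more likely to consume this meal."""
    meal_vec = ingr_emb[meal].mean(axis=0)   # a meal is summarized by
    return user_emb[user] @ meal_vec + ingr_bias[meal].mean()  # its ingredients

# Rank three candidate meals (lists of ingredient IDs) for user 7.
candidates = [[3, 40, 41], [7, 8, 250], [12, 99]]
ranked = sorted(candidates, key=lambda m: score(7, m), reverse=True)
print("Top recommendation:", ranked[0])
```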

As a proof of concept, the authors tested their method not only on food data, which they got from the LoseIt! weight loss app, but also on a dataset unrelated to meal choices. For this independent dataset, the authors used reading choices and behavior among users of arXiv, a preprint server. They trained the model on past user behavior data and evaluated performance (accuracy of paper suggestions) on a previously separated portion of that same data (so they knew whether the user had actually read the paper, but this information was hidden from the algorithm during evaluation). This is a typical way to assess the performance of machine learning systems, and their method outperformed previously developed recommender systems. The better performance and translatability to tasks other than meal recommendation are indicative of the potential of this tool to be applied in other contexts.
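
Concretely, this kind of held-out evaluation boils down to hiding part of each user’s history, producing a ranked list, and checking how many hidden items surface near the top. A minimal sketch of one such metric (illustrative, not necessarily the paper’s exact choice):

```python
def recall_at_k(ranked: list, held_out: set, k: int = 5) -> float:
    """Fraction of a user's held-out items recovered in the top-k list."""
    hits = sum(1 for item in ranked[:k] if item in held_out)
    return hits / len(held_out)

# The model ranked papers [A, B, C, D, E]; the user actually read B and F.
print(recall_at_k(["A", "B", "C", "D", "E"], {"B", "F"}))  # 0.5
```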

This new recommender system could be applied to recipe recommendation apps, or even to an app that would suggest to first-time customers of a restaurant the menu items they are most likely to enjoy, based on their preferences. The system also has the potential to incorporate additional information beyond whether a user consumed (and liked) a particular ingredient or meal. Sometimes the method of cooking determines whether a food is appealing or not (Brussels sprouts, I’m looking at you). Additionally, classifying ingredients by flavor might also be helpful in suggesting similar (and even healthier) alternatives. Adding those tags as extra layers to the user-meal intersection could provide better recommendations and opportunities to cook outside of the box. Dr. Altosaar’s fast might or might not have gotten him a boost in his stem cells, but he certainly succeeded in helping everyone else worry a bit less about what’s for dinner tonight.

Dr. Jaan Altosaar is a Postdoctoral Research Scientist in the Department of Biomedical Informatics and an active participant in CUPS. He publishes an awesome blog about machine learning and its implications.

Laboratory evolution of a cellular reprogrammer provides a potent path to stem cell generation

The human body has tens of trillions of cells, all of which arise from embryonic stem cells, which are considered the building blocks of life. Stem cells renew themselves by dividing indefinitely and can also give rise to cells with specialized functions, which ultimately end up forming the various organs and tissues in our body. This process is called differentiation. Typically, once cells specialize, or differentiate, they lose the ability within the body to go back to being stem cells. Given their unique properties, stem cells have become a critical starting point that scientists can tinker with to develop new drugs and therapies. Because of their tremendous value for research, scientists have figured out non-invasive ways to transform differentiated cells into cells with stem-cell-like properties. These lab-grown cells, called induced pluripotent stem cells or iPSCs, are typically generated by a process called “cellular reprogramming”.

As Dr. Tania Thimraj explains in a recent article, proteins called transcription factors can act as cellular “fixer-uppers” and renovate differentiated cells to look and behave like stem cells. The current state-of-the-art process for making iPSCs involves excess production, also known as overexpression, of the following transcription factors in differentiated cells: Oct4, Sox2, Klf4, and c-Myc (collectively called the “OKSM” cocktail). Despite significant advances in the formulation of this cocktail, there is still a huge margin for improvement in its ability to transform differentiated cells into stem cells. In a recent study performed by Dr. Tan and co-authored by Dr. Malik, the authors propose that the cocktail is not especially effective because the transcription factors were never under any evolutionary selection pressure to produce stem cells. Inspired by this, the authors set out to use evolution in the dish, also known as directed evolution, to make a more efficacious transcription factor cocktail.

Although natural evolution takes place over millions of years, smaller-scale evolution can be carried out in a laboratory at much faster timescales. This approach is known as “directed evolution” and has been successfully used by scientists to evolve proteins with new functionalities. The process involves making random mutations in the protein of interest; these mutants then undergo a selection process in an appropriate cellular context so that protein variants with desirable properties can be isolated.
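
The logic of that mutate-and-select loop can be illustrated with a toy in-silico analogy (a simple hill-climbing sketch, not the authors’ laboratory protocol):

```python
import random

random.seed(1)

ALPHABET = "ACDEFGHIKLMNPQRSTVWY"   # the 20 amino acids
TARGET = "MKQLED"                   # stand-in for "desirable function"

def fitness(protein: str) -> int:
    # Toy selection assay: count positions matching the desired property.
    return sum(a == b for a, b in zip(protein, TARGET))

protein = "AAAAAA"                               # starting protein
for generation in range(200):
    pos = random.randrange(len(protein))         # random point mutation
    mutant = protein[:pos] + random.choice(ALPHABET) + protein[pos + 1:]
    if fitness(mutant) >= fitness(protein):      # selection step
        protein = mutant

print(protein, fitness(protein))                 # drifts toward TARGET
```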

In a pioneering study, members of the Jauch lab, including Dr. Malik, used directed evolution to optimize the cellular reprogramming ability of the transcription factor Sox2. Building on this success, the Jauch lab used directed evolution to make ePOU, an enhanced and evolved version of Oct4, which is an integral part of the OKSM cocktail. To create ePOU, the authors made random mutations at six functionally important positions in Oct4 and overexpressed the mutant proteins in mammalian cells, such that Oct4 transcription factor activity was tied to the production of a green fluorescent protein reporting stem cell transformation.

This innovative study demonstrates that the transformation potential of naturally occurring transcription factors can be drastically enhanced by directed evolution. In addition, this work also provides a framework for future research on transcription factor engineering for cell reprogramming. By providing a faster and more efficient way to produce stem cells, this study has the potential to accelerate various research and therapeutic avenues such as regenerative medicine, drug efficacy and safety testing, and studying human development and disease.

Dr. Vikas Malik is a Postdoctoral Research Fellow in Dr. Jianlong Wang’s lab in the Department of Medicine at Columbia University Medical Center and is a member of CUPS and the Outreach and Communications Committee.

The Science Behind Never-Ending Love for Food

“Eat what you want and stop when you’re full.”

For some people, this statement is absolutely invalid as they never feel full; they don’t have an ‘off-switch’ while eating. Sometimes, consuming food makes them feel even hungrier. These are some classic symptoms of binge eating. Binge eating falls under the big umbrella of eating disorders, which are serious mental health conditions characterized by persistent alteration of eating behavior and associated emotions. Three different diseases belong to the spectrum of eating disorders: anorexia nervosa, bulimia nervosa, and binge eating disorder. Although binge eating disorder is the most prevalent, it surprisingly does not get as much media coverage as anorexia and bulimia.

Binge eating results from “hedonic hunger”: the drive to consume food not because of an energy deficit, but for the inherent pleasure associated with eating. The pleasure signal for bingeing relies mostly on the reward-associated component of feeding and on sensory stimuli such as smell and taste. The reward system functions by raising the level of the neurotransmitter dopamine in a midbrain structure called the ventral tegmental area. Years of research in laboratory animals have also shown a positive correlation between binge eating and increased dopamine release. The endocannabinoid system has been connected with this rewarding aspect of food intake and represents the key system modulating bingeing. Fun fact: cannabis consumption leads to overeating (read: munching) by tricking the brain into feeling like it’s starving when in reality that’s not the case. In association with the endocannabinoids and the reward system, the gut, or gastric lumen, also acts as a master driver controlling feeding behavior in general, along with binge eating. Intriguingly, endocannabinoids are functionally dependent on the vagus nerve innervating the gastrointestinal tract. Overall, scientists have just begun to understand the complex nature of binge eating from the neurobiological and psychological standpoints.

A recent preprint by Dr. Chloé Berland and colleagues dissected, for the first time, the integral roles of the reward system, gut-brain axis, and endocannabinoids in binge eating. This study leveraged a unique binge eating model in which a highly palatable milkshake was provided to mice in a time-locked manner. This binge-eating model was driven by reward value rather than metabolic demand: the animals had unlimited access to less palatable food throughout the test, so milkshake consumption occurred in the absence of an energy deficit. The study pinpointed that the two phases of binge eating, anticipatory and consummatory, are controlled by a specific dopamine receptor called D1 (D1R).

Cannabinoid receptors are found in both the peripheral and central nervous systems. The current study aimed to uncover the specific connection between peripheral cannabinoid receptors and bingeing. To achieve that goal, a peripherally restricted chemical was administered to the mice to block the activity of the cannabinoid receptor. Dr. Berland and her colleagues observed that the injection of the peripheral cannabinoid blocker completely silenced the hedonic drive for bingeing. This finding reveals that, physiologically, peripheral endogenous cannabinoids act as a gatekeeper for binge eating.

Figure 1. Schematic representation showing how the peripheral endocannabinoid mediates bingeing via the gut-brain axis. The left panel of the diagram shows that increased peripheral endocannabinoid causes increased reward and bingeing, while the right side shows the opposite. Abbreviations: NTS, nucleus tractus solitarius (a brain region); eCB, endocannabinoid (the molecule 2-arachidonoylglycerol shown as a representative). Adapted from Berland et al. and created with Biorender.com.

To delve more into the involvement of the gut-brain axis in endocannabinoid-mediated bingeing, the current study used vagotomy, a severing of the vagus nerve’s connections to the gastrointestinal tract and other abdominal organs, to shut off the function of the vagus nerve in these organs. Injection of the peripheral cannabinoid blocker in vagotomized mice led to strong activation of a brain region known to play a key role in receiving signals from the gut about meals, the nucleus tractus solitarius (NTS) (see Figure 1). This observation indicates that peripheral endocannabinoids are important influencers acting between the gut and brain in regulating the hedonic drive for food.

This study took advantage of a unique, cutting-edge technology called fiber photometry to further dissect how the endocannabinoids control the reward component of bingeing. With fiber photometry, the neural activity of specific brain regions can be detected in awake animals. The neural activity in the midbrain reward area was dampened after the peripheral endocannabinoid blocker injection. This finding suggests that peripheral endocannabinoids control food craving by modulating the reward system.

Taken together, the observations of this study provide crucial mechanistic insights into gut-brain and endocannabinoid integration. Using state-of-the-art tools, this study sheds light on the previously unexplored regulatory mechanism of the endocannabinoids in bingeing. So, the next time you binge eat a pint of Ben & Jerry’s ice cream, you know it’s not only a burst of the pleasure chemical dopamine but also your body’s endocannabinoids tricking your gut and brain into finishing it all.

These new and exciting data suggest that peripheral endocannabinoid blockers could be used to treat binge eating disorder or related eating disorders in humans. Patients with eating disorders struggle mentally, emotionally, and physically. For instance, individuals with eating disorders often become victims of body shaming. We can always do more to help patients with binge eating disorder in the recovery process. Here are some useful resources for patients struggling with binge eating disorder:

https://www.nationaleatingdisorders.org/

http://beyondhunger.org/

https://anad.org/

Dr. Chloé Berland is a Postdoctoral Research Scientist in the Department of Preventive Medicine where she studies the effect of overfeeding on brain circuits. She also serves as CUPS secretary.

The Bitter Sweet Symphony

During your childhood, your parents might have added a sweet flavor to the bitter medicines that you did not want to take. Have you ever wondered why you could still taste the bitterness anyway? There is a scientific explanation.

Attraction to sweet compounds and aversion to bitter ones are innate behaviors triggered by the mammalian taste system. Despite their apparent simplicity, the neuronal mechanisms that trigger these behaviors are highly complex. Alterations in the sense of taste are quite common in adults. The most common taste dysfunctions are a lost (ageusia) or reduced (hypogeusia) sense of taste. Interestingly, ageusia is one of the most frequently reported symptoms of COVID-19. The response to bitter and sweet tastes starts when chemicals in food activate specialized cells called taste receptor cells on the tongue and palate. These cells make contact with matching ganglion neurons, which form a bridge from the periphery to the brain. In the brain, bitter and sweet signals are represented by spatially distinct populations of neurons in the taste cortex, which receive these signals through the brainstem. Scientists are investigating the brain regions and mechanisms that regulate this circuit and how bitter and sweet responses intermingle.

In a recent study in Cell, Dr. Hao Jin and colleagues uncover the regulatory mechanisms of neuronal responses to sweet and bitter taste in mice, and how this modulation matters when sweet and bitter are combined. Aversion to bitter taste is well recognized as an innate behavioral response important for detecting and preventing the ingestion of harmful chemicals. So, how does the behavioral rejection of a bitter taste prevail even when combined with a sweet taste? To address this question, the authors first aimed to identify the neural population in the brainstem responsive to bitter and sweet tastes. Dr. Hao Jin and colleagues tested the evoked response to artificial sweetener and bitter substances in subsets of neurons in the brainstem using fiber photometry, a prominent in vivo imaging technique that quantifies the neuronal activity of a region or a population of brain cells in awake animals. They found that a specific population of neurons (b-neurons for simplicity) was active specifically in response to bitter tastes, while the activity of a distinct neuronal population (s-neurons) was enhanced solely after sweet stimuli. A series of experiments was then performed to functionally validate these brainstem neurons as a relay for bitter and sweet taste responses. To start, the researchers observed that chemical ablation of b- or s-neurons leads to decreased avoidance of bitter solutions and a loss of attraction to sweet stimuli, respectively. Additionally, the authors asked whether selective activation of b- and s-neurons in the brainstem was sufficient to evoke a taste response even without a taste stimulus. Using optogenetics, a technique that allows researchers to artificially increase or decrease neuronal activity with light, Dr. Hao Jin and colleagues observed that activation of b-neurons in mice decreased licking of bitter substances, while activation of s-neurons increased licking of sweet stimuli.

In addition, the authors asked why and how the responses overlap, and why bitter overrides sweet stimuli. They observed that the sweet taste responses of s-neurons were largely suppressed when a bitter stimulus was presented together with a sweet flavor. This suppression of the sweet response was found to be directly executed by the taste cortex. Interestingly, at the same time, the activity of b-neurons is also enhanced by the cortex, but via the central amygdala (Figure 1). As a result, despite the efforts of your parents to turn those bitter medicines yummy, a team of brainstem and central amygdala neurons raises a red flag about the potential toxicity of the food you ingest (even if this is not the case), increasing the response to bitter taste and suppressing the response to sweet taste.

Figure 1. Schematic representation of the neuronal circuits involved in the response to bitter and/or sweet taste and how they are modulated. Adapted from Jin et al., and created with Biorender.com.

Dr. Hao Jin is a postdoctoral fellow in Dr. Charles Zuker’s lab in the Zuckerman Mind Brain Behavior Institute at Columbia University.

Musseling through climate change

Our planet’s climate is pretty old — an estimated 3.5 billion years old, in fact. Understanding how Earth’s climate has changed since then is important for predicting and coping with climate change today and in the future. But, because it is hard to know exactly what happened a couple of billion years ago, climate scientists use mathematically constructed models that take into account abiotic, or non-living, factors like carbon dioxide levels and ocean chemistry in the past to predict weather patterns in the future. Ultimately, the goal is to predict how biotic factors — living things like us — will be affected. 

These mathematical models are a work in progress. Often, they are made using data from field studies conducted over periods of only one to two years. Additionally, many models do not factor in biological mechanisms for plasticity that allow organisms to adapt to changing environmental conditions. These gaps were the impetus for a study conducted by Dr. Luca Telesca and colleagues, recently published in Global Change Biology. Their work investigated shell shape and body structure in archival specimens (read: preserved in ethanol, not fossilized) of the blue mussel (Mytilus edulis) collected roughly every decade between 1904 and 2016 along 15 kilometers of Belgium’s coast (Fig. 1). Measurements of the mussels themselves were coupled with extensive long-term datasets of coastline environmental conditions over the past century, all of which were obtained from collections at the Royal Belgian Institute of Natural Sciences.

Fig. 1: Study location, along 15 km of Belgian coast between the cities of Ostend and Nieuwpoort (starred). Image source: Google Maps.

The blue mussel is not your typical specimen in an archival collection. Common animals often aren’t considered worth preserving for the historical record. However, it’s precisely because they are common that species like the blue mussel make great barometers for environments gone by. The blue mussel in particular is an example of a “calcifying foundation species,” so named for its ability to sequester and store calcium and carbon from the surrounding water (see Fig. 2) and for its habitat on shallow marine floors. This calcifying ability, or biomineralization, is the process by which living organisms produce non-living, inorganic mineral structures. It is an astoundingly ubiquitous process: all six taxonomic kingdoms from single-cell organisms in Archaea to mammals like us — we’re in the kingdom Animalia — contain organisms capable of biomineralization. The bones in our bodies are an example of this, the result of binding calcium phosphate from our diets into a different, crystallized form of calcium called hydroxyapatite. Furthermore, because biomineralization is an easy-to-measure, direct interaction between biotic and abiotic factors, it is an ideal subject of study for climate scientists.

Fig. 2: A typical blue mussel shell and cross-section. After calcium carbonate crystals are absorbed from the surrounding water, they become layered with secreted structural proteins from the mussel’s body tissue, or mantle. These layers of calcium carbonate and secreted proteins form the mussel’s shell, the thickness of which can vary depending on how much calcium carbonate is absorbed. Image created using Biorender.com.

One of the most pressing concerns presented by rapid climate change today is ocean acidification, characterized by an increase in oceanic carbonic acid resulting from elevated levels of carbon dioxide in the atmosphere. Excess carbonic acid increases the acidity of ocean water, which can dissolve shells, and decreases the availability of calcium carbonate, the nutrient that mussels and other ocean biomineralizers use to form shells in the first place. Ocean acidification has had a negative impact on many species; one notable impact is on coral in the Great Barrier Reef. Given these known effects of climate change and ocean acidification on many ocean calcifiers, the authors predicted that they would observe a steady decrease in shell size between 1904 and 2016.

Instead, to their surprise, they observed a marked increase in blue mussel shell size since 1904. The team’s results hold a number of implications for predictive climate change modeling. First, the findings signify that archival collections of organisms from the past can and should be used to influence our current predictions about what’s to come in any given biome, 10, 20, or 100 years from now. Second, and quite hopefully, the findings suggest that mussel populations somehow acclimated to shifting environmental conditions along the Belgian coast over the past century. The authors speculate that this could be because rising ocean temperatures could actually increase calcification, combating dissolution induced by acidic conditions, or that rising water temperatures may have increased the availability of a specific food source. Altogether, the potential for compensatory mechanisms in this study population of blue mussels points to the same potential in other species for coping with rapid environmental change over the next century. As we continue to update predictive models with data from the past and study and protect the populations most vulnerable to rapid climate change, we may find ways to help them mussel through yet. 


Dr. Telesca is a postdoctoral research scientist affiliated with Columbia University’s Earth Institute and the Lamont-Doherty Earth Observatory.

Maternal Stress and the Developing Brain

As humans, we all experience stress. It is a normal, and sometimes even beneficial, part of life. A small amount of stress can help motivate someone to prepare for a job interview or study for an important exam. There are times, however, when stressors become too overwhelming and even detrimental to health. Scientists, from medical researchers to psychologists, have studied stress for decades and documented some of these negative impacts on the brain. During the foundational early years of a person’s life, the presence or lack of stress can play a crucial role in development. For instance, extensive research shows that living in poverty is extraordinarily stressful for families and can negatively influence children’s brain development. The impacts of stress resulting from situations such as growing up in poverty warrant further investigation, especially considering that in 2020, one in six children in the U.S. was living in poverty.

Researchers can use various methods to assess how factors like stress impact the brain of growing children. Developmental scientists can use a tool called EEG, short for electroencephalography, to study the brain. EEG measures electrical activity in the brain by recording the communication between brain cells. It is an ideal neuroimaging method for understanding infant brain development since it allows for infants to be awake and moving, and even sitting on their caregiver’s lap during recording. Besides being infant-friendly, EEG is a useful tool for looking at brain development, given that there is a known pattern of how brain activity changes across the first few years of life.

Specifically, when using EEG to look at brain development, scientists typically see two different patterns. Broadly, infants have a mix of different types of brain activity that we call low-frequency and high-frequency power. Low-frequency power (e.g., theta) tends to be higher when the brain is at rest, while high-frequency power (e.g., alpha, beta, and gamma) tends to be used for more complex thinking like reasoning or language. As infants grow, scientists see that low-frequency power decreases and high-frequency power increases. Importantly, we can use EEG to assess how factors like stress impact the tradeoff of low-frequency and high-frequency power in the developing brain.
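
For readers curious how these power measures are computed, a minimal sketch with synthetic data follows (this is not the study’s pipeline; exact band edges vary by study, and infant bands are shifted relative to adult ones):

```python
import numpy as np
from scipy.signal import welch

fs = 250                                  # sampling rate (Hz)
t = np.arange(0, 30, 1 / fs)              # 30 s of one synthetic EEG channel
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(t.size)

# Estimate the power spectral density, then sum power within each band.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])

theta = band_power(4, 7)                              # low-frequency power
alpha, gamma = band_power(8, 12), band_power(30, 45)  # high-frequency power
print("theta / (alpha + gamma):", theta / (alpha + gamma))
```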

Figure 1. A one-month-old infant with an EEG cap. Courtesy of the Neurocognition, Early Experience and Development Lab.

Research shows that children growing up in chronically stressful environments often show alterations in the typical pattern of brain activity development. To further understand the mechanisms underlying this pattern of development, scientists have begun to study which biological and environmental factors may be at play. For instance, researchers can examine the role of caregiver stress, socioeconomic status, home environment, and neighborhood factors, just to name a few.

A recent paper by Dr. Sonya V. Troller-Renfree and colleagues examined maternal stress by looking at the amount of the stress hormone cortisol found in hair. This measure assesses chronic stress and provides researchers with the mother’s average cortisol level over the preceding 3 months. Dr. Troller-Renfree’s research group hypothesized that infants of mothers with higher levels of the stress hormone would show differences in brain activity compared to infants of mothers with lower levels. Specifically, the researchers predicted that infants of more chronically stressed mothers would exhibit proportionally more low-frequency power and proportionally less high-frequency power than infants of physiologically less-stressed mothers.

Indeed, their results showed that infants of mothers who had higher levels of hair cortisol demonstrated higher levels of low-frequency (theta) activity and lower levels of high-frequency (alpha and gamma) brain activity. This finding is consistent with previous research showing that stress and adversity impact early neural development. Importantly, Dr. Troller-Renfree’s team sampled a diverse pool of participants (both in terms of socioeconomic status and race), bolstering the generalizability of their findings.

So what are the implications of these alterations? Research suggests that similar patterns of neural activity are associated with negative outcomes later in a child’s life, including problems with language development and psychiatric difficulties. Nevertheless, this does not mean that a child will undoubtedly experience these issues. Additionally, it may be possible that these patterns, while associated with negative outcomes in some areas, are adaptive in other circumstances. Furthermore, the mechanisms by which a mother’s stress impacts the developing child still remain unclear. How exactly does a mother’s stress level impact the brain of her child?

Based on previous research by other scientists, Dr. Troller-Renfree posits a few mechanisms that must be further explored. For example, it is possible that stress impacts crucial mother-child interactions. It could be that stress hormones are passed from mother to baby in utero or through breastmilk. Moreover, it is also possible that environmental factors impact stress and brain development.

It is crucial that developmental scientists continue studying these mechanisms so that targeted intervention programs can be formed for families facing stress. Indeed, the esteemed pediatrician and researcher Dr. Jack Shonkoff of the Center on the Developing Child said in an episode of The Brain Architects Podcast: “In fact, one of the cardinal principles of the science of early childhood development is that if we want to create the best kind of environment for learning and healthy development for young children, we have to make sure that the adults who care for them are having their needs met as well.” As a society, we must recognize how detrimental stress can be to the developing child and invest in finding effective ways to alleviate caregiver stress.

Dr. Sonya V. Troller-Renfree is a Goldberg Postdoctoral Fellow in the Neurocognition, Early Experience and Development Lab at Teachers College, Columbia University. Her research focuses on the effects of early adversity and poverty on cognitive and neural development. She intends to continue examining these questions as part of her new, federally-funded Pathway to Independence Award (K99/R00). You can stay up-to-date on her research findings on Twitter at @STRscience or on her website: www.sonyatrollerrenfree.com.

New technology allowing a gene switch to study multiple sclerosis

Our genetic blueprint consists of tens of thousands of genes (roughly 20,000 of them protein-coding), with new genes still being discovered and added to the growing list. Our genes provide DNA instructions to the protein-making machinery in our bodies. These instructions can influence our health and dictate whether we will get debilitating diseases. Have you ever wondered how scientists work out which genes are responsible for what? For example, does gene A control our hair colour, or does gene B dictate whether we will develop an autoimmune disease such as multiple sclerosis? The answer lies in DNA recombination technology, which allows scientists to delete, invert or replace DNA instructions. The technology, called Cre-lox recombination, relies on an enzyme called Cre recombinase, which can bind, cut and recombine DNA at specific sites that are inserted in pairs into the DNA. The Cre-binding site is called the LoxP sequence, a 34-nucleotide DNA sequence made up of two inverted repeats separated by a spacer. Cre enzymes recognize these LoxP sequences and edit the stretch of DNA between them, resulting in gene deletion or inversion.
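
As a toy illustration of that editing logic (a sketch over strings, not a bioinformatics tool): two loxP sites in the same orientation excise the DNA between them, while sites in opposite orientations invert it.

```python
# The 34-bp loxP site: two 13-bp inverted repeats flanking an asymmetric
# 8-bp spacer; the spacer's asymmetry gives each site an orientation.
LOXP = "ATAACTTCGTATAATGTATGCTATACGAAGTTAT"

def revcomp(seq: str) -> str:
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def cre_recombine(dna: str) -> str:
    """Toy Cre logic applied to the first pair of loxP sites in `dna`."""
    start = dna.find(LOXP)
    same = dna.find(LOXP, start + len(LOXP))               # same orientation
    opposite = dna.find(revcomp(LOXP), start + len(LOXP))  # inverted site
    if same != -1:       # same orientation: excise the intervening DNA
        return dna[: start + len(LOXP)] + dna[same + len(LOXP):]
    if opposite != -1:   # opposite orientation: flip (reverse-complement) it
        middle = dna[start + len(LOXP): opposite]
        return dna[: start + len(LOXP)] + revcomp(middle) + dna[opposite:]
    return dna           # fewer than two sites: nothing to do

gene = "ATGCCCGGGTAA"    # toy coding sequence, stored inverted (inactive)
construct = "AAA" + LOXP + revcomp(gene) + revcomp(LOXP) + "TTT"
print(cre_recombine(construct))   # the gene comes out flipped ON
```

(The actual DIO system described below uses two different lox variants so that, once flipped, the gene cannot flip back.)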

In a recent research article, Dr. Olaya Fernandez Gayol and colleagues use an advanced version of Cre-lox technology called DIO (Double-Floxed Inverted Open reading frame) to understand the role of the Interleukin-6 (IL-6) gene in multiple sclerosis (MS). MS is a chronic disease of the brain and spinal cord in which our immune system eats away the myelin sheath around nerves, disrupting the communication between the brain and the body. IL-6 is a proinflammatory cytokine known to promote MS. Gayol et al. use an experimental mouse model of MS which acutely develops brain inflammation, called encephalitis (encephalo- “the brain” + -itis “inflammation”), within 3 weeks of disease induction. This mouse model is referred to as EAE (Experimental Autoimmune Encephalomyelitis) and closely mimics human MS.

Scientists have conventionally studied the role of IL-6 in EAE mice by irreversibly deleting the IL6 gene in one cell type. However, the results were confounded by compensatory expression of IL-6 from other cell types. Gayol et al. circumvent this problem by wiping out IL6 from all cells and then recovering IL-6 expression specifically in microglial cells. It is akin to entering a dark room and turning ON a light switch at one corner of the room to clearly see what’s lying there.

Figure 1. Cartoon depicting the genetic strategy used by Gayol et al. to recover IL6 gene expression exclusively in microglial cells in the mouse brain. Created with Biorender.com.

Olaya and the team use the cutting-edge DIO method to wipe out IL-6 by introducing an inverted form of the IL6 gene, which renders the gene non-functional (Figure 1A). This inverted form of the IL6 gene does not produce IL-6 protein, and mice carrying it (referred to as IL6-DIO-KO) are healthy (Figure 1A). As shown in Figure 1B, Cre-mediated recombination flips the IL6 gene into the correct orientation to make it active. The IL6 gene flipping occurs exclusively in microglial cells, and only upon treatment of the mice with the drug tamoxifen (TAM). Mice in which IL-6 expression is active (referred to as IL6-DIO-ON) develop EAE disease (Figure 1B).

The team carefully optimized the duration of tamoxifen treatment in mice. Just 5 days of TAM did not flip the IL6 gene, so they extended the drug treatment to 11 days and found the IL6 gene turned on in all IL6-DIO-ON mice. Olaya says validation is important when creating new mouse models: “We used EAE to validate the mouse because it was a model readily available in our lab and IL6KO [deficient] mice happen to be completely resistant to the disease.” The finding that IL6-DIO-ON mice, with the IL6 gene active exclusively in microglia, develop EAE indicates that IL-6 made in the brain promotes disease in this mouse model.

Compared to more traditional methods of generating gene mutations, which require extensive mouse breeding or continuous drug treatment, the strategy presented by Olaya and colleagues saves labour and cost. Their findings showed that in the absence of IL-6, EAE does not develop in mice; turning the IL-6 gene on (like a gene switch) using DIO technology, by contrast, causes mice to develop the disease. Overall, this technology is highly customizable for understanding the role of different genes in specific cell types in a disease context. It paves the way to deeper insight into, and more thorough analysis of, the different molecular players involved in disease.


Dr. Olaya Fernandez Gayol is a postdoctoral research scientist in the Department of Pediatrics and co-president of the Columbia University Postdoc Society (CUPS). She also manages the CUPS Press office, which provides postdocs with a platform to publicize their science while improving their science communication skills.

Transcription factors and cellular fixer-uppers

Self-renewing stem cells are capable of developing into certain specialized cell types, making them ideal candidates for studying human development and as potential treatment modalities for a range of diseases. There are three types of stem cells: embryonic stem cells, adult stem cells, and induced pluripotent stem cells. As the name suggests, embryonic stem cells are found in the embryo at very early stages of development. Adult stem cells are found in specific tissues after development. However, the use of human embryonic stem cells in research is quite restricted for ethical, religious, and political reasons. This limitation has driven the development of cell reprogramming techniques that convert differentiated cells, such as skin cells, back to an embryonic stem cell state through a process called induced pluripotency. The resulting induced pluripotent stem cells (iPSCs) are equivalent to natural human embryonic stem cells and can be differentiated into any desired cell type using a mixture of biological molecules.

Cell reprogramming techniques can be likened to fixer-uppers. Imagine trying to remodel a building for a different purpose – converting an office building into a residential one for instance. Though the building material can be reused, with the aid of experts, there would be some structural changes and remodeling necessary to make it a home. Similarly, cellular reprogramming is the technique by which one cell type can be converted to another cell type in the lab with the help of certain gene expression regulators called transcription factors (Fig. 1). The process of inducing pluripotency has been studied extensively and the overexpression of four transcription factors – OCT4, SOX2, KLF4, cMYC (collectively referred to as “OSKM”) – has been shown to induce pluripotency in mouse skin cells.

Many studies have tried to identify other transcription factors with the potential to induce pluripotency or to replace OSKM in an effort to enhance the efficiency of iPSC generation. Of these four transcription factors, SOX2, KLF4 and cMYC have been successfully replaced by members of their protein family to induce pluripotency. However, replacing OCT4 with structurally similar and evolutionarily related factors failed to show similar reprogramming capabilities. This could indicate the presence of special molecular features on OCT4 that give it the ability to reprogram cells. However, these special features and the molecular mechanisms that enable OCT4 to induce pluripotency remain to be identified.

Fig.1. Depiction of pluripotency induction in differentiated cells. Transcription factors regulate the process of converting a mature cell into an induced pluripotent stem cell which can then be directed to differentiate into any desired cell type. Illustration created with BioRender.com

In the current study, Dr. Malik and colleagues hypothesized that the ability of a transcription factor to reconfigure chromatin (the complex of macromolecules composed of DNA, RNA, and protein found inside the nucleus of eukaryotic cells) is one of the features that distinguishes a reprogramming-competent transcription factor from a non-competent one (Fig. 2). To test this hypothesis, they studied the well-established OCT4-SOX2 relationship from the initiation to the maintenance of pluripotency. They performed their study by comparing DNA accessibility, DNA binding, and transcriptional control by OCT4, OCT6 and an OCT4 mutant that does not interact with SOX2 (OCT4defSOX2) during early, mid and late phases of cell reprogramming. What makes this study particularly interesting is that a previous study by the same group showed that OCT4 naturally interacts with SOX2 to induce pluripotency, whereas OCT6 could only induce pluripotency when mutated to enhance its interaction with SOX2. Dr. Malik’s current study focuses on the mechanisms by which the above-mentioned transcription factors interact with chromatin and, in turn, bind to the transcription factor binding sites on the genes involved in processes from the initiation to the maintenance of induced pluripotency.

Fig. 2. Depiction of chromatin remodeling by competent vs non-competent transcription factors. Opening up the chromatin by competent transcription factors and making transcription factor binding sites accessible is required to induce pluripotency. Failure to do so by non-competent transcription factors results in a failure to induce pluripotency. Illustration created with BioRender.com.

From this study, the researchers found that OCT4, OCT6 and OCT4defSOX2 have unique transcription factor binding sites on the pluripotency-related genes, which could explain why substituting OCT4 with related transcription factors does not activate these genes. The results from this study challenge previously established roles for OCT4 in driving pluripotency. Dr. Malik and colleagues have identified distinct modes of chromatin interaction and roles for SOX2 and OCT4 during the initiation, progression and maintenance of pluripotency. They found SOX2 to be a better facilitator of chromatin opening and initiator of pluripotency than OCT4. Once the cells have been initiated towards pluripotency, OCT4-SOX2 binding is required to see the process through; once the cells are pluripotent, OCT4-SOX2 binding becomes less essential. The most important role of OCT4, they found, was to maintain the cells in a pluripotent state, as opposed to its previously investigated role as an initiator of pluripotency.

The results from this study contribute new insights to a rapidly progressing field. Identifying the roles of key factors during the stages of reprogramming adds vital pieces of information to the big puzzle of cellular reprogramming. These pieces of information could considerably enhance the use of stem cells as potential therapeutic candidates for a number of diseases.

Dr. Vikas Malik is a Postdoctoral Research Fellow in Dr. Jianlong Wang’s lab in the Department of Medicine at Columbia University Medical Center and is a member of CUPS and the Outreach and Communications Committee.


No more lazybones

Contrary to what many people think, bone is a highly dynamic tissue that is constantly being broken down and reformed in order to maintain a healthy and strong skeleton. This process of bone remodeling is enabled by specialized bone cells called osteoclasts and osteoblasts. Osteoclasts produce enzymes to degrade old and damaged bone, which is replaced with new bone by osteoblasts. However, these cells do more than simply break down and rebuild your bones. Recent advances in bone biology have shown that bone cells also have an important endocrine function, meaning that they release hormones into the circulation to affect other tissues and organs in the body. For example, the bone-derived hormone osteocalcin was shown to promote muscle function in a mouse model. Dr. Subrata Chowdhury from the Karsenty lab of the Department of Genetics and Development at CUMC followed up on this remarkable finding and investigated the regulation of osteocalcin in animal models as well as humans, as recently published in the Journal of Clinical Investigation.

Dr. Chowdhury and colleagues found that circulating osteocalcin levels are increased after a 12-week exercise program in humans, and that this effect requires the signaling molecule, or “cytokine”, interleukin-6 (IL-6). The latter was shown by inhibiting IL-6, which completely blocked the induction of osteocalcin by exercise. They continued by using a mouse model to show that IL-6 is actually derived from the muscle itself, and that its production is necessary for maximal exercise capacity. In other words, mice that could not produce IL-6 in their muscles were not able to run as far on a treadmill as compared to mice that were able to produce IL-6.

They further investigated the interplay between IL-6 and osteocalcin in mice, and found that IL-6 stimulates osteoblasts in the bone tissue to produce RANKL, a protein that is necessary for osteoclast differentiation. As a result, more active osteoclasts are formed within the tissue. These osteoclasts produce high amounts of osteocalcin, which signal back to the muscle to promote the uptake and breakdown of glucose and fatty acids by muscle cells. In addition, osteocalcin stimulates the muscle to produce more IL-6, thereby generating a positive feedback loop between muscle and bone (see Figure below). The end result of this loop is a muscle tissue which can utilize more nutrients from the circulation, and is therefore more functional during exercise.

Exercise capacity, also referred to as fitness, is a strong predictor of chronic disease and mortality. The research by Dr. Chowdhury and colleagues has shown that exercise capacity can be improved by stimulating the IL-6-osteocalcin axis. Although their findings are very convincing, according to Dr. Chowdhury the scientific community initially reacted with disbelief. IL-6 is classically known as an inflammatory cytokine, and is one of the components of the detrimental “cytokine storm” that occurs during, for example, a COVID-19 infection. However, while the high levels of IL-6 under pro-inflammatory conditions are damaging for the body, low sustained levels of IL-6 may actually be beneficial. Follow-up studies are now being performed with low doses of long-acting IL-6 analogues, to study their potential to safely and effectively promote exercise capacity and improve health.

Dr. Chowdhury showed us the importance of being led not by scientific biases, but by our observations. And who would have guessed that our skeleton does not weigh us down, but actually makes us run faster?

Figure adapted from Chowdhury, JCI 2020, and created with BioRender.com.
