Better Work Environments Make Super Nurses Even More Super!

We might all be familiar with the term “burnout” – the feeling of emotional exhaustion or feeling cynical or ineffective with respect to productivity at work, or in relationships with colleagues or clients. The World Health Organization classifies burnout as an occupational, not personal, phenomenon. Studies suggest that burnout can result from poor work environments – not necessarily dependent on the content of the work itself, but instead the setting in which the work is completed and how the work is managed or distributed. Burnout can be prevented or resolved by improving work environments.

Because it is dependent on the environment, the rate of burnout may vary between different job settings. For example, studies suggest that around 40% of the nursing workforce in the United States is burned out. That’s almost half of all nurses! Nurses, along with social workers who also have a burnout rate of about 40%, are among the professions with the highest burnout rates in the country. Nurses have a unique position, as their actions and responsibilities at work directly impact the wellbeing of their patients. Because the lives of their patients may depend on it, it is important that nurses are attentive, motivated, and effective while at their jobs. In other words, nurses must not be burned out if they are to properly care for their patients.

To prevent or resolve burnout in nursing, work environments should allow appropriate autonomy, or the ability for nurses to use their own discretion and depend on their own expertise to respond to patient care issues. Additionally, positive work environments for nurses include good working relationships with physicians and hospital administration, as well as adequate staffing and resources. If an environment does not include these positive factors, then nurse burnout will likely be prevalent in that clinical setting. Additionally, the combination of a poor work environment and burned out nurses is associated with lower quality of patient care and poorer patient outcomes.

A recent study by Columbia postdoc Dr. Amelia Schlak explored how nurse burnout is related to patient care, with the expectation that more nurse burnout would correspond with poorer patient outcomes. Additionally, the researchers investigated how the nurse work environment affects the relationship between nurse burnout and respective patient outcomes. The authors expected to see that nurse burnout would have less of an effect on patient outcomes in better work environments.

In order to investigate these relationships, Dr. Schlak and colleagues measured nurse burnout in over 20,000 nurses across 4 states (CA, PA, FL, and NJ) between 2015–2016 by using the emotional exhaustion subscale of the Maslach Burnout Inventory, which quantifies nurse burnout on a scale from 0 to 54, where higher scores correspond to more burnout. On average, the nurse burnout score in the study was 21/54. They also measured work environment using the Practice Environment Scale of the Nursing Work Index, a survey completed by the same nurses. This measurement accounts for environmental aspects like staffing, access to resources, and nurse-physician relations. The researchers ranked the average hospital environment scores into categories of “poor” (bottom 25%), “mixed” (middle 50%), and “good” (top 25%) environments. They found that the degree of nurse burnout was skewed across these hospital quality categories: most (60%) nurses working in good environments ranked among the lowest burnout levels, while more than 50% of nurses working in poor environments ranked among the most burned out. So, better work environments typically mean less burned out and more productive nurses!
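To make the binning concrete, here is a minimal sketch of how hospitals can be sorted into those quartile-based categories. The scores below are invented for illustration; the study’s actual survey data are not reproduced here.

```python
import numpy as np

# Hypothetical average work-environment scores, one per hospital.
# These numbers are made up for illustration only.
env_scores = np.array([2.1, 2.4, 2.5, 2.6, 2.7, 2.8,
                       2.9, 3.0, 3.1, 3.3, 3.4, 3.6])

# Cutoffs: bottom 25% = "poor", middle 50% = "mixed", top 25% = "good".
q25, q75 = np.percentile(env_scores, [25, 75])

def classify(score):
    """Assign a hospital to an environment category by quartile."""
    if score <= q25:
        return "poor"
    if score <= q75:
        return "mixed"
    return "good"

categories = [classify(s) for s in env_scores]
print(categories.count("poor"), categories.count("mixed"), categories.count("good"))
# With these 12 scores: 3 "poor", 6 "mixed", 3 "good"
```

By construction, roughly a quarter of hospitals land in each of the “poor” and “good” bins, with the remaining half labeled “mixed”.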

The ultimate priority in healthcare work is, of course, the patient! To see how the environment and nurse burnout affect patients, the researchers also collected patient outcome measurements for each hospital: (1) patient mortality; (2) failure to rescue, or in-hospital mortality after an adverse event caused by medical treatment; and (3) length of stay, where only stays shorter than 30 days were considered. The authors found that greater nurse burnout was associated with a higher incidence of patient mortality, an increased rate of failure to rescue, and a longer patient stay. Nurses who are not burned out, who are energized and effective, tended to have patients with better outcomes.

The authors also explored how the nurse work environment affects the relationship between nurse burnout and the patient outcome measurements. When the researchers compared hospitals with poor vs. mixed work environments, and mixed vs. good environments, they found that with each step up in environment quality the frequency of burned out nurses decreased, the 30-day in-hospital mortality rate was 14% lower, the failure to rescue rate was 12% lower, and the length of stay was 4% shorter. This means that by simply improving the work environment (e.g., improving employee relations or providing better resources), hospitals can greatly improve nurse burnout and patient outcomes! This relationship is shown in Figure 1 below.

Figure 1: Clinical work environment has an effect on the level of burnout in nurses. Nurse burnout, in turn, has an effect on patient outcomes, where higher levels of burnout result in poorer patient outcomes, and lower levels of burnout result in better patient outcomes. Additionally, the quality of the clinical work environment can also impact patient outcomes, where better outcomes are associated with better hospital environments, while poorer outcomes are associated with poorer hospital environments. Created with

Though this study was based on data from 2015–2016, nurses and other healthcare workers have only become even more burned out in the face of the COVID-19 pandemic, intensified by the overwhelming demand, the pain of losing patients, and the risk of infection that they take every time they go to work. In light of this, hospital management and administration should be proactively addressing healthcare worker burnout by ensuring that the needs of their healthcare workers are met. This includes, but is not limited to, allowing nurses autonomy or control over their practices, providing adequate staffing to avoid overworking or long shifts, encouraging and supporting positive relationships among nurses, physicians, and administrative staff, and providing proper resources for nurses to successfully fulfill their responsibilities.

Also, this past week (May 6th – May 12th, 2022) was Nurses Appreciation Week. Thank you to the Super Nurses for the hard work that you do, oftentimes under relentless and stressful circumstances! You truly are Healthcare Heroes! I hope your hospitals, clinics, or other places of work are prioritizing your work environments, to help reduce the burnout you feel from this pandemic. If they aren’t, send them this article 🙂 

Edited by: Trang Nguyen, Vikas Malik, Maaike Schilperoort

What can we do to enter a new era in antimalarial research? A promising story from genetics to genomics.

Plasmodium falciparum is a unicellular organism known as one of the deadliest parasites in humans. This parasite is transmitted through bites of female Anopheles mosquitoes and causes the most dangerous form of malaria, falciparum malaria. Each year, over 200 million cases of malaria result in hundreds of thousands of deaths. Moreover, P. falciparum has also been implicated in the development of blood cancer. Therefore, the study of malaria-causing Plasmodium species and the development of antimalarial treatment constitute a high-impact domain of biological research.

Antimalarial drugs have been the pillar of malaria control and prophylaxis. Treatments combine fast-acting compounds that reduce parasite biomass with longer-lasting drugs that eliminate surviving parasites. These strategies have led to significant reductions in malaria-associated deaths. However, Plasmodium is constantly developing resistance to existing treatments. The situation is further complicated by the spread of mosquitoes resistant to insecticides. Additionally, asymptomatic chronic infections serve as parasite reservoirs, and the single candidate vaccine has limited efficacy. Thus, the fight against malaria requires sustained efforts. A detailed understanding of P. falciparum biology remains crucial to identify and develop novel and efficient therapeutic targets.

Recent progress in genomics and molecular genetics has empowered novel approaches to studying parasite gene functions. Application of genome-based analyses, genome editing, and genetic systems that allow for temporal regulation of gene and protein expression have proven to be crucial in identifying P. falciparum genes involved in antimalarial resistance. In their recent review, Columbia postdoc John Okombo and colleagues summarize the contributions and limitations of some of these approaches in advancing our understanding of Plasmodium biology and in characterizing regions of its genome associated with antimalarial drug responses.

P. falciparum requires two hosts for its development and transmission: humans and Anopheles mosquito species. The parasite life cycle involves numerous developmental stages. The stages of the parasite’s so-called asexual development take place in humans, while mosquitoes harbour the stages associated with its sexual reproduction (Figure 1). Humans are infected by a stage called “sporozoites” upon the bite of an infected mosquito. Sporozoites enter the bloodstream and migrate to the liver, where they invade the liver cells (hepatocytes), multiply, and form “hepatic schizonts”. The schizonts then rupture and release into the circulation the “merozoite” stage, which invades red blood cells (RBCs). The clinical symptoms of malaria, such as fever, anemia, and neurological disorders, are produced during the blood stage. Within RBCs, “trophozoites” are formed, which have two alternative paths of development. They can either form “blood-stage schizonts” that produce more RBC-infecting merozoites, or they can differentiate into sexual forms, male and female “gametocytes”. Finally, gametocytes are ingested by new mosquitoes during a blood meal, where they undergo sexual reproduction to form a “zygote”. The zygotes then pass through several additional stages until maturing into a new generation of sporozoites, closing the parasite life cycle (Figure 1).

Figure 1: Life cycle of Plasmodium falciparum. Image created with

This complexity of the Plasmodium life cycle presents opportunities to generate drugs acting on various stages of its development. The review of Okombo and colleagues underlines how new genomic data have enabled the identification of genes contributing to various parasite traits, particularly those of antimalarial drug responses. The authors recap genetic- and genomic-based approaches that have set the stage for current investigations into antimalarial drug resistance and Plasmodium biology and have thus led to expanding and improving the available antimalarial arsenal.

For instance, in “genome-wide association studies” (GWAS), parasites isolated from infected patients are profiled for resistance against antimalarial drugs of interest, and their genomes are analyzed to identify genetic variants associated with resistance. In “genetic crosses and linkage analyses”, gametocytes from genetically distinct parental parasites are fed to mosquitoes, in which they undergo sexual reproduction. The resulting progeny are inoculated into humanized liver-chimeric mouse models that support P. falciparum infection and development, and are later analyzed to identify the DNA changes associated with resistance and drug response variation. In “in vitro evolution and whole-genome analysis”, antiplasmodial compounds are used to pressure P. falciparum to evolve into drug-resistant parasites, whose genomes are then analyzed to identify the genetic determinants that may underlie the resistance. “Phenotype-driven functional Plasmodium mutant screens” are based on random genome-wide mutagenesis and selection of mutants that are either resistant to drugs or affected in development, pathogenicity, or virulence; this approach has also led to the discovery of novel important genes. In addition, the review covers a number of cutting-edge genome editing methods used to study antimalarial resistance and modes of action. Experiments using genetically engineered parasites constitute a critical step in uncovering the functional roles of the identified genes. Finally, the reader can also find an overview of Plasmodium “regulatable expression strategies”. These approaches are particularly valuable in the study of non-dispensable (essential) genes. Additional information on other intriguing and powerful techniques is described in the original paper.

Article reviewed by: Trang Nguyen, Samantha Rossano, Maaike Schilperoort

Survival of the fittest – how brain tumor cells adapt their metabolism to resist treatment

Glioblastoma WHO grade IV (GBM) is the most common primary brain tumor in adults. The therapeutic options for this recalcitrant malignancy are very limited, with no durable response. A recent research article published in Nature Communications identified how tumor cells alter their metabolism to survive targeted drug treatment, using cell lines and mouse models. The project was led by Dr. Nguyen, a postdoctoral research scientist in Dr. Siegelin’s lab at Columbia University.

Most cancer cells produce energy in a less efficient process called “aerobic glycolysis”, consisting of high levels of glucose uptake and generation of lactic acid in the cytosol in the presence of abundant oxygen. This classic type of metabolic change provides substrates required for cancer cell proliferation and division, and is involved in tumor growth, metastatic progression, and long-term survival. Dr. Siegelin’s laboratory at the Department of Pathology and Cell Biology at Columbia University Medical Center focuses on targeting cell metabolism and the epigenome for brain tumor therapy, using clinically validated drugs to suppress tumor growth in glioblastoma. In this study, the authors used alisertib (MLN8237), a clinically validated, highly specific Aurora A inhibitor, to target brain tumors. Aurora A kinases (AURKAs) are important for the proliferation and growth of solid tumors, including glioblastomas. Here, the authors found that Aurora A simultaneously interacts with both c-Myc (MYC Proto-Oncogene) and GSK3β (Glycogen Synthase Kinase 3 Beta). c-Myc (MYC) is an oncogenic transcription factor that facilitates tumor proliferation in part through the regulation of metabolism. AURKA stabilizes the c-Myc protein and promotes cell growth, whereas inhibition of Aurora A leads to GSK3β-mediated degradation of c-Myc; accordingly, cells treated with an AURKA inhibitor displayed substantial downregulation of the c-Myc protein. The authors also found that inhibition of Aurora kinase A suppressed the glycolysis signaling pathway in glioblastoma cells, which was related to the degradation of the c-Myc protein (Figure 1).

Figure 1: In the cells, Aurora A binds to c-Myc and facilitates cell proliferation. Inhibition of Aurora A will stop the cell from generating energy through glycolysis, a metabolic pathway that converts glucose to energy in cytosol, due to the degradation of c-Myc. c-Myc is marked for degradation by its phosphorylation at position T58 mediated by GSK3β. To survive, the cells start to use different pathways to generate energy by e.g. burning fat or proteins. Figures created with

In addition to the acute treatment, it is important to understand how tumor cells acquire mechanisms to escape chemotherapy following constant exposure to a drug, and to identify means to prevent this phenomenon from occurring. The research group generated drug-resistant cells by culturing them in the presence of alisertib for two weeks. These cells acquired partial resistance to alisertib and displayed a hyper-oxidative phenotype, with an increase in the size of mitochondria and a tubulated shape.

The chronically Aurora A-inhibited cells were analyzed for changes in gene expression after the same dose of alisertib was constantly applied over a long period. The researchers found that alisertib-resistant cells activated oxidative metabolism and fatty acid oxidation, for instance showing an increase in the production of fatty acid oxidation proteins. These observations prompted them to test the hypothesis that alisertib, together with a fatty acid oxidation inhibitor such as etomoxir, would reduce the viability of glioblastoma cells. Etomoxir is a clinically validated drug that binds and blocks the mitochondrial fatty acid transporter. The authors found that the combination treatment of alisertib and etomoxir resulted in enhanced cell death as compared to single treatments and vehicle.

Given the significant promise of the in vitro studies, the researchers extended their work in vivo by injecting patient-derived glioblastoma cells, acquired from patient brain tumors, into immunocompromised mice. Such model systems are currently considered to most closely resemble the patient scenario. They found that the combination treatment extended animal survival significantly longer as compared to single treatment with alisertib or etomoxir, suggesting potential clinical efficacy. Taken together, these data suggest that simultaneously targeting oxidative metabolism and inhibiting Aurora A might be a potential novel therapy against one of the deadliest cancers.

Article reviewed by: Maaike Schilperoort, Vikas Malik, Molly Scott, Pei-Yin Shih and Samantha Rossano.

“CRACK”ing cocaine addiction with medication

Cocaine is a highly addictive stimulant drug made from the leaves of the coca plant that alters mood, perception, and consciousness. It is consumed by smoking, injecting, or snorting. According to the United Nations Office on Drugs and Crime, an estimated 20 million people used cocaine in 2019, almost 2 million more than the previous year. Cocaine causes an increase in the accumulation of dopamine in the brain, a chemical messenger that plays an important role in how we feel pleasure and encourages us to repeat pleasurable activities. This dopamine rush causes people to continue using the drug despite the cognitive, behavioral, and physical problems it causes, leading to a condition referred to as cocaine use disorder (CUD). CUD-related physical and mental health issues range from cardiovascular diseases like heart attack, stroke, hypertension, and atherosclerosis, to psychiatric disorders and sexually transmitted infections.

According to the CDC, cocaine use was responsible for 1 in 5 overdose deaths. Though almost all users who seek treatment for CUD are given psychosocial interventions like counseling, most continue to use cocaine. Pharmaceutical medication may increase the effectiveness of psychosocial interventions. Medications for other substance use disorders (opioid and alcohol) have been shown to block euphoric effects, alleviate cravings, and stabilize brain chemistry. However, there are currently no FDA-approved drugs to treat CUD.

Dr. Laura Brandt and colleagues have systematically reviewed research available up until 2020 in the area of pharmacological CUD treatment. In this review, they discuss the potential benefits and shortcomings of current pharmacological approaches for CUD treatment and highlight plausible avenues and critical considerations for future study. The authors reviewed clinical trials where the primary disorder is cocaine use and the medications tested fall into four categories: dopamine agonists, dopamine antagonists/blockers, new mechanisms being tested, and combinations of medications.

Dopamine agonists are medications that have a similar mechanism of action as cocaine, i.e., they can act as substitutes for cocaine without the potential adverse health effects. Dopamine releasers and uptake inhibitors fall under this category and have shown the most promising signs thus far for reducing cocaine self-administration in cocaine-dependent participants. Dopamine uptake inhibitors bind to the dopamine transporter and prevent dopamine reuptake from the extracellular space into the brain cell. Chronic cocaine users exhibit blunted dopamine effects, such as low levels of dopamine release and reduced availability of dopamine receptors for dopamine to bind to. Medications that act as substitutes help counteract this dopamine hypoactivity through slow release of dopamine, which in turn helps reduce responses such as cravings for cocaine and withdrawal symptoms, a common cause of relapse. A common concern associated with using dopamine agonists is the possibility of replacing cocaine addiction with addiction to the medication. However, there is no strong evidence for such secondary abuse, nor for increased cardiovascular risk, when using agonists as a means of treatment.

Dopamine antagonists/blockers are substances that bind to dopamine receptors, preventing the binding of dopamine and thereby blocking the euphoric effects of cocaine. This approach facilitates a decrease in cocaine use, as the rewarding effects of cocaine are absent. Antipsychotic medications, anti-cocaine vaccines, modulators of the reward system, and noradrenergic agents fall under this category. This approach is generally considered less effective in the treatment of CUD, as it requires high levels of motivation to start the treatment as well as to maintain it.

New medications are those currently in clinical trials being tested in humans for the treatment of CUD. Combination pharmacotherapy is an interesting approach to treatment and involves combining two medications to treat CUD. The absence of FDA-approved medications limits exploration in this direction.

Having reviewed these data and their shortcomings, the authors point out a very important factor in these studies: their shortcomings depend on more than just the medication. On one hand, limitations of the medical procedures, such as the dosage of medication and its formulation, completion of the medication course, and providing or not providing incentives to participants, may have hindered the success of these studies. On the other hand, individuals seeking treatment are not all the same. They differ in terms of cocaine use severity, presence of mental illness, and substance use disorders apart from cocaine use, and their genetics may also play a role in the success of their treatment. Pharmacotherapy for CUD is not one-size-fits-all but needs to be tailored to the individual seeking treatment as well as the substance used. A combination approach targeting withdrawal from the drug while allowing patients to benefit more from behavioral/psychosocial interventions would be more helpful on their path to recovery.

Another very important point that requires attention is the method for determining whether a medication has worked. First, most studies use the gold standard of qualitative urine screens to determine sustained abstinence in clinical trials of pharmacotherapies for CUD. However, urine toxicology as evidence of treatment success is not a clear-cut method, as various factors impact interpretation of the results. Second, a medication is considered to successfully treat CUD only when there is complete abstinence from cocaine use. As many physical and psychological issues accompany substance abuse, considering CUD treatment to be linear is not very beneficial. Considering other aspects, such as improvement in quality of life and ability to carry out daily activities, would be a better indicator of the effectiveness of the medications used.

With an increase in cocaine use and abuse in recent years, there is an urgent need to identify medications to treat CUD. The review consolidates the current approaches to treating CUD with medication and points out factors that are overlooked when interpreting the results from these studies. Tailoring medications to each individual would greatly improve clinical trial outcomes and lead to higher success rates for treating substance use disorders, a promising avenue that needs to be explored.

Dr. Laura Brandt is a Postdoctoral Research Fellow in the Division on Substance Use Disorders, New York State Psychiatric Institute and Department of Psychiatry Columbia University Irving Medical Center.

Reviewed by: Trang Nguyen, Maaike Schilperoort, Sam Rossano, Pei-Yin Shih


Novel treatment strategies for fatty liver-related cancer – reality or fanTAZy?

The liver is one of the largest organs in the body, with a weight of approximately 3 pounds. Some of its vital functions include the filtration of blood to remove toxic substances from the body, the production of bile which helps with digestion, and the regulation of fat metabolism. Fats or lipids from the diet are taken up by the liver and processed into fat-carrying proteins called lipoproteins. These lipoproteins are released into circulation to fuel tissues that require energy. However, when there is a positive energy balance, for example due to overeating and/or a sedentary lifestyle, liver cells increasingly store lipids. This can result in metabolic associated fatty liver disease (MAFLD, formerly known as NAFLD), characterized by a liver fat content above 5%. When fat keeps accumulating in the liver, chronic inflammation ensues and the liver progresses to a stage called metabolic associated steatohepatitis (MASH, formerly known as NASH). There are currently no FDA-approved drugs available to treat MASH. At this stage, patients are at risk for developing a type of liver cancer called hepatocellular carcinoma (HCC). HCC is the most common form of liver cancer in the US, affecting more than 30,000 individuals per year. Progression from a healthy liver to MAFLD, MASH, and HCC is shown in the Figure below.

The different stages of fatty liver disease progression – from a healthy liver, to metabolic associated fatty liver disease (MAFLD), metabolic associated steatohepatitis (MASH), and eventually hepatocellular carcinoma (HCC). Figure created with

Although MASH is the leading cause of HCC, the mechanisms of how MASH predisposes to HCC tumor formation are largely unknown. The research from Columbia postdoc Xiaobo Wang and colleagues tries to fill this knowledge gap. Dr. Wang investigated TAZ, a gene regulator that was found to be increased in MASH livers. He fed experimental mice a diet containing high sugar, fat, and cholesterol (the equivalent of human “fast food”) to induce MASH development. Then, he diminished TAZ expression in the liver using a viral-mediated gene delivery system, by which an engineered virus enters mouse liver cells to specifically turn off the TAZ gene. Silencing of the TAZ gene largely prevented the development of tumors in MASH liver, indicating that TAZ is an important player in MASH-HCC progression.

Dr. Wang continued his research by investigating how TAZ could enable the liver cells to turn into tumor cells. He focused on DNA damage, a process which is important in HCC development, and found clear indications of damaged DNA in the livers of mice and humans with MASH. Most importantly, silencing of TAZ prevented an increase in the DNA damage, suggesting that TAZ promotes genomic instability in liver cells. Since the buildup of oxidative stress within cells is an important cause of DNA damage, Dr. Wang next looked at a specific indicator of oxidative DNA damage. Indeed, this indicator was increased in MASH and decreased with TAZ silencing. He then measured various oxidant-related proteins to find out how TAZ could promote oxidative DNA damage. He discovered that Cybb, a gene involved in the formation of harmful reactive oxygen species, is involved in TAZ-induced liver cancer. Together, these findings show a TAZ-Cybb-oxidative DNA damage pathway (see Figure below) that creates malignant liver cells and promotes the progression from MASH to HCC. This work has been published in the prestigious Journal of Hepatology.

Metabolic associated steatohepatitis (MASH) liver cells highly express a protein called TAZ, which promotes the Cybb gene which is involved in production of reactive oxygen species. These reactive oxygen species induce oxidative DNA damage, which transforms healthy liver cells into tumor cells and thereby promotes progression to hepatocellular carcinoma (HCC). Figure adapted from Wang et al. J Hepatol 2021, and created with

The new pathway that Dr. Wang and colleagues discovered suggests that TAZ-based therapy could prevent MASH-HCC progression in humans with fatty liver disease. Such a therapy could have a big clinical impact, since it is estimated that about 12% of US adults have MASH. However, a limitation to MASH therapies is that the disease is often asymptomatic and difficult to identify until its late stages. Aside from focusing on treatments that reduce MASH progression after a large part of the damage has already occurred, our society should increasingly focus on preventative strategies. MASH is often caused by lifestyle-related factors, and the risk of MASH can be significantly reduced by maintaining a healthy weight, eating a healthy diet, and exercising regularly. Especially in the US, where MASH prevalence is expected to increase by 63% by 2030, raising awareness of the importance of healthy living to prevent liver disease and liver cancer is paramount.


Reviewed by: Sam Rossano, Vikas Malik, Molly Scott



A small investment could reap a big reward

It is well-established that children growing up in economically disadvantaged circumstances can experience a wide variety of challenges to their development. For example, in the domain of language, psychological research shows that even before children reach the age of 3, those from lower socioeconomic status (SES) backgrounds hear fewer and less complex words from caregivers compared to their more advantaged peers. This phenomenon places these children at risk of later language learning difficulties. Research groups and other stakeholders around the globe are working tirelessly to craft intervention programs, in a variety of contexts, to target the optimal development of children from under-resourced environments. To target language outcomes, for instance, an initiative called Talk With Me Baby aims to help parents learn about and engage in crucial linguistic interactions with their children from their earliest days.

One prominent intervention in this field is taking a different approach. The Baby’s First Years (BFY) project, directed by a team of leaders in the realms of neuroscience, psychology, and economics, provides families living in poverty with cash and is studying how this money might impact developmental outcomes. These scientists recruited 1,000 mothers from around the US just hours after their babies were born. Half of the mothers are assigned to a high-cash gift group ($333 per month) and the other half are assigned to a low-cash gift group ($20 per month). These payments come in the form of a debit card and the moms are allowed to spend these funds however they’d like.

The “4 My Baby” card, the debit card that BFY mothers receive. Courtesy of Baby’s First Years.

After one year of these payments, the researchers went into the homes of the families and measured a variety of both maternal and child outcomes. One of these measures was EEG, or electroencephalography, to capture the brain activity of the infants. EEG measures the electrical activity occurring between cells in the brain. The findings, published in a recent paper in PNAS by lead author and Columbia postdoc Dr. Sonya Troller-Renfree, captured global attention. She and her team found that infants whose mothers were assigned to the high-cash gift group displayed more high-frequency brain activity compared to children in the low-cash group. Importantly, previous research links this high-frequency activity to the development of thinking and learning. Although the evidence is not airtight, Dr. Troller-Renfree and her team are the first to show there may be a causal link between poverty reduction and changes in brain activity.
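As a side note on what “more high-frequency brain activity” means in practice: EEG analyses typically compare the power of the recorded signal within different frequency bands. The sketch below simulates a toy one-channel signal and computes relative high-frequency power with a plain FFT; the sampling rate, band edges, and signal are all invented for illustration and are not the study’s actual analysis pipeline.

```python
import numpy as np

# Simulate 10 s of a single EEG channel at 250 Hz: a dominant low-frequency
# (6 Hz) rhythm plus a smaller high-frequency (25 Hz) component and noise.
# All numbers here are made up for illustration.
fs = 250
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 6 * t)
          + 0.5 * np.sin(2 * np.pi * 25 * t)
          + 0.1 * rng.standard_normal(t.size))

# Power spectrum via the real-valued FFT.
freqs = np.fft.rfftfreq(signal.size, d=1 / fs)
power = np.abs(np.fft.rfft(signal)) ** 2

def band_power(lo, hi):
    """Total spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

# Relative power in a "high-frequency" band (20-45 Hz, an illustrative choice).
total = band_power(1, 45)
high = band_power(20, 45)
print(f"relative high-frequency power: {high / total:.2f}")
```

Comparing this relative band power between groups is the general idea behind statements like “infants in the high-cash group showed more high-frequency activity”.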

There are many pathways through which the gifted money might be impacting children’s brains, and these pathways are currently under investigation. It is important to understand that this study is still ongoing: the children in the sample are all currently around 3 years old, and the research team plans to assess the mothers and preschoolers at age 4 on a variety of outcomes. This finding on the potential link between poverty reduction and the brain, along with other work demonstrating that economic support for parents could greatly benefit children’s outcomes, has important implications for public policies that support families and children. While the US has a long way to go in supporting its youth, growing evidence indeed supports the idea that relatively minor investments in children can positively impact their trajectory.

Dr. Sonya V. Troller-Renfree is a Goldberg Postdoctoral Fellow in the Neurocognition, Early Experience and Development Lab at Teachers College, Columbia University. Her research focuses on the effects of early adversity and poverty on cognitive and neural development. She intends to continue examining these questions as part of her new, federally-funded Pathway to Independence Award (K99/00). You can stay up-to-date on her research findings on Twitter at @STRscience or on her website:


Tau about that! Alzheimer’s protein found in brains of COVID patients

It’s hard not to have COVID on the brain in today’s world – it seems like every conversation ends up on the topic! A recent study completed at Columbia explored the effect of COVID on the brain by collecting brain samples from the mesial temporal cortex, a region implicated in Alzheimer’s disease and responsible for memory, and the cerebellum, a region responsible for coordination of movement and balance. Cellular markers that indicate inflammation and protein build-up were measured in samples from 10 patients who had passed away from COVID-19 and compared to brains of those who did not have COVID-19 at the time of death. From this, the researchers were able to infer how COVID-19 infection may alter the brain, potentially causing the neurological symptoms seen in some COVID patients.

COVID-19 infection can lead to respiratory, cardiac, and neurological symptoms. About one in three COVID patients experience neurological symptoms, including loss of taste (hypogeusia), loss of smell (hyposmia), headache, disturbed consciousness, and tingling sensations in their limbs (paresthesia). The exact reason why these neurological symptoms occur is not well understood. In a recent publication, Dr. Steve Reiken and colleagues from the Department of Physiology and Cellular Biophysics at Columbia University Vagelos College of Physicians and Surgeons explored how factors associated with COVID infection, like inflammation, may lead to these neurological symptoms.

SARS-CoV-2, the virus that causes COVID-19, enters the body through the airways. The spike proteins on the surface of the SARS-CoV-2 virus facilitate entry into cells through the angiotensin converting enzyme 2 (ACE2) receptor. This leads to inflammation in the lungs and other organs. ACE2 receptors are downregulated during COVID infection, a pattern which has been tied to an upregulation of the inflammatory marker transforming growth factor-𝛃 (TGF-𝛃) in other disease models, including cancer. Lower ACE2 activity has also been tied to greater concentrations of the Alzheimer’s disease (AD) related proteins amyloid-𝛃 (A𝛃) and phosphorylated tau. Perhaps the entry point of the SARS-CoV-2 virus activates inflammation pathways that can affect the brain similarly to the way it is affected in AD, and might cause the neurological issues that sometimes come with COVID infection.

In the study, inflammation markers that represent TGF-𝛃 levels were measured in the brain samples of COVID patients and compared to non-patients. Each of these measures was higher in brain samples of COVID patients, suggesting that COVID infection contributed to more inflammation in the brain.

Inflammation may have downstream effects that can impact the function of healthy tissues. For example, the highly-regulated use of the calcium ion (Ca2+), which is a key player in cell-to-cell communication, can become impaired in conditions of inflammation. Specifically, the ryanodine receptor (RyR) is an ion channel protein which is responsible for Ca2+ release. When in an open configuration, Ca2+ can flow freely through the channel. To stop Ca2+ flow, helper proteins interact with the RyR to stabilize the closed configuration of the channel. Previous studies have suggested that these helper proteins are downregulated in inflammation, which means that the RyR is more likely to be unstable, resulting in excess Ca2+ flow, or a Ca2+ leak. Ca2+ leaks have been thought to contribute to a number of diseases, including the development of tau pathology in AD.

In Dr. Reiken and colleagues’ study, indicators of typically functioning RyR were measured in the brain samples of COVID patients and non-patients. These measures included the amount of RyR channel in the open configuration (which means a lot of free-flowing Ca2+) and the concentration of the helper proteins that help the RyR remain stable in the closed configuration. The researchers found that there were fewer helper proteins in the COVID brains compared to the non-COVID brains. Additionally, more of the RyR channels were in an open configuration in the COVID brains compared to non-COVID brains. This means that Ca2+ leaks were more likely to happen in the brains of those infected with COVID-19.

In addition to cellular markers of inflammation and Ca2+ leaks, Dr. Reiken and colleagues also investigated levels of the AD-related proteins A𝛃 and aggregated tau in the brains of control subjects and COVID patients. For A𝛃, relevant protein levels were similar between COVID patients and controls, suggesting that COVID does not cause the accumulation of A𝛃 in the brain. However, the concentration of phosphorylated tau, another protein that is highly implicated in AD pathology, was higher in the temporal lobe and cerebellum of COVID patients compared to control subjects.

To take this one step further, the researchers treated the COVID patients’ brain samples with Rycal ARM210, a drug that is currently in clinical trials for other applications at the NIH (NCT04141670) and helps to reduce Ca2+ leak. With ARM210, helper protein levels increased relative to the untreated COVID brain samples. Additionally, the amount of RyR in the open configuration decreased in the ARM210-treated COVID brain samples compared to the untreated samples. Thus, treatment with this drug may combat Ca2+ leak in brain tissue. If unstable RyR leads to Ca2+ leak, and Ca2+ leak can promote tau phosphorylation and build-up in the brain, then using the Rycal drug ARM210 to target and limit Ca2+ release may be a way to treat these brain abnormalities in COVID-19 and possibly minimize neurological symptoms.

Given these results, the authors propose a mechanism by which infection with the SARS-CoV-2 virus may lead to protein aggregation similar to tau deposition in AD. An adaptation of the proposed mechanism is shown in the Figure below.

Figure 1 (above): Proposed mechanism for neurological symptoms of COVID-19 infection. Adapted from Reiken et al., 2022. Created with

Though this is a very exciting study exploring the neurobiology of COVID brains, there are some additional things to consider. Firstly, while inflammatory markers were elevated in the brains of COVID patients, SARS-CoV-2 virus particles were not detectable in the brain. This suggests that these effects are caused by systemic factors and are not localized to cells that are infected with SARS-CoV-2. Additionally, in terms of the AD-related proteins, elevated phosphorylated tau was detected in the mesial temporal cortex and the cerebellum of COVID patients compared to controls. In AD, tau collects in the mesial temporal cortex early in disease progression, but does not collect in the cerebellum. This, in addition to the lack of A𝛃 aggregation in the COVID patients’ brain samples, is a marked difference between the pathology of the brain in AD and in COVID. However, the distribution and amount of tau in AD is linked to cognitive abilities, so perhaps the collection of tau in the brains of COVID patients contributes to cognitive symptoms like “brain fog”. The current study used brain samples from 10 COVID patients, but has not yet collected cerebrospinal fluid samples or used animal models to validate these findings. Future work that addresses these limitations and further questions may help us fully understand the role of COVID in the brain, and may help with treatments for those who are struggling with prolonged neurological symptoms of COVID.

Dr. Reiken, the first author of this work, is an Assistant Professor in the Columbia University Department of Physiology. Dr. Dridi, a Postdoctoral Fellow at Columbia, and Dr. Liu, a Postdoctoral Research Scientist at Columbia, also contributed to this work. Find the original research article here.

Reiken S, Sittenfeld L, Dridi H, Liu Y, Liu X, Marks AR. Alzheimer’s-like signaling in brains of COVID-19 patients. Alzheimer’s Dement. 2022;1-11.

Covid-19 and Immunity after Organ Transplant

Over the last two years, the SARS-CoV-2 (Covid-19) pandemic has been at the forefront of media coverage. Hospitals have been overwhelmed, entire cities locked down, travel banned, and we are all desperately waiting for a return to the normalcy that immunity promises. However, the development and retention of immunity can depend on the individual, and Covid-19 has been particularly daunting to individuals with weakened immune systems (people who are immunocompromised). These individuals are at an increased risk of succumbing to Covid-19. Overall, it has been easy to identify the individuals that fall into this risk category. However, there has been limited research on the immunity of individuals that have undergone organ transplants. In a new article by Dr. Mithil Soni, researchers have identified the effects of a solid organ transplant (SOT) on the development and retention of immunity to a plethora of viruses, including SARS-CoV-2. SOTs include transplants of the kidney, liver, heart, lungs, intestines, and pancreas.

Dr. Soni and colleagues focus on the immunity generated by T cells, immune cells that work alongside antibodies and play a role in eliminating viruses that enter the body. In this study, they focused on the immune response of one patient, a 33-year-old male suffering from erythropoietic protoporphyria, a genetic metabolic disorder that results in excessive liver damage. This patient underwent a SOT and received a liver. When undergoing a SOT, individuals are usually put on a stringent course of immunosuppressants to prevent organ rejection, which places them in the category of immunocompromised. To their surprise, during a check-up this patient was found to have antibodies for SARS-CoV-2, indicating a previous Covid-19 infection without any serious symptoms. The ability to overcome Covid-19 with minimal symptoms while being classified as immunocompromised intrigued Dr. Soni and colleagues, and the patient agreed to provide his blood for further testing.

The team went on to test the patient’s immune response to many infections that usually impact immunocompromised individuals. They tested the blood’s immune response to cytomegalovirus and BK virus, two viral infections that immunocompromised and SOT patients are prone to. They also tested the response to Epstein-Barr virus, which can cause mononucleosis. From the blood, Dr. Soni and colleagues were able to collect and grow the T cells in their lab, expose them to viruses, and measure their release of cytokines, proteins that are important for a strong immune response. They found a very strong T cell immune response against both cytomegalovirus and BK virus. They also tested the immune response to SARS-CoV-2 and other coronaviruses and found a similar level of T cell immune response as seen with cytomegalovirus and BK virus.

These findings indicated that the SOT patient continued to have a robust immune response to multiple viruses despite his immunocompromised status. This study shows that it is possible to have robust immune responses to viruses, including SARS-CoV-2, in an immunocompromised state such as that seen after a SOT. However, this research is based on a single case study. To truly understand T cell memory and activity in immunocompromised individuals, much more research has to be done. This means Dr. Soni and colleagues still have their work cut out for them and are actively expanding the research done here. Their next immediate step is to repeat this study with blood from a larger group of healthy and immunocompromised individuals, in the hope that they will eventually be able to answer the question of SOT and immunity.

Figure: Depiction of increased immunity after SOT.  Top: Liver transplant. Bottom: Expected T cell activity in response to virus vs actual T cell activity in response to virus. 


Dr. Mithil Soni is a former Postdoctoral Research Fellow and current Associate Research Scientist at Columbia University.

Let’s get MDM2 and MDMX out of the shadow of p53

When it comes to cancer, one molecule stands out as being among the most extensively studied: the p53 tumor suppressor protein. p53 can exist in cells in several different forms. When p53 is in its so-called wild-type form, it is capable of activating various responses that contribute to tumor suppression. In their recent review, Columbia postdoc Rafaela Muniz de Quieroz and colleagues summarize the vast scientific literature on two key regulators of p53: MDM2 and MDMX. Both MDM2 and MDMX are known to interact with p53 and disrupt its function. Their absence has been linked not only to increased cancer development, but also to a number of dysfunctions, including embryonic lethality in mice. MDM2 has been shown to negatively regulate p53 by diverse mechanisms, spanning from expression of the p53 gene to degradation of the p53 protein or its expulsion from the cell nucleus, where the protein accomplishes its function. Although very similar to MDM2, MDMX is less well studied. We do know, however, that MDMX can work together with MDM2 in p53 degradation.

While many reviews and studies have pointed to the roles of MDM2, and to a lesser extent of MDMX, in p53 regulation, the current review by Quieroz and her colleagues puts a larger focus on the myriad p53-independent activities of MDM2 and MDMX. The authors provide important details about the p53-independent functions both of MDMX alone and as part of an MDM2–MDMX complex. The review discusses some key features in the structure and function of the proteins, including regions that are relevant for their function, for some associated abnormalities, or for the formation of MDM2–MDMX complexes.

MDM2 and MDMX are regulated on multiple levels within cells. These include regulation at the DNA level, including the usage of several alternative promoters (DNA sequences needed to turn a gene on or off). One of the promoters of MDM2 and MDMX is regulated by their target p53, but there are also p53-independent promoters capable of switching on the genes of MDM2 and MDMX regardless of p53. In addition, numerous variations in the DNA sequences, so-called single nucleotide polymorphisms (SNPs), affect the expression of the two genes and are relevant to different pathologies. Regulation at the RNA level includes co-transcriptional regulation like alternative splicing, as well as post-transcriptional regulation by microRNAs, long non-coding RNAs, circular RNAs, or RNA binding proteins. The review also presents a detailed characterization of the regulation of MDM2 and MDMX at the protein level, summarizing data on numerous post-translational modifications or interacting partners of the two proteins, with regard to the different p53 contexts of the cited studies. Among the presented binding partners are some of the more recently identified interactors of the MDMs, which include proteins involved in the defense against several viruses. Overall, both MDM2 and MDMX stand out as extensively regulated at virtually every known level, which according to the authors “attests to their relevance not only as inhibitors of p53 but of myriad other cellular activities and outcomes on their own”.

Since MDM2 and MDMX have mostly been studied in relation to their inhibition of wild-type p53, of particular interest is a section of the review summarizing numerous processes in which the two proteins have been shown to be involved in cells lacking wild-type p53 (Figure 1).

Figure 1: Nonmalignant disease (left) and cancer-related (right) p53-independent functions of MDM2 and MDMX (adapted from Figure 4 of the review).

As shown in Figure 1, the p53-independent roles of MDM2 and MDMX in cancer and in other pathologies are versatile. That hints at the importance of uncovering molecules that can modulate the deleterious effects associated with dysfunctions of the two MDMs. Numerous molecules that have been shown to regulate the two proteins, and thus constitute potential therapeutic targets, are also discussed in the review. Again the authors put an emphasis on how such small molecules might be useful in cells that lack wild-type p53. This is important not only because the two proteins have multiple functions other than regulating wild-type p53, which can be studied in such cells, but also because an important percentage of tumors is characterized by the absence of wild-type p53.

The last section of the review points out some outstanding questions and directions for future research. If the fascinating questions of the versatile p53-independent roles of MDM2 and MDMX have sparked your interest, find out more in the original paper.

Cosmic Water

Where does water actually come from? Most people would say, from the tap. While this certainly is true, scientists are – fortunately I would say, unfortunately my significant other might say – not like most people. They want to know more.

Before answering this question we should step back and ask, what is water? Water is a molecule, H2O. That means it consists of one oxygen atom, O, and two hydrogen atoms, H2. One way to produce water is to mix hydrogen and oxygen and ignite it. While on earth this can easily be done, on a cosmic scale initiating the reaction is far more complex. The biggest problem is that cosmic space is cold. Like, really cold. The official record for the coldest temperature measured on earth is held by the Antarctic Vostok Station with −128.6 °F (−89.2 °C). In comparison, diffuse and dense clouds, common cosmic structures where a lot of cosmic chemistry happens, have temperatures of -441.7°F to -279.7°F (-263.2°C to -173.2°C). Anybody who has ever tried to cook but forgot to turn on the stove knows that for chemistry to happen, heat often has to be supplied, like through the flame in the above experiment. So, how can chemistry happen in the coldness of space?
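For readers who want to double-check these numbers, the quoted values follow from the standard conversion formulas; here is a small Python sketch (the labels are ours, the Celsius values come from the paragraph above):

```python
# Convert the quoted temperatures between Celsius, Fahrenheit, and kelvin
# (kelvin is the unit astronomers actually use for interstellar clouds).
def c_to_f(c):
    return c * 9 / 5 + 32

def c_to_k(c):
    return c + 273.15

for label, c in [("Vostok Station record", -89.2),
                 ("cold cloud edge", -263.2),
                 ("warm cloud edge", -173.2)]:
    print(f"{label}: {c} C = {c_to_f(c):.1f} F = {c_to_k(c):.2f} K")
```

In kelvin the cloud range works out to roughly 10 K to 100 K, which is why astronomers describe these environments as extremely cold.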

The key to understanding this lies in the state of matter of cosmic gas. On earth, matter is mostly electrically neutral. That means it contains exactly the same number of positively and negatively charged particles, which therefore cancel each other out. To electrostatically charge an object, we have to actively make an effort, think of rubbing a balloon against your hair. This is not true for the universe in general. Actually, most matter in space is not neutral but charged. One notable example is the molecular ion H3+, a molecule consisting of three hydrogen atoms which is missing one electron, leading to a singly positively charged ion. Charged molecules, called ions, can undergo reactions which are not possible for their neutral counterparts. For example, they react at temperatures at which their neutral counterparts do not react. (Their highly reactive cousins, free radicals, are widely known for having a bad influence on your health. So stay away from cosmic clouds to avoid wrinkles!)

One reaction network starts with the reaction of atomic oxygen O with H3+. In a first step, two outcomes are possible: either they react to OH+ and H2, which in a second step reacts to H2O+ and subsequently neutralises, or they directly react to H2O+ and H before undergoing the neutralisation. Until recently, little was known about which of the two outcomes was more likely, so astronomical modelling assumptions had to be made. A precise knowledge of the pathway of the reaction network shown in figure 1 is especially interesting for interstellar regions in which the interstellar OH+ can be destroyed before reacting to H2O+. Here the direct reaction is the only efficient way of forming water: since potentially every intermediate product can undergo reactions not resulting in H2O+, fewer steps directly increase the reaction yield.

Fig. 1: A strongly simplified excerpt of the reaction network of oxygen resulting in water.

This gap of knowledge was filled by Hillenbrand and colleagues, who accurately measured the reaction O + H3+ for both possible outcomes and therefore were able to give the ratio between them. But wait, didn’t we just learn that in the cosmic regions of interest, this reaction takes place at highly unpleasant freezing temperatures? How can this be reproduced in a laboratory on earth while still being able to control the setup? For this, the scientists came up with a nice little trick. On a microscopic level, the temperature of an object can be linked to the velocity of the particles it is made up of. Hotter particles move faster, colder ones move slower. If packed densely together, they constantly hit each other and change their direction of movement, leading to a constant vibration of the whole particle cloud. And the stronger the vibrations, the hotter the particles are.

This phenomenon was first observed in 1827 by the Scottish botanist Robert Brown, and its link to temperature was worked out by Albert Einstein in 1905. The scientists made use of this phenomenon to study the reaction with “cold” reactants without actually cooling them down. Instead of mixing gases of cold O and H3+ together, they created two directed particle beams and let them overlap so the reaction could take place. Even though the beams were produced at room temperature and their individual velocity was quite high, the velocities of the beams relative to each other could be controlled to be very small. Think of driving on the highway and passing another car: you may be travelling at a speed well above 60 mph, corresponding to over 5200 feet per minute. Still, it can take you multiple seconds to fully pass a vehicle 10 feet long or more if you are not driving much faster than it, because your relative speed is low. And as we just learned, a small velocity corresponds to a low temperature.
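The highway analogy is easy to verify numerically. A short Python sketch (the function name and the 2 mph speed difference are our illustrative choices, not figures from the paper):

```python
def passing_time_s(own_mph, other_mph, vehicle_ft=10.0):
    """Seconds needed to fully pass a vehicle, given both speeds in mph."""
    relative_fps = (own_mph - other_mph) * 5280 / 3600  # relative speed, ft/s
    return vehicle_ft / relative_fps

# Both cars are fast in absolute terms, but the *relative* speed is tiny,
# just like the two merged beams in the experiment:
print(round(passing_time_s(62, 60), 1))  # ≈ 3.4 seconds to pass 10 ft
```

The same idea underlies the experiment: two beams, each fast in the lab frame, can have an arbitrarily small relative velocity, and it is the relative velocity that sets the effective reaction temperature.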

Fig. 2: The dual source merged beam setup to measure the O + H3+ reaction.

To study the reaction the scientists used the setup shown in figure 2. They used two ion sources to produce either a beam of H3+ or O- ions. Since the experiment requires neutral oxygen atoms, the negatively charged O- ions are first neutralised by a laser, which kicks away the additional electron. The two beams are then overlapped in an interaction region, allowing the chemical reaction to take place. By varying the relative velocity of the beams, which corresponds to varying the temperature at which the reaction takes place, the reaction can be studied over a broad range of temperatures, ranging from close to absolute zero to more than 1000°F.


Using this setup they could measure the so-called branching ratio, meaning the ratio of the outcomes H2O+ to OH+, over a wide temperature range. For low temperatures they found a ratio close to 1:1, whereas at higher temperatures only 20% of the reactions resulted directly in H2O+. Astrochemical models, by contrast, had assumed a fixed ratio of 30:70 over the whole temperature range, based on a single measurement at room temperature, an assumption that turned out to be wrong. This implies that the frequently used model underestimates the production of water in cold interstellar regions and has to be adapted.
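To make the discrepancy concrete, here is a back-of-the-envelope comparison in Python. The numbers come from the text above; this only compares direct-channel fractions and is not the authors' full astrochemical model:

```python
# Fraction of O + H3+ reactions going *directly* to H2O+ under each assumption.
fixed_model_direct = 0.30    # old models: fixed 30:70 (H2O+ : OH+) at all temperatures
measured_cold_direct = 0.50  # new measurement: close to 1:1 at low temperature
measured_warm_direct = 0.20  # new measurement: ~20% direct at high temperature

# In cold interstellar regions the direct channel is more likely than the
# fixed ratio assumed, so the old models underestimate water production there:
print(f"cold boost: {measured_cold_direct / fixed_model_direct:.2f}x")
# In warm regions the old ratio instead *over*estimated the direct channel:
print(f"warm ratio: {measured_warm_direct / fixed_model_direct:.2f}x")
```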
