A how-to guide for improving the potency of stem cells

You may remember Dolly, the sheep that became famous in the ‘90s as the first mammal to be cloned from an adult cell. Dolly was created through somatic cell nuclear transfer (SCNT), in which the nucleus from a somatic donor cell, i.e., a cell from the body other than a sperm or egg cell, is transferred into an enucleated egg cell. In this case, the donor cell was derived from a sheep’s mammary glands, a medical term for the breasts. The scientists named the cloned sheep Dolly since they could not think of a more impressive pair of mammary glands than Dolly Parton’s, or so the story goes. Aside from generating viable embryos in the laboratory, SCNT can be used to generate human stem cell lines for research and therapeutic purposes. However, this procedure is technically challenging and requires egg cells, which raises ethical concerns.

Artist’s impression of Dolly Parton, the famous American country singer, holding the cloned sheep named after her.
© 2022, Maaike Schilperoort

In 2007, a lab in Kyoto, Japan, found another way of generating human stem cells. The group infected human skin cells with a virus that carried a set of genes known to be important for embryonic stem cells. This resulted in so-called “induced pluripotent stem cells”, or iPSCs, that are functionally identical to embryonic stem cells. Although therapeutically promising, these iPSCs do not have the same potency as the cells generated through SCNT. SCNT generates cells that are totipotent at an early stage, meaning that they can form viable embryos as well as extraembryonic tissues such as the placenta and yolk sac. In contrast, iPSCs are pluripotent and are not able to give rise to extraembryonic tissues. They also have an inferior differentiation potential and lower proliferation rate as compared to totipotent cells.

Efforts have been made by scientists to make embryonic stem cells and iPSCs more totipotent by treating them with small molecule inhibitors, resulting in so-called expanded potential stem cells (EPSCs) that can give rise to the embryo as well as placental tissues and are thus more versatile than their pluripotent counterparts. However, the developmental potential of EPSCs is still inferior to true totipotent cells or cells generated through SCNT. To gain insight into how the developmental potential of EPSCs can be improved, Columbia postdoc Vikas Malik and colleagues performed a deep analysis of pluripotent embryonic stem cells vs. the more totipotent EPSCs. They examined gene expression, DNA accessibility, and protein expression, and found some unique genes and proteins that are upregulated in EPSCs as compared to embryonic stem cells, such as Zscan4c, Rara, Zfp281, and UTF1. This pioneering work, published in Life Science Alliance, shows us which genes and proteins to target to generate authentic totipotent stem cells in a petri dish.

The work of Dr. Malik and colleagues has improved our understanding of how to generate totipotent cells outside of the human body without having to deal with the technical and ethical challenges of SCNT. These cells can further improve stem cell therapy through a greater ability to regenerate and repair tissues affected by damage or disease. In addition, totipotent cells are more suitable to study early development and problems of the reproductive system, and are optimal for gene therapy to correct genetic defects that cause disease. As the word indicates, totipotent cells really hold all the power, and could greatly advance scientific knowledge and regenerative medicine.

More information on the pursuit of totipotency can be found in this comprehensive review article by Dr. Malik and his PI Jianlong Wang published in Trends in Genetics.

Reviewed by: Trang Nguyen and Vikas Malik

Lactic acid – a new energy fuel source in brain tumor

What does lactic acid do to the body?

Lactic acid is produced when the body breaks down carbohydrates under low-oxygen conditions to generate energy. It is mainly found in muscle cells and red blood cells. It is produced, for example, when we perform intense exercise.

Glucose, glutamine, fatty acids, and amino acids are well-known energy sources for cell growth and division. In the past, lactic acid was known merely as a by-product of glycolysis, a process in which glucose is broken down through several enzyme reactions without the involvement of oxygen. However, recent studies have shown that lactic acid is a key player in cancer cells, regulating tumor cell growth and division, blood vessel formation, and invasion. Tumor cells prefer to use glycolysis to produce energy and lactic acid even when oxygen is abundant. Lactic acid serves as an alternative fuel source that allows glucose-deprived tumors to avoid cell death.

Lactic acid is transported through the cell membrane via the monocarboxylate transporter 1 (MCT1). A research group at Columbia University led by Dr. Markus Siegelin in the Department of Pathology and Cell Biology showed a substantial contribution of lactic acid to the citric acid cycle (TCA cycle), a series of chemical reactions that generate energy, in glioblastoma cells cultured under nutrient deprivation (low glucose and glutamine concentrations). When the glucose and/or glutamine concentrations increased, less lactic acid was incorporated into TCA-cycle metabolites. The lactic acid taken up into the TCA cycle was traced using a method called 13C carbon tracing and analyzed by liquid chromatography-mass spectrometry to identify the structure of the different molecules. The researchers concluded that lactic acid is used as a fuel source to generate energy in brain tumor cells. Furthermore, lactic acid is converted to acetyl-CoA and contributes to gene modification in glioblastoma cells (Figure 1). These novel findings were published in the prestigious journal Molecular Cell.

Figure 1: Role of lactic acid in the epigenetic modification of glioblastoma cells. Lactic acid is transported through the membrane via the monocarboxylate transporter 1 (MCT1) and contributes to the TCA cycle as a fuel source to generate energy. Lactic acid is converted to acetyl-CoA and contributes to gene modification in glioblastoma cells. Suppressing the TCA cycle with the targeted drug CPI-613 (devimistat) abrogates the contribution of lactic acid to energy production. The figure was generated with BioRender.

Based on these findings, the authors proposed using the drug CPI-613 (devimistat), which targets TCA-cycle metabolism (Figure 1), to treat glioblastoma. Indeed, CPI-613 suppressed the viability of glioblastoma cells in vitro and extended animal survival in a mouse model. The authors suggested that combining CPI-613 with standard-of-care treatments for glioblastoma, such as temozolomide and radiation, could be a potential clinical therapy for patients with glioblastoma.

Read more about this exciting finding here:

https://www.sciencedirect.com/science/article/pii/S1097276522006475 

Reviewed by: Pei-Yin Shih, Sam Rossano, Emily Hokett

Alcohol Use Disorder – are we making the right diagnosis?

Do you and your friends enjoy the occasional cocktail or two over the weekend? Maybe we know someone who enjoys the more-than-occasional cocktail. But at what point do our drinking habits significantly affect our health? Recent studies suggest that 6% of adults in the United States report heavy or high-risk consumption of alcohol, defined as an average of more than 7 drinks/week for women and more than 14 drinks/week for men. This high-risk consumption may lead to Alcohol Use Disorder (AUD) if it is repeated for a year or more. AUD is associated with a number of medical and psychiatric problems, and can even increase risk of death in patients who have cancer and cardiovascular disease.

To diagnose AUD, medical and mental health professionals use the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which explores 11 criteria, including alcohol-related cravings, strains on relationships caused by alcohol use, feeling unable to cut back or stop drinking, and dangerous or risky behavior while under the influence of alcohol. Unlike previous versions of the DSM, these AUD diagnoses are divided based on severity: people who experience 0 or 1 of the diagnostic criteria do not have AUD (no-AUD), those with 2-3 criteria have mild AUD, those with 4-5 criteria have moderate AUD, and those with 6 or more have severe AUD. However, it’s not well understood whether other factors, such as the extent of alcohol use or the degree of cravings or impairments, can help classify mild, moderate, and severe AUD diagnoses.
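
The DSM-5 severity bands described above amount to a simple count-based rule. As a minimal sketch (the function name and label strings are illustrative, not terminology from the DSM itself):

```python
def aud_severity(num_criteria: int) -> str:
    """Map a DSM-5 criteria count (0-11) to an AUD severity label."""
    if not 0 <= num_criteria <= 11:
        raise ValueError("DSM-5 defines 11 criteria; count must be 0-11")
    if num_criteria <= 1:
        return "no AUD"
    elif num_criteria <= 3:
        return "mild"
    elif num_criteria <= 5:
        return "moderate"
    else:
        return "severe"

# e.g., a patient meeting 4 of the 11 criteria:
print(aud_severity(4))  # moderate
```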

Last year, Dr. Zachary L. Mannes, a postdoc in the Department of Epidemiology at Columbia University Mailman School of Public Health and New York State Psychiatric Institute, and colleagues published a study in which they aimed to explore potential relationships between the severity of AUD (no-AUD, mild, moderate, or severe, based on the DSM-5) and self-reported measures of other factors or “external validators”, such as levels of alcohol craving, functional impairment, and psychiatric conditions. To do this, they collected AUD diagnoses as well as measures of external validators in 588 participants. These validators included alcohol-specific validators (i.e., craving, problematic use, harmful use, binge drinking frequency), psychiatric validators (i.e., major depressive disorder/MDD and posttraumatic stress disorder/PTSD), and functioning validators (social impairments; physical and mental impairments).

Dr. Mannes and colleagues reported that in this cohort, participants with alcohol use validators had a significantly greater likelihood of a diagnosis of mild, moderate, or severe AUD than of a no-AUD diagnosis. Psychiatric validators like MDD and PTSD were associated with a significantly greater likelihood of a severe AUD diagnosis than no-AUD; this relationship was not seen for either mild or moderate AUD. Participants who had social, physical, and mental impairments had a greater likelihood of having severe AUD than no-AUD, but this was not seen for participants with mild or moderate AUD. When looking only within participants with an AUD diagnosis (i.e., excluding no-AUD), participants with many alcohol-specific, psychiatric, and functional validators were more likely to have severe AUD than either mild or moderate AUD.

Overall, the results of this study support the structure of the DSM-5 diagnosis for AUD, as diagnoses of mild and moderate AUD had significant associations with alcohol use validators, while severe AUD had significant associations with alcohol use, psychiatric, and functional validators. In other words, people with severe AUD had a higher likelihood of symptoms that affected other aspects of their lives, including impairments in social functioning and the presence of psychiatric conditions such as MDD and PTSD. This study emphasizes the importance of looking at levels of severity in AUD, as the current DSM-5 does, as opposed to the binary yes/no diagnosis that older versions of the DSM had incorporated. It also furthers our understanding of optimal ways to diagnose AUD and may help clarify potential treatment implications for the various AUD severities. The study published by Dr. Mannes and colleagues supports and progresses the field of AUD research in order to better understand and characterize the symptoms, comorbidities, and diagnosis of AUD, so that medical professionals can better assist those who are struggling with the disorder.

Edited by: Trang Nguyen, Maaike Schilperoort

Metastatic cancer cells have unstable DNA which helps them to evade the body’s immune system

Melanoma brain metastasis (MBM) frequently occurs in patients with late stages of melanoma (skin cancer). It is the third leading cause of brain metastases after lung and breast cancers. Cancer cells break away from the primary tumor and travel to the brain through the bloodstream. Despite significant therapeutic advances in the treatment of metastatic cancers, MBM remains a challenging problem for therapeutic treatment due to the blood brain barrier. MBM may produce a variety of symptoms that are similar to primary brain tumors, such as headache, difficulty walking, or seizures. To comprehensively study the cells inside melanoma brain metastases, Jana Biermann, a postdoc in Dr. Benjamin Izar’s lab at Columbia University, performed single-nucleus RNA sequencing and CT scans of 22 treatment-naive MBM and 10 extracranial melanoma metastases, generating a dataset that could spur the development of a new generation of therapies (Figure 1).

Figure 1: A comprehensive study of melanoma brain metastasis and extracranial melanoma metastases by performing single-cell genetic analyses of frozen brain samples. snRNA-seq: single-nucleus RNA sequencing; TCR-seq: T cell receptor sequencing. Image was created with BioRender based on Figure 1A of the original article, published in Cell under the title “Dissecting the treatment-naive ecosystem of human melanoma brain metastasis”.

The authors also analyzed gene expression in melanoma brain metastases from 17 patients and extracranial melanoma metastases from 10 patients. The data revealed unstable DNA in the melanoma brain metastases compared with the extracranial melanoma metastases. This unstable DNA triggers signaling pathways that enable the tumor cells to spread around the body and to suppress the body’s natural immune response that normally fights off tumor cells. The researchers also found that the relocated melanoma cells adopt a neuronal-like state that might help tumor cells adapt and survive after they migrate to the brain. Furthermore, by using CT scans of multiple slices of the tumors, the researchers created three-dimensional images of the tumors and uncovered heterogeneity in metabolic and immune pathways within and between tumors.

The authors also found that the cancer cells in the brain significantly expressed several genes known to promote cancer progression, such as MET and PI3K, while the extracranial melanoma metastases strongly expressed genes related to epithelial cells, the cells that cover the inner and outer surfaces of the body, such as the skin and blood vessels. Understanding these pathways may help identify therapeutic targets.

A limitation of the study is that the authors did not compare melanoma brain metastasis and extracranial melanoma metastases within the same patients, which could have introduced variability in their dataset. Nevertheless, the atlas that they built provides a foundation for further mechanistic studies on how different perturbations could influence brain metastasis ecosystems.

Reviewed by: Pei-Yin Shih, Sam Rossano, Maaike Schilperoort

A pharmacological approach to lifestyle related metabolic disorders- does it make anti-sense?

Non-alcoholic fatty liver disease (NAFLD) is a group of liver diseases that includes NAFL and non-alcoholic steatohepatitis (NASH). It is a chronic, progressive disease of the liver not caused by alcohol. It starts with fat accumulation; progresses into inflammation, swelling, and liver enlargement (NASH); continues with fibrosis, cell death, and replacement of dead cells by scar tissue (cirrhosis); and finally results in cancer (hepatocellular carcinoma). A figure describing this progression has been published by my colleague Maaike Schilperoort in an article describing emerging therapeutic strategies for fatty liver-related cancer.

NASH is the most severe form of NAFLD before it progresses into the irreversible stages of cirrhosis and cancer. It remains under-diagnosed because it is asymptomatic or accompanied only by non-specific symptoms. Individuals with hypertension or high cholesterol, those who are overweight or obese, and those with diabetes or insulin resistance are at greater risk of developing NASH. It is largely a lifestyle-associated metabolic disorder, made worse in individuals with obesity and type 2 diabetes. Current treatment modalities focus on lifestyle interventions and management of co-existing conditions. A lack of specific and targeted pharmacological recommendations with proven efficacy complicates NAFLD management.

Junjie Yu and colleagues conducted a comprehensive study using data from a clinical trial of patients with NASH together with mouse models of NASH. They identified a gene, Jagged 1 (JAG1), that was increased in patients who had NASH and fibrosis. They used this key finding to conduct experiments in mouse models to further study whether JAG1 reduces or worsens NASH and liver fibrosis. They also used a cell-targeted strategy to test potential therapeutic interventions for NASH.

To mimic human NASH, they fed mice a NASH-inducing diet rich in saturated fat, sucrose, and cholesterol, along with fructose-containing drinking water. The mice developed liver steatosis, inflammation, fibrosis, weight gain, and insulin resistance, which are symptoms seen in patients with NASH. JAG1 was increased in the liver, and this correlated with an increase in fibrotic markers in mice fed the NASH-inducing diet. An interesting observation that directed the rest of the study was that JAG1 was increased specifically in hepatocytes, a liver-specific cell type. The researchers used a virus-mediated gene delivery method to increase or decrease Jag1 in the hepatocytes of mice fed the NASH-inducing diet. Increasing Jag1 worsened the diet-induced fibrosis, while decreasing Jag1 protected the liver from developing fibrosis. Based on these insights, they used a technology called antisense oligonucleotides (ASOs) to block Jag1 expression in mice fed the NASH-inducing diet. They found that mice treated with a Jag1-ASO had reduced expression of JAG1 at both the gene and protein levels, along with a reduction in inflammatory and fibrotic markers. However, as this method would target all cell types, a hepatocyte-specific Jag1 inhibitor was developed. Mice fed the NASH-inducing diet and treated with the hepatocyte-specific Jag1 inhibitor showed decreased Jag1 in the liver as well as a reduction in liver fibrosis.

This is a very interesting approach that could lead to specific and targeted pharmacological treatment of NASH. ASOs are short, single-stranded nucleotide sequences that can be designed to target specific genes of interest (like Jag1 in this case) in cells. They alter protein expression by interfering with the translation of RNA into protein (Fig 1). Because they are made to target specific genes and cells, they have a higher chance of success. Currently there are 15 FDA-approved ASO-based drugs for disorders ranging from neurodegenerative diseases to cancer. The main limitations of ASOs are enzymatic degradation of the oligonucleotides and their removal from the body by the kidneys. Further research into optimizing ASO delivery and safety could lead to the development of therapies for disorders that require targeted pharmacological interventions.

Figure 1. A. A schematic of regular transcription and translation processes involved in protein synthesis. B. ASO mediated disruption of protein synthesis. Figure created using Biorender.com

Currently, pharmacological therapies for NAFLD are recommended for individuals who do not achieve the expected weight loss and for those with stage 2 or greater NASH-induced fibrosis. Lifestyle changes may not be possible for all individuals with metabolic disorders for various reasons, including socio-economic constraints, limited food resources, and disabilities. Though lifestyle interventions like weight management, a healthy diet, and regular exercise have been shown to reduce symptoms and manage the disorder, they work best when the disease is diagnosed at an earlier stage. However, given that NASH has no specific symptoms and is grossly under-diagnosed, an option for treating the disorder at later stages may alleviate the disease burden. A specific, targeted pharmacological approach to treating a metabolic disorder would be feasible and may be a more efficient way to treat NAFLD when combined with lifestyle changes.

Reviewed by : Trang Nguyen and Samantha Rossano

Why do COVID-19 patients have trouble breathing?

The COVID-19 pandemic has resulted in over 145 million positive cases and 3.1 million deaths globally (32 million and 570,000 in the USA, respectively), as reported on April 26, 2021. Approximately 15% of patients infected with SARS-CoV-2 die from respiratory failure, making it the leading cause of death in COVID-19 patients.

A research group at Columbia University led by Dr. Benjamin Izar identified substantial alterations in cellular composition, transcriptional cell states, and cell-to-cell interactions in the lungs of COVID-19 patients. These findings were published in the prestigious journal Nature. The team performed single-nucleus RNA sequencing, a method for profiling gene expression in cells, on the lungs of 19 patients who died of COVID-19 and underwent rapid autopsy. The control group included seven patients who underwent lung resection or biopsy in the pre-COVID-19 era (Figure 1).

Figure 1: An overview of the study design wherein single-nucleus RNA sequencing was used to characterize the lungs of patients who died from COVID-19-related respiratory failure. A) mRNA, the genetic readout of gene expression, was extracted from the lung tissue. B) The mRNA sequences were read by a sequencing system. C) Gene expression of cells in the lungs of COVID-19 patient samples was compared with control samples. PMI: post-mortem interval. snRNA-seq: single-nucleus RNA sequencing. QC: quality control.

The lungs from individuals with COVID-19 were highly inflamed but had impaired T cell responses. The single-nucleus RNA sequencing showed significant differences in cell fractions between COVID-19 and control lungs, both globally and within the immune and non-immune compartments. There was a reduction in the epithelial cell compartment, the cells that cover the surfaces of organs and function as a protective barrier. There was also an increase in monocytes (i.e., white blood cells that are part of the innate immune system) and macrophages (i.e., cells involved in the detection, phagocytosis, and destruction of bacteria and other harmful organisms), and a decrease in fibroblasts (i.e., cells that contribute to the formation of connective tissue) and neuronal cells. These observations were independent of donor sex.

Changes in the monocyte/macrophage and epithelial cell populations were unique features of SARS-CoV-2 infection compared to other viral and bacterial causes of pneumonia. The reduction in the epithelial cell compartment was due to the loss of both alveolar type II and type I cells. Alveolar type II cells repopulate the epithelium after injury and provide important components of the innate immune system. In COVID-19 lungs, alveolar type II cells adopted an inflammation-associated transient progenitor cell state and failed to fully transition into alveolar type I cells, resulting in impaired lung regeneration.

Myeloid cells (i.e., monocytes, macrophages, and dendritic cells) represented a major cellular constituent of COVID-19 lungs and were more prevalent than in control lungs. The authors found that a receptor tyrosine kinase important for the coordinated clearance of dying and dead cells, and for subsequent anti-inflammatory regulation during tissue regeneration, was downregulated. These data suggest that myeloid cells are a major source of dysregulated inflammation in COVID-19.

The authors also found significantly more fibroblasts in COVID-19 lungs than in control lungs. The degree of fibrosis correlated with disease duration, indicating that lung fibrosis increases over time in COVID-19. 

The authors noted a limitation of the study: because they analyzed lung tissue from patients who died of COVID-19, they examined only a subset of potential disease phenotypes. Based on their observations, the rapid development of pulmonary fibrosis is likely to be relevant for patients who survive severe COVID-19. This atlas may inform our understanding of the long-term complications of COVID-19 survivors and provide an important resource for therapeutic development.

Read more about this article here: A molecular single-cell lung atlas of lethal COVID-19

Reviewed by: Molly Scott and Maaike Schilperoort

Making sense of COVID-induced loss of smell

The coronavirus SARS-CoV-2 has led to more than six million confirmed deaths worldwide to date throughout the course of the COVID-19 pandemic. While SARS-CoV-2 enters the body through the respiratory system into the lungs, it can also induce damage in other organs. For instance, the sense of smell, which is mediated by the olfactory sensory neurons in our nose along with our brain, is lost in some COVID patients. How this virus affects our ability to smell is a puzzling question, and one that has been investigated by a team led by Dr. Zazhytska in the Lomvardas lab at Columbia University. They have tirelessly worked on solving this puzzle throughout the COVID shutdown period, and their discoveries, which have recently been published in the journal Cell, have started to provide some key answers.

We can smell the scents around us because the olfactory receptors in our olfactory sensory neurons bind to odorant molecules, relay the information through signaling molecules, and eventually signal to our brain (Figure 1). Dr. Zazhytska and her colleagues found that SARS-CoV-2 was rarely detected in the olfactory sensory neurons themselves, indicating that the virus probably doesn’t gain access to our brain through these sensory neurons. In fact, the most commonly infected cells are the neighboring sustentacular cells (Figure 1b), which are important in maintaining the health of the layer of olfactory cells, including the neurons. If the sustentacular cells die, the sensory neurons can be exposed to a stressful environment without support. Thus, the shutdown of the olfactory system might be an indirect effect of SARS-CoV-2 infection.

Figure 1 The basic structure of the olfactory system.
(A) Signal transduction in olfactory sensory neurons. The cell membrane separates the interior of the cell (cell cytoplasm, bottom) from the outside environment (top).
(B) Anatomy of cells in the nose that are involved in smell perception.
(Figure was made using BioRender).

There are about four hundred olfactory receptor genes scattered across our genome, and each neuron only expresses one of them. This stringent setup is achieved by interactions between multiple chromosomes that bring all the dispersed olfactory receptor genes together and form a cluster in the nucleus of the neuron. This cluster arrangement of olfactory receptor genes allows the gene expression machinery to access and turn on only one receptor at a time. Remarkably, Dr. Zazhytska and her colleagues discovered that this organization is disrupted dramatically after SARS-CoV-2 infection in both hamsters and humans. Infected individuals also show reduced expression of not only receptor genes, but also key molecules that are involved in smell perception, likely as consequences of the disrupted organization.

Interestingly, when the team of scientists exposed uninfected hamsters to UV-treated serum from SARS-CoV-2 infected hamsters, which no longer contained infectious virus, they still observed the same disorganization of olfactory receptor genes in the animals. This observation suggests that it is not the virus itself but some other circulating molecule(s) that triggers the abnormal organization. Identifying these molecules may provide potential treatments for COVID-induced loss of smell, as well as for other diseases that can affect our olfaction, including early-onset Alzheimer’s disease.

Edited by: Sam Rossano, Eric Smith, James Lee, Trang Nguyen, Maaike Schilperoort

What’s in Your Water? Arsenic Contamination and Inequality.

Water is one of the most essential elements for life. Every living creature requires access to a water source, humans being no exception. Unfortunately, access to clean drinking water continues to be a challenge for many individuals across the globe. Systematic studies of water inequalities in the U.S. alone indicate increased contamination in areas that are often dismissed or underserved. Arsenic, a human carcinogen (cancer-causing substance) predominantly released into water as it flows through rock formations, has previously been measured at dangerous levels in U.S. water sources. This finding led the U.S. Environmental Protection Agency (EPA) to mandate in 2001 that arsenic contamination levels be below a maximum of 10 µg/L, resulting in enhanced water filtration and arsenic removal. However, whether this mandate was effective across all demographic areas remained unknown until Dr. Nigra, a previous postdoc and current assistant professor at Columbia’s Mailman School of Public Health, and colleagues took on the challenge of finding out.

Through extremely diligent research, Dr. Nigra and colleagues examined arsenic exposure in community water systems across the U.S. to identify whether certain populations are exposed to arsenic levels above the maximum mandated by the EPA. They examined arsenic exposure levels by gathering data from the EPA’s public water database, which monitors public water for contaminants. They analyzed data on water contaminants across 46 states, Washington DC, the Navajo Nation, and American Indian tribes from the periods 2006-2008 and 2009-2011, looking at overall arsenic concentrations across the different regions (Figure). They also separated the data into concentrations across different subgroups of individuals.

Overall, Dr. Nigra and colleagues identified a 10% reduction in water arsenic exposure. They found a reduction in arsenic concentrations in the New England, Eastern Midwest, and Southwest regions of the U.S. over the six-year period. They also found reductions in subgroups that fit the following descriptions: most rural mid-socioeconomic-status (SES), semi-urban high-SES, and rural high-SES communities. However, there were still communities with arsenic levels that exceeded the maximum mandated by the EPA (Figure). These were predominantly Hispanic communities located in the Southwestern U.S. Furthermore, there was not enough data to identify whether there was a significant reduction in arsenic levels in tribal community water sources. Therefore, while there was an overall reduction in arsenic levels, there is still room for improvement. These Hispanic communities in the Southwestern U.S. remain at elevated risk for cancer due to increased exposure to carcinogens. To combat this exposure, more financial and technical resources, such as additional arsenic treatment systems, are necessary to reduce arsenic levels. Moreover, it is very possible that the under-reported arsenic levels in tribal communities could be putting those individuals at increased risk. Dr. Nigra and colleagues have investigated an extremely impactful environmental factor, and now, with their research, we are all a bit more aware of what’s in our water.

Figure: Maps of counties in compliance with the EPA’s maximum arsenic concentration cutoff of 10 μg/L (top) and the average water arsenic concentrations across the six-year period (bottom). Top map: Low/Low: less than 10 μg/L in both periods; High/Low: greater than 10 μg/L in 2006–2008 but less than 10 μg/L in 2009–2011; Low/High: less than 10 μg/L in 2006–2008 but greater than 10 μg/L in 2009–2011; High/High: greater than 10 μg/L in both periods. Figure was adapted from Figure 3 and Figure 4, Nigra et al., 2020.
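
The four compliance categories in the top map follow a simple two-period threshold rule. A minimal sketch of that rule (the 10 µg/L cutoff and the Low/High labels come from the figure; the function itself is illustrative):

```python
EPA_LIMIT = 10.0  # µg/L, the EPA maximum contaminant level for arsenic

def compliance_category(avg_2006_2008: float, avg_2009_2011: float) -> str:
    """Classify a county by whether its average arsenic concentration
    exceeded the EPA limit in each three-year period."""
    first = "High" if avg_2006_2008 > EPA_LIMIT else "Low"
    second = "High" if avg_2009_2011 > EPA_LIMIT else "Low"
    return f"{first}/{second}"

# A county that exceeded the limit in 2006-2008 but complied in 2009-2011:
print(compliance_category(12.3, 8.1))  # High/Low
```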

 

Dr. Anne Nigra is an assistant professor, and former postdoc, in Environmental Health Sciences at Columbia University’s Mailman School of Public Health.

Reviewed by: Molly Scott, Maaike Schilperoort

Better Work Environments Make Super Nurses Even More Super!

We might all be familiar with the term “burnout” – a feeling of emotional exhaustion, or of being cynical or ineffective at work or in relationships with colleagues or clients. The World Health Organization classifies burnout as an occupational, not personal, phenomenon. Studies suggest that burnout can result from poor work environments – not necessarily the content of the work itself, but the setting in which the work is completed and how it is managed or distributed. Accordingly, burnout can be prevented or resolved by improving work environments.

Because it is dependent on the environment, the rate of burnout may vary between job settings. For example, studies suggest that around 40% of the nursing workforce in the United States is burned out. That’s almost half of all nurses! Nurses, along with social workers (who also have a burnout rate of about 40%), are among the professions with the highest burnout rates in the country. Nurses occupy a unique position, as their actions and responsibilities at work directly impact the wellbeing of their patients. Because the lives of their patients may depend on it, it is important that nurses are attentive, motivated, and effective at their jobs. In other words, nurses need to avoid burnout to properly care for their patients.

To prevent or resolve burnout in nursing, work environments should allow appropriate autonomy – the ability of nurses to use their own discretion and rely on their own expertise to respond to patient care issues. Positive work environments for nurses also include good working relationships with physicians and hospital administration, as well as adequate staffing and resources. If an environment lacks these positive factors, nurse burnout will likely be prevalent in that clinical setting. Moreover, the combination of a poor work environment and burned-out nurses is associated with lower-quality patient care and poorer patient outcomes.

A recent study by Columbia postdoc Dr. Amelia Schlak explored how nurse burnout relates to patient care, with the expectation that more nurse burnout would correspond with poorer patient outcomes. Additionally, the researchers investigated how the nurse work environment affects the relationship between nurse burnout and patient outcomes, expecting burnout to have less of an effect on outcomes in better work environments.

In order to investigate these relationships, Dr. Schlak and colleagues measured burnout in over 20,000 nurses across four states (CA, PA, FL, and NJ) in 2015–2016 using the emotional exhaustion subscale of the Maslach Burnout Inventory, which quantifies burnout on a scale from 0 to 54, where higher scores correspond to more burnout. On average, the nurse burnout score in the study was 21/54. They also measured work environment using the Practice Environment Scale of the Nursing Work Index, a survey completed by the same nurses that accounts for environmental aspects like staffing, access to resources, and nurse–physician relations. The researchers ranked the average hospital environment scores into categories of “poor” (bottom 25%), “mixed” (middle 50%), and “good” (top 25%) environments. They found that the degree of nurse burnout was skewed across these categories: most (60%) nurses working in good environments ranked among the least burned out, while more than 50% of nurses working in poor environments ranked among the most burned out. So, better work environments typically mean less burned-out, more productive nurses!
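To make the ranking concrete, the quartile-based bucketing described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the study’s actual analysis: the hospital names and scores are made up, and the real study used averaged survey scores per hospital.

```python
# Illustrative sketch (not the study's code): bucket hospitals into
# "poor" (bottom 25%), "mixed" (middle 50%), and "good" (top 25%)
# work environments based on their average environment scores.

def categorize_environments(scores):
    """Map each hospital's mean environment score to a quartile-based category.

    scores: dict of hospital name -> average work-environment score.
    """
    ranked = sorted(scores.values())
    n = len(ranked)
    q1 = ranked[n // 4]          # approximate 25th-percentile cutoff
    q3 = ranked[(3 * n) // 4]    # approximate 75th-percentile cutoff
    return {
        hospital: "poor" if s < q1 else "good" if s >= q3 else "mixed"
        for hospital, s in scores.items()
    }

# Hypothetical example: eight hospitals scored from 0 (worst) to 7 (best).
example = {f"Hospital {i}": i for i in range(8)}
print(categorize_environments(example))
```

With these made-up scores, the two lowest-scoring hospitals land in “poor”, the two highest in “good”, and the middle four in “mixed” – mirroring the 25/50/25 split used in the study.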

The ultimate priority in healthcare is, of course, the patient! To see how the environment and nurse burnout affect patients, the researchers also collected patient outcome measurements for each hospital: (1) patient mortality; (2) failure to rescue, or in-hospital mortality after an adverse event caused by medical treatment; and (3) length of stay, where only stays shorter than 30 days were considered. The authors found that greater nurse burnout was associated with higher patient mortality, an increased rate of failure to rescue, and longer patient stays. Nurses who are not burned out – who are energized and effective – tended to have patients with better outcomes.

The authors also explored how the nurse work environment affects the relationship between nurse burnout and these patient outcomes. Comparing hospitals with poor vs. mixed work environments, and mixed vs. good environments, each step up in environment quality was associated with fewer burned-out nurses, a 14% lower 30-day in-hospital mortality rate, a 12% lower failure-to-rescue rate, and a 4% shorter length of stay. This means that by simply improving the work environment (e.g., improving employee relations or providing better resources), hospitals can greatly improve nurse burnout and patient outcomes! This relationship is shown in Figure 1 below.

Figure 1: The clinical work environment has an effect on the level of burnout in nurses. Nurse burnout, in turn, has an effect on patient outcomes, where higher levels of burnout result in poorer patient outcomes and lower levels of burnout result in better patient outcomes. Additionally, the quality of the clinical work environment can itself impact patient outcomes, with better outcomes associated with better hospital environments and poorer outcomes with poorer hospital environments. Created with BioRender.com

Though this study was based on data from 2015–2016, nurses and other healthcare workers have only become more burned out in the face of the COVID-19 pandemic, intensified by overwhelming demand, the pain of losing patients, and the risk of infection they take every time they go to work. In light of this, hospital management and administration should proactively address healthcare worker burnout by ensuring that the needs of their healthcare workers are met. This includes, but is not limited to, allowing nurses autonomy over their practice, ensuring adequate staffing to avoid overwork and long shifts, encouraging positive relationships among nurses, physicians, and administrative staff, and providing the resources nurses need to successfully fulfill their responsibilities.

Also, this past week (May 6th – May 12th, 2022) was Nurses Appreciation Week. Thank you to the Super Nurses for the hard work that you do, oftentimes under relentless and stressful circumstances! You truly are Healthcare Heroes! I hope your hospitals, clinics, or other places of work are prioritizing your work environments, to help reduce the burnout you feel from this pandemic. If they aren’t, send them this article 🙂 

Edited by: Trang Nguyen, Vikas Malik, Maaike Schilperoort

What can we do to enter a new era in antimalarial research? A promising story from genetics to genomics.

Plasmodium falciparum is a unicellular organism known as one of the deadliest parasites in humans. The parasite is transmitted through the bites of female Anopheles mosquitoes and causes the most dangerous form of malaria, falciparum malaria. Each year, over 200 million cases of malaria result in hundreds of thousands of deaths. Moreover, P. falciparum has also been implicated in the development of blood cancer. Therefore, the study of malaria-causing Plasmodium species and the development of antimalarial treatments constitute a high-impact domain of biological research.

Antimalarial drugs have been the pillar of malaria control and prophylaxis. Treatments combine fast-acting compounds that reduce parasite biomass with longer-lasting drugs that eliminate surviving parasites. These strategies have led to significant reductions in malaria-associated deaths. However, Plasmodium is constantly developing resistance to existing treatments. The situation is further complicated by the spread of mosquitoes resistant to insecticides. Additionally, asymptomatic chronic infections serve as parasite reservoirs, and the single candidate vaccine has limited efficacy. The fight against malaria thus requires sustained effort, and a detailed understanding of P. falciparum biology remains crucial to identify and develop novel, efficient therapeutic targets.

Recent progress in genomics and molecular genetics has empowered novel approaches to study the parasite’s gene functions. Genome-based analyses, genome editing, and genetic systems that allow temporal regulation of gene and protein expression have proven crucial in identifying P. falciparum genes involved in antimalarial resistance. In their recent review, Columbia postdoc John Okombo and colleagues summarize the contributions and limitations of some of these approaches in advancing our understanding of Plasmodium biology and in characterizing regions of its genome associated with antimalarial drug responses.

P. falciparum requires two hosts for its development and transmission: humans and Anopheles mosquito species. The parasite life cycle involves numerous developmental stages. The stages of so-called asexual development take place in humans, while mosquitoes harbor the stages associated with sexual reproduction (Figure 1). Humans are infected by a stage called “sporozoites” upon the bite of an infected mosquito. Sporozoites enter the bloodstream and migrate to the liver, where they invade liver cells (hepatocytes), multiply, and form “hepatic schizonts”. The schizonts then rupture and release “merozoites” into the circulation, which invade red blood cells (RBCs). The clinical symptoms of malaria, such as fever, anemia, and neurological disorders, are produced during this blood stage. Within RBCs, the parasites develop into “trophozoites”, which have two alternative paths of development: they can either form “blood-stage schizonts” that produce more RBC-infecting merozoites, or differentiate into the sexual forms, male and female “gametocytes”. Finally, gametocytes are ingested by new mosquitoes during a blood meal, in which they undergo sexual reproduction, forming a “zygote”. The zygotes then pass through several additional stages until maturing into a new generation of sporozoites, closing the parasite life cycle (Figure 1).

Figure 1: Life cycle of Plasmodium falciparum. Image created with BioRender.com

This complexity of the Plasmodium life cycle presents opportunities to generate drugs acting on various stages of its development. The review of Okombo and colleagues underlines how new genomic data have enabled the identification of genes contributing to various parasite traits, particularly those of antimalarial drug responses. The authors recap genetic- and genomic-based approaches that have set the stage for current investigations into antimalarial drug resistance and Plasmodium biology and have thus led to expanding and improving the available antimalarial arsenal.

For instance, in “genome-wide association studies” (GWAS), parasites isolated from infected patients are profiled for resistance against antimalarial drugs of interest, and their genomes are analyzed to identify genetic variants associated with resistance. In “genetic crosses and linkage analyses”, gametocytes from genetically distinct parental parasites are fed to mosquitoes, in which they undergo sexual reproduction. The resulting progeny are inoculated into humanized liver-chimeric mouse models that support P. falciparum infection and development, and are later analyzed to identify DNA changes associated with resistance and drug response variation. In “in vitro evolution and whole-genome analysis”, antiplasmodial compounds are used to pressure P. falciparum to evolve into drug-resistant parasites, whose genomes are then analyzed to identify the genetic determinants that may underlie resistance. “Phenotype-driven functional Plasmodium mutant screens” are based on random genome-wide mutagenesis and selection of mutants that are either resistant to drugs or altered in development, pathogenicity, or virulence; this approach has also led to the discovery of novel important genes. In addition, the review covers a number of cutting-edge genome editing methods used to study antimalarial resistance and modes of action. Experiments using genetically engineered parasites constitute a critical step in uncovering the functional roles of the identified genes. Finally, the reader can also find an overview of Plasmodium “regulatable expression strategies”, which are particularly valuable for studying non-dispensable (essential) genes. Additional information on other intriguing and powerful techniques is described in the original paper.
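The core idea shared by these association approaches can be illustrated with a toy sketch: compare how often each genetic variant appears in resistant versus sensitive parasites. This is not a real GWAS pipeline – actual studies work on sequenced genomes and apply proper statistical tests with multiple-testing correction – and the variant labels below are made-up placeholders.

```python
# Toy sketch of the logic behind resistance-association analysis: rank
# variants by how much more frequent they are in drug-resistant isolates
# than in drug-sensitive ones. Real GWAS use formal statistical tests.

def rank_variants(resistant, sensitive):
    """Each isolate is a set of variant labels; return variants sorted by
    absolute frequency difference (resistant minus sensitive)."""
    variants = set().union(*resistant, *sensitive)
    diffs = {}
    for v in variants:
        freq_res = sum(v in iso for iso in resistant) / len(resistant)
        freq_sen = sum(v in iso for iso in sensitive) / len(sensitive)
        diffs[v] = freq_res - freq_sen
    return sorted(diffs.items(), key=lambda kv: -abs(kv[1]))

# Hypothetical isolates: "var_A" appears only in resistant parasites,
# so it tops the ranking as the strongest resistance candidate.
resistant = [{"var_A", "var_B"}, {"var_A"}, {"var_A", "var_C"}]
sensitive = [{"var_B"}, {"var_C"}, {"var_B", "var_C"}]
print(rank_variants(resistant, sensitive))
```

In a real study, a top-ranked variant would then be validated experimentally – for example, by engineering it into a drug-sensitive parasite line, as the genome editing methods reviewed above allow.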

Article reviewed by: Trang Nguyen, Samantha Rossano, Maaike Schilperoort
