Alcohol Use Disorder – are we making the right diagnosis?

Do you and your friends enjoy the occasional cocktail or two over the weekend? Maybe you know someone who enjoys the more-than-occasional cocktail. But at what point do our drinking habits significantly affect our health? Recent studies suggest that 6% of adults in the United States report heavy or high-risk consumption of alcohol, defined as an average of more than 7 drinks/week for women and more than 14 drinks/week for men. This high-risk consumption may lead to Alcohol Use Disorder (AUD) if it is repeated for one year or more. AUD is associated with a number of medical and psychiatric problems, and can even increase the risk of death in patients who have cancer or cardiovascular disease.

To diagnose AUD, medical and mental health professionals use the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which assesses 11 criteria, including alcohol-related cravings, strains on relationships caused by alcohol use, feeling unable to cut back or stop drinking, and dangerous or risky behavior while under the influence of alcohol. Unlike previous versions of the DSM, the DSM-5 divides AUD diagnoses by severity: people who experience 0 or 1 of the diagnostic criteria do not have AUD (no-AUD), those with 2-3 criteria have mild AUD, those with 4-5 criteria have moderate AUD, and those with 6 or more have severe AUD. However, it is not well understood whether other factors, such as the extent of alcohol use or the degree of cravings or impairments, can help distinguish mild, moderate, and severe AUD diagnoses.
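
As a rough illustration of how the DSM-5's count-based severity levels work, here is a minimal sketch; the function name and structure are hypothetical and not from the study, but the cutoffs follow the criteria counts described above.

```python
def aud_severity(criteria_met: int) -> str:
    """Map the number of DSM-5 AUD criteria met (0-11) to a severity level."""
    if not 0 <= criteria_met <= 11:
        raise ValueError("DSM-5 defines 11 AUD criteria, so the count must be 0-11")
    if criteria_met <= 1:
        return "no AUD"
    elif criteria_met <= 3:
        return "mild AUD"
    elif criteria_met <= 5:
        return "moderate AUD"
    else:
        return "severe AUD"

# Example: a person endorsing 4 of the 11 criteria would be classified as moderate AUD
print(aud_severity(4))  # moderate AUD
```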

Last year, Dr. Zachary L. Mannes, a postdoc in the Department of Epidemiology at Columbia University Mailman School of Public Health and New York State Psychiatric Institute, and colleagues published a study in which they explored potential relationships between the severity of AUD (no-AUD, mild, moderate, or severe, based on the DSM-5) and self-reported measures of other factors, or "external validators", such as levels of alcohol craving, functional impairment, and psychiatric conditions. To do this, they collected AUD diagnoses as well as measures of external validators in 588 participants. These validators included alcohol-specific validators (i.e., Craving, Problematic Use, Harmful Use, and Binge Drinking Frequency), psychiatric validators (i.e., Major Depressive Disorder/MDD and posttraumatic stress disorder/PTSD), and functioning validators (social impairments; physical and mental impairments).

Dr. Mannes and colleagues reported that, in this cohort, participants with alcohol-specific validators had a significantly greater likelihood of a diagnosis of mild, moderate, or severe AUD than of no-AUD. Participants with psychiatric validators like MDD and PTSD had a significantly greater likelihood of a severe AUD diagnosis than of no-AUD; this relationship was not seen for either mild or moderate AUD. Similarly, participants with social, physical, and mental impairments had a greater likelihood of having severe AUD than no-AUD, but this was not seen for mild or moderate AUD. When looking only within participants with an AUD diagnosis (i.e., excluding no-AUD), those with many alcohol-specific, psychiatric, and functional validators were more likely to have severe AUD than either mild or moderate AUD.
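
To make the idea of "greater likelihood of a diagnosis" concrete, a study like this can be analyzed with a multinomial logistic regression of diagnosis category on the external validators. The sketch below is a hedged illustration only: the data are simulated, the variable names are invented, and the authors' exact modeling choices may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 588  # same number of participants as the study; the data themselves are simulated

# Hypothetical data: binary external validators and an AUD severity category
df = pd.DataFrame({
    "craving":         rng.integers(0, 2, n),
    "binge_frequency": rng.integers(0, 2, n),
    "mdd":             rng.integers(0, 2, n),
    "severity":        rng.integers(0, 4, n),  # 0=no AUD, 1=mild, 2=moderate, 3=severe
})

X = sm.add_constant(df[["craving", "binge_frequency", "mdd"]])
model = sm.MNLogit(df["severity"], X)   # multinomial logit with no-AUD (0) as reference
result = model.fit(disp=False)
print(np.exp(result.params))            # odds ratios for each severity level vs. no-AUD
```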

Overall, the results of this study support the structure of the DSM-5 diagnosis for AUD: participants with mild and moderate AUD had significant associations with alcohol-specific validators, while those with severe AUD had significant associations with alcohol-specific, psychiatric, and functional validators. In other words, people with severe AUD had a higher likelihood of symptoms that affected other aspects of their lives, including impairments in social functioning and the presence of psychiatric conditions such as MDD and PTSD. This study emphasizes the importance of grading AUD by severity, as the current DSM-5 does, as opposed to the binary yes/no diagnosis that older versions of the DSM used. It also furthers our understanding of optimal ways to diagnose AUD and may clarify treatment implications for different AUD severities. The study published by Dr. Mannes and colleagues supports and advances the field of AUD research, helping to better characterize the symptoms, comorbidities, and diagnosis of AUD so that medical professionals can better assist those who are struggling with the disorder.

Edited by: Trang Nguyen, Maaike Schilperoort

Metastatic cancer cells have unstable DNA which helps them to evade the body’s immune system

Melanoma brain metastasis (MBM) frequently occurs in patients with late-stage melanoma (skin cancer); melanoma is the third leading cause of brain metastases after lung and breast cancers. Cancer cells break away from the primary tumor and travel to the brain through the bloodstream. Despite significant therapeutic advances in the treatment of metastatic cancers, MBM remains challenging to treat, in part because of the blood-brain barrier. Patients with MBM may develop a variety of symptoms similar to those of primary brain tumors, such as headache, difficulty walking, or seizures. To comprehensively study the cells inside melanoma brain metastases, Jana Biermann, a postdoc in Dr. Benjamin Izar's lab at Columbia University, performed single-cell and single-nucleus RNA sequencing and CT scans of 22 treatment-naive MBM and 10 extracranial melanoma metastases, generating a dataset that could spur the development of a new generation of therapies (Figure 1).

Figure 1: A comprehensive study of melanoma brain metastases and extracranial melanoma metastases using single-cell genetic analyses of frozen tissue samples. snRNA-seq: single-nucleus RNA sequencing; TCR-seq: T cell receptor sequencing. Image created with BioRender, based on Figure 1A of the original article published in Cell, titled "Dissecting the treatment-naive ecosystem of human melanoma brain metastasis".

The authors analyzed the genes expressed in 17 melanoma brain metastases and 10 extracranial melanoma metastases. The data revealed unstable DNA in the melanoma brain metastases compared with the extracranial melanoma metastases. This unstable DNA triggers signaling pathways that enable the tumor cells to spread around the body and to suppress the body's natural immune response that normally fights off tumor cells. The researchers also found that the relocated melanoma cells adopt a neuronal-like state that might help them adapt and survive after they migrate to the brain. Furthermore, by using CT scans of multiple slices of the tumors, the researchers created three-dimensional images of the tumors and uncovered heterogeneity in metabolic and immune pathways within and between tumors.

The authors also found that the cancer cells in the brain significantly expressed several genes that are known to promote cancer progression, such as MET and PI3K, while the extracranial melanoma metastases strongly expressed genes related to epithelial cells, the cells that cover the inside and outside surfaces of the body, such as the skin and blood vessels. Understanding these pathways may help identify new therapeutic targets.

A limitation of the study is that the authors did not compare melanoma brain metastasis and extracranial melanoma metastases within the same patients, which could have introduced variability in their dataset. Nevertheless, the atlas that they built provides a foundation for further mechanistic studies on how different perturbations could influence brain metastasis ecosystems.

Reviewed by: Pei-Yin Shih, Sam Rossano, Maaike Schilperoort

A pharmacological approach to lifestyle-related metabolic disorders – does it make anti-sense?

Non-alcoholic fatty liver disease (NAFLD) is a group of liver diseases that includes non-alcoholic fatty liver (NAFL) and non-alcoholic steatohepatitis (NASH). It is a chronic, progressive disease of the liver not caused by alcohol. It starts with fat accumulation, progresses to inflammation, swelling and liver enlargement (NASH), fibrosis, cell death and replacement of dead cells by scar tissue (cirrhosis), and can finally result in cancer (hepatocellular carcinoma). A figure describing this progression has been published by my colleague Maaike Schilperoort in an article describing emerging therapeutic strategies for fatty liver-related cancer.

NASH is the most severe form of NAFLD before it progresses into the irreversible stages of cirrhosis and cancer. It remains under-diagnosed because it is asymptomatic or accompanied only by non-specific symptoms. Individuals with hypertension, high cholesterol, overweight or obesity, diabetes, or insulin resistance are at greater risk of developing NASH. It is largely a lifestyle-associated metabolic disorder made worse in individuals with obesity and type 2 diabetes. Current treatment modalities focus on lifestyle interventions and management of co-existing conditions. A lack of specific and targeted pharmacological recommendations with proven efficacy complicates NAFLD management.

Junjie Yu and colleagues conducted a comprehensive study using data from a clinical trial of patients with NASH together with mouse models of NASH. They identified a gene, Jagged 1 (JAG1), that was increased in patients who had NASH and fibrosis. They used this key finding to conduct experiments in mouse models to further study whether JAG1 reduces or worsens NASH and liver fibrosis. They also used a cell-targeted strategy to test potential therapeutic interventions for NASH.

To mimic human NASH, they fed mice a NASH-inducing diet rich in saturated fat, sucrose, and cholesterol, along with fructose-containing drinking water. The mice developed liver steatosis, inflammation, fibrosis, weight gain, and insulin resistance, which are features seen in patients with NASH. JAG1 was increased in the liver, and this correlated with an increase in fibrotic markers in mice fed the NASH-inducing diet. An interesting observation that directed the rest of the study was that JAG1 was increased specifically in the liver cell type called hepatocytes. The researchers used a virus-mediated gene delivery method to increase or decrease Jag1 in the hepatocytes of mice fed the NASH-inducing diet. Increasing Jag1 worsened the fibrosis induced by the NASH diet, while decreasing Jag1 protected the liver from developing fibrosis. Based on these insights, they used a technology called antisense oligonucleotides (ASOs) to block Jag1 expression in mice fed the NASH-inducing diet. They found that mice treated with Jag1-ASO had reduced expression of JAG1 at the gene and protein levels, along with a reduction in inflammatory and fibrotic markers. However, because this method would target all cell types, a hepatocyte-specific Jag1 inhibitor was developed. Mice fed the NASH-inducing diet and treated with the hepatocyte-specific Jag1 inhibitor showed decreased Jag1 in the liver as well as reduced liver fibrosis.

This is a very interesting approach that could lead to specific and targeted pharmacological treatment of NASH. ASOs are short, single-stranded nucleotide sequences that can be designed to target specific genes of interest (like Jag1 in this case) in cells. They alter protein expression by interfering with the translation of RNA into protein (Fig 1). Because they are made to target specific genes and cells, they have a higher chance of success. Currently, there are 15 FDA-approved ASO-based drugs for conditions ranging from neurodegenerative disorders to cancer. The main limitations of ASOs are enzymatic degradation of the oligonucleotides and their removal from the body by the kidneys. Further research into improving ASOs to optimize delivery and safety could lead to the development of therapies for disorders that require targeted pharmacological interventions.

Figure 1. A. A schematic of regular transcription and translation processes involved in protein synthesis. B. ASO mediated disruption of protein synthesis. Figure created using Biorender.com

Currently, pharmacological therapies for NAFLD are recommended for individuals who do not achieve the expected weight loss and for those with stage 2 or greater NASH-induced fibrosis. Lifestyle changes may not be possible for all individuals with metabolic disorders for various reasons, including socio-economic constraints, limited food resources, and disabilities. Though lifestyle interventions like weight management, a healthy diet, and regular exercise have been shown to reduce symptoms and help manage the disorder, they work best when the disease is diagnosed at earlier stages. However, given that NASH does not have specific symptoms and is grossly under-diagnosed, an option for treating the disorder at later stages may alleviate the disease burden. Specific, targeted pharmacological approaches to treating a metabolic disorder would be feasible and may be a more efficient way to treat NAFLD when combined with lifestyle changes.

Reviewed by: Trang Nguyen and Samantha Rossano


Why do COVID-19 patients have trouble breathing?

The COVID-19 pandemic has resulted in over 145 million positive cases and 3.1 million deaths globally (32 million and 570,000 in the USA, respectively), as reported on April 26, 2021. Approximately 15% of patients infected with SARS-CoV-2 die from respiratory failure, making it the leading cause of death in COVID-19 patients.

A research group at Columbia University led by Dr. Benjamin Izar identified substantial alterations in cellular composition, transcriptional cell states, and cell-to-cell interactions in the lungs of COVID-19 patients. These findings were published in the prestigious journal Nature. The team performed single-nucleus RNA sequencing, a method for profiling gene expression in individual cells, on the lungs of 19 patients who died of COVID-19 and underwent rapid autopsy. The control group included seven patients who underwent lung resection or biopsy in the pre-COVID-19 era (Figure 1).

Figure 1: An overview of the study design, wherein single-nucleus RNA sequencing was used to characterize the lungs of patients who died from COVID-19-related respiratory failure. A) mRNA, the molecule that carries a gene's instructions, was extracted from the lung tissue. B) The mRNA sequences were read and processed computationally. C) Gene expression of cells in lung samples from COVID-19 patients was compared with control samples. PMI: post-mortem interval. snRNA-seq: single-nucleus RNA sequencing. QC: quality control.

The lungs from individuals with COVID-19 were highly inflamed but had impaired T cell responses. The single-nucleus RNA sequencing showed significant differences in cell fractions between COVID-19 and control lungs, both globally and within the immune and non-immune compartments. There was a reduction in the epithelial cell compartment, the cells that line the surfaces of organs in the body and function as a protective barrier. There was also an increase in monocytes (i.e., white blood cells that help detect pathogens and coordinate immune responses) and macrophages (i.e., cells involved in the detection, phagocytosis, and destruction of bacteria and other harmful organisms), and a decrease in fibroblasts (i.e., cells that contribute to the formation of connective tissue) and neuronal cells. These observations were independent of donor sex.
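
To make the idea of comparing "cell fractions" concrete, here is a minimal, hypothetical sketch of how per-donor cell-type proportions could be compared between groups. It is not the authors' actual pipeline; the toy data, column names, and choice of test are assumptions for illustration only.

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical table: one row per cell, with its donor, group, and annotated cell type
cells = pd.DataFrame({
    "donor": ["C19_1"] * 4 + ["C19_2"] * 4 + ["CTRL_1"] * 4 + ["CTRL_2"] * 4,
    "group": ["COVID-19"] * 8 + ["control"] * 8,
    "cell_type": ["macrophage", "macrophage", "macrophage", "epithelial",
                  "macrophage", "macrophage", "fibroblast", "epithelial",
                  "epithelial", "epithelial", "macrophage", "fibroblast",
                  "epithelial", "epithelial", "epithelial", "macrophage"],
})

# Fraction of each cell type within each donor
fractions = (cells.groupby(["group", "donor"])["cell_type"]
                  .value_counts(normalize=True)
                  .rename("fraction")
                  .reset_index())

# Compare the epithelial fraction between COVID-19 and control donors
epi = fractions[fractions["cell_type"] == "epithelial"]
covid = epi.loc[epi["group"] == "COVID-19", "fraction"]
ctrl = epi.loc[epi["group"] == "control", "fraction"]
stat, p = mannwhitneyu(covid, ctrl)
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.3f}")
```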

Changes in the monocyte/macrophage and epithelial cell compartments were unique features of SARS-CoV-2 infection compared to other viral and bacterial causes of pneumonia. The reduction in the epithelial cell compartment was due to the loss of both alveolar type II and type I cells. Alveolar type II cells repopulate the epithelium after injury and provide important components of the innate immune system. In COVID-19 lungs, alveolar type II cells adopted an inflammation-associated transient progenitor cell state and failed to fully transition into alveolar type I cells, resulting in impaired lung regeneration.

Myeloid cells (i.e., monocytes, macrophages, and dendritic cells) represented a major cellular constituent of COVID-19 lungs and were more prevalent than in control lungs. The authors found that a receptor tyrosine kinase that is important for the coordinated clearance of dying and dead cells, and for subsequent anti-inflammatory regulation during tissue regeneration, was downregulated. These data suggest that myeloid cells are a major source of dysregulated inflammation in COVID-19.

The authors also found significantly more fibroblasts in COVID-19 lungs than in control lungs. The degree of fibrosis correlated with disease duration, indicating that lung fibrosis increases over time in COVID-19. 

The authors mention a limitation of the study: because they analyzed lung tissue from patients who died of COVID-19, they only examined a subset of potential disease phenotypes. Based on the authors' observations, the rapid development of pulmonary fibrosis is likely to be relevant for patients who survive severe COVID-19. This atlas may inform our understanding of long-term complications in COVID-19 survivors and provides an important resource for therapeutic development.

Read more about this article here: A molecular single-cell lung atlas of lethal COVID-19

Reviewed by: Molly Scott and Maaike Schilperoort

Making sense of COVID-induced loss of smell

The coronavirus SARS-CoV-2 has led to more than six million confirmed deaths worldwide to date throughout the course of the COVID-19 pandemic. While SARS-CoV-2 enters the body through the respiratory system into the lungs, it can also induce damage in other organs. For instance, the sense of smell, which is mediated by the olfactory sensory neurons in our nose along with our brain, is lost in some COVID patients. How this virus affects our ability to smell is a puzzling question, and one that has been investigated by a team led by Dr. Zazhytska in the Lomvardas lab at Columbia University. They have tirelessly worked on solving this puzzle throughout the COVID shutdown period, and their discoveries, which have recently been published in the journal Cell, have started to provide some key answers.

We can smell the scents around us because the olfactory receptors in our olfactory sensory neurons bind to odorant molecules, relay the information through signaling molecules, and eventually signal to our brain (Figure 1). Dr. Zazhytska and her colleagues found that SARS-CoV-2 was rarely detected in the olfactory sensory neurons themselves, indicating that the virus probably doesn’t gain access to our brain through these sensory neurons. In fact, the most commonly infected cells are the neighboring sustentacular cells (Figure 1b), which are important in maintaining the health of the layer of olfactory cells, including the neurons. If the sustentacular cells die, the sensory neurons can be exposed to a stressful environment without support. Thus, the shutdown of the olfactory system might be an indirect effect of SARS-CoV-2 infection.

Figure 1 The basic structure of the olfactory system.
(A) Signal transduction in olfactory sensory neurons. The cell membrane separates the interior of the cell (cell cytoplasm, bottom) from the outside environment (top).
(B) Anatomy of cells in the nose that are involved in smell perception.
(Figure was made using BioRender).

There are about four hundred olfactory receptor genes scattered across our genome, and each neuron only expresses one of them. This stringent setup is achieved by interactions between multiple chromosomes that bring all the dispersed olfactory receptor genes together and form a cluster in the nucleus of the neuron. This cluster arrangement of olfactory receptor genes allows the gene expression machinery to access and turn on only one receptor at a time. Remarkably, Dr. Zazhytska and her colleagues discovered that this organization is disrupted dramatically after SARS-CoV-2 infection in both hamsters and humans. Infected individuals also show reduced expression of not only receptor genes, but also key molecules that are involved in smell perception, likely as consequences of the disrupted organization.

Interestingly, when the team of scientists exposed uninfected hamsters to UV-treated serum from SARS-CoV-2-infected hamsters, which no longer contains infectious virus, they still observed the same disorganization of olfactory receptor genes in the animals. This observation suggests that it is not the virus itself but some other circulating molecule(s) that triggers the abnormal organization. Identifying these molecules may provide potential treatments for COVID-induced loss of smell, as well as for other diseases that affect our olfaction, including early-onset Alzheimer's disease.

Edited by: Sam Rossano, Eric Smith, James Lee, Trang Nguyen, Maaike Schilperoort

What’s in Your Water? Arsenic Contamination and Inequality.

Water is one of the most essential elements for life. Every living creature requires access to a water source, humans being no exception. Unfortunately, access to clean drinking water continues to be a challenge for many individuals across the globe. Systematic studies of water inequalities in the U.S. alone indicate increased contamination in areas that are often dismissed or underserved. Arsenic, a human carcinogen (cancer-causing substance) that is predominantly released into water flowing through rock formations, has previously been measured at dangerous levels in U.S. water sources. This finding led the U.S. Environmental Protection Agency (EPA) to mandate in 2001 that arsenic contamination levels must be below a maximum of 10 µg/L, resulting in enhanced water filtration and arsenic removal. However, whether this mandate was effective across all demographic areas remained unknown until Dr. Nigra, a previous postdoc and current assistant professor at Columbia's Mailman School of Public Health, and colleagues took on the challenge of finding out.

Through extremely diligent research, Dr. Nigra and colleagues examined arsenic exposure in community water systems across the U.S. to identify whether certain populations are exposed to arsenic levels above the maximum mandated by the EPA. They did so by gathering data from the EPA's public water database, which monitors public water for contaminants. They analyzed water contaminant data across 46 states, Washington DC, the Navajo Nation, and American Indian tribes from the periods 2006-2008 and 2009-2011 to determine overall arsenic concentrations across the different regions (Figure). They also separated the data into concentrations across different subgroups of individuals.

Overall, Dr. Nigra and colleagues identified a 10% reduction in water arsenic exposure. They found a reduction in arsenic concentrations in the New England, Eastern Midwest, and Southwest regions of the U.S. over the six-year period. They also found reductions in subgroups fitting the following descriptions: most rural mid-socioeconomic status (SES), semi-urban high-SES, and rural high-SES communities. However, there were still communities with arsenic levels that exceeded the maximum mandated by the EPA (Figure). These were predominantly Hispanic communities located in the Southwestern U.S. Furthermore, there was not enough data to identify whether there was a significant reduction in arsenic levels in tribal community water sources. Therefore, while there was an overall reduction in arsenic levels, there is still room for improvement. These Hispanic communities in the Southwestern U.S. are still at an elevated risk for cancer due to this increased exposure to carcinogens. To combat this, more financial and technical resources, such as additional arsenic treatment systems, are necessary to reduce arsenic levels. Moreover, it is very possible that the under-reported arsenic levels in tribal communities are putting those individuals at increased risk. Dr. Nigra and colleagues have investigated an extremely impactful environmental factor, and now, with their research, we are all a bit more aware of what's in our water.

Figure: Maps of counties in compliance with the EPA's maximum arsenic concentration cutoff of 10 µg/L (top) and the average water arsenic concentrations across the six-year period (bottom). Top map: Low/Low: less than 10 µg/L over the six years; High/Low: greater than 10 µg/L in 2006–2008 but less than 10 µg/L in 2009–2011; Low/High: less than 10 µg/L in 2006–2008 but greater than 10 µg/L in 2009–2011; High/High: greater than 10 µg/L in both periods. Figure adapted from Figure 3 and Figure 4, Nigra et al., 2020.
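
As an illustration of the categorization in the map above, the sketch below assigns hypothetical counties to the four compliance categories based on their average arsenic concentration in each period. The data, column names, and function are invented for illustration and are not taken from the study.

```python
import pandas as pd

EPA_LIMIT = 10.0  # EPA maximum contaminant level for arsenic, in µg/L

# Hypothetical county-level average arsenic concentrations (µg/L) for the two periods
counties = pd.DataFrame({
    "county":     ["A", "B", "C", "D"],
    "as_2006_08": [3.2, 14.5, 6.8, 12.1],
    "as_2009_11": [2.9, 7.3, 11.4, 13.0],
})

def compliance_category(early: float, late: float) -> str:
    """Classify a county using the Low/High scheme from the figure."""
    early_label = "Low" if early < EPA_LIMIT else "High"
    late_label = "Low" if late < EPA_LIMIT else "High"
    return f"{early_label}/{late_label}"

counties["category"] = [
    compliance_category(e, l)
    for e, l in zip(counties["as_2006_08"], counties["as_2009_11"])
]
print(counties)
# County A: Low/Low; B: High/Low (came into compliance);
# C: Low/High (fell out of compliance); D: High/High (still above the limit)
```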

 

Dr. Anne Nigra is a current assistant professor and previous postdoc in Environmental Health Sciences at Columbia University's Mailman School of Public Health.

Reviewed by: Molly Scott, Maaike Schilperoort

Better Work Environments Make Super Nurses Even More Super!

We might all be familiar with the term “burnout” – the feeling of emotional exhaustion or feeling cynical or ineffective with respect to productivity at work, or in relationships with colleagues or clients. The World Health Organization classifies burnout as an occupational, not personal, phenomenon. Studies suggest that burnout can result from poor work environments – not necessarily dependent on the content of the work itself, but instead the setting in which the work is completed and how the work is managed or distributed. Burnout can be prevented or resolved by improving work environments.

Because it is dependent on the environment, the rate of burnout may vary between different job settings. For example, studies suggest that around 40% of the nursing workforce in the United States is burned out. That's almost half of all nurses! Nurses, along with social workers who also have a burnout rate of about 40%, are among the professions with the highest burnout rates in the country. Nurses have a unique position, as their actions and responsibilities at work directly impact the wellbeing of their patients. Because the lives of their patients may depend on it, it is important that nurses are attentive, motivated, and effective while at their jobs. In other words, nurses should not be burned out if they are to properly care for their patients.

To prevent or resolve burnout in nursing, work environments should allow appropriate autonomy, or the ability for nurses to use their own discretion and rely on their own expertise to respond to patient care issues. Positive work environments for nurses also include good working relationships with physicians and hospital administration, and adequate staffing and resources. If an environment lacks these positive factors, nurse burnout will likely be prevalent in that clinical setting. Furthermore, the combination of a poor work environment and burned-out nurses is associated with lower quality of patient care and poorer patient outcomes.

A recent study by Columbia postdoc Dr. Amelia Schlak explored how nurse burnout is related to patient care, with the expectation that more nurse burnout would correspond with poorer patient outcomes. The researchers also investigated how the nurse work environment affects the relationship between nurse burnout and patient outcomes, expecting that nurse burnout would have less of an effect on patient outcomes in better work environments.

To investigate these relationships, Dr. Schlak and colleagues measured burnout in over 20,000 nurses across 4 states (CA, PA, FL, and NJ) between 2015 and 2016 using the emotional exhaustion subscale of the Maslach Burnout Inventory, which quantifies burnout on a scale from 0 to 54, where higher scores correspond to more burnout. On average, the nurse burnout score in the study was 21/54. They also measured the work environment using the Practice Environment Scale of the Nursing Work Index, a survey completed by the same nurses that accounts for environmental aspects like staffing, access to resources, and nurse-physician relations. The researchers ranked the average hospital environment scores into categories of "poor" (bottom 25%), "mixed" (middle 50%), and "good" (top 25%) environments. They found that the degree of nurse burnout was skewed across these hospital categories: most (60%) nurses working in good environments ranked among the lowest burnout levels, while more than 50% of nurses working in poor environments ranked among the most burned out. So, better work environments typically mean less burned-out and more productive nurses!
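
To illustrate how hospitals might be binned into "poor", "mixed", and "good" environments by quartiles of their average work-environment score, here is a minimal sketch with made-up numbers; the exact scoring and grouping procedure used in the study may differ.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Hypothetical hospital-level data: mean environment score and mean burnout (0-54 scale)
hospitals = pd.DataFrame({
    "environment_score": rng.normal(2.8, 0.3, 200),  # higher = better work environment
    "mean_burnout":      rng.normal(21, 5, 200),      # emotional exhaustion subscale
})

# Bottom 25% of scores = "poor", middle 50% = "mixed", top 25% = "good"
hospitals["environment"] = pd.qcut(
    hospitals["environment_score"], [0, 0.25, 0.75, 1.0],
    labels=["poor", "mixed", "good"],
)

# Average burnout per environment category (with random data these are similar;
# in the study, better environments had markedly lower burnout)
print(hospitals.groupby("environment", observed=True)["mean_burnout"].mean().round(1))
```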

The ultimate priority in healthcare work is, of course, the patient! To see how the environment and nurse burnout affects patients, the researchers also collected patient outcome measurements for each hospital such as (1) patient mortality, (2) failure to rescue, or in-hospital mortality after experiencing an adverse event caused by medical treatment, and (3) length of stay, where only patients with length of stay less than 30 days were considered. The authors found that greater nurse burnout was associated with a higher incidence of patient mortality, an increased rate of failure to rescue and a longer patient stay. Nurses who are not burned out, who are energized and effective, tended to have patients that had better outcomes.

The authors also explored how the nurse work environment affects the relationship between nurse burnout and the patient outcome measurements. Moving from poor to mixed work environments, and from mixed to good environments, the frequency of burned-out nurses decreased, the 30-day in-hospital mortality rate was 14% lower, the failure-to-rescue rate was 12% lower, and the length of stay was 4% shorter. This means that by simply improving the work environment (i.e., improving employee relations or providing better resources), hospitals can greatly improve nurse burnout and patient outcomes! This relationship is shown in Figure 1 below.

Figure 1: The clinical work environment has an effect on the level of burnout in nurses. Nurse burnout, in turn, has an effect on patient outcomes, where higher levels of burnout result in poorer patient outcomes and lower levels of burnout result in better patient outcomes. Additionally, the quality of the clinical work environment can itself impact patient outcomes, where better outcomes are associated with better hospital environments and poorer outcomes are associated with poorer hospital environments. Created with BioRender.com

Though this study was based on data from 2015–2016, nurses and other healthcare workers have only become more burned out in the face of the COVID-19 pandemic, intensified by overwhelming demand, the pain of losing patients, and the risk of infection they take every time they go to work. In light of this, hospital management and administration should proactively address healthcare worker burnout by ensuring that the needs of their healthcare workers are met. This includes, but is not limited to, allowing nurses autonomy and control over their practices, ensuring adequate staffing to avoid overwork and long shifts, encouraging and supporting positive relationships among nurses, physicians, and administrative staff, and providing the resources nurses need to successfully fulfill their responsibilities.

Also, this past week (May 6th – May 12th, 2022) was Nurses Appreciation Week. Thank you to the Super Nurses for the hard work that you do, oftentimes under relentless and stressful circumstances! You truly are Healthcare Heroes! I hope your hospitals, clinics, or other places of work are prioritizing your work environments, to help reduce the burnout you feel from this pandemic. If they aren’t, send them this article 🙂 

Edited by: Trang Nguyen, Vikas Malik, Maaike Schilperoort

What can we do to enter a new era in antimalarial research? A promising story from genetics to genomics.

Plasmodium falciparum is a unicellular organism known as one of the deadliest parasites in humans. This parasite is transmitted through the bites of female Anopheles mosquitoes and causes the most dangerous form of malaria, falciparum malaria. Each year, over 200 million cases of malaria result in hundreds of thousands of deaths. Moreover, P. falciparum has also been implicated in the development of blood cancer. Therefore, the study of malaria-causing Plasmodium species and the development of antimalarial treatments constitute a high-impact domain of biological research.

Antimalarial drugs have been the pillar of malaria control and prophylaxis. Treatments combine fast-acting compounds that reduce parasite biomass with longer-lasting drugs that eliminate surviving parasites. These strategies have led to significant reductions in malaria-associated deaths. However, Plasmodium is constantly developing resistance to existing treatments. The situation is further complicated by the spread of mosquitoes resistant to insecticides. Additionally, asymptomatic chronic infections serve as parasite reservoirs, and the single candidate vaccine has limited efficacy. Thus, the fight against malaria requires sustained efforts. A detailed understanding of P. falciparum biology remains crucial to identify and develop novel and efficient therapeutic targets.

Recent progress in genomics and molecular genetics has empowered novel approaches to studying parasite gene function. The application of genome-based analyses, genome editing, and genetic systems that allow temporal regulation of gene and protein expression has proven crucial in identifying P. falciparum genes involved in antimalarial resistance. In their recent review, Columbia postdoc John Okombo and colleagues summarize the contributions and limitations of some of these approaches in advancing our understanding of Plasmodium biology and in characterizing regions of its genome associated with antimalarial drug responses.

P. falciparum requires two hosts for its development and transmission: humans and Anopheles mosquitoes. The parasite life cycle involves numerous developmental stages. The stages that take place in humans are part of the parasite's so-called asexual development, while mosquitoes harbor the stages associated with its sexual reproduction (Figure 1). Humans are infected by a stage called "sporozoites" upon the bite of an infected mosquito. Sporozoites enter the bloodstream and migrate to the liver, where they invade the liver cells (hepatocytes), multiply, and form "hepatic schizonts". The schizonts then rupture and release "merozoites" into the circulation, which invade red blood cells (RBCs). The clinical symptoms of malaria, such as fever, anemia, and neurological disorders, are produced during this blood stage. Inside RBCs, "trophozoites" are formed, which have two alternative paths of development. They can either form "blood-stage schizonts" that produce more RBC-infecting merozoites, or differentiate into the sexual forms, male and female "gametocytes". Finally, gametocytes are ingested by new mosquitoes during a blood meal, where they undergo sexual reproduction, forming a "zygote". The zygotes then pass through several additional stages until maturing into a new generation of sporozoites, closing the parasite life cycle (Figure 1).

Figure 1: Life cycle of Plasmodium falciparum. Image created with BioRender.com

This complexity of the Plasmodium life cycle presents opportunities to generate drugs acting on various stages of its development. The review by Okombo and colleagues underlines how new genomic data have enabled the identification of genes contributing to various parasite traits, particularly those involved in antimalarial drug responses. The authors recap the genetic- and genomic-based approaches that have set the stage for current investigations into antimalarial drug resistance and Plasmodium biology, and that have thus led to expanding and improving the available antimalarial arsenal.

For instance, in "genome-wide association studies" (GWAS), parasites isolated from infected patients are profiled for resistance against antimalarial drugs of interest, and their genomes are analyzed to identify genetic variants associated with resistance. In "genetic crosses and linkage analyses", gametocytes from genetically distinct parental parasites are fed to mosquitoes, in which they undergo sexual reproduction. The resulting progeny are inoculated into humanized liver-chimeric mouse models that support P. falciparum infection and development, and the progeny are later analyzed to identify the DNA changes associated with resistance and variation in drug response. In "in vitro evolution and whole-genome analysis", antiplasmodial compounds are used to pressure P. falciparum to evolve into drug-resistant parasites; their genomes are then analyzed to identify the genetic determinants that may underlie the resistance. "Phenotype-driven functional Plasmodium mutant screens" are based on random genome-wide mutagenesis and the selection of mutants that either are resistant to drugs or have altered development, pathogenicity, or virulence; this approach has also led to the discovery of novel important genes. In addition, the review covers a number of cutting-edge methods for genome editing used to study antimalarial resistance and modes of action. Experiments using genetically engineered parasites constitute a critical step in uncovering the functional roles of the identified genes. Finally, the reader can also find an overview of Plasmodium "regulatable expression strategies", which are particularly valuable for studying non-dispensable (essential) genes. Additional information on other intriguing and powerful techniques is described in the original paper.
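
As a toy illustration of the association logic behind GWAS and related approaches described above, the sketch below tests whether a single genetic variant is over-represented among resistant parasite isolates using Fisher's exact test. The counts are invented; real GWAS pipelines score many thousands of variants and must correct for multiple testing and population structure.

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table for one variant across parasite isolates:
#                     resistant   sensitive
# variant present         30          5
# variant absent          10         55
table = [[30, 5],
         [10, 55]]

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.2e}")
# A small p-value would flag this variant as a candidate resistance locus, to be
# followed up with genome editing or regulatable expression experiments.
```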

Article reviewed by: Trang Nguyen, Samantha Rossano, Maaike Schilperoort

Survival of the fittest – how brain tumor cells adapt their metabolism to resist treatment

Glioblastoma, WHO grade IV (GBM), is the most common primary brain tumor in adults. The therapeutic options for this recalcitrant malignancy are very limited, with no durable responses. A recent research article published in Nature Communications identified how tumor cells alter their metabolism to survive targeted drug treatment, using cell lines and mouse models. The project was led by Dr. Nguyen, a postdoctoral research scientist in Dr. Siegelin's lab at Columbia University.

Most cancer cells produce energy through a less efficient process called "aerobic glycolysis", which consists of high levels of glucose uptake and the generation of lactic acid in the cytosol even in the presence of abundant oxygen. This classic type of metabolic change provides the substrates required for cancer cell proliferation and division, and is involved in tumor growth, metastatic progression, and long-term survival. Dr. Siegelin's laboratory at the Department of Pathology and Cell Biology at Columbia University Medical Center focuses on targeting cell metabolism and the epigenome for brain tumor therapy, using clinically validated drugs to suppress tumor growth in glioblastoma. In this study, the authors used alisertib (MLN8237), a clinically validated, highly specific Aurora A inhibitor, to target brain tumors. Aurora A kinase (AURKA) is important for the proliferation and growth of solid tumors, including glioblastomas. Here, the authors found that Aurora A simultaneously interacts with both c-Myc (MYC proto-oncogene), an oncogenic transcription factor that facilitates tumor proliferation in part through the regulation of metabolism, and GSK3β (glycogen synthase kinase 3 beta). AURKA stabilizes the c-Myc protein and promotes cell growth; accordingly, treatment with the AURKA inhibitor led to substantial downregulation of the c-Myc protein, as inhibition of Aurora A allows GSK3β-mediated degradation of c-Myc. The authors also found that inhibition of Aurora kinase A suppressed the glycolysis signaling pathway in glioblastoma cells, which was related to the degradation of the c-Myc protein (Figure 1).

Figure 1: In the cells, Aurora A binds to c-Myc and facilitates cell proliferation. Inhibition of Aurora A stops the cell from generating energy through glycolysis, the metabolic pathway that converts glucose to energy in the cytosol, due to the degradation of c-Myc. c-Myc is marked for degradation by its phosphorylation at position T58, mediated by GSK3β. To survive, the cells start to use different pathways to generate energy, e.g., by burning fat or proteins. Figures created with Biorender.com.

In addition to studying acute treatment, it is important to understand how tumor cells acquire mechanisms to escape chemotherapy following constant exposure to a drug, and to identify means of preventing this phenomenon. The research group generated drug-resistant cells by culturing them in the presence of alisertib for two weeks. These cells acquired partial resistance to alisertib and displayed a hyper-oxidative phenotype, with enlarged mitochondria with a tubulated shape.

The chronically Aurora A-inhibited cells were analyzed for changes in gene expression after constant exposure to the same dose of alisertib over a long-term period. The researchers found that alisertib-resistant cells activate oxidative metabolism and fatty acid oxidation, including an increase in the production of fatty acid oxidation proteins. These observations prompted them to test the hypothesis that alisertib combined with a fatty acid oxidation inhibitor, such as etomoxir, would reduce the viability of glioblastoma cells. Etomoxir is a clinically validated drug that binds and blocks the mitochondrial fatty acid transporter. The authors found that the combination of alisertib and etomoxir resulted in enhanced cell death compared to the single treatments and vehicle.

Given the significant promise of the in vitro studies, the researchers extended their work in vivo by injecting patient-derived glioblastoma cells, acquired from patient brain tumors, into immunocompromised mice. Such model systems are currently considered to most closely resemble the patient scenario. They found that the combination treatment extended animal survival significantly longer than single treatment with alisertib or etomoxir, suggesting potential clinical efficacy. Taken together, these data suggest that simultaneously targeting oxidative metabolism and inhibiting Aurora A might be a potential novel therapy against one of the deadliest cancers.

Article reviewed by: Maaike Schilperoort, Vikas Malik, Molly Scott, Pei-Yin Shih and Samantha Rossano.

“CRACK”ing cocaine addiction with medication

Cocaine is a highly addictive stimulant drug made from the leaves of the coca plant that alters mood, perception, and consciousness. It is consumed by smoking, injecting, or snorting. According to the United Nations Office on Drugs and Crime, an estimated 20 million people used cocaine in 2019, almost 2 million more than the previous year. Cocaine causes an increase in the accumulation of dopamine in the brain, a chemical messenger that plays an important role in how we feel pleasure and encourages us to repeat pleasurable activities. This dopamine rush causes people to continue using the drug despite the cognitive, behavioral, and physical problems it causes, leading to a condition referred to as cocaine use disorder (CUD). CUD-related physical and mental health issues range from cardiovascular diseases like heart attack, stroke, hypertension, and atherosclerosis, to psychiatric disorders and sexually transmitted infections.

According to the CDC, cocaine use was responsible for 1 in 5 overdose deaths. Though almost all users who seek treatment for CUD are given psychosocial interventions like counseling, most continue to use cocaine. Pharmaceutical medication may increase the effectiveness of psychosocial interventions. Medications for other substance use disorders (opioid and alcohol) have been shown to block euphoric effects, alleviate cravings, and stabilize brain chemistry. However, there are currently no FDA-approved drugs to treat CUD.

Dr. Laura Brandt and colleagues have systematically reviewed research available up until 2020 in the area of pharmacological CUD treatment. In this review, they discuss the potential benefits and shortcomings of current pharmacological approaches for CUD treatment and highlight plausible avenues and critical considerations for future studies. The authors reviewed clinical trials in which the primary disorder was cocaine use and the medication tested fell into one of four categories: dopamine agonists, dopamine antagonists/blockers, new mechanisms being tested, and combinations of medications.

Dopamine agonists are medications that have a similar mechanism of action to cocaine, i.e., they can act as substitutes for cocaine without the potential adverse health effects. Dopamine releasers and uptake inhibitors fall under this category and have shown the most promising signs thus far for reducing cocaine self-administration in cocaine-dependent participants. Dopamine uptake inhibitors bind to the dopamine transporter and prevent dopamine reuptake from the extracellular space into the brain cell. People with CUD typically exhibit blunted dopamine signaling, such as low levels of dopamine release and reduced availability of dopamine receptors for dopamine to bind to. Substitute medications help counteract this dopamine hypoactivity through the slow release of dopamine, which in turn helps reduce responses such as cravings for cocaine and withdrawal symptoms, a common cause of relapse. A common concern associated with using dopamine agonists is the possibility of replacing cocaine addiction with addiction to the medication. However, there is no strong evidence for this secondary abuse, nor for increased cardiovascular risk, when using agonists as a means of treatment.

Dopamine antagonists/blockers are substances that bind to dopamine receptors, preventing the binding of dopamine and thereby blocking the euphoric effects of cocaine. This approach facilitates a decrease in cocaine use because the rewarding effects of the drug are absent. Antipsychotic medications, anti-cocaine vaccines, modulators of the reward system, and noradrenergic agents fall under this category. This approach is generally considered less effective for treating CUD, as it requires high levels of motivation to start treatment as well as to maintain it.

New medications are those that are currently in clinical trials and are being tested in humans for the treatment of CUD. Combination pharmacotherapy is an interesting approach that involves combining two medications to treat CUD. The absence of FDA-approved medications limits exploration in this direction.

Having reviewed these data and their shortcomings, the authors point out a very important factor in these studies: their limitations depend on more than just the medication. On one hand, limitations of the study procedures, such as the dosage and formulation of the medication, completion of the medication course, and providing or not providing incentives to participants, may have hindered the success of these trials. On the other hand, individuals seeking treatment are not all the same. They differ in cocaine use severity, presence of mental health illness, and substance use disorders apart from cocaine use, and their genetics may also play a role in the success of their treatment. Pharmacotherapy for CUD is not one-size-fits-all but needs to be tailored to the individual seeking treatment as well as the substance used. A combination approach targeting withdrawal from the drug while allowing patients to benefit more from behavioral/psychosocial interventions would be more helpful on their path to recovery. Another important point that requires attention is how treatment success is determined. Most clinical trials of pharmacotherapies for CUD use the gold standard of qualitative urine screens to determine sustained abstinence, but urine toxicology as evidence of treatment success is not a clear-cut method, as various factors affect the interpretation of the results. In addition, a medication is considered to successfully treat CUD only when there is complete abstinence from cocaine use. As many physical and psychological issues accompany substance abuse, considering CUD treatment to be linear is not very beneficial. Considering other aspects, such as improvement in quality of life and the ability to carry out daily activities, would be a better indicator of the effectiveness of the medications used.

With the increase in cocaine use and abuse in recent years, there is an urgent need to identify medications to treat CUD. This review consolidates the current approaches to treating CUD with medication and points out factors that are overlooked when interpreting the results of these studies. Tailoring medications to each individual would greatly improve clinical trial outcomes and lead to higher success rates for treating substance use disorders – a promising avenue that needs to be explored.

Dr. Laura Brandt is a Postdoctoral Research Fellow in the Division on Substance Use Disorders, New York State Psychiatric Institute and Department of Psychiatry, Columbia University Irving Medical Center.

Reviewed by: Trang Nguyen, Maaike Schilperoort, Sam Rossano, Pei-Yin Shih

 
