Making sense of COVID-induced loss of smell

The coronavirus SARS-CoV-2 has caused more than six million confirmed deaths worldwide over the course of the COVID-19 pandemic. While SARS-CoV-2 enters the body through the respiratory system and primarily targets the lungs, it can also damage other organs. For instance, the sense of smell, which is mediated by the olfactory sensory neurons in our nose together with our brain, is lost in some COVID patients. How this virus affects our ability to smell is a puzzling question, and one that has been investigated by a team led by Dr. Zazhytska in the Lomvardas lab at Columbia University. They worked tirelessly on solving this puzzle throughout the COVID shutdown period, and their discoveries, recently published in the journal Cell, have started to provide some key answers.

We can smell the scents around us because the olfactory receptors in our olfactory sensory neurons bind to odorant molecules, relay the information through signaling molecules, and eventually signal to our brain (Figure 1). Dr. Zazhytska and her colleagues found that SARS-CoV-2 was rarely detected in the olfactory sensory neurons themselves, indicating that the virus probably doesn’t gain access to our brain through these sensory neurons. In fact, the most commonly infected cells are the neighboring sustentacular cells (Figure 1b), which are important in maintaining the health of the layer of olfactory cells, including the neurons. If the sustentacular cells die, the sensory neurons can be exposed to a stressful environment without support. Thus, the shutdown of the olfactory system might be an indirect effect of SARS-CoV-2 infection.

Figure 1 The basic structure of the olfactory system.
(A) Signal transduction in olfactory sensory neurons. The cell membrane separates the interior of the cell (cell cytoplasm, bottom) from the outside environment (top).
(B) Anatomy of cells in the nose that are involved in smell perception.
(Figure was made using BioRender).

There are about four hundred olfactory receptor genes scattered across our genome, and each neuron expresses only one of them. This stringent setup is achieved by interactions between multiple chromosomes that bring all the dispersed olfactory receptor genes together into a cluster in the nucleus of the neuron. This clustered arrangement allows the gene expression machinery to access and turn on only one receptor gene at a time. Remarkably, Dr. Zazhytska and her colleagues discovered that this organization is dramatically disrupted after SARS-CoV-2 infection in both hamsters and humans. Infected individuals also show reduced expression not only of receptor genes, but also of key molecules involved in smell perception, likely as a consequence of the disrupted organization.

Interestingly, when the team exposed uninfected hamsters to UV-treated serum from SARS-CoV-2 infected hamsters, which no longer contains infectious virus, they still observed the same disorganization of olfactory receptor genes in the animals. This observation suggests that it is not the virus itself but some other circulating molecule(s) that triggers the abnormal organization. Identifying these molecules may point to potential treatments for COVID-induced loss of smell, as well as for other conditions that affect our olfaction, including early-onset Alzheimer's disease.

Edited by: Sam Rossano, Eric Smith, James Lee, Trang Nguyen, Maaike Schilperoort

What’s in Your Water? Arsenic Contamination and Inequality.

Water is one of the most essential substances for life. Every living creature requires access to a water source, humans being no exception. Unfortunately, access to clean drinking water continues to be a challenge for many individuals across the globe. Systematic studies of water inequalities in the U.S. alone indicate increased contamination in areas that are often dismissed or underserved. Arsenic, a human carcinogen (cancer-causing substance) predominantly released from water flowing through rock formations, has previously been measured at dangerous levels in U.S. water sources. This finding led the U.S. Environmental Protection Agency (EPA) to mandate in 2001 that arsenic contamination must remain below a maximum level of 10 µg/L, resulting in enhanced water filtration and arsenic removal. However, whether this mandate was effective across all demographic areas remained unknown until Dr. Nigra, a former postdoc and current assistant professor at Columbia's Mailman School of Public Health, and colleagues took on the challenge of finding out.

Through extremely diligent research, Dr. Nigra and colleagues examined arsenic exposure in community water systems across the U.S. to identify whether certain populations are exposed to arsenic levels above the maximum mandated by the EPA. They examined arsenic exposure by gathering data from the EPA's public water database, which monitors public water for contaminants. They analyzed water contaminant data across 46 states, Washington DC, the Navajo Nation, and American Indian tribes for the periods 2006-2008 and 2009-2011, looking at overall arsenic concentrations across the different regions (Figure). They also broke down the concentrations across different subgroups of individuals.

Overall, Dr. Nigra and colleagues identified a 10% reduction in water arsenic exposure. They found a reduction in arsenic concentrations in the New England, Eastern Midwest, and Southwest regions of the U.S. over the six-year period. They also found reductions in subgroups that fit the following descriptions: most rural mid-socioeconomic status (SES), semi-urban high-SES, and rural high-SES. However, there were still communities with arsenic levels that exceeded the maximum mandated by the EPA (Figure). These were predominantly Hispanic communities located in the Southwestern U.S. Furthermore, there was not enough data to identify whether there was a significant reduction in arsenic levels in tribal community water sources. Therefore, while there was an overall reduction in arsenic levels, there is still room for improvement. The Hispanic communities in the Southwestern U.S. remain at an elevated risk for cancer due to this increased exposure to carcinogens. To combat this exposure, more financial and technical resources, such as an increase in arsenic treatment systems, are necessary to reduce arsenic levels. Moreover, it is very possible that the under-reported arsenic levels in tribal communities could be putting those individuals at increased risk. Dr. Nigra and colleagues have investigated an extremely impactful environmental factor, and now, with their research, we are all a bit more aware of what's in our water.

Figure: Maps of counties in compliance with the EPA's maximum arsenic concentration cutoff of 10 µg/L (top) and the average water arsenic concentrations across the six-year period (bottom). Top map: Low/Low: less than 10 µg/L in both periods; High/Low: greater than 10 µg/L in 2006–2008 but less than 10 µg/L in 2009–2011; Low/High: less than 10 µg/L in 2006–2008 but greater than 10 µg/L in 2009–2011; High/High: greater than 10 µg/L in both periods. Figure adapted from Figure 3 and Figure 4, Nigra et al., 2020.
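For readers who like to see the logic spelled out, here is a minimal Python sketch of the county classification described in the figure caption. The county values are hypothetical; the real study worked from averaged monitoring records in the EPA database rather than single numbers.

```python
# Hypothetical sketch of the county classification shown in the figure:
# each county is labeled by whether its average arsenic concentration
# exceeded the EPA maximum contaminant level (10 µg/L) in each period.

EPA_LIMIT_UG_L = 10.0

def classify_county(avg_2006_2008: float, avg_2009_2011: float) -> str:
    """Return the compliance category for one county (values in µg/L)."""
    early_high = avg_2006_2008 > EPA_LIMIT_UG_L
    late_high = avg_2009_2011 > EPA_LIMIT_UG_L
    if not early_high and not late_high:
        return "Low/Low"    # below the limit in both periods
    if early_high and not late_high:
        return "High/Low"   # improved to below the limit
    if not early_high and late_high:
        return "Low/High"   # newly above the limit
    return "High/High"      # above the limit in both periods

# Example with made-up concentrations:
print(classify_county(12.3, 8.1))  # High/Low
print(classify_county(4.2, 4.0))   # Low/Low
```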

 

Dr. Anne Nigra is a current assistant professor and former postdoc in Environmental Health Sciences at Columbia University's Mailman School of Public Health.

Reviewed by: Molly Scott, Maaike Schilperoort

Better Work Environments Make Super Nurses Even More Super!

We might all be familiar with the term “burnout” – a feeling of emotional exhaustion, cynicism, or ineffectiveness with respect to productivity at work or relationships with colleagues or clients. The World Health Organization classifies burnout as an occupational, not personal, phenomenon. Studies suggest that burnout can result from poor work environments – not necessarily the content of the work itself, but the setting in which the work is completed and how the work is managed or distributed. Burnout can be prevented or resolved by improving work environments.

Because it is dependent on the environment, the rate of burnout may vary between different job settings. For example, studies suggest that around 40% of the nursing workforce in the United States is burned out. That’s almost half of all nurses! Nurses, along with social workers who also have a burnout rate of about 40%, are among the professions with the highest burnout rates in the country. Nurses are in a unique position, as their actions and responsibilities at work directly impact the wellbeing of their patients. Because the lives of their patients may depend on it, it is important that nurses are attentive, motivated, and effective while at their jobs. In other words, nurses need to not be burned out in order to properly care for their patients.

To prevent or resolve burnout in nursing, work environments should allow appropriate autonomy, or the ability for nurses to use their own discretion and rely on their own expertise to respond to patient care issues. Positive work environments for nurses also include good working relationships with physicians and hospital administration, and adequate staffing and resources. If an environment lacks these positive factors, nurse burnout will likely be prevalent in that clinical setting. Additionally, the combination of a poor work environment and burned-out nurses is associated with lower quality of patient care and poorer patient outcomes.

A recent study by Columbia postdoc Dr. Amelia Schlak explored how nurse burnout is related to patient care, with the expectation that more nurse burnout would correspond with poorer patient outcomes. The researchers also investigated how the nurse work environment affects the relationship between nurse burnout and patient outcomes, expecting that nurse burnout would have less of an effect on patient outcomes in better work environments.

To investigate these relationships, Dr. Schlak and colleagues measured nurse burnout in over 20,000 nurses across 4 states (CA, PA, FL, and NJ) in 2015–2016 using the emotional exhaustion subscale of the Maslach Burnout Inventory, which quantifies burnout on a scale from 0 to 54, where higher scores correspond to more burnout. On average, the nurse burnout score in the study was 21/54. They also measured work environment using the Practice Environment Scale of the Nursing Work Index, a survey completed by the same nurses that accounts for environmental aspects like staffing, access to resources, and nurse-physician relations. The researchers ranked the average hospital environment scores into categories of “poor” (bottom 25%), “mixed” (middle 50%), and “good” (top 25%) environments. They found that the degree of nurse burnout was skewed across these categories: most (60%) nurses working in good environments ranked among the lowest burnout levels, while more than 50% of nurses working in poor environments ranked among the most burned out. So, better work environments typically mean less burned-out and more productive nurses!
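As a rough illustration of the binning step, the Python sketch below groups hypothetical hospital-level work environment scores into the bottom 25% (“poor”), middle 50% (“mixed”), and top 25% (“good”). The scores are invented, and the study’s actual survey scoring and aggregation were more involved.

```python
# Hypothetical sketch: bin hospitals into "poor" / "mixed" / "good" work
# environments based on quartiles of their average environment scores.
import statistics

def bin_environments(scores: dict) -> dict:
    """Map each hospital to an environment category by score quartile."""
    q1, _, q3 = statistics.quantiles(scores.values(), n=4)
    categories = {}
    for hospital, score in scores.items():
        if score <= q1:
            categories[hospital] = "poor"   # bottom 25%
        elif score >= q3:
            categories[hospital] = "good"   # top 25%
        else:
            categories[hospital] = "mixed"  # middle 50%
    return categories

# Example with made-up average environment scores:
example = {"Hospital A": 2.4, "Hospital B": 2.9, "Hospital C": 3.1, "Hospital D": 3.5}
print(bin_environments(example))
```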

The ultimate priority in healthcare work is, of course, the patient! To see how the environment and nurse burnout affect patients, the researchers also collected patient outcome measurements for each hospital, such as (1) patient mortality, (2) failure to rescue, or in-hospital mortality after experiencing an adverse event caused by medical treatment, and (3) length of stay, considering only patients with a length of stay of less than 30 days. The authors found that greater nurse burnout was associated with a higher incidence of patient mortality, an increased rate of failure to rescue, and a longer patient stay. Nurses who are not burned out, who are energized and effective, tended to have patients with better outcomes.

The authors also explored how the nurse work environment affects the relationship between nurse burnout and the patient outcome measurements. When the researchers compared hospitals with poor vs. mixed work environments, as well as mixed vs. good environments, they found that the frequency of burned-out nurses decreased, the 30-day in-hospital mortality rate was 14% lower, the failure-to-rescue rate was 12% lower, and the length of stay was 4% shorter in the better environment of each comparison. This means that by simply improving the work environment (i.e., improving employee relations or providing better resources), hospitals can greatly reduce nurse burnout and improve patient outcomes! This relationship is shown in Figure 1 below.

Figure 1: The clinical work environment has an effect on the level of burnout in nurses. Nurse burnout, in turn, has an effect on patient outcomes, where higher levels of burnout result in poorer patient outcomes and lower levels of burnout result in better patient outcomes. Additionally, the quality of the clinical work environment can also impact patient outcomes, where better outcomes are associated with better hospital environments, while poorer outcomes are associated with poorer hospital environments. Created with BioRender.com

Though this study was based on data from 2015–2016, nurses and other healthcare workers have only become more burned out in the face of the COVID-19 pandemic, intensified by the overwhelming demand, the pain of losing patients, and the risk of infection that they take every time they go to work. In light of this, hospital management and administration should proactively address healthcare worker burnout by ensuring that the needs of their healthcare workers are met. This includes, but is not limited to, allowing nurses autonomy or control over their practice, ensuring adequate staffing to avoid overwork and long shifts, encouraging and supporting positive relationships among nurses, physicians, and administrative staff, and providing proper resources for nurses to successfully fulfill their responsibilities.

Also, this past week (May 6th – May 12th, 2022) was Nurses Appreciation Week. Thank you to the Super Nurses for the hard work that you do, oftentimes under relentless and stressful circumstances! You truly are Healthcare Heroes! I hope your hospitals, clinics, or other places of work are prioritizing your work environments, to help reduce the burnout you feel from this pandemic. If they aren’t, send them this article 🙂 

Edited by: Trang Nguyen, Vikas Malik, Maaike Schilperoort

What can we do to enter a new era in antimalarial research? A promising story from genetics to genomics.

Plasmodium falciparum is a unicellular organism known as one of the deadliest parasites in humans. This parasite is transmitted through bites of female Anopheles mosquitoes and causes the most dangerous form of malaria, falciparum malaria. Each year, over 200 million cases of malaria result in hundreds of thousands of deaths. Moreover, P. falciparum has also been implicated in the development of blood cancer. Therefore, the study of malaria-causing Plasmodium species and the development of antimalarial treatments constitute a high-impact domain of biological research.

Antimalarial drugs have been the pillar of malaria control and prophylaxis. Treatments combine fast-acting compounds that reduce parasite biomass with longer-lasting drugs that eliminate surviving parasites. These strategies have led to significant reductions in malaria-associated deaths. However, Plasmodium is constantly developing resistance to existing treatments. The situation is further complicated by the spread of mosquitoes resistant to insecticides. Additionally, asymptomatic chronic infections serve as parasite reservoirs, and the single candidate vaccine has limited efficacy. Thus, the fight against malaria requires sustained effort, and a detailed understanding of P. falciparum biology remains crucial for identifying novel therapeutic targets and developing efficient treatments.

Recent progress in genomics and molecular genetics has empowered novel approaches to studying the parasite's gene functions. Genome-based analyses, genome editing, and genetic systems that allow temporal regulation of gene and protein expression have proven crucial in identifying P. falciparum genes involved in antimalarial resistance. In their recent review, Columbia postdoc John Okombo and colleagues summarize the contributions and limitations of some of these approaches in advancing our understanding of Plasmodium biology and in characterizing regions of its genome associated with antimalarial drug responses.

P. falciparum requires two hosts for its development and transmission: humans and Anopheles mosquito species. The parasite life cycle involves numerous developmental stages. The stages that take place in humans are part of the parasite's asexual development, while mosquitoes harbor the stages associated with its sexual reproduction (Figure 1). Humans are infected by a stage called “sporozoites” upon the bite of an infected mosquito. Sporozoites enter the bloodstream and migrate to the liver, where they invade the liver cells (hepatocytes), multiply, and form “hepatic schizonts”. The schizonts then rupture and release “merozoites” into the circulation, which invade red blood cells (RBCs). The clinical symptoms of malaria, such as fever, anemia, and neurological disorders, are produced during this blood stage. Within RBCs, the parasites develop into “trophozoites”, which have two alternative paths of development. They can either form “blood-stage schizonts” that produce more RBC-infecting merozoites or differentiate into the sexual forms, male and female “gametocytes”. Finally, gametocytes are ingested by new mosquitoes during a blood meal, where they undergo sexual reproduction and form a “zygote”. The zygotes then pass through several additional stages until maturing into a new generation of sporozoites, closing the parasite life cycle (Figure 1).

Figure 1: Life cycle of Plasmodium falciparum. Image created with BioRender.com

This complexity of the Plasmodium life cycle presents opportunities to generate drugs acting on various stages of its development. The review of Okombo and colleagues underlines how new genomic data have enabled the identification of genes contributing to various parasite traits, particularly those of antimalarial drug responses. The authors recap genetic- and genomic-based approaches that have set the stage for current investigations into antimalarial drug resistance and Plasmodium biology and have thus led to expanding and improving the available antimalarial arsenal.

For instance, in “genome-wide association studies” (GWAS), parasites isolated from infected patients are profiled for resistance against antimalarial drugs of interest, and their genomes are analyzed to identify genetic variants associated with resistance (a toy illustration of this logic follows below). In “genetic crosses and linkage analyses”, gametocytes from genetically distinct parental parasites are fed to mosquitoes, in which they undergo sexual reproduction. The resulting progeny are inoculated into human liver-chimeric mouse models that support P. falciparum infection and development, and are later analyzed to identify the DNA changes associated with resistance and variation in drug response. In “in vitro evolution and whole-genome analysis”, antiplasmodial compounds are used to pressure P. falciparum to evolve into drug-resistant parasites, whose genomes are then analyzed to identify the genetic determinants that may underlie the resistance. “Phenotype-driven functional Plasmodium mutant screens” are based on random genome-wide mutagenesis and the selection of mutants that are either resistant to drugs or altered in development, pathogenicity, or virulence; this approach has also led to the discovery of important novel genes. In addition, the review covers a number of cutting-edge genome-editing methods used to study antimalarial resistance and drug mode of action. Experiments using genetically engineered parasites constitute a critical step in uncovering the functional roles of the identified genes. Finally, the reader can also find an overview of Plasmodium “regulatable expression strategies”, approaches that are particularly valuable for studying non-dispensable (essential) genes. Additional intriguing and powerful techniques are further described in the original paper.
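To make the GWAS idea more concrete, here is a toy Python sketch, not taken from the review: the isolates, the variant labels, and the simple enrichment cutoff are invented for illustration. A real analysis would use thousands of isolates, formal statistics, and corrections for parasite population structure and multiple testing.

```python
# Toy illustration of the GWAS logic: tally how often each genetic variant
# occurs in resistant vs. sensitive parasite isolates (all data made up).
from collections import defaultdict

isolates = [
    {"resistant": True,  "variants": {"pfcrt_K76T", "pfmdr1_N86Y"}},
    {"resistant": True,  "variants": {"pfcrt_K76T"}},
    {"resistant": False, "variants": {"pfmdr1_N86Y"}},
    {"resistant": False, "variants": set()},
]

counts = defaultdict(lambda: {"res": 0, "sens": 0})
n_res = sum(i["resistant"] for i in isolates)
n_sens = len(isolates) - n_res

for isolate in isolates:
    group = "res" if isolate["resistant"] else "sens"
    for variant in isolate["variants"]:
        counts[variant][group] += 1

# Flag variants strongly enriched among resistant isolates.
for variant, c in counts.items():
    freq_res, freq_sens = c["res"] / n_res, c["sens"] / n_sens
    if freq_res - freq_sens > 0.5:
        print(f"{variant}: {freq_res:.0%} of resistant vs {freq_sens:.0%} of sensitive isolates")
```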

Article reviewed by: Trang Nguyen, Samantha Rossano, Maaike Schilperoort

Survival of the fittest – how brain tumor cells adapt their metabolism to resist treatment

Glioblastoma WHO grade IV (GBM) is the most common primary brain tumor in adults. The therapeutic options for this recalcitrant malignancy are very limited, with no durable response. A recent research article published in Nature Communications identified how the tumor cells alter their metabolism to survive targeted drug treatment, using cell lines and mouse models. The project was led by Dr. Nguyen, a postdoctoral research scientist in Dr. Siegelin's lab at Columbia University.

Most cancer cells produce energy through a less efficient process called “aerobic glycolysis”, which consists of high levels of glucose uptake and the generation of lactic acid in the cytosol even in the presence of abundant oxygen. This classic metabolic change provides substrates required for cancer cell proliferation and division, supporting tumor growth, metastatic progression, and long-term survival. Dr. Siegelin's laboratory at the Department of Pathology and Cell Biology at Columbia University Medical Center focuses on targeting cell metabolism and the epigenome for brain tumor therapy, using clinically validated drugs to suppress tumor growth in glioblastoma. In this study, the authors used alisertib (MLN8237), a clinically validated, highly specific Aurora A inhibitor, to target brain tumors. Aurora A kinase (AURKA) is important for the proliferation and growth of solid tumors, including glioblastomas. Here, the authors found that Aurora A simultaneously interacts with both c-Myc (MYC Proto-Oncogene) and GSK3β (Glycogen Synthase Kinase 3 Beta). AURKA stabilizes the c-Myc protein and promotes cell growth, and treatment with the AURKA inhibitor led to substantial downregulation of the c-Myc protein. c-Myc (MYC) is an oncogenic transcription factor that facilitates tumor proliferation in part through the regulation of metabolism. Inhibition of Aurora A leads to degradation of c-Myc mediated by GSK3β. The authors also found that inhibition of Aurora kinase A suppressed the glycolysis pathway in glioblastoma cells, which was related to the degradation of the c-Myc protein (Figure 1).

Figure 1: In glioblastoma cells, Aurora A binds to c-Myc and facilitates cell proliferation. Inhibition of Aurora A stops the cell from generating energy through glycolysis, a metabolic pathway that converts glucose to energy in the cytosol, due to the degradation of c-Myc. c-Myc is marked for degradation by its phosphorylation at position T58, mediated by GSK3β. To survive, the cells start to use different pathways to generate energy, e.g. by burning fat or proteins. Figure created with Biorender.com.

In addition to studying acute treatment, it is important to understand how tumor cells acquire mechanisms to escape chemotherapy after constant exposure to a drug, and to identify means of preventing this from occurring. The research group generated drug-resistant cells by culturing them in the presence of alisertib for two weeks. These cells acquired partial resistance to alisertib and displayed a hyper-oxidative phenotype, with enlarged mitochondria of a tubulated shape.

The chronically Aurora A-inhibited cells were then analyzed for changes in gene expression after long-term exposure to the same dose of alisertib. The researchers found that alisertib-resistant cells activate oxidative metabolism and fatty acid oxidation, including an increase in proteins involved in fatty acid metabolism. These observations prompted them to test the hypothesis that alisertib combined with a fatty acid oxidation inhibitor such as etomoxir would reduce the viability of glioblastoma cells. Etomoxir is a clinically validated drug that binds and blocks the mitochondrial fatty acid transporter. The authors found that the combination of alisertib and etomoxir resulted in enhanced cell death compared to single treatments and vehicle.

Given the significant promise of the in vitro studies, the researchers extended their work in vivo by injecting patient-derived glioblastoma cells, acquired from patient brain tumors, into immunocompromised mice. Such model systems are currently considered to most closely resemble the patient scenario. They found that the combination treatment extended animal survival significantly longer than single treatment with alisertib or etomoxir, suggesting potential clinical efficacy. Taken together, these data suggest that simultaneously targeting oxidative metabolism and Aurora A might be a potential novel therapy against one of the deadliest cancers.

Article reviewed by: Maaike Schilperoort, Vikas Malik, Molly Scott, Pei-Yin Shih and Samantha Rossano.

“CRACK”ing cocaine addiction with medication

Cocaine is a highly addictive stimulant drug made from the leaves of the coca plant that alters mood, perception, and consciousness. It is consumed by smoking, injecting, or snorting. According to the United Nations Office on Drugs and Crime, an estimated 20 million people used cocaine in 2019, almost 2 million more than the previous year. Cocaine causes an increase in the accumulation of dopamine in the brain, a chemical messenger that plays an important role in how we feel pleasure and encourages us to repeat pleasurable activities. This dopamine rush causes people to continue using the drug despite the cognitive, behavioral, and physical problems it causes, leading to a condition referred to as cocaine use disorder (CUD). CUD-related physical and mental health issues range from cardiovascular diseases like heart attack, stroke, hypertension, and atherosclerosis, to psychiatric disorders and sexually transmitted infections.

According to the CDC, cocaine use was responsible for 1 in 5 overdose deaths. Though almost all users who seek treatment for CUD are given psychosocial interventions like counseling, most continue to use cocaine. Pharmaceutical medication may increase the effectiveness of psychosocial interventions. Medications for other substance use disorders (opioid and alcohol) have been shown to block euphoric effects, alleviate cravings, and stabilize brain chemistry. However, there are currently no FDA-approved drugs to treat CUD.

Dr. Laura Brandt and colleagues have systematically reviewed research available up until 2020 in the area of pharmacological CUD treatment. In this review, they discuss the potential benefits and shortcomings of current pharmacological approaches for CUD treatment and highlight plausible avenues and critical considerations for future study. The authors reviewed clinical trials in which the primary disorder is cocaine use and the medications tested fall into four categories: dopamine agonists, dopamine antagonists/blockers, new mechanisms being tested, and combinations of medications.

Dopamine agonists are medications that have a similar mechanism of action as cocaine, i.e., they can act as substitutes for cocaine without the potential adverse health effects. Dopamine releasers and uptake inhibitors fall under this category and have shown the most promising signs thus far for reducing cocaine self-administration in cocaine-dependent participants. Dopamine uptake inhibitors bind to the dopamine transporter and prevent dopamine reuptake from the extracellular space into the brain cell. Individuals with CUD typically exhibit blunted dopamine effects, such as low levels of dopamine release and reduced availability of dopamine receptors for dopamine to bind to. Substitute medications help counteract this dopamine hypoactivity through slow release of dopamine, which in turn helps reduce responses such as cravings for cocaine and withdrawal symptoms, a common cause of relapse. A common concern associated with using dopamine agonists is the possibility of replacing cocaine addiction with addiction to the medication. However, there is no strong evidence for such secondary abuse, nor for increased cardiovascular risk, when agonists are used as treatment.

Dopamine antagonists/blockers are substances that bind to dopamine receptors, preventing the binding of dopamine and thereby blocking the euphoric effects of cocaine. This approach aims to decrease cocaine use because the rewarding effects of the drug are absent. Antipsychotic medications, anti-cocaine vaccines, modulators of the reward system, and noradrenergic agents fall under this category. This approach is generally considered less effective for treating CUD, as it requires high levels of motivation to start the treatment as well as to maintain it.

New medications are those currently in clinical trials being tested in humans for the treatment of CUD. Combination pharmacotherapy is another interesting approach and involves combining two medications to treat CUD; the absence of FDA-approved medications limits exploration in this direction.

Having reviewed these data and their shortcomings, the authors point out a very important factor in these studies – their shortcomings depend on more than just the medication. On one hand, limitations in study procedures, such as the dosage of the medication and its formulation, completion of the medication course, and providing or not providing incentives to participants, may have hindered the success of these studies. On the other hand, individuals seeking treatment are not all the same. They differ in cocaine use severity, presence of mental health illness, and substance use disorders apart from cocaine use, and their genetics may also play a role in the success of their treatment. Pharmacotherapy for CUD is not one-size-fits-all but needs to be tailored to the individual seeking treatment as well as the substance used. A combination approach that targets drug withdrawal and allows patients to benefit more from behavioral/psychosocial interventions would be more helpful on their path to recovery. Another important point that requires attention is how to determine whether a medication has worked. Most clinical trials of pharmacotherapies for CUD use the gold standard of qualitative urine screens to determine sustained abstinence. First, urine toxicology as evidence of treatment success is not a clear-cut method, as various factors affect interpretation of the results. Second, a medication is typically considered to successfully treat CUD only when there is complete abstinence from cocaine use. As many physical and psychological issues accompany substance abuse, treating CUD recovery as a linear process is not very helpful. Considering other aspects, such as improvement in quality of life and the ability to carry out daily activities, would be a better indicator of the effectiveness of the medications used.

With an increase in cocaine use and abuse in recent years, there is an urgent need to identify medications to treat CUD. The review consolidates the current approaches to treating CUD with medication and points out factors that are overlooked when interpreting the results of these studies. Tailoring medications to each individual would greatly improve clinical trial outcomes and lead to higher success rates for treating substance use disorders – a promising avenue that needs to be explored.

Dr. Laura Brandt is a Postdoctoral Research Fellow in the Division on Substance Use Disorders, New York State Psychiatric Institute and Department of Psychiatry, Columbia University Irving Medical Center.

Reviewed by: Trang Nguyen, Maaike Schilperoort, Sam Rossano, Pei-Yin Shih

 

Novel treatment strategies for fatty liver-related cancer – reality or fanTAZy?

The liver is one of the largest organs in the body, weighing approximately 3 pounds. Some of its vital functions include filtering the blood to remove toxic substances from the body, producing bile, which helps with digestion, and regulating fat metabolism. Fats, or lipids, from the diet are taken up by the liver and processed into fat-carrying proteins called lipoproteins. These lipoproteins are released into the circulation to fuel tissues that require energy. However, when there is a positive energy balance, for example due to overeating and/or a sedentary lifestyle, liver cells increasingly store lipids. This can result in metabolic associated fatty liver disease (MAFLD, formerly known as NAFLD), characterized by a liver fat content above 5%. When fat keeps accumulating in the liver, chronic inflammation ensues and the liver progresses to a stage called metabolic associated steatohepatitis (MASH, formerly known as NASH). There are currently no FDA-approved drugs available to treat MASH. At this stage, patients are at risk for developing a type of liver cancer called hepatocellular carcinoma (HCC). HCC is the most common form of primary liver cancer in the US, affecting more than 30,000 individuals per year. Progression from a healthy liver to MAFLD, MASH, and HCC is shown in the Figure below.

The different stages of fatty liver disease progression – from a healthy liver, to metabolic associated fatty liver disease (MAFLD), metabolic associated steatohepatitis (MASH), and eventually hepatocellular carcinoma (HCC). Figure created with Biorender.com.

Although MASH is the leading cause of HCC, the mechanisms by which MASH predisposes to HCC tumor formation are largely unknown. The research of Columbia postdoc Xiaobo Wang and colleagues tries to fill this knowledge gap. Dr. Wang investigated TAZ, a gene regulator that was found to be increased in MASH livers. He fed experimental mice a diet high in sugar, fat, and cholesterol (the equivalent of human “fast food”) to induce MASH development. Then, he diminished TAZ expression in the liver using a viral-mediated gene delivery system, in which an engineered virus enters mouse liver cells to specifically turn off the TAZ gene. Silencing of the TAZ gene largely prevented the development of tumors in MASH livers, indicating that TAZ is an important player in MASH-HCC progression.

Dr. Wang continued his research by investigating how TAZ could enable liver cells to turn into tumor cells. He focused on DNA damage, a process that is important in HCC development, and found clear indications of damaged DNA in the livers of mice and humans with MASH. Most importantly, silencing of TAZ prevented an increase in DNA damage, suggesting that TAZ promotes genomic instability in liver cells. Since the buildup of oxidative stress within cells is an important cause of DNA damage, Dr. Wang next looked at a specific indicator of oxidative DNA damage. Indeed, this indicator was increased in MASH and decreased with TAZ silencing. He then measured various oxidant-related proteins to find out how TAZ could promote oxidative DNA damage, and discovered that Cybb, a gene involved in the formation of harmful reactive oxygen species, is involved in TAZ-induced liver cancer. Together, these findings reveal a TAZ-Cybb-oxidative DNA damage pathway (see Figure below) that creates malignant liver cells and promotes the progression from MASH to HCC. This work has been published in the prestigious Journal of Hepatology.

Metabolic associated steatohepatitis (MASH) liver cells highly express a protein called TAZ, which promotes expression of the Cybb gene involved in the production of reactive oxygen species. These reactive oxygen species induce oxidative DNA damage, which transforms healthy liver cells into tumor cells and thereby promotes progression to hepatocellular carcinoma (HCC). Figure adapted from Wang et al. J Hepatol 2021, and created with Biorender.com.

The new pathway that Dr. Wang and colleagues discovered suggests that TAZ-based therapy could prevent MASH-HCC progression in humans with fatty liver disease. Such a therapy could have a big clinical impact, since it is estimated that about 12% of US adults have MASH. However, a limitation of MASH therapies is that the disease is often asymptomatic and difficult to identify until its late stages. Aside from focusing on treatments that reduce MASH progression after much of the damage has already occurred, our society should increasingly focus on preventive strategies. MASH is often caused by lifestyle-related factors, and the risk of MASH can be significantly reduced by maintaining a healthy weight, eating a healthy diet, and exercising regularly. Especially in the US, where MASH prevalence is expected to increase by 63% by 2030, raising awareness of the importance of a healthy lifestyle to prevent liver disease and liver cancer is paramount.

 

Reviewed by: Sam Rossano, Vikas Malik, Molly Scott

 

 

Tau about that! Alzheimer’s protein found in brains of COVID patients

It’s hard not to have COVID on the brain in today’s world – it seems like every conversation ends up on the topic! A recent study completed at Columbia explored the effect of COVID in the brain, by collecting brain samples from the mesial temporal cortex, a brain region implicated in Alzheimer’s disease and responsible for memory, and the cerebellum, a brain region responsible for coordination of movement and balance. Different cellular markers that indicate inflammation and protein build-up in the brain were measured in 10 patients who had passed away from COVID-19 and were compared to brains of those who did not have COVID-19 at the time of death. From this, the researchers were able to infer how COVID-19 infection may alter the brain, potentially causing the neurological symptoms in some COVID patients.

COVID-19 infection can lead to respiratory, cardiac, and neurological symptoms. About one in three COVID patients experience neurological symptoms, including loss of taste (hypogeusia), loss of smell (hyposmia), headache, disturbed consciousness, and tingling sensations in their limbs (paresthesia). The exact reason why these neurological symptoms occur is not well understood. In a recent publication, Dr. Steve Reiken and colleagues from the Department of Physiology and Cellular Biophysics at Columbia University Vagelos College of Physicians and Surgeons explore how factors associated with COVID infection, like inflammation, may lead to these neurological symptoms.

SARS-CoV-2, the virus that causes COVID-19, enters the body through the airways. The spike proteins on the surface of the SARS-CoV-2 virus facilitate entry into cells through the angiotensin converting enzyme 2 (ACE2) receptor. This leads to inflammation in the lungs and other organs. ACE2 receptors are downregulated during COVID infection, a pattern that has been tied to an upregulation of the inflammatory marker transforming growth factor-β (TGF-β) in other disease models, including cancer. Lower ACE2 activity has also been tied to greater concentrations of the Alzheimer's disease (AD) related proteins amyloid-β (Aβ) and phosphorylated tau. Perhaps the entry point of the SARS-CoV-2 virus activates inflammation pathways that affect the brain similarly to the way it is affected in AD, and might cause the neurological issues that sometimes come with COVID infection.

In the study, inflammation markers that reflect TGF-β levels were measured in the brain samples of COVID patients and compared to those of non-patients. Each of these measures was higher in the brain samples of COVID patients, suggesting that COVID infection contributed to more inflammation in the brain.

Inflammation may have downstream effects that can impact the function of healthy tissues. For example, the highly-regulated use of the calcium ion (Ca2+), which is a key player in cell-to-cell communication, can become impaired in conditions of inflammation. Specifically, the ryanodine receptor (RyR) is an ion channel protein which is responsible for Ca2+ release. When in an open configuration, Ca2+ can flow freely through the channel. To stop Ca2+ flow, helper proteins interact with the RyR to stabilize the closed configuration of the channel. Previous studies have suggested that these helper proteins are downregulated in inflammation, which means that the RyR is more likely to be unstable, resulting in excess Ca2+ flow, or a Ca2+ leak. Ca2+ leaks have been thought to contribute to a number of diseases, including the development of tau pathology in AD.

In Dr. Reiken and colleagues' study, indicators of typically functioning RyR were measured in the brain samples of COVID patients and non-patients. These measures included the amount of RyR channel in the open configuration (which means a lot of free-flowing Ca2+) and the concentration of the helper proteins that help the RyR remain stable in the closed configuration. The researchers found that there were fewer helper proteins in the COVID brains compared to the non-COVID brains. Additionally, more of the RyR channels were in an open configuration in the COVID brains compared to non-COVID brains. This means that Ca2+ leaks were more likely to happen in the brains of those infected with COVID-19.

In addition to cellular markers of inflammation and Ca2+ leaks, Dr. Reiken and colleagues also investigated levels of the AD-related proteins Aβ and aggregated tau in the brains of control subjects and COVID patients. For Aβ, relevant protein levels were similar between COVID patients and controls, suggesting that COVID does not cause the accumulation of Aβ in the brain. However, the concentration of phosphorylated tau, another protein that is highly implicated in AD pathology, was higher in the temporal lobe and cerebellum of COVID patients compared to control subjects.

To take this one step further, the researchers treated the COVID patients' brain samples with Rycal ARM210, a drug that is currently in clinical trials for other applications at the NIH (NCT04141670) and helps to reduce Ca2+ leak. With ARM210, helper protein levels in the COVID brain samples increased relative to the untreated COVID brain samples. Additionally, the amount of RyR in the open configuration decreased in the ARM210-treated COVID brain samples compared to the untreated samples. Thus, treatment with this drug may combat Ca2+ leak in brain tissue. If unstable RyR leads to Ca2+ leak, and Ca2+ leak can promote tau phosphorylation and buildup in the brain, then using the Rycal drug ARM210 to target and limit Ca2+ release may potentially be a way to treat these brain abnormalities in COVID-19 and possibly minimize neurological symptoms.

Given these results, the authors propose a mechanism by which infection with the SARS-CoV-2 virus may lead to protein aggregation similar to tau deposition in AD. An adaptation of the proposed mechanism is shown in the Figure below.

Figure 1 (above): Proposed mechanism for neurological symptoms of COVID-19 infection. Adapted from Reiken et al., 2022. Created with BioRender.com.

Though this is a very exciting study exploring the neurobiology of COVID brains, there are some additional things to consider. Firstly, while inflammatory markers were elevated in the brains of COVID patients, SARS-CoV-2 virus particles were not detectable in the brain. This suggests that these effects are caused by systemic factors and are not localized to cells that are infected with SARS-CoV-2. Additionally, in terms of the AD-related proteins, elevated phosphorylated tau protein was detected in the mesial temporal cortex and the cerebellum of COVID patients compared to controls. In AD, tau protein collects in the medial temporal cortex early in disease progression, but does not collect in the cerebellum. This, in addition to the lack of Aβ aggregation in the COVID patients' brain samples, is a marked difference between the pathology of the brain in AD and in COVID. However, the distribution and amount of tau protein in AD is linked to cognitive abilities, so perhaps the collection of tau in the brains of COVID patients contributes to cognitive symptoms like “brain fog”. The current study used brain samples from 10 COVID patients, but did not yet collect cerebrospinal fluid samples or use animal models to validate these findings. Future work that addresses these limitations and further questions may help us fully understand the role of COVID in the brain, and may help with treatments for those who are struggling with prolonged neurological symptoms of COVID.


Dr. Reiken, the first author of this work, is an Assistant Professor in the Columbia University Department of Physiology. Dr. Dridi, a Postdoctoral Fellow at Columbia, and Dr. Liu, a Postdoctoral Research Scientist at Columbia, also contributed to this work. Find the original research article here.

Reference:
Reiken S, Sittenfeld L, Dridi H, Liu Y, Liu X, Marks AR. Alzheimer's-like signaling in brains of COVID-19 patients. Alzheimer's Dement. 2022;1-11. https://doi.org/10.1002/alz.12558

Covid-19 and Immunity after Organ Transplant

Over the last two years, the SARS-CoV-2 (Covid-19) pandemic has been at the forefront of media coverage. Hospitals have been overwhelmed, entire cities locked down, travel banned, and we are all desperately waiting for a return to the normalcy that immunity promises. However, the development and retention of immunity can depend on the individual, and Covid-19 has been particularly daunting for individuals with weakened immune systems (people who are immunocompromised). These individuals are at an increased risk of succumbing to Covid-19. Overall, it has been easy to identify the individuals who fall into this risk category. However, there has been limited research on the immunity of individuals who have undergone organ transplants. In a new article by Dr. Mithil Soni, researchers identified the effects of a solid organ transplant (SOT) on the development and retention of immunity to a plethora of viruses, including SARS-CoV-2. SOTs are transplants of organs including the kidney, liver, heart, lungs, intestines, and pancreas.

Dr. Soni and colleagues focused on the immunity generated by T cells, immune cells that, beyond antibodies, play a role in clearing viruses that enter the body. In this study, they examined the immune response of one patient, a 33-year-old male suffering from erythropoietic protoporphyria, a genetic metabolic disorder that results in excessive liver damage. This subject underwent an SOT and received a liver. When undergoing an SOT, individuals are usually put through a stringent course of immunosuppressants to prevent organ rejection, which places them in the category of immunocompromised. To their surprise, during a check-up this patient was found to have antibodies against SARS-CoV-2, indicating a previous Covid-19 infection without any serious symptoms. The ability to overcome Covid-19 with minimal symptoms while being classified as immunocompromised intrigued Dr. Soni and colleagues, and the patient agreed to provide his blood for further testing.

The team went on to test the patient's immune response to many infections that usually impact immunocompromised individuals. They tested the blood's immune response to cytomegalovirus and BK virus, two viral infections that immunocompromised and SOT patients are prone to. They also tested the response to Epstein-Barr virus, which can cause mononucleosis. From the blood, Dr. Soni and colleagues were able to collect and grow the T cells in their lab, expose them to viruses, and measure their release of cytokines, proteins that are important for a strong immune response. They found a very strong T cell immune response against both cytomegalovirus and BK virus. They also tested the immune response to SARS-CoV-2 and other coronaviruses and found a T cell immune response at a level similar to that seen with cytomegalovirus and BK virus.

These findings indicated that the SOT patient continued to have a robust immune response to multiple viruses despite his immunocompromised status. The study shows that it is possible to have robust immune responses to viruses, including SARS-CoV-2, in an immunocompromised state such as that seen after an SOT. However, this research is based on a single case study. To truly understand T cell memory and activity in immunocompromised individuals, much more research has to be done. This means Dr. Soni and colleagues still have their work cut out for them, and they are actively expanding the research done here. Their next immediate step is to repeat this study with blood from a larger group of healthy and immunocompromised individuals, in the hope of eventually answering the question of how SOT affects immunity.

Figure: Depiction of preserved immunity after SOT. Top: liver transplant. Bottom: expected T cell activity in response to virus vs. actual T cell activity in response to virus.

 

Dr. Mithil Soni is a former Postdoctoral Research Fellow and current Associate Research Scientist at Columbia University.

Let’s get MDM2 and MDMX out of the shadow of p53

When it comes to cancer, one molecule stands out as being among the most extensively studied: the p53 tumor suppressor protein. p53 can exist in cells in several different forms. When p53 is in its so-called wild-type form, it is capable of activating various responses that contribute to tumor suppression. In their recent review, Columbia postdoc Rafaela Muniz de Quieroz and colleagues summarize the vast scientific literature on two key regulators of p53: MDM2 and MDMX. Both MDM2 and MDMX are known to interact with p53 and disrupt its function. Their dysregulation has been linked to increased cancer development, and their absence to a number of dysfunctions, including embryonic lethality in mice. MDM2 has been shown to negatively regulate p53 by diverse mechanisms, ranging from control of p53 gene expression to degradation of the p53 protein or its expulsion from the cell nucleus, where the protein accomplishes its function. Although very similar to MDM2, MDMX is less well studied. We do know, however, that MDMX can work together with MDM2 in p53 degradation.

While many reviews and studies have pointed to the roles of MDM2, and to a lesser extent of MDMX, in p53 regulation, the current review by Quieroz and her colleagues puts a larger focus on the myriad p53-independent activities of MDM2 and MDMX. The authors provide important details about the p53-independent functions both of MDMX alone and of the MDM2–MDMX complex. The review discusses key features of the structure and function of the two proteins, including the parts that are relevant for their function, for some associated abnormalities, or for the formation of MDM2–MDMX complexes.

MDM2 and MDMX are regulated on multiple levels within cells. These include regulation at the DNA level, such as the usage of several alternative promoters (DNA sequences needed to turn a gene on or off). One of the promoters of MDM2 and MDMX is regulated by their target p53, but there are also p53-independent promoters capable of switching on the genes of MDM2 and MDMX regardless of p53. In addition, numerous variations in the DNA sequences, so-called single nucleotide polymorphisms (SNPs), affect the expression of the two genes and are relevant to different pathologies. Regulation at the RNA level includes co-transcriptional regulation such as alternative splicing, as well as post-transcriptional regulation by microRNAs, long non-coding RNAs, circular RNAs, or RNA-binding proteins. The review also presents a detailed characterization of the regulation of MDM2 and MDMX at the protein level, summarizing data on numerous post-translational modifications and interacting partners of the two proteins, with regard to the different p53 contexts of the cited studies. Among the presented binding partners are some of the more recently identified interactors of the MDMs, which include proteins involved in the defense against several viruses. Overall, both MDM2 and MDMX stand out as extensively regulated at virtually every known level, which according to the authors “attests to their relevance not only as inhibitors of p53 but of myriad other cellular activities and outcomes on their own”.

Since MDM2 and MDMX have mostly been studied as inhibitors of wild-type p53, of particular interest is a section of the review summarizing the numerous processes in which the two proteins have been shown to be involved in cells lacking wild-type p53 (Figure 1).

Figure 1: Nonmalignant disease (left) and cancer-related (right) p53-independent functions of MDM2 and MDMX (adapted from Figure 4 of the review).

As shown in Figure 1, the p53-independent roles of MDM2 and MDMX in cancer and in other pathologies are versatile. That hints at the importance of uncovering molecules that can modulate the deleterious effects associated with dysfunction of the two MDMs. A summary of the numerous molecules that have been shown to regulate the two proteins, and that thus constitute potential therapeutic targets, is also provided in the review. Again, the authors put an emphasis on how such small molecules might be useful in cells that lack wild-type p53. This is important not only because the two proteins have multiple functions other than regulating wild-type p53, which can be studied in such cells, but also because a substantial percentage of tumors is characterized by the absence of wild-type p53.

The last section of the review points out some outstanding questions and directions for future research. If the fascinating questions surrounding the versatile p53-independent roles of MDM2 and MDMX have sparked your interest, find out more in the original paper.
