How our gut communicates with our brain to drive a preference for fat

Thanksgiving is just around the corner. The buttery sweet potato casserole, mashed potatoes, and gravy on the Thanksgiving dinner table are delicious and irresistible for most of us. Though fat from buttery food provides important building blocks for our body, overconsumption of fatty food could lead to weight gain and obesity-related diseases such as cardiovascular disease. To help keep our health in check, we need a better understanding of how fat consumption changes our desire for fatty food. A recent study led by Dr. Mengtong Li in the laboratory of Dr. Charles Zuker at the Zuckerman Mind Brain and Behavior Institute at Columbia University has started to reveal some insights. 

Previously, the research team discovered how sugar preference is established. Of the two pathways that process sugar intake, taste and gut, they found that the preference for sugar arises from the gut and is independent of taste. In line with this finding, the authors showed that artificial sweeteners do not create a preference because they activate only taste receptors, not the gut pathway.

Building on what they had learned about sugar preference, the authors first tested whether mice also have a taste-independent preference for fat. They gave the mice a choice between oily water and water with artificial sweetener, and recorded the number of times the mice licked each bottle as a measure of preference. They found that the mice drank predominantly from the bottle with oily water two days after first being exposed to the two choices. Even when the authors delivered fat directly to the gut through surgery, or tested mice that lacked taste receptors, the animals still developed a preference for fat. These observations suggest that mice develop a preference for fat through the gut pathway.
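For intuition, lick counts are typically converted into a simple preference score. The small Python sketch below shows one common two-bottle preference index; the exact way the paper quantifies preference may differ, so treat this purely as an illustration.

def preference_index(licks_option_a: int, licks_option_b: int) -> float:
    """Fraction of total licks directed at option A; 0.5 means no preference."""
    return licks_option_a / (licks_option_a + licks_option_b)

# e.g., a mouse that licked the oily-water spout 900 times and the sweetener spout 100 times
print(preference_index(900, 100))  # 0.9, a strong preference for the fatty option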

Figure 1. The gut-brain axis relays information about fat intake from the gut to the brain. The orange arrow represents the direction of information flow. The orange and red dots indicate activation of the vagus nerve and cNST, respectively. The blue dots represent the hormone cholecystokinin (CCK). The figure was generated using BioRender.

How does information about fat travel to the brain along the so-called gut-brain axis and make the mice want fat more than sugar? The authors traced the fat signal from gut to brain (Figure 1) using pharmacological and genetic tools. They identified two G protein-coupled receptors, GPR40 and GPR120, that function as fat detectors in the gut. Upon detecting the presence of fat, the gut releases signaling molecules, including the satiety hormone cholecystokinin (CCK), to relay the information to the vagus nerve. Interestingly, while control mice have no preference for cherry- versus grape-flavored solutions, the authors were able to create a new preference in experimental mice by artificially activating the subset of vagal neurons that receive cholecystokinin signals from the gut. The vagus nerve travels from gut to brain and ultimately delivers the fat signal to a brainstem region called the caudal nucleus of the solitary tract (cNST).

Together, the identification of this gut-brain communication pathway might help us battle overindulgence in fatty foods. Because stress eating can increase the consumption of high-calorie foods, it would also be interesting to study how gut-brain communication is modulated by different emotional states.

Edited by: Maaike Schilperoort, Trang Nguyen, Sam Rossano

Cleaning Up Data to Spruce Up the Results

Drawing conclusions from scientific studies can be difficult, in part because the data collected may be biased, which can lead to misinterpretation. Let's say we're collecting data to investigate how many hours of sleep people get per night during the week compared to over the weekend. We can ask 100 people their average nightly sleep time on weeknights and on weekends. To avoid bias, or skewing the data toward a particular duration, we should control for a few different factors. For example, we can limit our sample to people 18 years or older, to avoid surveying children, who tend to require more sleep than adults. This avoids introducing a bias in the hours-slept-per-night measure and prevents a trend in the data towards more than 8 hours a night.

 

Some biases cannot be totally avoided during data collection. This unavoidable bias motivates scientists to account for confounding variables in their analyses. When additional variables that change or differ across groups cannot be controlled for, scientists measure them as covariates. A covariate is a variable that changes along with the variable of interest, but isn't of particular interest or importance for the question at hand. In our example, there are some other variables that may affect the amount of sleep an adult gets. These can include age (a postdoc in their late 20s with a grant deadline might not get as much sleep as a retiree in their 60s), activity level (strenuous physical activity leads to more sleep for better recovery), and caffeine intake (maybe serial coffee drinkers sacrifice an extra hour of sleep for an extra-large cup in the morning). Because these variables may differ for each participant, we can measure them as observed covariates and include them in our statistical analysis.

 

Sometimes, as with many epidemiological or public health studies, it's difficult to measure or control for these covariates because the research relies on observational data from population-based studies, which might not measure all potential covariates. In these cases, unmeasured biases in the data can produce confounds, leading to imperfect conclusions. In our example, maybe we neglect to measure time spent on social media, which can affect someone's total sleep time (I can't be the only one who scrolls Instagram instead of going to sleep at night…). Time spent on social media would be our unobserved covariate, which contributes to unmeasured bias in our sample.

 

One way to address the problem of unmeasured bias is to pre-process the data – to fine-tune or clean up the data after it has been collected, but before statistical analysis is performed. In a recent paper, Columbia postdoc Dr. Ilan Cerna-Turoff and colleagues explored a pre-processing method that can be applied before analysis to reduce the bias introduced by unmeasured covariates in a dataset.

 

The pre-processing method investigated in this study is called “Full matching incorporating an instrumental variable (IV)”, or “Full-IV Matching”, which aims to reduce biases between groups and thereby improve the accuracy of study findings. An instrumental variable (IV) is a measured variable that is unrelated to the covariates but is related to the variable of interest. In our example, an IV could be how comfortable participants find their bed, something that is related to time spent asleep but not to age or the amount of coffee consumed.

 

To apply the Full-IV Matching method, the researchers define an IV and “carve out” moderate values of the variable to focus on the extreme values (highest and lowest) across the range of IV measures, essentially ignoring the center of the data set. With this abridged dataset, the researchers implement a “matching” algorithm that pairs individuals who have similar values in their covariates, but who do not have similar values in their IV. In our example, participants who have similar caffeine intake levels or similar ages would be paired with participants who have the opposite bed-comfort level. This explores how the biases in the dataset change when each measured covariate is individually controlled for. Additionally, the researchers can define how much weight should be given to the unobserved covariate, depending on how much bias may be introduced into the data by this unobserved covariate. 
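To make the matching step more concrete, here is a minimal Python sketch of the general idea described above: keep only the extreme ends of the IV, then pair covariate-similar individuals from opposite ends. The column names, the tercile cutoffs, and the greedy nearest-neighbour pairing are illustrative assumptions, not the authors' implementation.

import numpy as np
import pandas as pd
from scipy.spatial.distance import cdist

def full_iv_match_sketch(df, iv_col, covariate_cols, extreme_frac=0.33):
    """Keep the extremes of the IV, then pair high-IV rows with covariate-similar low-IV rows."""
    # "Carve out" moderate IV values: keep only the lowest and highest thirds.
    lo_cut = df[iv_col].quantile(extreme_frac)
    hi_cut = df[iv_col].quantile(1 - extreme_frac)
    low = df[df[iv_col] <= lo_cut]
    high = df[df[iv_col] >= hi_cut]
    # Distance between every high-IV and low-IV individual in covariate space.
    dist = cdist(high[covariate_cols].to_numpy(), low[covariate_cols].to_numpy())
    # Greedy matching: each high-IV individual gets the closest unused low-IV partner.
    pairs, used = [], set()
    for i in range(len(high)):
        for j in np.argsort(dist[i]):
            if j not in used:
                used.add(j)
                pairs.append((high.index[i], low.index[j]))
                break
    return pairs

# Toy data echoing the sleep example: bed comfort as the IV, age and coffee as covariates.
rng = np.random.default_rng(0)
survey = pd.DataFrame({
    "bed_comfort": rng.uniform(0, 10, 60),
    "age": rng.integers(18, 70, 60).astype(float),
    "coffee_cups": rng.poisson(2, 60).astype(float),
})
print(full_iv_match_sketch(survey, "bed_comfort", ["age", "coffee_cups"])[:5])

In a real analysis the matched pairs would then be carried forward into the outcome comparison, with covariates re-checked for balance after matching.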

 

As proof-of-concept, Dr. Cerna-Turoff and colleagues simulated data from a scenario based on the Haitian Violence against Children and Youth Survey. Specifically, data were simulated based on measurements of the social characteristics and experiences of young girls in Haiti who were displaced either to a camp (“exposure” group) or to the wider community (“comparison” group) after the 2010 earthquake. The goal of this simulation experiment was to better understand how the displacement setting may be associated with risk of sexual violence. The researchers simulated data for 5 baseline covariates based on results from the Haitian Violence against Children and Youth Survey: (1) status of restavek (indentureship of poor children for rich families), (2) prior sexual violence, (3) living with parents, (4) age, and (5) social capital, of which the latter is an unobserved covariate. They also generated data for an exposure (camp or community), an outcome (sexual violence against girls), and an IV (earthquake damage severity). The researchers then explored how the outcome was affected by the covariates and the IV by quantifying the standardized mean difference of each variable across the exposure and comparison groups. A standardized mean difference close to 0 indicates that the value of the variable did not differ between the two groups, suggesting that this variable is not introducing bias into the analysis of group differences.
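For reference, the standardized mean difference used to check balance is straightforward to compute. The sketch below uses the common pooled-standard-deviation form; the exact estimator used in the paper may differ, and the numbers here are simulated.

import numpy as np

def standardized_mean_difference(group_a, group_b):
    """Difference in group means divided by the pooled standard deviation."""
    pooled_sd = np.sqrt((np.var(group_a, ddof=1) + np.var(group_b, ddof=1)) / 2)
    return (np.mean(group_a) - np.mean(group_b)) / pooled_sd

# Simulated ages for the exposure (camp) and comparison (community) groups.
rng = np.random.default_rng(1)
camp, community = rng.normal(14, 2, 200), rng.normal(14.1, 2, 200)
print(round(standardized_mean_difference(camp, community), 3))  # near 0 means well balanced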

 

The results suggest that those who were displaced to a camp were at a higher risk of sexual violence than those who were displaced to the wider community, when correcting for all observed covariates. Additionally, the method successfully balanced the groups when correcting for the unobserved covariate of social capital. If not corrected for, differences in social capital might have confounded these results, such that girls with a stronger support network would appear to be at lower risk. With the Full-IV Matching method, however, bias across the exposure and comparison groups was reduced for both the observed covariates and the unobserved covariate of social capital, suggesting that neither social capital nor the observed covariates contributed to the difference in risk of sexual violence observed between the two groups.

 

This study provides a proof-of-concept for a pre-processing method for reducing bias across a data set. The authors mention limitations including the effect of the method on sample size and the ‘bias-variance trade-off’, in which increases in accuracy (less bias) may lead to more noise (higher variability) in the data. Ultimately, this type of methodology can aid in the correction of both observed and unobserved biases in population-based data collection, which has significant implications in epidemiologic studies, where not all sources of bias can be measured effectively.

 

Edited by: Emily Hokett, Pei-Yin Shih, Maaike Schilperoort, Trang Nguyen

A how-to guide for improving the potency of stem cells

You may remember Dolly, the sheep that became famous in the ‘90s as the first mammal to be cloned from an adult cell. Dolly was created through somatic cell nuclear transfer (SCNT), in which the nucleus from a somatic donor cell, i.e., a cell from the body other than a sperm or egg cell, is transferred into an enucleated egg cell. In this case, the donor cell was derived from a sheep’s mammary glands, a medical term for the breasts. The scientists named the cloned sheep Dolly since they could not think of a more impressive pair of mammary glands than Dolly Parton’s, or so the story goes. Aside from generating viable embryos in the laboratory, SCNT can be used to generate human stem cell lines for research and therapeutic purposes. However, this procedure is technically challenging and requires egg cells, which raises ethical concerns.

Artist’s impression of Dolly Parton, the famous American country singer, holding the cloned sheep named after her.
© 2022, Maaike Schilperoort

In 2007, a lab in Kyoto, Japan, found another way of generating human stem cells. The group infected human skin cells with a virus carrying a set of genes known to be important for embryonic stem cells. This resulted in so-called “induced pluripotent stem cells”, or iPSCs, which are functionally equivalent to embryonic stem cells. Although therapeutically promising, iPSCs do not have the same potency as cells generated through SCNT. SCNT generates cells that are totipotent at an early stage, meaning that they can form viable embryos as well as extraembryonic tissues such as the placenta and yolk sac. In contrast, iPSCs are pluripotent and are not able to give rise to extraembryonic tissues. They also have an inferior differentiation potential and a lower proliferation rate compared to totipotent cells.

Efforts have been made by scientists to make embryonic stem cells and iPSCs more totipotent by treating them with small-molecule inhibitors, resulting in so-called expanded potential stem cells (EPSCs) that can give rise to the embryo as well as placental tissues and are thus more versatile than their pluripotent counterparts. However, the developmental potential of EPSCs is still inferior to that of true totipotent cells or cells generated through SCNT. To gain insight into how the developmental potential of EPSCs can be improved, Columbia postdoc Vikas Malik and colleagues performed a deep analysis of pluripotent embryonic stem cells versus the more totipotent EPSCs. They examined gene expression, DNA accessibility, and protein expression, and found some unique genes and proteins that are upregulated in EPSCs as compared to embryonic stem cells, such as Zscan4c, Rara, Zfp281, and UTF1. This pioneering work, published in Life Science Alliance, shows us which genes and proteins to target to generate authentic totipotent stem cells in a petri dish.

The work of Dr. Malik and colleagues has improved our understanding of how to generate totipotent cells outside of the human body without having to deal with the technical and ethical challenges of SCNT. These cells can further improve stem cell therapy through a greater ability to regenerate and repair tissues affected by damage or disease. In addition, totipotent cells are more suitable to study early development and problems of the reproductive system, and are optimal for gene therapy to correct genetic defects that cause disease. As the word indicates, totipotent cells really hold all the power, and could greatly advance scientific knowledge and regenerative medicine.

More information on the pursuit of totipotency can be found in this comprehensive review article by Dr. Malik and his PI Jianlong Wang published in Trends in Genetics.

Reviewed by: Trang Nguyen and Vikas Malik

Lactic acid – a new fuel source in brain tumors

What does lactic acid do to the body?

Lactic acid is produced when the body breaks down carbohydrates to generate energy under low-oxygen conditions. It is mainly found in muscle cells and red blood cells, and is produced, for example, during intense exercise.

Glucose, glutamine, fatty acids, and amino acids are well-known energy sources for cell growth and division. Lactic acid has long been regarded as a mere by-product of glycolysis, the process in which glucose is broken down through several enzymatic reactions without the involvement of oxygen. However, recent studies have shown that lactic acid is a key player in cancer cells, regulating tumor growth and division, blood vessel formation, and invasion. Tumor cells prefer to use glycolysis to produce energy and lactic acid even when oxygen is abundant, and lactic acid serves as an alternative fuel that allows glucose-deprived tumors to avoid cell death.

Lactic acid is transported across the cell membrane via the monocarboxylate transporter 1 (MCT1). A research group at Columbia University led by Dr. Markus Siegelin in the Department of Pathology and Cell Biology showed a substantial contribution of lactic acid to the citric acid cycle (TCA cycle), a series of chemical reactions that generate energy, in glioblastoma cells cultured under nutrient deprivation (low glucose and glutamine concentrations). When the glucose and/or glutamine concentrations increased, lactic acid contributed less to the TCA-cycle metabolites. The lactic acid taken up into the TCA cycle was traced using carbon-13 (13C) tracing and analyzed by liquid chromatography-mass spectrometry to identify the different molecules it was incorporated into. The researchers concluded that lactic acid is used as a fuel source to generate energy in brain tumor cells. Furthermore, lactic acid is converted to acetyl-CoA and contributes to epigenetic gene modification in glioblastoma cells (Figure 1). These novel findings were published in the journal Molecular Cell.

Figure 1: Role of lactic acid in the epigenetic modification of glioblastoma cells. Lactic acid is transported across the membrane via the monocarboxylate transporter 1 (MCT1) and contributes to the TCA cycle as a fuel source to generate energy. Lactic acid is also converted to acetyl-CoA and contributes to epigenetic gene modification in glioblastoma cells. Suppressing the TCA cycle with the targeted drug CPI-613 (devimistat) abrogates the use of lactic acid for energy production. The figure was generated using BioRender.

Based on these findings, the authors proposed using the drug CPI-613 (devimistat), which targets TCA-cycle metabolism (Figure 1), to treat glioblastoma. Indeed, CPI-613 suppressed the viability of glioblastoma cells in vitro and extended survival in a mouse model. The authors suggested that combining CPI-613 with standard-of-care treatments for glioblastoma, such as temozolomide and radiation, could be a potential clinical therapy for patients with glioblastoma.

Read more about this exciting finding here:

https://www.sciencedirect.com/science/article/pii/S1097276522006475 

Reviewed by: Pei-Yin Shih, Sam Rossano, Emily Hokett

Alcohol Use Disorder – are we making the right diagnosis?

Do you and your friends enjoy the occasional cocktail or two over the weekend? Maybe we know someone who enjoys the more-than-occasional cocktail. But at what point do our drinking habits significantly affect our health? Recent studies suggest that 6% of adults in the United States report heavy or high-risk consumption of alcohol, defined as an average of more than 7 drinks per week for women and more than 14 drinks per week for men. This high-risk consumption may lead to Alcohol Use Disorder (AUD) if it is repeated for one year or more. AUD is associated with a number of medical and psychiatric problems, and can even increase the risk of death in patients who have cancer or cardiovascular disease.

To diagnose AUD, medical and mental health professionals use the 5th edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), which explores 11 criteria, including alcohol-related cravings, strains on relationships caused by alcohol use, feeling unable to cut back or stop drinking, dangerous or risky behavior when under the influence of alcohol, etc. Unlike previous versions of the DSM, these AUD diagnoses are divided based on severity, where people who experience 0 or 1 of the diagnostic criteria do not have AUD (no-AUD), 2-3 criteria have mild AUD, 4-5 criteria have moderate AUD, and 6+ have severe AUD. However, it’s not well understood whether other factors like the extent of alcohol use, the degree of cravings or impairments, etc. can help classify mild, moderate, and severe AUD diagnoses. 
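To make those severity bands concrete, here is a toy Python sketch; the criteria counts come from the DSM-5 description above, while the function itself is purely illustrative.

def aud_severity(criteria_met: int) -> str:
    """Map the number of DSM-5 criteria met to the severity bands described above."""
    if criteria_met <= 1:
        return "no AUD"
    if criteria_met <= 3:
        return "mild AUD"
    if criteria_met <= 5:
        return "moderate AUD"
    return "severe AUD"

print([aud_severity(n) for n in (1, 2, 4, 7)])
# ['no AUD', 'mild AUD', 'moderate AUD', 'severe AUD']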

Last year, Dr. Zachary L. Mannes, a postdoc in the Department of Epidemiology at Columbia University Mailman School of Public Health and the New York State Psychiatric Institute, and colleagues published a study exploring potential relationships between the severity of AUD (no-AUD, mild, moderate, or severe, based on the DSM-5) and self-reported measures of other factors, or “external validators”, such as levels of alcohol craving, functional impairment, and psychiatric conditions. To do this, they collected AUD diagnoses as well as measures of external validators from 588 participants. These validators included alcohol-specific validators (craving, problematic use, harmful use, binge drinking frequency), psychiatric validators (major depressive disorder/MDD and posttraumatic stress disorder/PTSD), and functioning validators (social impairments; physical and mental impairments).

Dr. Mannes and colleagues reported that, in this cohort, participants with alcohol use validators had a significantly greater likelihood of a mild, moderate, or severe AUD diagnosis than of a no-AUD diagnosis. Participants with psychiatric validators like MDD and PTSD had a significantly greater likelihood of a severe AUD diagnosis than of a no-AUD diagnosis; this relationship was not seen for mild or moderate AUD. Likewise, participants with social, physical, and mental impairments had a greater likelihood of severe AUD than of no-AUD, but this was not seen for mild or moderate AUD. When looking only within participants with an AUD diagnosis (i.e., excluding no-AUD), those with many alcohol-specific, psychiatric, and functional validators were more likely to have severe AUD than mild or moderate AUD.

Overall, the results of this study support the structure of the DSM-5 diagnosis for AUD: mild and moderate AUD diagnoses were significantly associated with alcohol use validators, while severe AUD was significantly associated with alcohol use, psychiatric, and functional validators. In other words, people with severe AUD were more likely to have symptoms that affected other aspects of their lives, including impairments in social functioning and psychiatric conditions such as MDD and PTSD. This study emphasizes the importance of grading AUD by severity, as the current DSM-5 does, rather than using the binary yes/no diagnosis of older versions of the DSM. It also helps refine how AUD is diagnosed and may clarify treatment implications for different levels of AUD severity. The work of Dr. Mannes and colleagues supports and advances AUD research aimed at better understanding and characterizing the symptoms, comorbidities, and diagnosis of AUD, so that medical professionals can better assist those who are struggling with the disorder.

Edited by: Trang Nguyen, Maaike Schilperoort

Metastatic cancer cells have unstable DNA, which helps them evade the body’s immune system

Melanoma brain metastasis (MBM) frequently occurs in patients with late-stage melanoma (skin cancer), which is the third leading cause of brain metastases after lung and breast cancers. Cancer cells break away from the primary tumor and travel to the brain through the bloodstream. Despite significant therapeutic advances in the treatment of metastatic cancers, MBM remains challenging to treat because of the blood-brain barrier. MBM can cause a variety of symptoms that resemble those of primary brain tumors, such as headache, difficulty walking, or seizures. To comprehensively characterize the cells inside melanoma brain metastases, Jana Biermann, a postdoc in Dr. Benjamin Izar’s lab at Columbia University, performed single-nucleus RNA sequencing, T cell receptor sequencing, and CT scans of 22 treatment-naive MBM and 10 extracranial melanoma metastases, a dataset that could spur the development of a new generation of therapies (Figure 1).

Figure 1: A comprehensive study of melanoma brain metastases and extracranial melanoma metastases based on single-cell genetic analyses of frozen samples. snRNA-seq: single-nucleus RNA sequencing; TCR-seq: T cell receptor sequencing. Image was created with BioRender, based on Figure 1A of the original article published in Cell, titled “Dissecting the treatment-naive ecosystem of human melanoma brain metastasis”.

The authors also analyzed the genes expressed in 17 melanoma brain metastases and 10 extracranial melanoma metastases. The data revealed unstable DNA in the melanoma brain metastases compared with the extracranial melanoma metastases. This DNA instability triggers signaling pathways that enable the tumor cells to spread around the body and to suppress the body’s natural immune response that normally fights off tumor cells. The researchers also found that the relocated melanoma cells adopt a neuronal-like state that might help them adapt and survive after they migrate to the brain. Furthermore, by using CT scans of multiple slices of the tumors, the researchers created three-dimensional images of the tumors and uncovered heterogeneity in metabolic and immune pathways within and between tumors.

The authors also found that the cancer cells in the brain expressed significantly higher levels of several genes known to promote cancer progression, such as MET and PI3K, while the extracranial melanoma metastases strongly expressed genes related to epithelial cells, the cells that cover the inside and outside surfaces of the body, such as the skin and blood vessels. Understanding these pathways will help identify therapeutic targets.

A limitation of the study is that the authors did not compare melanoma brain metastasis and extracranial melanoma metastases within the same patients, which could have introduced variability in their dataset. Nevertheless, the atlas that they built provides a foundation for further mechanistic studies on how different perturbations could influence brain metastasis ecosystems.

Reviewed by: Pei-Yin Shih, Sam Rossano, Maaike Schilperoort

A pharmacological approach to lifestyle-related metabolic disorders – does it make anti-sense?

Non-alcoholic fatty liver disease (NAFLD) is a group of chronic, progressive liver diseases not caused by alcohol, which includes non-alcoholic fatty liver (NAFL) and non-alcoholic steatohepatitis (NASH). It starts with fat accumulation, progresses to inflammation, swelling, and liver enlargement (NASH), then to fibrosis and the replacement of dead cells by scar tissue (cirrhosis), and can finally result in cancer (hepatocellular carcinoma). A figure describing this progression has been published by my colleague Maaike Schilperoort in an article describing emerging therapeutic strategies for fatty liver-related cancer.

NASH is the most severe form of NAFLD before the disease progresses into the irreversible stages of cirrhosis and cancer. It remains under-diagnosed because it is asymptomatic or accompanied by non-specific symptoms. Individuals with hypertension, high cholesterol, overweight or obesity, diabetes, or insulin resistance are at greater risk of developing NASH. It is largely a lifestyle-associated metabolic disorder made worse in individuals with obesity and type 2 diabetes. Current treatment modalities focus on lifestyle interventions and management of co-existing conditions. A lack of specific, targeted pharmacological recommendations with proven efficacy complicates NAFLD management.

Junjie Yu and colleagues conducted a comprehensive study using data from a clinical trial of patients with NASH together with mouse models of NASH. They identified a gene, Jagged 1 (JAG1), whose expression was increased in patients who had NASH and fibrosis. They used this key finding to design mouse experiments that further probed whether JAG1 reduces or worsens NASH and liver fibrosis, and they applied a cell-targeted strategy to test potential therapeutic interventions for NASH.

To mimic human NASH, they fed mice a NASH-inducing diet rich in saturated fat, sucrose, and cholesterol, together with fructose-containing drinking water. The mice developed liver steatosis, inflammation, fibrosis, weight gain, and insulin resistance, which are features seen in patients with NASH. JAG1 was increased in the liver, and this correlated with an increase in fibrotic markers in mice fed the NASH-inducing diet. An interesting observation that directed the rest of the study was that JAG1 was increased in the liver-specific cell type called hepatocytes. The researchers used a virus-mediated gene delivery method to increase or decrease Jag1 in the hepatocytes of mice fed the NASH-inducing diet. Increasing Jag1 exacerbated the fibrosis induced by the NASH diet, whereas decreasing Jag1 protected the mouse liver from developing fibrosis. Based on these insights, they used a technology called antisense oligonucleotides (ASOs) to block Jag1 expression in mice fed the NASH-inducing diet. Mice treated with a Jag1 ASO had reduced expression of JAG1 at the gene and protein levels, along with a reduction in inflammatory and fibrotic markers. However, because this method would target all cell types, a hepatocyte-specific Jag1 inhibitor was developed. Mice fed the NASH-inducing diet and treated with the hepatocyte-specific Jag1 inhibitor showed decreased Jag1 in the liver as well as reduced liver fibrosis.

This is a very interesting approach that could lead to specific, targeted pharmacological treatment of NASH. ASOs are short, single-stranded nucleotide sequences that can be designed to target specific genes of interest (like Jag1 in this case) in cells. They alter protein expression during the process of translation from RNA to protein (Fig 1). Because they are made to target specific genes and cells, they have a higher chance of success. Currently there are 15 FDA-approved ASO-based drugs for conditions ranging from neurodegenerative disorders to cancer. The main limitations of ASOs are enzymatic degradation of the oligonucleotides and their removal from the body by the kidneys. Further research into optimizing ASO delivery and safety could lead to the development of therapies for disorders that require targeted pharmacological interventions.

Figure 1. A. A schematic of the normal transcription and translation processes involved in protein synthesis. B. ASO-mediated disruption of protein synthesis. Figure created using BioRender.com.

Currently, pharmacological therapies for NAFLD are recommended for individuals who do not achieve the expected weight loss and for those with stage 2 or greater NASH-induced fibrosis. Lifestyle changes may not be possible for all individuals with metabolic disorders for various reasons, including socio-economic constraints, limited food resources, and disabilities. Although lifestyle interventions such as weight management, a healthy diet, and regular exercise have been shown to reduce symptoms and help manage the disorder, they work best when the disease is diagnosed at an early stage. Given that NASH has no specific symptoms and is grossly under-diagnosed, a treatment option for later stages of the disorder could alleviate the disease burden. Specific, targeted pharmacological approaches are therefore a feasible strategy and, when combined with lifestyle changes, may be a more efficient way to treat NAFLD.

Reviewed by: Trang Nguyen and Samantha Rossano


Why do COVID-19 patients have trouble breathing?

The COVID-19 pandemic has resulted in over 145 million positive cases and 3.1 million deaths globally (32 million and 570,000 in the USA, respectively), as reported on April 26, 2021. Approximately 15% of patients infected with SARS-CoV-2 die from respiratory failure, making it the leading cause of death in COVID-19 patients.

A research group at Columbia University led by Dr. Benjamin Izar identified substantial alterations in cellular composition, transcriptional cell states, and cell-to-cell interactions in the lungs of COVID-19 patients. These findings were published in the journal Nature. The team performed single-nucleus RNA sequencing, a method for profiling gene expression in individual cells, on the lungs of 19 patients who died of COVID-19 and underwent rapid autopsy. The control group included seven patients who underwent lung resection or biopsy in the pre-COVID-19 era (Figure 1).

Figure 1: An overview of the study design, in which single-nucleus RNA sequencing was used to characterize the lungs of patients who died from COVID-19-related respiratory failure. A) mRNA was extracted from the lung tissue. B) The mRNA sequences were read by a sequencing system. C) Gene expression in lung cells from COVID-19 patient samples was compared with control samples. PMI: post-mortem interval. snRNA-seq: single-nucleus RNA sequencing. QC: quality control.

The lungs from individuals with COVID-19 were highly inflamed but had impaired T cell responses. The single-nucleus RNA sequencing showed significant differences in cell fractions between COVID-19 and control lungs, both globally and within the immune and non-immune compartments. There was a reduction in the epithelial cell compartment, the cells that form the surfaces of organs in the body and function as a protective barrier. There was also an increase in monocytes (i.e., white blood cells that can develop into macrophages) and macrophages (i.e., cells involved in the detection, phagocytosis, and destruction of bacteria and other harmful organisms), an expansion of fibroblasts (i.e., cells that contribute to the formation of connective tissue, discussed further below), and a decrease in neuronal cells. These observations were independent of donor sex.

Changes in the monocyte/macrophage and epithelial cell populations were unique features of SARS-CoV-2 infection compared with other viral and bacterial causes of pneumonia. The reduction in the epithelial cell compartment was due to the loss of both alveolar type II and type I cells. Alveolar type II cells repopulate the epithelium after injury and provide important components of the innate immune system. In COVID-19 lungs, alveolar type II cells adopted an inflammation-associated transient progenitor cell state and failed to fully transition into alveolar type I cells, resulting in impaired lung regeneration.

Myeloid cells (i.e., monocytes, macrophages, and dendritic cells) represented a major cellular constituent of COVID-19 lungs and were more prevalent than in control lungs. The authors found that a receptor tyrosine kinase important for the coordinated clearance of dying and dead cells, and for subsequent anti-inflammatory regulation during tissue regeneration, was downregulated. These data suggest that myeloid cells are a major source of dysregulated inflammation in COVID-19.

The authors also found significantly more fibroblasts in COVID-19 lungs than in control lungs. The degree of fibrosis correlated with disease duration, indicating that lung fibrosis increases over time in COVID-19. 

The authors note a limitation of the study: because they analyzed lung tissue only from patients who died of COVID-19, they examined just a subset of potential disease phenotypes. Based on the authors’ observations, the rapid development of pulmonary fibrosis is likely to be relevant for patients who survive severe COVID-19. This atlas may inform our understanding of the long-term complications experienced by COVID-19 survivors and provides an important resource for therapeutic development.

Read more about this article here: A molecular single-cell lung atlas of lethal COVID-19

Reviewed by: Molly Scott and Maaike Schilperoort

Making sense of COVID-induced loss of smell

The coronavirus SARS-CoV-2 has caused more than six million confirmed deaths worldwide over the course of the COVID-19 pandemic. While SARS-CoV-2 enters the body through the respiratory system and into the lungs, it can also damage other organs. For instance, the sense of smell, which is mediated by the olfactory sensory neurons in our nose along with our brain, is lost in some COVID patients. How this virus affects our ability to smell is a puzzling question, and one that has been investigated by a team led by Dr. Zazhytska in the Lomvardas lab at Columbia University. They worked tirelessly on solving this puzzle throughout the COVID shutdown period, and their discoveries, recently published in the journal Cell, have started to provide some key answers.

We can smell the scents around us because the olfactory receptors in our olfactory sensory neurons bind to odorant molecules, relay the information through signaling molecules, and eventually signal to our brain (Figure 1). Dr. Zazhytska and her colleagues found that SARS-CoV-2 was rarely detected in the olfactory sensory neurons themselves, indicating that the virus probably doesn’t gain access to our brain through these sensory neurons. In fact, the most commonly infected cells are the neighboring sustentacular cells (Figure 1b), which are important in maintaining the health of the layer of olfactory cells, including the neurons. If the sustentacular cells die, the sensory neurons can be exposed to a stressful environment without support. Thus, the shutdown of the olfactory system might be an indirect effect of SARS-CoV-2 infection.

Figure 1 The basic structure of the olfactory system.
(A) Signal transduction in olfactory sensory neurons. The cell membrane separates the interior of the cell (cell cytoplasm, bottom) from the outside environment (top).
(B) Anatomy of cells in the nose that are involved in smell perception.
(Figure was made using BioRender).

There are about four hundred olfactory receptor genes scattered across our genome, and each neuron expresses only one of them. This stringent setup is achieved by interactions between multiple chromosomes that bring all the dispersed olfactory receptor genes together into a cluster in the nucleus of the neuron. This clustered arrangement of olfactory receptor genes allows the gene expression machinery to access and turn on only one receptor at a time. Remarkably, Dr. Zazhytska and her colleagues discovered that this organization is dramatically disrupted after SARS-CoV-2 infection in both hamsters and humans. Infected individuals also show reduced expression not only of receptor genes, but also of key molecules involved in smell perception, likely as a consequence of the disrupted organization.

Interestingly, when the team exposed uninfected hamsters to UV-treated serum from SARS-CoV-2-infected hamsters, which no longer contained virus, they still observed the same disorganization of olfactory receptor genes in the animals. This observation suggests that it is not the virus itself but some other circulating molecule(s) that trigger the abnormal organization. Identifying these molecules may point to potential treatments for COVID-induced loss of smell, as well as for other diseases that affect our olfaction, including early-onset Alzheimer’s disease.

Edited by: Sam Rossano, Eric Smith, James Lee, Trang Nguyen, Maaike Schilperoort

What’s in Your Water? Arsenic Contamination and Inequality.

Water is one of the most essential elements for life. Every living creature requires access to a water source, and humans are no exception. Unfortunately, access to clean drinking water continues to be a challenge for many individuals across the globe. Systematic studies of water inequalities in the U.S. alone indicate increased contamination in areas that are often dismissed or underserved. Arsenic, a human carcinogen (cancer-causing substance) that is predominantly released into water flowing through rock formations, has previously been measured at dangerous levels in U.S. water sources. This finding led the U.S. Environmental Protection Agency (EPA) to mandate in 2001 that arsenic contamination levels stay below a maximum of 10 µg/L, resulting in enhanced water filtration and arsenic removal. However, whether this mandate was effective across all demographic areas remained unknown until Dr. Nigra, a previous postdoc and current assistant professor at Columbia’s Mailman School of Public Health, and colleagues took on the challenge of finding out.

Through extremely diligent research, Dr. Nigra and colleagues examined arsenic exposure in community water systems across the U.S. to identify whether certain populations are exposed to arsenic levels above the maximum mandated by the EPA. They gathered data from the EPA’s public water database, which monitors public water for contaminants, and analyzed water contaminant data from 46 states, Washington DC, the Navajo Nation, and American Indian tribes for the periods 2006-2008 and 2009-2011, estimating overall arsenic concentrations across the different regions (Figure). They also broke the data down by concentrations across different subgroups of individuals.

Overall, Dr. Nigra and colleagues identified a 10% reduction in water arsenic exposure. They found a reduction in arsenic concentrations in the New England, Eastern Midwest, and Southwest regions of the U.S. over the six-year period. They also found reductions in subgroups described as most rural mid-socioeconomic status (SES), semi-urban high-SES, and rural high-SES. However, some communities still had arsenic levels that exceeded the maximum mandated by the EPA (Figure). These were predominantly Hispanic communities located in the Southwestern U.S. Furthermore, there was not enough data to determine whether there was a significant reduction in arsenic levels in tribal community water sources. Therefore, while there was an overall reduction in arsenic levels, there is still room for improvement. Hispanic communities in the Southwestern U.S. remain at elevated risk for cancer due to this increased exposure to carcinogens. To combat this exposure, more financial and technical resources, such as additional arsenic treatment systems, are needed to reduce arsenic levels. Moreover, it is very possible that under-reported arsenic levels in tribal communities are putting those individuals at increased risk. Dr. Nigra and colleagues have investigated an extremely impactful environmental factor, and now, with their research, we are all a bit more aware of what’s in our water.

Figure: Maps of counties in compliance with the EPA’s maximum arsenic concentration cutoff of 10 µg/L (top) and average water arsenic concentrations across the six-year period (bottom). Top map: Low/Low: less than 10 µg/L over the six years; High/Low: greater than 10 µg/L in 2006–2008 but less than 10 µg/L in 2009–2011; Low/High: less than 10 µg/L in 2006–2008 but greater than 10 µg/L in 2009–2011; and High/High: greater than 10 µg/L in both periods. Figure adapted from Figure 3 and Figure 4, Nigra et al., 2020.
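For readers who like to see the figure’s categories spelled out, here is a small Python sketch of how a county could be labelled from its two period-average arsenic concentrations. The 10 µg/L cutoff comes from the EPA limit discussed above; the function and variable names are illustrative only.

EPA_LIMIT_UG_L = 10.0  # EPA maximum contaminant level for arsenic, in µg/L

def compliance_label(avg_2006_2008: float, avg_2009_2011: float) -> str:
    """Two-period label used in the map: Low = at or below the limit, High = above it."""
    early = "High" if avg_2006_2008 > EPA_LIMIT_UG_L else "Low"
    late = "High" if avg_2009_2011 > EPA_LIMIT_UG_L else "Low"
    return f"{early}/{late}"

print(compliance_label(12.3, 7.8))  # 'High/Low': exceeded the limit early, compliant later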

 

Dr. Anne Nigra is a current assistant professor and previous postdoc in Environmental Health Sciences at Columbia University’s Mailman School of Public Health.

Reviewed by: Molly Scott, Maaike Schilperoort
