“CRACK”ing cocaine addiction with medication

Cocaine is a highly addictive stimulant drug made from the leaves of the coca plant that alters mood, perception, and consciousness. It is consumed by smoking, injecting, or snorting. According to the United Nations Office on Drugs and Crime, an estimated 20 million people used cocaine in 2019, almost 2 million more than the previous year. Cocaine causes dopamine to accumulate in the brain; this chemical messenger plays an important role in how we feel pleasure and encourages us to repeat pleasurable activities. The resulting dopamine rush drives people to continue using the drug despite the cognitive, behavioral, and physical problems it causes, leading to a condition referred to as cocaine use disorder (CUD). CUD-related physical and mental health issues range from cardiovascular diseases such as heart attack, stroke, hypertension, and atherosclerosis to psychiatric disorders and sexually transmitted infections.

According to the CDC, cocaine use was responsible for 1 in 5 overdose deaths. Though almost all users who seek treatment for CUD receive psychosocial interventions such as counseling, most continue to use cocaine. Pharmaceutical medication may increase the effectiveness of psychosocial interventions: medications for other substance use disorders (opioid and alcohol) have been shown to block euphoric effects, alleviate cravings, and stabilize brain chemistry. However, there are currently no FDA-approved drugs to treat CUD.

Dr. Laura Brandt and colleagues have systematically reviewed the research available up to 2020 on pharmacological CUD treatment. In this review, they discuss the potential benefits and shortcomings of current pharmacological approaches for CUD treatment and highlight plausible avenues and critical considerations for future study. The authors reviewed clinical trials in which cocaine use was the primary disorder and the medications tested fell into four categories: dopamine agonists, dopamine antagonists/blockers, medications with new mechanisms of action, and combinations of medications.

Dopamine agonists are medications with a mechanism of action similar to that of cocaine, i.e., they can act as substitutes for cocaine without its adverse health effects. Dopamine releasers and uptake inhibitors fall under this category and have shown the most promising signs so far, reducing cocaine self-administration in cocaine-dependent participants. Dopamine uptake inhibitors bind to the dopamine transporter and prevent dopamine reuptake from the extracellular space back into the brain cell. People with CUD typically exhibit blunted dopamine signaling, such as low levels of dopamine release and reduced availability of dopamine receptors for dopamine to bind to. Substitute medications counteract this dopamine hypoactivity through slow release of dopamine, which in turn reduces responses such as cocaine cravings and withdrawal symptoms, common causes of relapse. A common concern with dopamine agonists is the possibility of replacing cocaine addiction with addiction to the medication. However, there is no strong evidence of such secondary abuse, nor of increased cardiovascular risk, when agonists are used as treatment.

Dopamine antagonists/blockers are substances that bind to dopamine receptors, preventing the binding of dopamine and thereby blocking the euphoric effects of cocaine. This approach facilitates a decrease in cocaine use because the rewarding effects of the drug are absent. Antipsychotic medications, anti-cocaine vaccines, modulators of the reward system, and noradrenergic agents fall under this category. This approach is generally considered less effective for CUD treatment, as it requires high levels of motivation both to start treatment and to maintain it.

New medications are those currently in clinical trials, being tested in humans for the treatment of CUD. Combination pharmacotherapy is an interesting approach that involves combining two medications to treat CUD. The absence of FDA-approved medications limits exploration in this direction.

Having reviewed these data and their shortcomings, the authors point out an important consideration: the shortcomings of these studies depend on more than just the medication. On one hand, procedural limitations, such as the dosage and formulation of the medication, whether participants completed the medication course, and whether participants were given incentives, may have hindered the success of these studies. On the other hand, individuals seeking treatment are not all the same: they differ in cocaine use severity, co-occurring mental illness, and substance use disorders other than CUD, and their genetics may also play a role in the success of their treatment. Pharmacotherapy for CUD is not one-size-fits-all but needs to be tailored to the individual seeking treatment as well as to the substance used. A combination approach that targets withdrawal from the drug and allows patients to benefit more from behavioral/psychosocial interventions would be more helpful on their path to recovery.

Another important point is how to determine whether a medication has worked. Most clinical trials of pharmacotherapies for CUD use qualitative urine screens, the current gold standard, to determine sustained abstinence. However, urine toxicology is not a clear-cut measure of treatment success, as various factors affect the interpretation of its results. In addition, a medication is considered to successfully treat CUD only when there is complete abstinence from cocaine use. Because many physical and psychological issues accompany substance abuse, treating recovery from CUD as a linear process is not very helpful. Other aspects, such as improvement in quality of life and the ability to carry out daily activities, would be better indicators of the effectiveness of the medications used.

With an increase in cocaine use and abuse in recent years, there is an urgent need to identify medications to treat CUD. The review consolidates the current approaches to treating CUD with medication and points out factors that are often overlooked when interpreting the results of these studies. Tailoring medications to each individual would greatly improve clinical trial outcomes and lead to higher success rates for treating substance use disorders, a promising avenue that needs to be explored.

Dr. Laura Brandt is a Postdoctoral Research Fellow in the Division on Substance Use Disorders at the New York State Psychiatric Institute and the Department of Psychiatry, Columbia University Irving Medical Center.

Reviewed by: Trang Nguyen, Maaike Schilperoort, Sam Rossano, Pei-Yin Shih


Novel treatment strategies for fatty liver-related cancer – reality or fanTAZy?

The liver is one of the largest organs in the body, weighing approximately 3 pounds. Some of its vital functions include the filtration of blood to remove toxic substances from the body, the production of bile which helps with digestion, and the regulation of fat metabolism. Fats or lipids from the diet are taken up by the liver and processed into fat-carrying proteins called lipoproteins. These lipoproteins are released into circulation to fuel tissues that require energy. However, when there is a positive energy balance, for example due to overeating and/or a sedentary lifestyle, liver cells increasingly store lipids. This can result in metabolic associated fatty liver disease (MAFLD, formerly known as NAFLD), characterized by a liver fat content above 5%. When fat keeps accumulating in the liver, chronic inflammation ensues and the liver progresses to a stage called metabolic associated steatohepatitis (MASH, formerly known as NASH). There are currently no FDA-approved drugs available to treat MASH. At this stage, patients are at risk of developing a type of liver cancer called hepatocellular carcinoma (HCC). HCC is the most common form of liver cancer in the US, affecting more than 30,000 individuals per year. Progression from a healthy liver to MAFLD, MASH, and HCC is shown in the Figure below.

The different stages of fatty liver disease progression – from a healthy liver, to metabolic associated fatty liver disease (MAFLD), metabolic associated steatohepatitis (MASH), and eventually hepatocellular carcinoma (HCC). Figure created with Biorender.com.

Although MASH is a leading cause of HCC, the mechanisms by which MASH predisposes to HCC tumor formation are largely unknown. The research of Columbia postdoc Xiaobo Wang and colleagues tries to fill this knowledge gap. Dr. Wang investigated TAZ, a gene regulator that is increased in MASH livers. He fed experimental mice a diet high in sugar, fat, and cholesterol (the equivalent of human “fast food”) to induce MASH development. Then, he diminished TAZ expression in the liver using a viral-mediated gene delivery system, in which an engineered virus enters mouse liver cells to specifically turn off the TAZ gene. Silencing of the TAZ gene largely prevented the development of tumors in MASH livers, indicating that TAZ is an important player in MASH-HCC progression.

Dr. Wang continued his research by investigating how TAZ could enable the liver cells to turn into tumor cells. He focused on DNA damage, a process which is important in HCC development, and found clear indications of damaged DNA in the livers of mice and humans with MASH. Most importantly, silencing of TAZ prevented an increase in the DNA damage, suggesting that TAZ promotes genomic instability in liver cells. Since the buildup of oxidative stress within cells is an important cause of DNA damage, Dr. Wang next looked at a specific indicator of oxidative DNA damage. Indeed, this indicator was increased in MASH and decreased with TAZ silencing. He then measured various oxidant-related proteins to find out how TAZ could promote oxidative DNA damage. He discovered that Cybb, a gene involved in the formation of harmful reactive oxygen species, is involved in TAZ-induced liver cancer. Together, these findings show a TAZ-Cybb-oxidative DNA damage pathway (see Figure below) that creates malignant liver cells and promotes the progression from MASH to HCC. This work has been published in the prestigious Journal of Hepatology.

Metabolic associated steatohepatitis (MASH) liver cells highly express a protein called TAZ, which promotes the Cybb gene involved in the production of reactive oxygen species. These reactive oxygen species induce oxidative DNA damage, which transforms healthy liver cells into tumor cells and thereby promotes progression to hepatocellular carcinoma (HCC). Figure adapted from Wang et al. J Hepatol 2021, and created with Biorender.com.

The new pathway that Dr. Wang and colleagues discovered suggests that TAZ-based therapy could prevent MASH-HCC progression in humans with fatty liver disease. Such a therapy could have a big clinical impact, since it is estimated that about 12% of US adults have MASH. However, a limitation of MASH therapies is that the disease is often asymptomatic and difficult to identify until its late stages. Aside from focusing on treatments that reduce MASH progression after much of the damage has already occurred, our society should increasingly focus on preventive strategies. MASH is often caused by lifestyle-related factors, and the risk of MASH can be significantly reduced by maintaining a healthy weight, eating a healthy diet, and exercising regularly. Especially in the US, where MASH prevalence is expected to increase by 63% by 2030, raising awareness of the importance of a healthy lifestyle to prevent liver disease and liver cancer is paramount.


Reviewed by: Sam Rossano, Vikas Malik, Molly Scott


Tau about that! Alzheimer’s protein found in brains of COVID patients

It’s hard not to have COVID on the brain in today’s world – it seems like every conversation ends up on the topic! A recent study completed at Columbia explored the effects of COVID on the brain by collecting brain samples from the mesial temporal cortex, a brain region implicated in Alzheimer’s disease and responsible for memory, and the cerebellum, a brain region responsible for coordination of movement and balance. Different cellular markers that indicate inflammation and protein build-up in the brain were measured in samples from 10 patients who had passed away from COVID-19 and were compared to brains of those who did not have COVID-19 at the time of death. From this, the researchers were able to infer how COVID-19 infection may alter the brain, potentially causing the neurological symptoms seen in some COVID patients.

COVID-19 infection can lead to respiratory, cardiac, and neurological symptoms. About one in three COVID patients experiences neurological symptoms including loss of taste (hypogeusia), loss of smell (hyposmia), headache, disturbed consciousness, and tingling sensations in their limbs (paresthesia). The exact reason why these neurological symptoms occur is not well understood. In a recent publication, Dr. Steve Reiken and colleagues from the Department of Physiology and Cellular Biophysics at Columbia University Vagelos College of Physicians and Surgeons explore how factors associated with COVID infection, like inflammation, may lead to these neurological symptoms.

SARS-CoV-2, the virus that causes COVID-19, enters the body through the airways. The spike proteins on the surface of the SARS-CoV-2 virus facilitate entry into cells through the angiotensin converting enzyme 2 (ACE2) receptor. This leads to inflammation in the lungs and other organs. ACE2 receptors are downregulated during COVID infection, a pattern which has been tied to an upregulation of the inflammatory marker transforming growth factor-β (TGF-β) in other disease models, including cancer. Lower ACE2 activity has also been tied to greater concentrations of the Alzheimer’s disease (AD) related proteins amyloid-β (Aβ) and phosphorylated tau. Perhaps the entry point of the SARS-CoV-2 virus activates inflammation pathways that affect the brain in a way similar to AD, and might cause the neurological issues that sometimes come with COVID infection.

In the study, inflammation markers that reflect TGF-β levels were measured in the brain samples of COVID patients and compared to those of non-patients. Each of these markers was higher in brain samples of COVID patients, suggesting that COVID infection contributed to more inflammation in the brain.

Inflammation may have downstream effects that can impact the function of healthy tissues. For example, the highly-regulated use of the calcium ion (Ca2+), which is a key player in cell-to-cell communication, can become impaired in conditions of inflammation. Specifically, the ryanodine receptor (RyR) is an ion channel protein which is responsible for Ca2+ release. When in an open configuration, Ca2+ can flow freely through the channel. To stop Ca2+ flow, helper proteins interact with the RyR to stabilize the closed configuration of the channel. Previous studies have suggested that these helper proteins are downregulated in inflammation, which means that the RyR is more likely to be unstable, resulting in excess Ca2+ flow, or a Ca2+ leak. Ca2+ leaks have been thought to contribute to a number of diseases, including the development of tau pathology in AD.

In Dr. Reiken and colleagues’ study, indicators of typically functioning RyR were measured in the brain samples of COVID patients and non-patients. These measures included the amount of RyR channel in the open configuration (which allows a lot of free-flowing Ca2+) and the concentration of the helper proteins that help the RyR remain stable in the closed configuration. The researchers found that there were fewer helper proteins in the COVID brains compared to the non-COVID brains. Additionally, more of the RyR channels were in an open configuration in the COVID brains compared to non-COVID brains. This means that Ca2+ leaks were more likely to happen in the brains of those infected with COVID-19.

In addition to cellular markers of inflammation and Ca2+ leaks, Dr. Reiken and colleagues also investigated levels of the AD-related proteins Aβ and aggregated tau in the brains of control subjects and COVID patients. For Aβ, relevant protein levels were similar between COVID patients and controls, suggesting that COVID does not cause the accumulation of Aβ in the brain. However, the concentration of phosphorylated tau, another protein that is highly implicated in AD pathology, was higher in the temporal lobe and cerebellum of COVID patients compared to control subjects.

To take this one step further, the researchers treated the COVID patients’ brain samples with Rycal ARM210, a drug that is currently in clinical trials for other applications at the NIH (NCT04141670) and helps to reduce Ca2+ leak. With ARM210, helper protein levels in the COVID brain samples increased relative to the COVID brain samples that were not treated with the drug. Additionally, the amount of RyR in the open configuration decreased in the COVID brain samples treated with ARM210, compared to the untreated samples. Thus, treatment with this drug may combat Ca2+ leak in brain tissue. If unstable RyR leads to Ca2+ leak, and Ca2+ leak can promote tau phosphorylation and build-up in the brain, then using the Rycal drug ARM210 to target and limit Ca2+ release may potentially be a way to treat these brain abnormalities in COVID-19 and possibly minimize neurological symptoms.

Given these results, the authors propose a mechanism by which infection with the SARS-CoV-2 virus may lead to protein aggregation similar to tau deposition in AD. An adaptation of the proposed mechanism is shown in the Figure below.

Figure 1 (above): Proposed mechanism for neurological symptoms of COVID-19 infection. Adapted from Reiken et al., 2022. Created with BioRender.com.

Though this is a very exciting study exploring the neurobiology of COVID brains, there are some additional things to consider. Firstly, while inflammatory markers were elevated in the brains of COVID patients, SARS-CoV-2 virus particles were not detectable in the brain. This suggests that these effects are caused by systemic factors and are not localized to cells that are infected with SARS-CoV-2. Additionally, in terms of the AD-related proteins, elevated phosphorylated tau was detected in the mesial temporal cortex and the cerebellum of COVID patients compared to controls. In AD, tau protein collects in the medial temporal cortex early in disease progression, but does not collect in the cerebellum. This, in addition to the lack of Aβ aggregation in the COVID patients’ brain samples, is a marked difference between the pathology of the brain in AD and in COVID. However, the distribution and amount of tau protein in AD is linked to cognitive abilities, so perhaps the accumulation of tau in the brains of COVID patients contributes to cognitive symptoms like “brain fog”. The current study used brain samples from 10 COVID patients, but did not yet collect cerebrospinal fluid samples or use animal models to validate these findings. Future work that addresses these limitations and further questions may help us fully understand the role of COVID in the brain, and may help with treatments for those who are struggling with prolonged neurological symptoms of COVID.


Dr. Reiken, the first author of this work, is an Assistant Professor in the Department of Physiology at Columbia University. Dr. Dridi, a Postdoctoral Fellow at Columbia, and Dr. Liu, a Postdoctoral Research Scientist at Columbia, also contributed to this work. Find the original research article here.

Reference:
Reiken S, Sittenfeld L, Dridi H, Liu Y, Liu X, Marks AR. Alzheimer’s-like signaling in brains of COVID-19 patients. Alzheimer’s Dement. 2022;1-11. https://doi.org/10.1002/alz.12558

Covid-19 and Immunity after Organ Transplant

Over the last two years, the SARS-CoV-2 (Covid-19) pandemic has been at the forefront of media coverage. Hospitals have been overwhelmed, entire cities locked down, travel banned, and we are all desperately waiting for a return to the normalcy that immunity promises. However, the development and retention of immunity can depend on the individual, and Covid-19 has been particularly daunting to individuals with weakened immune systems (people who are immunocompromised). These individuals are at an increased risk of succumbing to Covid-19. Overall, it has been easy to identify the individuals that fall into this risk category. However, there has been limited research on the immunity of individuals who have undergone organ transplants. In a new article, Dr. Mithil Soni and colleagues examined the effects of a solid organ transplant (SOT) on the development and retention of immunity to a range of viruses, including SARS-CoV-2. SOTs are transplants of solid organs, including the kidney, liver, heart, lungs, intestines, and pancreas.

Dr. Soni and colleagues focused on the immunity generated by T cells, immune cells that, beyond antibodies, play a role in eliminating viruses that enter the body. In this study, they examined the immune response of one patient, a 33-year-old male suffering from erythropoietic protoporphyria, a genetic metabolic disorder that results in excessive liver damage. This patient underwent a SOT and received a liver. When undergoing a SOT, individuals are usually put on a stringent course of immunosuppressants to prevent organ rejection, which places them in the category of immunocompromised. To the researchers’ surprise, during a check-up this patient was found to have antibodies against SARS-CoV-2, indicating a previous Covid-19 infection that had caused no serious symptoms. The ability to overcome Covid-19 with minimal symptoms while being classified as immunocompromised intrigued Dr. Soni and colleagues, and the patient agreed to provide his blood for further testing.

The team went on to test the patient’s immune response to many infections that usually affect immunocompromised individuals. They tested the blood’s immune response to cytomegalovirus and BK virus, two viral infections that immunocompromised and SOT patients are prone to. They also tested the response to Epstein-Barr virus, which can cause mononucleosis. From the blood, Dr. Soni and colleagues were able to collect and grow the T cells in their lab, expose them to viruses, and measure their release of cytokines, proteins that are important for a strong immune response. They found a very strong T cell immune response against both cytomegalovirus and BK virus. They also tested the immune response to SARS-CoV-2 and other coronaviruses and found a level of T cell immune response similar to that seen with cytomegalovirus and BK virus.

These findings indicate that the SOT patient continued to have a robust immune response to multiple viruses despite his immunocompromised status. This study shows that it is possible to have robust immune responses to viruses, including SARS-CoV-2, in an immunocompromised state such as that seen after a SOT. However, this research is based on a single case study. To truly understand T-cell memory and activity in immunocompromised individuals, much more research has to be done. This means Dr. Soni and colleagues still have their work cut out for them and are actively expanding the research done here. Their next immediate step is to repeat this study with blood from a larger group of healthy and immunocompromised individuals, in the hope that they will eventually be able to answer the question of how SOT affects immunity.

Figure: Depiction of preserved immunity after SOT. Top: Liver transplant. Bottom: Expected T cell activity in response to virus vs. actual T cell activity in response to virus.


Dr. Mithil Soni is a former Postdoctoral Research Fellow and current Associate Research Scientist at Columbia University.

Let’s get MDM2 and MDMX out of the shadow of p53

When it comes to cancer, one molecule stands out as being among the most extensively studied: the p53 tumor suppressor protein. p53 can exist in cells in several different forms. When p53 is in its so-called wild-type form, it is capable of activating various responses that contribute to tumor suppression. In their recent review, Columbia postdoc Rafaela Muniz de Quieroz and colleagues summarize the vast scientific literature on two key regulators of p53: MDM2 and MDMX. Both MDM2 and MDMX are known to interact with p53 and disrupt its function. Their dysregulation has been linked not only to increased cancer development, but also to a number of dysfunctions, including embryonic lethality in mice. MDM2 has been shown to negatively regulate p53 by diverse mechanisms, spanning from expression of the p53 gene to degradation of the p53 protein or its expulsion from the cellular nucleus, where the protein accomplishes its function. Although very similar to MDM2, MDMX is less well studied. We do know, however, that MDMX can work together with MDM2 in p53 degradation.

While many reviews and studies have pointed to the roles of MDM2, and to a lesser extent of MDMX, in p53 regulation, the current review by Quieroz and her colleagues puts a larger focus on the myriad p53-independent activities of MDM2 and MDMX. The authors provide important details about the p53-independent functions of both MDMX alone and as part of an MDM2–MDMX complex. The review discusses some key features of the structure and function of the proteins, including key regions that are relevant for their function, for some associated abnormalities, or for the formation of MDM2–MDMX complexes.

MDM2 and MDMX are regulated on multiple levels within cells. These include regulation at the DNA level, such as the usage of several alternative promoters (DNA sequences needed to turn a gene on or off). One of the promoters of MDM2 and MDMX is regulated by their target p53, but there are also p53-independent promoters capable of switching on the genes of MDM2 and MDMX regardless of p53. In addition, numerous variations in the DNA sequence, so-called single nucleotide polymorphisms (SNPs), affect the expression of the two genes and are relevant to different pathologies. Regulation at the RNA level includes co-transcriptional regulation such as alternative splicing, as well as post-transcriptional regulation by microRNAs, long non-coding RNAs, circular RNAs, or RNA-binding proteins. The review also presents a detailed characterization of the regulation of MDM2 and MDMX at the protein level, summarizing data on numerous post-translational modifications and interacting partners of the two proteins, with regard to the different p53 contexts of the cited studies. Among the presented binding partners are some of the more recently identified interactors of the MDMs, which include proteins involved in the defense against several viruses. Overall, both MDM2 and MDMX stand out as extensively regulated at virtually every known level, which according to the authors “attests to their relevance not only as inhibitors of p53 but of myriad other cellular activities and outcomes on their own”.

Since MDM2 and MDMX have mostly been studied in relation to their inhibition of wild-type p53, of particular interest is a section of the review summarizing numerous processes in which the two proteins have been shown to be involved in cells lacking wild-type p53 (Figure 1).

Figure 1: Nonmalignant disease (left) and cancer-related (right) p53-independent functions of MDM2 and MDMX (adapted from Figure 4 of the review).

As shown in Figure 1, the p53-independent roles of MDM2 and MDMX in cancer and in other pathologies are versatile. This hints at the importance of uncovering molecules that can modulate the deleterious effects associated with dysfunction of the two MDMs. A number of molecules that have been shown to regulate the two proteins, and thus constitute potential therapeutic targets, are also discussed in the review. Again, the authors put an emphasis on how such small molecules might be useful in cells that lack wild-type p53. This is important not only because the two proteins have multiple functions other than regulating wild-type p53, which can be studied in such cells, but also because a substantial percentage of tumors is characterized by the absence of wild-type p53.

The last section of the review points out some outstanding questions and directions for future research. If the fascinating questions around the versatile p53-independent roles of MDM2 and MDMX have sparked your interest, find out more in the original paper.

Cosmic Water

Where does water actually come from? Most people would say, from the tap. While this certainly is true, scientists are – fortunately I would say, unfortunately my significant other might say – not like most people. They want to know more.

Before answering this question we should step back and ask: what is water? Water is a molecule, H2O. That means it consists of one oxygen atom, O, and two hydrogen atoms, H2. One way to produce water is to mix hydrogen and oxygen and ignite it. While on earth this can easily be done, on a cosmic scale initiating the reaction is far more complex. The biggest problem is that cosmic space is cold. Like, really cold. The official record for the coldest temperature measured on earth is held by the Antarctic Vostok Station with −128.6 °F (−89.2 °C). In comparison, diffuse and dense interstellar clouds, common cosmic structures where a lot of cosmic chemistry happens, have temperatures of −441.7 °F to −279.7 °F (−263.2 °C to −173.2 °C). Anybody who has ever tried to cook but forgot to turn on the stove knows that for chemistry to happen, heat often has to be supplied, like through the flame in the above experiment. So, how can chemistry happen in the coldness of space?

The key to understanding this lies in the state of matter of cosmic gas. On earth, matter is mostly electrically neutral. That means it contains exactly the same number of positively and negatively charged particles, which therefore cancel each other out. To electrostatically charge an object, we have to actively make an effort; think of rubbing a balloon against your hair. This is not true for the universe in general. Actually, most matter in space is not neutral but charged. One notable example is the molecular ion H3+, a molecule consisting of three hydrogen atoms that is missing one electron, leaving it singly positively charged. Charged molecules can undergo reactions which are not possible for their neutral counterparts. For example, they react at temperatures at which their neutral counterparts do not react. In chemistry, such charged molecules are called ions, and, much like the radicals widely known for having a bad influence on your health, they are highly reactive. So stay away from cosmic clouds to avoid wrinkles! One reaction network starts with the reaction of atomic oxygen, O, with H3+. In the first step, two outcomes are possible: either they react to OH+ and H2, with the OH+ reacting to H2O+ in a second step before being neutralised, or they react directly to H2O+ and H, with the H2O+ subsequently undergoing neutralisation. Until recently, little was known about which of the two outcomes is more likely, so assumptions had to be made in astronomical models. A precise knowledge of the pathways of the reaction network shown in figure 1 is especially interesting for interstellar regions in which OH+ can be destroyed before it reacts to H2O+. Here the direct reaction is the only efficient way of forming water: since potentially every intermediate product can undergo reactions that do not result in H2O+, fewer steps directly increase the reaction yield.
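To make the branching easier to follow, the two competing pathways described above can be written out as reaction equations. This is a minimal sketch based on the description in the text; the H2 collision partner in the second step of the indirect channel and the electron in the final neutralisation step are the usual partners assumed in astrochemical networks and are not spelled out above.

```latex
% The two entrance channels of O + H3+ whose branching was measured
\begin{align*}
  \text{indirect channel:}\quad
    & \mathrm{O} + \mathrm{H_3^+} \rightarrow \mathrm{OH^+} + \mathrm{H_2},
      \qquad \mathrm{OH^+} + \mathrm{H_2} \rightarrow \mathrm{H_2O^+} + \mathrm{H}, \\
  \text{direct channel:}\quad
    & \mathrm{O} + \mathrm{H_3^+} \rightarrow \mathrm{H_2O^+} + \mathrm{H}, \\
  \text{in both cases:}\quad
    & \mathrm{H_2O^+} \xrightarrow{\;+\,e^-\;} \cdots \rightarrow \mathrm{H_2O}.
\end{align*}
```

The indirect channel has to survive one extra step, which is exactly why it matters how the branching between the two channels is split.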

Fig. 1: A strongly simplified excerpt of the reaction network of oxygen resulting in water.

This knowledge gap was filled by Hillenbrand and colleagues, who accurately measured the O + H3+ reaction for both possible outcomes and were therefore able to determine the ratio between them. But wait, didn’t we just learn that in the cosmic regions of interest this reaction takes place at highly unpleasant freezing temperatures? How on earth can this be reproduced on earth in the laboratory while still being able to control the setup? For this, the scientists came up with a nice little trick. On a microscopic level, the temperature of an object can be linked to the velocity of the particles it is made up of. Hotter particles move faster, colder ones move slower. If packed densely together, they constantly hit each other and change their direction of movement, leading to a constant vibration of the whole particle cloud. And the stronger the vibrations, the hotter the particles are.

This phenomenon was first observed in 1827 by the Scottish botanist Robert Brown and linked to the particles’ temperature by Albert Einstein in 1905. The scientists made use of this phenomenon to study the reaction with “cold” reactants without actually cooling them down. Instead of mixing gases of cold O and H3+ together, they created two directed particle beams and let them overlap so the reaction could take place. Even though the beams were produced at room temperature and their individual velocities were quite high, the velocity of the beams relative to each other could be controlled to be very small. Think of driving on the highway and passing another car: you may be travelling at a speed well above 60 mph, corresponding to over 5200 feet per minute. Still, it can take you multiple seconds to fully pass a vehicle 10 feet long or more if you are not driving much faster than it is, i.e., if your relative speed is low. And as we just learned, a small velocity corresponds to a low temperature.
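As a back-of-the-envelope illustration of that relative-speed argument (the specific numbers here are assumptions for illustration, not from the article): suppose you drive at 65 mph, the car you are passing drives at 60 mph, and you need to gain roughly 50 feet to fully clear it.

```latex
v_{\mathrm{rel}} = 65\ \mathrm{mph} - 60\ \mathrm{mph} = 5\ \mathrm{mph}
  = \frac{5 \times 5280\ \mathrm{ft}}{3600\ \mathrm{s}} \approx 7.3\ \mathrm{ft/s},
\qquad
t_{\mathrm{pass}} \approx \frac{50\ \mathrm{ft}}{7.3\ \mathrm{ft/s}} \approx 7\ \mathrm{s}.
```

Both cars are moving fast, yet from the perspective of one car the other is creeping by at walking pace, and that is exactly the trick the experiment exploits.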

Fig. 2: The dual source merged beam setup to measure the O + H3+ reaction.

To study the reaction, the scientists used the setup shown in figure 2. They used two ion sources to produce a beam of H3+ ions and a beam of O- ions. Since the experiment requires neutral oxygen atoms, the negatively charged O- ions are first neutralised by a laser, which kicks away the additional electron. These two beams are then overlapped in an interaction region, allowing the chemical reaction to take place. By varying the relative velocity of the beams, which corresponds to varying the temperature at which the reaction takes place, the reaction can be studied over a broad range of temperatures, from close to absolute zero to more than 1000 °F.
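To get a feel for how a relative beam velocity maps onto an effective reaction temperature, here is a small sketch of the standard kinetic-theory bookkeeping. It is my own illustration, not the authors' analysis: the (1/2)μv² = (3/2)kT convention and the example velocities are assumptions made for this example.

```python
# Convert the relative velocity of the merged O and H3+ beams into an effective
# collision temperature via (1/2) * mu * v_rel^2 = (3/2) * k_B * T.
# Illustrative sketch only; the convention and example velocities are assumed.

AMU = 1.660539e-27   # kg per atomic mass unit
K_B = 1.380649e-23   # Boltzmann constant, J/K

def collision_temperature(v_rel, m1_u=16.0, m2_u=3.0):
    """Effective temperature (K) of an O + H3+ collision at relative velocity v_rel (m/s)."""
    mu = (m1_u * m2_u) / (m1_u + m2_u) * AMU   # reduced mass of the colliding pair, kg
    energy = 0.5 * mu * v_rel**2               # collision energy, J
    return energy / (1.5 * K_B)                # temperature, K

for v in (100.0, 1_000.0, 5_000.0):            # relative velocities in m/s
    print(f"v_rel = {v:7.0f} m/s  ->  T about {collision_temperature(v):8.1f} K")
```

With these numbers, relative velocities of a few hundred m/s up to about 1 km/s already correspond to the frigid cloud temperatures quoted above, while a few km/s reaches well past 1000 °F.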


Using this setup, they could measure the so-called branching ratio, i.e., the ratio of the outcomes H2O+ to OH+, over a wide temperature range. At low temperatures they found a ratio close to 1:1, whereas at higher temperatures only 20% of the reactions resulted directly in H2O+. Astrochemical models had so far used a fixed ratio of 30:70 over the whole temperature range, originating from a single measurement at room temperature, and this assumption turns out to be wrong. This implies that the frequently used models underestimate the production of water in cold interstellar regions and have to be adapted.
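A toy calculation makes the consequence concrete. The bookkeeping below is my own illustration, not the authors' astrochemical model: the probability that OH+ survives long enough to react onward is a made-up parameter, and only the branching fractions (30% direct in the old models, roughly 50% direct at low temperature in the new measurement) come from the article.

```python
# Toy estimate of how much O + H3+ ultimately ends up as H2O+ when the
# intermediate OH+ can be destroyed before it reacts onward.
# Only the branching fractions are taken from the article; the survival
# probabilities are invented for illustration.

def water_ion_yield(f_direct, p_oh_survives):
    """Fraction of O + H3+ reactions that eventually produce H2O+."""
    direct = f_direct                               # O + H3+ -> H2O+ + H, no fragile intermediate
    indirect = (1.0 - f_direct) * p_oh_survives     # via OH+, which may be destroyed first
    return direct + indirect

for p_survive in (1.0, 0.5, 0.1):                   # how often OH+ survives in a given region
    old = water_ion_yield(0.30, p_survive)          # fixed 30:70 ratio used in models so far
    new = water_ion_yield(0.50, p_survive)          # ~1:1 branching measured at low temperature
    print(f"OH+ survival {p_survive:4.0%}:  old model {old:5.1%}  vs  new branching {new:5.1%}")
```

When OH+ always survives, the branching ratio does not matter; but in regions where OH+ is frequently destroyed, the old 30:70 assumption noticeably underestimates the water yield, which is exactly the point made above.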

Atoms team-up to produce light

Brute force is usually not the best approach when trying to understand physical phenomena. Physical systems are nothing but collections of particles. In order to study how these particles interact with each other, theorists calculate the time evolution of the whole ensemble. As the number of particles increases, these calculations quickly become intractable. In this context, defining clever shortcuts may be the only way to study real systems. Columbia researchers have established a new theoretical framework that calculates the conditions under which a light burst is emitted by an array of atoms – a structure used in quantum computers. They found that they can predict whether the high intensity light pulse will be emitted by looking at the first moments of the emission, thus circumventing the need to solve for the whole time evolution.

Spontaneous light emission is responsible for most of the light that we see. Examples of spontaneous emission are fireflies and the bioluminescent bay in Puerto Rico. The physical mechanism responsible for spontaneous emission is sketched in Fig. 1a: the emitter (an atom that can be in two different energy states) is excited to a higher energy state, for example by external light. From that excited state, it spontaneously decays to a lower energy level, releasing the energy difference between the two states as a photon, i.e., as light. This is a purely quantum-mechanical process that cannot be explained by classical physics.

If multiple atoms are placed far away from each other, they act as independent units. When relaxing, they emit photons at an intensity that is proportional to the number of atoms present in the system. However, if the distance between the atoms is very small, a phenomenon called Dicke superradiance occurs.
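For a sense of what “superradiance” means quantitatively, the textbook Dicke result (a standard scaling, not a number taken from the paper discussed here) compares N independent emitters with N closely spaced, collectively decaying ones, where Γ is the decay rate of a single atom:

```latex
I_{\text{independent}} \;\propto\; N\,\Gamma,
\qquad
I_{\text{superradiant peak}} \;\propto\; N^{2}\,\Gamma,
\qquad
\tau_{\text{burst}} \;\sim\; \frac{1}{N\,\Gamma}.
```

In other words, packing the atoms close together concentrates the same total energy into a much shorter and far more intense flash.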

When the atoms are very close, they interact with each other. As a result, the system as a whole cannot be regarded as the sum of many individual entities but rather as a collective system. Imagine many atoms close together forming an array, an ordered structure. External light will excite one of them, but there is no way to determine which atom within the array is the one that is excited. Effectively, all atoms are excited and not excited at the same time, the same way that Schrödinger’s cat is dead and alive at the same time. In quantum mechanics this phenomenon is called superposition. When one of the atoms relaxes, the full atomic array decays as a whole and a photon is emitted in a particular direction.

If an excited atom is isolated, there is no reason why it should emit a photon in a particular direction. However, in a coupled atomic array, constructive and destructive interference creates what are called bright and dark channels. To understand this concept, we only need a lake and a handful of rocks. When a rock is thrown into a lake, it creates a circular pattern around it by emitting a wave that travels in all the possible directions. However, if one throws many rocks close to each other into the lake, the resulting wave does not travel in all possible directions: the waves from the individual rocks interfere. Some directions will not have waves due to individual waves traveling in opposite directions (destructive interference) and the wave pattern will result from the constructive interference of the individual waves (see Fig. 1c,d). That’s exactly what happens in the atomic array: a photon – which is a quantum object and therefore can behave as a particle as well as a wave – is emitted from each atom in all possible directions, but most of those photons interfere destructively and only a few of them survive, and those constitute bright channels.

 

Figure 1. a. Schematic representation of spontaneous emission. Left: the atom is in an excited state. Right: the atom relaxes to the ground state and emits light (a photon). b. A chain and a ring of atoms. c. Interference created by multiple initial wave fronts originated from the individual objects. d. Interference pattern created by two rocks thrown into the water.

Now let’s think about the second event of photon emission. If the atoms were far away from each other, each photon would be emitted in a random direction. In an atomic array, however, the fact that the first photon is radiated along a particular direction makes it more likely for the second photon to be radiated in that same direction. It’s like an avalanche: once the first snow has started moving down along a path, the rest of the snow follows. Once the first photon is emitted along a particular direction, the next photons follow. And that creates the superradiant burst, a high intensity pulse of light.

Theoretical calculations of superradiance in systems of many atoms are not possible due to the complexity of the calculation – the computer memory and time needed are both prohibitive. What Masson and colleagues found is that, by looking at the first two photons, one can already know if there is going to be a superradiant burst. They can anticipate if the avalanche is going to happen. This means that the early dynamics define the nature of the light emission, and a calculation of the whole time-evolution is not necessary.

Since the distance between the atoms dictates the emergence of superradiance, one may ask whether the arrangement of the atoms plays any role. Before Masson’s work, the understanding in the field was that atomic chains and rings behave differently. In an atomic chain, the two atoms at the ends are different from those in the middle, since an atom at the edge has only one neighbor whereas one in the middle has two. In a ring, on the other hand, all the atoms have the same environment (see Fig. 1b). And this is certainly true for a system with very few atoms. But thanks to the authors’ theoretical approach, it is now possible to include many atoms in the calculation. And they found that, regardless of the atoms’ arrangement, superradiance occurs equally in chains and rings when the number of atoms is very high. The reason is that, for structures with many atoms, the influence of the two placed at the ends of the chain is washed out by the effect of the many atoms located in the middle. Moreover, they also found that atoms can exhibit superradiance at much larger distances than expected.

Atomic arrays are used in atomic clocks, in GPS technology, and in quantum computers. In quantum technologies, each atom is used as a bit, the unit of information – it represents a 1 or a 0 depending on whether it is excited or relaxed. A byte contains eight bits. As a reference, Figure 1 contains 6,000,000 bytes. The common belief is that interactions between the atoms and the environment produce information loss with respect to a pure, isolated system. However, Masson and Asenjo-Garcia show that interactions between the atoms result in their synchronization, producing a coherent, high intensity light burst.

How a molecular structure explains the transport of fatty acids past the blood-brain barrier

The brain and eyes develop through constant circulation of nutrients across the blood-brain and blood-retina barriers. One such nutrient that is essential for development is an omega-3 fatty acid called docosahexaenoic acid (DHA). DHA makes up a fifth of all the fatty acids in the membranes of cells in the central nervous system. Neither the neurons in the brain nor the cells in the eye are capable of synthesizing DHA by themselves and therefore depend on dietary sources for DHA. Previously, scientists knew from cellular clues that this fatty acid most likely passes through the blood-brain and blood-retina barriers in the form of lysophosphatidylcholine-DHA (LPC-DHA) using a molecular channel. This transporter is known as major facilitator superfamily domain containing 2A, or MFSD2A, and its activity is regulated with the help of sodium. However, it was not clear how this channel allows the passage of complex molecules like LPC-DHA. A recent study by Dr. Rosemary Cater and colleagues at Columbia University provided precise clues by determining the structure of this channel.

To investigate the structure of MFSD2A, the authors used a state-of-the-art imaging technique called single-particle cryo-electron microscopy. This is a method of electron microscopy where a beam of electrons is transmitted through a rapidly frozen, purified molecule. Because the sample is flash-frozen, the molecules trapped in the frozen state can be imaged in their native shape, as present in the cell, and from multiple angles. By capturing and combining multiple 2D images, a 3D structure of the protein can be reconstructed with extreme accuracy. Cryo-electron microscopy has been so impactful in biology that its development was awarded the 2017 Nobel Prize in Chemistry. Using this technique, the authors determined the molecular patterns and arrangements of protein chains that make up a full molecule of MFSD2A.

Protein structure studies are typically among the most challenging areas of biology because proteins need to be captured in their native state, as present in the cell. Past discoveries of protein structures have been so instrumental in shaping therapeutic areas that the resulting mechanistic understanding of biological molecules has repeatedly been recognized by Nobel committees. For example, the determination of the structure of the ribosome opened up exploration of therapeutic interventions for ribosome-related diseases, some of which can lead to cancer.

To get the best chance at imaging the structure of MFSD2A, the scientists extracted and examined purified versions of this protein obtained from multiple organisms: the zebrafish, the frog, European cattle, the domestic dog, the red junglefowl, mice, and humans. They found that the protein obtained from the red junglefowl, a chicken species that originates from Southeast Asia, was the most experimentally stable and the most similar (73% identity) to the human version of MFSD2A.

Using additional accessory proteins to help with the orientation of MFSD2A, the authors obtained high-quality images with a resolution of 0.3 nanometers, or 0.3 billionths of a meter. From the imaging data, the authors found that the MFSD2A protein itself is about 5 nm wide and 8 nm long. MFSD2A is a transporter protein and, like many transporters, it contains repeated bundles of helices made of protein chains that traverse the cell membrane and are connected by a protein chain that loops within the interior of the cell.

Structure of MFSD2A arranged as protein helices (colored cylinders) within the cell membrane along with protein loops that form both in the extracellular space (“Out”) and within the interior, cytoplasmic space (“In”) of the cell. The cytoplasmic loops likely have an important functional role. Figure from Cater et al, 2021.

The cell membrane consists of two layers of lipid molecules, known as the lipid bilayer, that control the entry and exit of materials from the cell. The cytoplasmic loops shape the part of the protein inside the cell such that it forms a cavity opening from the lipid bilayer into the cellular space, large enough to allow the target molecules to enter the cell. Amino acids are the building blocks of proteins, and the cavity contains amino acids of both the water-attracting and the water-repelling kind. This property allows many molecules of differing chemical nature to be accommodated within the cavity. The cavity contains three important regions that allow the protein to be specific and functional: a charged region, a binding site for sodium, and a lipid-specific pocket. The authors speculate that these parts underlie the mechanism by which LPC-DHA is transported from the outside into the cell. The multiple protein helices form two protein domains that capture LPC-DHA on the blood-facing side of the endothelial cells that form the blood-brain barrier, then rock over a rotation axis so that their conformation switches, and finally release the LPC-DHA molecule into the cell. For this movement of LPC-DHA, sodium is absolutely required to allow the shape change of the protein. Once LPC-DHA enters the barrier cells in this manner, it is then transported across to the other side of the cell, facing the brain and its neurons.

The transporter channel MFSD2A changes its shape once it binds sodium in the extracellular space, which enables the transport of LPC-DHA from the blood into the brain across the barrier formed by a single layer of endothelial cells. Figure adapted from Cater et al, 2021.

Humans with mutations in the MFSD2A gene have brain defects such as microcephaly, and disruption of the gene in mice affects neuronal branching and fatty acid composition in the brain. The discovery of the structure of a molecule that mediates the uptake of essential nutrients across the blood-brain and blood-retina barriers will help in the delivery of therapies for neurological diseases.

Dr. Rosemary J. Cater is a postdoctoral researcher in the lab of Dr. Filippo Mancia in the Department of Physiology and Cellular Biophysics at Columbia University.

Take a Break: How the Brain Chooses When to Explore and When to Rest

Have you ever wondered why we feel comfortable in a familiar place, or why going back to our favorite spots over and over again feels so good? Well, Dr. Paolo Botta, a former postdoc at Columbia University, and colleagues attempted to unravel some of the inner workings of the brain when it comes to rest and exploration. More specifically, Dr. Botta examined how neuronal activity correlates with periods of rest when exploring new areas. Dr. Botta and colleagues followed the behavior of mice as they freely explored a new area. They specifically looked at where and how often these mice decided to exhibit arrest behavior, or, in other words, take a break during their explorations. While the arrest behavior alone is a fascinating phenomenon and provides insight into how mice explore new spaces, Dr. Botta and colleagues decided to go a step further and see which neurons in the brain are important for this arrest behavior. They decided to home in on an area of the brain called the nucleus of the basolateral amygdala (BLA). This area has previously been shown to be involved in locomotor exploration, experience-based learning, and recognition of familiar areas.

With this information in hand, Dr. Botta and colleagues began by identifying whether BLA neurons are active during arrest behavior. To this end, they gave mice access to both their home cages and a large open area for five days and allowed them to freely explore the open area during this period. BLA neuronal activity was monitored in the mice by measuring calcium levels, with higher calcium levels indicating neuronal activity (Figure). The researchers observed an increase in calcium in BLA neurons during arrest behavior, which means that BLA neurons are active during this type of behavior. However, do these neurons actually cause the arrest behavior? To answer this question, Dr. Botta and colleagues either activated or inhibited the neurons using optogenetics. Optogenetics is a technique in which neurons are controlled by light. So, by turning different lights on and off, the researchers were able to either activate or inhibit BLA neurons whenever they wanted to. When they activated the BLA neurons, the mice decreased their speed and exhibited more arrest behavior. When they inhibited the BLA neurons, the mice showed an increase in movement speed. After seeing how turning BLA neurons on and off affected behavior, they concluded that BLA neurons are important for inducing arrest behavior.

At this point, Dr. Botta and colleagues had revealed that BLA neuronal activity occurs specifically during these arrest behaviors and that this activity is important for the onset of the arrest. However, their curiosity did not stop there. They began to wonder whether BLA activity changed when the mice exhibited arrest behavior, or took breaks, in more familiar areas. To figure this out, Dr. Botta and colleagues tracked exactly where the mice explored and counted how many times the mice exhibited arrest behavior in areas that they had previously explored. With this experiment, they realized that the mice were more likely to exhibit arrest behavior in areas previously visited. So mice, like humans, have favorite spots, and they like to rest in those spots! After seeing that the mice have favorite spots, Dr. Botta and colleagues went on to examine the BLA neuronal activity in these familiar areas. They found that there was an increase in neuronal activity in these familiar areas, and the more a mouse revisited and rested in a specific area, the more neuronal activity developed. In other words, the more often a mouse took a break in a specific spot, the stronger the associated BLA neuronal activity.

The amygdala has multiple nuclei, groups of cells that each serve specific roles. The central nucleus of the amygdala (CEA) is a part of the amygdala that has previously been shown to be involved in immobility. BLA neurons also communicate with the CEA (Figure). Knowing that BLA neurons are important for invoking arrest behavior and that the CEA plays a role in immobility, Dr. Botta and colleagues were curious as to whether the BLA neurons that project to the CEA are the specific neurons involved in triggering arrest behavior. To see whether the BLA neurons that project to the CEA are the ones active during arrest behavior, they used the combination of calcium imaging and optogenetic techniques previously mentioned. With these techniques they were able to see that the BLA neurons that project to the CEA had an increase in neuronal activity during arrest behavior (Figure). This increase was not seen in BLA neurons that project to other parts of the amygdala, indicating that the BLA-CEA interaction is integral to the arrest activity. They also repeated the stimulation of the BLA neurons that project to the CEA and observed an increase in arrest, while inhibiting the same neurons resulted in an increase in movement, further confirming the need for this BLA-CEA interaction to induce arrest behavior.

Overall, Dr. Botta and colleagues discovered that BLA neurons that communicate with the CEA are important for arrest behavior, particularly in familiar places. This behavior seems to be extremely important for allowing a mouse to orient itself and properly explore novel surroundings. Maybe humans have a similar pathway that we use when wandering around. Could my BLA be the reason why I always go to the same cafes after a long walk or stop in the same part of the park while walking my dog? Are our BLA neurons just firing away while we rest?


Figure: BLA neuronal activity during exploratory vs arrest behavior. Left: Decreased activity in BLA neurons that communicate with the CEA results in increased exploratory behavior. Right: Increased BLA to CEA neuronal activity, indicated by calcium signaling, results in increased arrest behavior. Red colors indicate decreased BLA neuronal activity and increased exploratory behavior. Green colors indicate increased BLA neuronal activity and increased arrest behavior. BLA: Nucleus of the Basolateral Amygdala, CEA: Central Nucleus of the Amygdala

Why the gallbladder matters – The role of bile acids in metabolic health

Unless you belong to the 10-15% of people that have gallstones, you probably never think about your gallbladder or its function. However, this small pear-shaped organ plays an important role in our digestive system. The gallbladder is situated right under the liver, and stores bile produced by liver cells. Mostly after eating meals, bile is released from the gallbladder into the gut. Here, substances within the bile called bile acids help with the breakdown and absorption of fat. Apart from their role in the digestive system, bile acids have been shown to communicate with other organs and thereby affect the metabolism of fat and sugar. Not unexpectedly, considering their roles in digestion and metabolism, bile acids are associated with various metabolic diseases in humans, such as obesity and diabetes. Modulation of bile acids could therefore be used as a strategy to treat or prevent such metabolic disorders.

Dr. Antwi-Boasiako Oteng and colleagues from the Haeusler lab of the Department of Pathology & Cell Biology at Columbia University aimed to better understand how bile acids influence metabolic health. The two primary bile acids in humans are cholic acid (CA) and chenodeoxycholic acid (CDCA), which are both made from cholesterol in the liver (see Figure below, left panel). CA production requires the enzyme Cyp8b1, which adds a hydroxyl group (i.e., one hydrogen atom bonded to one oxygen atom) to the 12th carbon of the molecule. Because of this modification, CA is called a “12α-hydroxylated bile acid”. CDCA does not contain this hydroxyl group on its 12th carbon, and is therefore called a “non-12α-hydroxylated bile acid”. Levels of 12α-hydroxylated bile acids like CA are higher in obese individuals, but it is not yet known whether they can cause obesity. Therefore, in their study published in Molecular Metabolism, the researchers investigated the role of non-12α-hydroxylated versus 12α-hydroxylated bile acids in the development of metabolic disorders in an experimental mouse model.

Dr. Oteng and colleagues started their research by genetically manipulating mice in such a way that their bile acids are more similar to those of humans. This is necessary because mice produce bile acids called muricholic acids (MCAs) that are not present in humans. By removing the enzyme that converts CDCA into MCA, the researchers created mice with a human-like bile acid profile (see Figure below, right panel). Aside from having low levels of MCA, these mice had higher levels of CDCA and lower levels of CA compared to regular mice. Most importantly, fat absorption was strongly reduced in these mice, and they were protected from weight gain when fed a high-fat diet. To test whether this could be due to the relative decrease in 12α-hydroxylated bile acids, the researchers supplemented another group of human-like mice with high levels of 12α-hydroxylated bile acids. This treatment strongly promoted fat absorption, which may increase susceptibility to metabolic disorders when combined with an excessive and/or unhealthy diet.

These findings suggest that the ratio of non-12α-hydroxylated versus 12α-hydroxylated bile acids is an important determinant of metabolic health in humans, which opens up new avenues for therapeutic intervention. According to Dr. Oteng, “We now have increased confidence that targeting Cyp8b1 to reduce the levels of 12α-hydroxylated bile acids can reduce the risk of metabolic disease in humans”. Currently, more than one-third of Americans have metabolic disorders, which increases their risk of heart disease and other health problems. If proven effective, a therapy targeting human bile acid composition could have a major impact on public health.


Left panel. The bile acids CA and CDCA are made from cholesterol in the liver. CA is a 12α-hydroxylated bile acid produced by the enzyme Cyp8b1, while CDCA is non-12α-hydroxylated. In mice, CDCA is further converted into MCA. Right panel. Inhibition of the conversion of CDCA into MCA increases the ratio of non-12α-hydroxylated to 12α-hydroxylated bile acids, thereby reducing fat absorption in the gut and protecting from diet-induced weight gain.
