Let’s get MDM2 and MDMX out of the shadow of p53

When it comes to cancer, one molecule stands out as among the most extensively studied: the p53 tumor suppressor protein. p53 can exist in cells in several different forms. When p53 is in its so-called wild-type form, it is capable of activating various responses that contribute to tumor suppression. In their recent review, Columbia postdoc Rafaela Muniz de Quieroz and colleagues summarize the vast scientific literature on two key regulators of p53: MDM2 and MDMX. Both MDM2 and MDMX are known to interact with p53 and disrupt its function. Their absence has been linked not only to increased cancer development, but also to a number of dysfunctions, including embryonic lethality in mice. MDM2 negatively regulates p53 through diverse mechanisms, ranging from control of p53 gene expression to degradation of the p53 protein or its expulsion from the cell nucleus, where the protein accomplishes its function. Although very similar to MDM2, MDMX is less well studied. We do know, however, that MDMX can work together with MDM2 in p53 degradation.

While many reviews and studies have pointed to the roles of MDM2, and to a lesser extent MDMX, in p53 regulation, the current review by Quieroz and her colleagues focuses on the myriad p53-independent activities of MDM2 and MDMX. The authors provide important details about the p53-independent functions of MDMX, both alone and as part of an MDM2–MDMX complex. The review discusses key features of the structure and function of the two proteins, including regions that are relevant for their function, for some associated abnormalities, or for the formation of MDM2–MDMX complexes.

MDM2 and MDMX are regulated on multiple levels within cells. These include regulation at the DNA level, including the use of several alternative promoters (DNA sequences needed to turn a gene on or off). One promoter of each of MDM2 and MDMX is regulated by their target p53, but there are also p53-independent promoters capable of switching on the genes of MDM2 and MDMX regardless of p53. In addition, numerous variations in the DNA sequence, the so-called single nucleotide polymorphisms (SNPs), affect the expression of the two genes and are relevant to different pathologies. Regulation at the RNA level includes co-transcriptional regulation such as alternative splicing, as well as post-transcriptional regulation by microRNAs, long non-coding RNAs, circular RNAs, and RNA-binding proteins. The review also presents a detailed characterization of the regulation of MDM2 and MDMX at the protein level, summarizing data on numerous post-translational modifications and interacting partners of the two proteins, with regard to the different p53 contexts of the cited studies. Among the binding partners presented are some of the more recently identified interactors of the MDMs, which include proteins involved in the defense against several viruses. Overall, both MDM2 and MDMX stand out as extensively regulated at virtually every known level, which, according to the authors, “attests to their relevance not only as inhibitors of p53 but of myriad other cellular activities and outcomes on their own”.

Since MDM2 and MDMX have mostly been studied in relation to their inhibition of wild-type p53, of particular interest is a section of the review summarizing numerous processes in which the two proteins have been shown to be involved in cells lacking wild-type p53 (Figure 1).

Figure 1: Nonmalignant disease (left) and cancer-related (right) p53-independent functions of MDM2 and MDMX (adapted from Figure 4 of the review).

As shown in Figure 1, the p53-independent roles of MDM2 and MDMX in cancer and in other pathologies are versatile. This hints at the importance of uncovering molecules that can modulate the deleterious effects associated with dysfunctions of the two MDMs. The review also discusses numerous molecules that have been shown to regulate the two proteins and thus constitute potential therapeutic targets. Again, the authors emphasize how such small molecules might be useful in cells that lack wild-type p53. This is important not only because the two proteins have multiple functions beyond regulating wild-type p53 that can be studied in such cells, but also because a substantial percentage of tumors lack wild-type p53.

The last section of the review points out some outstanding questions and directions for future research. If the fascinating questions of the versatile p53-independent roles of MDM2 and MDMX have sparked your interest, find out more from the original paper.

Cosmic Water

Where does water actually come from? Most people would say, from the tap. While this certainly is true, scientists are – fortunately I would say, unfortunately my significant other might say – not like most people. They want to know more.

Before answering this question we should step back and ask: what is water? Water is a molecule, H2O. That means it consists of one oxygen atom, O, and two hydrogen atoms, H. One way to produce water is to mix hydrogen and oxygen and ignite it. While on Earth this can easily be done, on a cosmic scale initiating the reaction is far more complex. The biggest problem is that cosmic space is cold. Like, really cold. The official record for the coldest temperature measured on Earth is held by the Antarctic Vostok Station, at −128.6 °F (−89.2 °C). In comparison, diffuse and dense clouds, common cosmic structures where a lot of cosmic chemistry happens, have temperatures of −441.7 °F to −279.7 °F (−263.2 °C to −173.2 °C). Anybody who has ever tried to cook but forgot to turn on the stove knows that for chemistry to happen, heat often has to be supplied, like through the flame in the above experiment. So, how can chemistry happen in the coldness of space?
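For readers who like to double-check the quoted numbers, converting between the two temperature scales is simple arithmetic. A quick sketch (the function name is ours):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9.0 / 5.0 + 32.0

# Vostok Station record:
print(round(c_to_f(-89.2), 1))    # -128.6
# The cloud temperature range quoted above:
print(round(c_to_f(-263.2), 1))   # -441.8
print(round(c_to_f(-173.2), 1))   # -279.8
```

(The small differences from the figures in the text come from rounding down rather than to the nearest tenth.)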

The key to understanding this lies in the state of matter of cosmic gas. On Earth, matter is mostly electrically neutral. That means it contains exactly the same number of positively and negatively charged particles, which therefore cancel each other out. To electrostatically charge an object, we have to actively make an effort – think of rubbing a balloon against your hair. This is not true for the universe in general. Actually, most matter in space is not neutral but charged. One notable example is the molecular ion H3+, a molecule consisting of three hydrogen atoms that is missing one electron, leaving it with a single positive charge. Charged molecules – in chemistry these are called ions, not to be confused with radicals, the reactive species with unpaired electrons that are widely known for having a bad influence on your health – can undergo reactions which are not possible for their neutral counterparts. For example, they react at temperatures at which their neutral counterparts do not react. One reaction network starts with the reaction of atomic oxygen, O, with H3+. In a first step, two outcomes are possible: either they react to OH+ and H2, with OH+ then reacting in a second step to H2O+, which subsequently neutralises; or they react directly to H2O+ and H before undergoing the neutralisation. Until recently, little was known about which of the two outcomes was more likely, so astronomical models had to rely on assumptions. A precise knowledge of the pathways of the reaction network shown in figure 1 is especially interesting for interstellar regions in which the intermediate OH+ can be destroyed before reacting to H2O+. There the direct reaction is the only efficient way of forming water: since potentially every intermediate product can undergo reactions not resulting in H2O+, fewer steps directly increase the reaction yield.

Fig. 1: A strongly simplified excerpt of the reaction network of oxygen resulting in water.

This gap in knowledge was filled by Hillenbrand and colleagues, who accurately measured the reaction O + H3+ for both possible outcomes and were therefore able to give the ratio between them. But wait, didn’t we just learn that in the cosmic regions of interest this reaction takes place at highly unpleasant freezing temperatures? How can this be reproduced in a laboratory on Earth while still keeping the setup under control? For this, the scientists came up with a nice little trick. On a microscopic level, the temperature of an object can be linked to the velocity of the particles it is made of. Hotter particles move faster, colder ones move slower. If packed densely together, they constantly hit each other and change their direction of movement, leading to a constant vibration of the whole particle cloud. And the stronger the vibrations, the hotter the cloud.

This phenomenon was first observed in 1827 by the Scottish botanist Robert Brown and linked to temperature by Albert Einstein in 1905. The scientists made use of this connection to study the reaction with “cold” reactants without actually cooling them down. Instead of mixing gases of cold O and H3+, they created two directed particle beams and let them overlap so the reaction could take place. Even though the beams were produced at room temperature and their individual velocities were quite high, the velocity of the beams relative to each other could be controlled to be very small. Think of driving on the highway and passing another car: you may be travelling at a speed well above 60 mph, corresponding to over 5,280 feet per minute. Still, it can take you multiple seconds to fully pass a vehicle 10 feet long or more if you are not driving much faster than it, because your relative speed is low. And as we just learned, a small velocity corresponds to a low temperature.
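The link between relative velocity and effective temperature can be sketched with the textbook relation (3/2)·k_B·T = (1/2)·μ·v², where μ is the reduced mass of the colliding pair. The numbers below are illustrative estimates of ours, not values from the experiment:

```python
AMU = 1.66054e-27   # atomic mass unit in kg
KB  = 1.380649e-23  # Boltzmann constant in J/K

def collision_temperature(v_rel, m1_amu=16.0, m2_amu=3.0):
    """Effective temperature (K) of a two-body collision at relative
    velocity v_rel (m/s), via (3/2) kB T = (1/2) mu v_rel^2.
    Defaults: O (16 amu) colliding with H3+ (3 amu)."""
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU  # reduced mass
    return mu * v_rel**2 / (3.0 * KB)

# Beams nearly matched in velocity feel an effectively ~1 K collision,
# even though each beam is fast in the lab frame:
print(collision_temperature(100.0))    # about 1 kelvin
print(collision_temperature(3000.0))   # several hundred kelvin
```

Tuning the velocity difference between the beams thus sweeps the collision temperature from near absolute zero to well above room temperature.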

Fig. 2: The dual source merged beam setup to measure the O + H3+ reaction.

To study the reaction, the scientists used the setup shown in figure 2. They used two ion sources, one producing a beam of H3+ ions and the other a beam of O- ions. Since the experiment requires neutral oxygen atoms, the negatively charged O- ions are first neutralised by a laser, which kicks away the additional electron. The two beams are then overlapped in an interaction region, allowing the chemical reaction to take place. By varying the relative velocity of the beams – which corresponds to varying the temperature at which the reaction takes place – the reaction can be studied over a broad range of temperatures, from close to absolute zero to more than 1000 °F.


Using this setup, they could measure the so-called branching ratio – the ratio of the outcome H2O+ to the outcome OH+ – over a wide temperature range. For low temperatures they found a ratio close to 1:1, whereas at higher temperatures only 20% of the reactions resulted directly in H2O+. Astrochemical models had instead used a fixed ratio of 30:70 over the whole temperature range, originating from a single measurement at room temperature – an assumption the new data show to be inaccurate. This implies that the widely used models underestimate the production of water in cold interstellar regions and have to be adapted.
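In regions where OH+ is destroyed before it can react onward, only the direct channel produces water, so the modelled yield scales with the assumed direct fraction. A back-of-the-envelope sketch of ours (the exact correction would come from the full astrochemical model):

```python
def direct_yield_ratio(measured_direct_fraction, modeled_direct_fraction):
    """Factor by which a fixed-branching-ratio model misestimates direct
    H2O+ production, when only the direct O + H3+ -> H2O+ + H channel
    contributes to water formation."""
    return measured_direct_fraction / modeled_direct_fraction

# Old assumption: 30:70 (direct:indirect).  New low-temperature
# measurement: roughly 1:1, i.e. a direct fraction of about 0.5.
print(round(direct_yield_ratio(0.5, 0.3), 2))  # 1.67
```

In this crude picture, cold-region water production via the direct channel would be underestimated by roughly two-thirds.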

Atoms team-up to produce light

Brute force is usually not the best approach when trying to understand physical phenomena. Physical systems are nothing but collections of particles. To study how these particles interact with each other, theorists calculate the time evolution of the whole ensemble. As the number of particles increases, such calculations become intractable. In this context, defining clever shortcuts may be the only way to study real systems. Columbia researchers have established a new theoretical framework that calculates the conditions under which a burst of light is emitted by an array of atoms – a structure used in quantum computers. They found that they can predict whether the high-intensity light pulse will be emitted by looking at the first moments of the dynamics, thus circumventing the need to solve for the whole time evolution.

Spontaneous light emission is responsible for most of the light that we see. Examples of spontaneous emission are fireflies and the bioluminescent bay in Puerto Rico. The physical mechanism responsible for spontaneous emission is sketched in Fig. 1a: the emitter (an atom that can be in two different energy states) is excited to a higher energy state, for example by external light. From that excited state, it spontaneously decays to a lower energy level, releasing the energy difference between the two states as a photon, i.e., as light. This is a purely quantum-mechanical process that cannot be explained by classical physics.

If multiple atoms are placed far away from each other, they act as independent units. When relaxing, they emit photons at an intensity that is proportional to the number of atoms present in the system. However, if the distance between the atoms is very small, a phenomenon called Dicke superradiance occurs.

When the atoms are very close, they interact with each other. As a result, the system as a whole cannot be regarded as the sum of many individual entities but rather as a collective system. Imagine many atoms close together forming an array, an ordered structure. External light will excite one of them, but there is no way to determine which atom within the array is the one that is excited. Effectively, all atoms are excited and not excited at the same time, just as Schrödinger’s cat is dead and alive at the same time. In quantum mechanics this phenomenon is called superposition. When one of the atoms relaxes, the full atomic array decays as a whole and a photon is emitted in a particular direction.

If an excited atom is isolated, there is no reason why it should emit a photon in a particular direction. However, in a coupled atomic array, constructive and destructive interference creates what are called bright and dark channels. To understand this concept, we only need a lake and a handful of rocks. When a rock is thrown into a lake, it creates a circular pattern around it by emitting a wave that travels in all possible directions. However, if one throws many rocks close to each other into the lake, the resulting wave does not travel in all possible directions: the waves from the individual rocks interfere. Some directions will have no waves, because individual waves travelling in opposite directions cancel out (destructive interference), and the overall pattern results from the constructive interference of the individual waves (see Fig. 1c,d). That’s exactly what happens in the atomic array: a photon – which is a quantum object and therefore can behave as a particle as well as a wave – is emitted from each atom in all possible directions, but most of those photons interfere destructively and only a few of them survive; those constitute the bright channels.


Figure 1. a. Schematic representation of spontaneous emission. Left: the atom is in an excited state. Right: the atom relaxes to the ground state and emits light (a photon). b. A chain and a ring of atoms. c. Interference created by multiple initial wave fronts originated from the individual objects. d. Interference pattern created by two rocks thrown into the water.

Now let’s think about the second event of photon emission. If the atoms were far away from each other, each photon would be emitted in a random direction. In an atomic array, however, the fact that the first photon is radiated along a particular direction makes it more likely for the second photon to be radiated in that same direction. It’s like an avalanche: once the first snow has started moving down along a path, the rest of the snow follows. Once the first photon is emitted along a particular direction, the next photons follow. And that creates the superradiant burst, a high-intensity pulse of light.
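The avalanche picture can be illustrated with the textbook Dicke cascade rate equations – a toy model of ours, not the authors' framework. For a fully excited symmetric array, the photon-emission rate after k photons have been emitted is (N − k)(k + 1) in units of the single-atom decay rate; integrating the cascade shows the burst:

```python
def dicke_peak_intensity(n_atoms, dt=1e-4, t_max=4.0):
    """Peak photon-emission rate (units of the single-atom rate) for
    n_atoms fully excited atoms decaying through the symmetric Dicke
    states, integrated with a simple Euler scheme."""
    # Emission rate from the state in which k photons were already emitted:
    rates = [(n_atoms - k) * (k + 1) for k in range(n_atoms + 1)]
    p = [0.0] * (n_atoms + 1)   # p[k]: probability that k photons are out
    p[0] = 1.0
    peak = 0.0
    for _ in range(int(t_max / dt)):
        intensity = sum(pk * rk for pk, rk in zip(p, rates))
        peak = max(peak, intensity)
        flow = [dt * rk * pk for rk, pk in zip(rates, p)]
        for k in range(n_atoms + 1):
            p[k] -= flow[k]
            if k > 0:
                p[k] += flow[k - 1]
    return peak

# Two atoms: the rate peaks at t = 0 (no burst).
print(dicke_peak_intensity(2))        # 2.0
# Eight atoms: the rate climbs well above its initial value of 8 -- a burst.
print(dicke_peak_intensity(8) > 8.0)  # True
```

Independent atoms would emit at most N photons per unit time, right at the start; in the collective cascade the rate instead grows before it decays, which is the superradiant burst.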

Theoretical calculations of superradiance in systems of many atoms are not possible due to the complexity of the calculation – the computer memory and time needed are both prohibitive. What Masson and colleagues found is that, by looking at the first two photons, one can already know if there is going to be a superradiant burst. They can anticipate if the avalanche is going to happen. This means that the early dynamics define the nature of the light emission, and a calculation of the whole time-evolution is not necessary.

Since the distance between the atoms dictates the emergence of superradiance, one may ask whether the arrangement of the atoms plays any role. Before Masson’s work, the understanding in the field was that atomic chains and rings behave differently. In an atomic chain, the two atoms at the ends are different from those in the middle, since an atom at the edge has only one neighbor whereas one in the middle has two. In a ring, on the other hand, all the atoms have the same environment (see Fig. 1b). And this is certainly true for a system with very few atoms. But thanks to the authors’ theoretical approach, it is now possible to include many atoms in the calculation. And they found that, regardless of the atoms’ arrangement, superradiance occurs equally in chains and rings when the number of atoms is very high. The reason is that, for structures with many atoms, the influence of the two atoms placed at the ends of the chain is washed out by the effect of the many atoms located in the middle. Moreover, they also found that atoms can exhibit superradiance at much larger distances than expected.

Atomic arrays are used in atomic clocks, in GPS technology, and in quantum computers. In quantum technologies, each atom is used as a bit, the unit of information – it represents a 1 or a 0 depending on whether it is excited or relaxed. A byte contains eight bits; as a reference, the image in Figure 1 takes up about 6,000,000 bytes. The common belief is that interactions between the atoms and the environment produce information loss with respect to a pure, isolated system. However, Masson and Asenjo-Garcia show that interactions between the atoms result in their synchronization, producing a coherent, high-intensity light burst.

How a molecular structure explains the transport of fatty acids past the blood-brain barrier

The brain and eyes develop through constant circulation of nutrients across the blood-brain and blood-retina barriers. One such nutrient that is essential for development is an omega-3 fatty acid called docosahexaenoic acid (DHA). DHA makes up a fifth of all the fatty acids in the membranes of cells in the central nervous system. Neither the neurons in the brain nor the cells in the eye are capable of synthesizing DHA by themselves, and they therefore depend on dietary sources for DHA. Previously, scientists knew from cellular clues that this fatty acid most likely passes through the blood-brain and blood-retina barriers in the form of lysophosphatidylcholine (LPC-DHA), using a molecular channel. This transporter is known as major facilitator superfamily domain containing 2A, or MFSD2A, and its activity is regulated by sodium ions. However, it was not clear how this channel allows the passage of complex molecules like LPC-DHA. A recent study by Dr. Rosemary Cater and colleagues at Columbia University revealed the structure of this channel in precise detail.

To investigate the structure of MFSD2A, the authors used a state-of-the-art imaging technique called single-particle cryo-electron microscopy, a method of electron microscopy in which a beam of electrons is transmitted through a rapidly frozen, purified molecule. Because the sample is flash-frozen, the trapped molecules can be imaged in their native shape, as present in the cell, and from multiple angles. By capturing and combining multiple 2D images, a 3D structure of the protein can be reconstructed with extreme accuracy. Cryo-electron microscopy has been so impactful in biology that the method was awarded the 2017 Nobel Prize in Chemistry. Using this approach, the authors resolved the molecular patterns and arrangements of protein chains that make up a full molecule of MFSD2A.

Protein structure studies are typically among the most challenging grounds to explore in biology because proteins need to be captured in their native state as present in the cell. Past discoveries of various protein structures have been so instrumental in shaping therapeutic areas that the extent of mechanistic understanding of biological molecules has resulted in recognition by Nobel committees. Most recently, the discovery of the structure of the ribosome opened up fields of exploration into therapeutic interventions into ribosome diseases, some of which can lead to cancer.

To get the best chance at imaging the structure of MFSD2A, the scientists extracted and examined purified versions of this protein obtained from multiple organisms: the zebrafish, the frog, European cattle, the domestic dog, the red junglefowl, mice, and humans. In the end, the authors found that the protein obtained from the red junglefowl, a bird species that originates from Southeast Asia, was the most experimentally stable and the most similar (73% identity) to the human version of MFSD2A.
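Percent identity is simply the fraction of matching residues between two aligned sequences. Real comparisons use alignment software, but the idea can be sketched on pre-aligned, equal-length fragments (the sequences below are made up for illustration):

```python
def percent_identity(seq_a, seq_b):
    """Percent of positions with the same residue in two aligned,
    equal-length protein sequences (a crude identity measure)."""
    if len(seq_a) != len(seq_b) or not seq_a:
        raise ValueError("sequences must be aligned and non-empty")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100.0 * matches / len(seq_a)

# Hypothetical aligned fragments differing at 2 of 8 positions:
print(percent_identity("MAKVLFDT", "MAKILFDS"))  # 75.0
```

A 73% identity between the junglefowl and human proteins means roughly three out of four aligned residues are the same.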

Using additional accessory proteins to help with the orientation of MFSD2A, the authors obtained high-quality images with a resolution of 0.3 nanometers, or 0.3 billionths of a meter. From the imaging data, the authors found that the MFSD2A protein is about 5 nm wide and 8 nm long. MFSD2A is a transporter protein and, like many transporters, it contains repeated bundles of helices made of protein chains that traverse the cell membrane and are connected by protein chains that loop within the interior of the cell.

Structure of MFSD2A arranged as protein helices (colored cylinders) within the cell membrane along with protein loops that form both in the extracellular space (“Out”) and within the interior, cytoplasmic space (“In”) of the cell. The cytoplasmic loops likely have an important functional role. Figure from Cater et al, 2021.

The cell membrane consists of two layers of lipid molecules, known as the lipid bilayer, that control the entry and exit of materials from the cell. The loops shape the protein inside the cell such that it provides a cavity large enough to open from the lipid bilayer into the cellular space and let target molecules enter the cell. Amino acids are the building blocks of proteins, and the cavity contains amino acids of both the water-attracting and the water-repelling kind. This property allows many molecules of differing chemical nature to be accommodated within the cavity. The cavity contains three important regions that make the protein specific and functional: a charged region, a binding site for sodium ions, and a lipid-specific pocket. The authors speculate that these parts establish the mechanism by which LPC-DHA is transported from the outside into the cell. The multiple protein helices form two protein domains that capture LPC-DHA from outside the endothelial cell layer of the blood-brain barrier, then rock over a rotation axis so that their conformation switches and, finally, they release LPC-DHA into the cell. For this movement of LPC-DHA, sodium ions are absolutely required to allow the shape change of the protein. Once LPC-DHA enters the barrier cells in this manner, it is then transported across to the other side of the cell, which faces the neurons of the brain.

The transporter channel MFSD2A changes its shape once it binds sodium ions in the extracellular space, which enables the transport of LPC-DHA from the blood into the brain through a barrier made of a single layer of endothelial cells. Figure adapted from Cater et al, 2021.

Humans with mutations in the MFSD2A gene have brain abnormalities such as microcephaly, and disruption of the gene in mice affected neuronal branching and fatty acid composition in the brain. The discovery of the structure of a molecule that mediates uptake of essential nutrients across the blood-brain and blood-retina barriers will aid the development of therapies for neurological diseases.

Dr. Rosemary J. Cater is a postdoctoral researcher in the lab of Dr. Filippo Mancia in the Department of Physiology and Cellular Biophysics at Columbia University.

Take a Break: How the Brain Chooses When to Explore and When to Rest

Have you ever wondered why we feel comfortable in a familiar place or why going back to our favorite spots over and over again feels so good? Well, Dr. Paolo Botta, a former postdoc at Columbia University, and colleagues attempted to unravel some of the inner workings of the brain when it comes to rest and exploration. More specifically, Dr. Botta examined how neuronal activity correlates with periods of rest when exploring new areas. Dr. Botta and colleagues followed the behavior of mice as they freely explored a new area. They specifically looked at where and how often these mice decided to exhibit arrest behavior – in other words, to take a break during their explorations. While the arrest behavior alone is a fascinating phenomenon and provides insight into how mice explore new spaces, Dr. Botta and colleagues decided to go a step further and see which neurons in the brain are important for this arrest behavior. They decided to home in on an area of the brain called the basolateral amygdala (BLA). This area has previously been shown to be involved in locomotor exploration, experience-based learning, and recognition of familiar areas.

With this information in hand, Dr. Botta and colleagues began by identifying whether BLA neurons are active during arrest behavior. To this end, they gave mice access to both their home cages and a large open area for five days and allowed them to freely explore the open area during this period. BLA neuronal activity was monitored in the mice by measuring calcium levels, with higher calcium levels indicating neuronal activity (Figure). The researchers observed an increase in calcium in BLA neurons during arrest behavior, suggesting that BLA neurons are involved in this type of behavior. However, do these neurons actually cause the arrest behavior? To answer this question, Dr. Botta and colleagues either activated or inhibited the neurons using optogenetics. Optogenetics is a technique in which neurons are controlled by light. So, by turning different lights on and off, the researchers were able to activate or inhibit BLA neurons whenever they wanted to. When they activated the BLA neurons, the mice decreased their speed and exhibited more arrest behavior. When they inhibited the BLA neurons, the mice showed an increase in movement speed. Having seen how turning BLA neurons on and off affected behavior, they concluded that BLA neurons are important for inducing arrest behavior.
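The core of the calcium-imaging comparison is to average the fluorescence signal separately over arrest and movement periods. A schematic sketch of that kind of analysis, with made-up numbers (not the authors' pipeline):

```python
def mean_signal_by_state(calcium, is_arrest):
    """Average a calcium fluorescence trace separately over arrest frames
    and movement frames. calcium: list of signal values per frame;
    is_arrest: matching list of booleans (True = mouse is at rest)."""
    arrest = [c for c, a in zip(calcium, is_arrest) if a]
    moving = [c for c, a in zip(calcium, is_arrest) if not a]
    return sum(arrest) / len(arrest), sum(moving) / len(moving)

# Toy trace in which the signal rises during the arrest frames:
trace  = [0.1, 0.2, 0.9, 1.1, 0.2, 0.1, 1.0, 0.8]
arrest = [False, False, True, True, False, False, True, True]
arrest_mean, moving_mean = mean_signal_by_state(trace, arrest)
print(arrest_mean > moving_mean)  # True
```

A higher arrest-period average than movement-period average is the signature described in the text.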

At this point, Dr. Botta and colleagues had revealed that BLA neuronal activity occurs specifically during these arrest behaviors and that this activity is important for the onset of the arrest. However, their curiosity did not stop there. They began to wonder whether BLA activity changed when the mice exhibited arrest behavior – took breaks – in more familiar areas. To figure this out, Dr. Botta and colleagues tracked exactly where the mice explored and counted how many times the mice exhibited arrest behavior in areas that they had previously explored. With this experiment, they realized that the mice were more likely to exhibit arrest behavior in areas previously visited. So mice, like humans, have favorite spots, and they like to rest in those spots! After seeing that the mice have favorite spots, Dr. Botta and colleagues went on to examine BLA neuronal activity in these familiar areas. They found an increase in neuronal activity in these familiar areas: the more a mouse revisited and exhibited arrest behavior in a specific area, the more neuronal activity developed. In other words, the more often a mouse took a break in a specific area, the stronger the associated BLA neuronal activity.

The amygdala has multiple nuclei, which consist of groups of cells that are important for specific roles. The Central Nucleus of the Amygdala (CEA) is a part of the amygdala that has previously been shown to be involved in immobility. BLA neurons also communicate with the CEA (Figure). Knowing that the BLA neurons are important for invoking arrest behavior and the CEA plays a role in immobility, Dr. Botta and colleagues were curious as to whether these BLA neurons that project to the CEA are the specific neurons involved in triggering arrest behavior. To see whether the BLA neurons that project to the CEA are the ones active during arrest behavior they used the combination of calcium imaging and optogenetic techniques previously mentioned. With these techniques they were able to see that the BLA neurons that project to the CEA had an increase in neuronal activity during arrest behavior (Figure). This increase was not seen in BLA neurons that projected to other parts of the amygdala indicating that the BLA-CEA interaction is integral for the arrest activity. They also repeated the stimulation of the BLA neurons that project to CEA and observed an increase in arrest while inhibiting the same neurons resulted in an increase in movement, further confirming the need of this BLA-CEA interaction to induce arrest behavior.

Overall, Dr. Botta and colleagues discovered that BLA neurons that communicate with the CEA are important for arrest behavior, particularly in familiar places. This behavior seems to be extremely important for allowing a mouse to orient itself and properly explore novel surroundings. Maybe humans have a similar pathway that we use when wandering around. Could my BLA be the reason why I always go to the same cafes after a long walk or stop in the same part of the park while walking my dog? Are our BLA neurons just firing away while we rest?

Figure: BLA neuronal activity during exploratory vs arrest behavior. Left: Decreased activity in BLA neurons that communicate with the CEA results in increased exploratory behavior. Right: Increased BLA to CEA neuronal activity, indicated by calcium signaling, results in increased arrest behavior. Red colors indicate decreased BLA neuronal activity and increased exploratory behavior. Green colors indicate increased BLA neuronal activity and increased arrest behavior. BLA: Nucleus of the Basolateral Amygdala, CEA: Central Nucleus of the Amygdala

Why the gallbladder matters – The role of bile acids in metabolic health

Unless you belong to the 10-15% of people who have gallstones, you probably never think about your gallbladder or its function. However, this small pear-shaped organ plays an important role in our digestive system. The gallbladder is situated right under the liver and stores bile produced by liver cells. Bile is released from the gallbladder into the gut mainly after meals. Here, substances within the bile called bile acids help with the breakdown and absorption of fat. Apart from their role in the digestive system, bile acids have been shown to communicate with other organs and thereby affect the metabolism of fat and sugar. Not unexpectedly, considering their roles in digestion and metabolism, bile acids are associated with various metabolic diseases in humans, such as obesity and diabetes. Modulation of bile acids could therefore be used as a strategy to treat or prevent such metabolic disorders.

Dr. Antwi-Boasiako Oteng and colleagues from the Haeusler lab of the Department of Pathology & Cell Biology at Columbia University aimed to better understand how bile acids influence metabolic health. The two primary bile acids in humans are cholic acid (CA) and chenodeoxycholic acid (CDCA), which are both made from cholesterol in the liver (see Figure below, left panel). CA production requires the enzyme Cyp8b1, which adds a hydroxyl group (i.e., one hydrogen atom bonded to one oxygen atom) to the 12th carbon of the molecule. Because of this modification, CA is called a “12α-hydroxylated bile acid”. CDCA does not contain this hydroxyl group on its 12th carbon, and is therefore called a “non-12α-hydroxylated bile acid”. Levels of 12α-hydroxylated bile acids like CA are higher in obese individuals, but it is not yet known whether they can cause obesity. Therefore, in their study published in Molecular Metabolism, the researchers investigated the role of non-12α-hydroxylated versus 12α-hydroxylated bile acids in the development of metabolic disorders in an experimental mouse model.

Dr. Oteng and colleagues started their research by genetically manipulating mice so that their bile acid profile more closely resembles that of humans. This is necessary because mice produce bile acids called muricholic acids (MCAs) that are not present in humans. By removing the enzyme that converts CDCA into MCA, the researchers created mice with a human-like bile acid profile (see Figure below, right panel). Aside from having low levels of MCA, these mice had higher levels of CDCA and lower levels of CA compared to regular mice. Most importantly, fat absorption was strongly reduced in these mice, and they were protected from weight gain when fed a high-fat diet. To test whether this was due to the relative decrease in 12α-hydroxylated bile acids, the researchers supplemented another group of human-like mice with high levels of 12α-hydroxylated bile acids. This treatment strongly promoted fat absorption, which may increase susceptibility to metabolic disorders when combined with an excessive and/or unhealthy diet.

These findings suggest that the ratio of non-12α-hydroxylated versus 12α-hydroxylated bile acids is an important determinant of metabolic health in humans, which opens up new avenues for therapeutic intervention. According to Dr. Oteng, “We now have increased confidence that targeting Cyp8b1 to reduce the levels of 12α-hydroxylated bile acids can reduce the risk of metabolic disease in humans”. Currently, more than one-third of Americans have metabolic disorders, which increases their risk of heart disease and other health problems. If proven effective, a therapy targeting human bile acid composition could have a major impact on public health.

Left panel. The bile acids CA and CDCA are made from cholesterol in the liver. CA is a 12α-hydroxylated bile acid produced by the enzyme Cyp8b1, while CDCA is non-12α-hydroxylated. In mice, CDCA is further converted into MCA. Right panel. Inhibiting the conversion of CDCA into MCA increases the ratio of non-12α-hydroxylated to 12α-hydroxylated bile acids, thereby reducing fat absorption in the gut and protecting against diet-induced weight gain.

The key to a longer life might be in skipping that midnight snack

Have you ever caved in to the temptation of a midnight snack, diving into the freezer for that ice cream or tapping through your food delivery app to get those udon noodles? Suffice it to say that I have been a victim of this temptation one too many times. Much to my chagrin, there is an abundance of evidence suggesting that eating only during restricted hours of the day, known as time-restricted feeding (TRF), can slow the decline of bodily functions. Limiting food intake to certain hours of the daytime, even if the food is not particularly nutritious or low in calories, can prevent aging or even kickstart anti-aging mechanisms in mice and flies with obesity or heart disease. Because these effects depend on when the body takes in food, such studies hint at a role for the body’s biological clock, known as the circadian rhythm, in regulating health and longevity. In a new study authored by Columbia postdoc Dr. Matt Ulgherait, flies that cycled between time-restricted feeding and unlimited, ad libitum access to food showed a significant increase in lifespan.

By structuring 24-hour day-night periods as cycles of 12 hours of light followed by 12 hours of darkness in a temperature-controlled box, the authors tested various dietary regimens for their effects on lifespan and stumbled upon one that consistently produced longer lifespan along with enhanced health in the flies. This regimen, applied to young flies 10-40 days post-hatching, repeatedly cycled between a 20-hour fast starting at mid-morning (6 hours after lights-on) and a 22 hour recovery period of ad libitum eating. However, flies that began this regimen at an older age, after day 40, did not show enhanced lifespan. Compared to flies allowed ad libitum access to food around the clock, young flies following this particular fasting-feeding regimen showed an 18% increase in lifespan for females and a 13% increase for males. Because the regimen cycles between unlimited food access and periods of fasting, the authors termed it intermittent time-restricted feeding (iTRF).

Previous studies have shown that caloric restriction through reduced food intake, protein restriction, or inhibition of insulin-like signaling can extend lifespan. However, iTRF did not cause the flies to eat less; in many cases, flies under iTRF ate more during times of food access than those in the ad libitum group. Thus, lifespan extension under iTRF was not due to limited nutrient uptake. Interestingly, combining iTRF with either dietary protein restriction or inhibited insulin-like signaling resulted in a marked boost in lifespan compared to iTRF alone. Independent lifespan-enhancing mechanisms can therefore be combined for an even greater effect.

While these methods provide ways to extend lifespan through incremental means, some might argue that it would be meaningless to simply survive without long-lasting health benefits. To examine whether the longer-lived flies remained youthful, the researchers measured their fitness using two well-known age-related tests: the flies’ ability to climb up the plastic vial they live in, and how much their tissues accumulate aggregates of the aging-marker proteins polyubiquitin and p62. Compared to the ad libitum group, iTRF flies climbed much faster and had fewer polyubiquitin and p62 aggregates in their flight muscles, even beyond 40 days post-hatching. And while the gut microbiome is known to influence susceptibility to disease and thus lifespan, the gut tissue of iTRF flies remained healthier, with more normal cells, even when the gut microbiome was depleted with antibiotics. The flies were thus in better health, with fewer aging markers in addition to longer survival, demonstrating that aging had slowed thanks to better-functioning organs.

Because the iTRF regimen controls only the timing of feeding, not the nutritional intake, the authors suspected that the body’s natural biological clock had something to do with iTRF-mediated lifespan extension. The biological clock in flies consists of proteins that are conserved across organisms, from fungi to humans. The core circadian clock is built around the proteins ‘Clock’ (Clk) and ‘Cycle’ (Cyc), which activate the genes period (per) and timeless (tim), whose products in turn inhibit Clk and Cyc. This feedback loop takes about 24 hours to complete in both flies and humans, and it is how our bodies respond to light-dark cycles. Flies undergoing iTRF showed enhanced expression of Clk in the daytime and of per and tim at night. The authors then examined the feeding behavior and metabolism of circadian clock mutants undergoing iTRF and found that neither the 20-hour fasting period nor dietary restriction altered their feeding behavior compared to normal flies under iTRF. Yet the extended lifespan was completely missing in Clk, per, and tim mutants undergoing iTRF, and the improved health seen with iTRF, better climbing ability and less aging-protein aggregation, was likewise abolished in per mutants. Moreover, shifting the iTRF cycle by 12 hours, so that the fasting period fell during the daytime, abolished the lifespan extension: eating at night while fasting during the day simply did not work. This discovery points to a deep link between the body’s biological timer and the time of day food is eaten, one that determines both longevity and well-being.
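The Clk/Cyc-per/tim circuit described above is a delayed negative-feedback loop, and its rhythm-generating logic can be illustrated with a minimal Goodwin-type model. The sketch below is a toy illustration, not a model of the fly clock: the variables are hypothetical stand-ins (x for a Clk/Cyc-like activator, z for the Per/Tim-like repressor it ultimately produces), and the parameters are chosen only to show how an activator repressed by its own downstream product rises and then falls.

```python
def simulate(steps=4000, dt=0.05, n=8):
    """Euler integration of a minimal Goodwin-type negative-feedback loop.

    x: activator (Clk/Cyc-like), y: intermediate, z: repressor (Per/Tim-like).
    Production of x is shut off as z accumulates (the 1/(1 + z**n) term),
    closing the loop. All parameter values are illustrative, not fitted to data.
    """
    x = y = z = 0.0
    trace = []
    for _ in range(steps):
        dx = 1.0 / (1.0 + z ** n) - 0.1 * x  # repressible production minus decay
        dy = x - 0.1 * y                     # x drives the intermediate
        dz = y - 0.1 * z                     # the intermediate drives the repressor
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        trace.append(x)
    return trace
```

Running it, the activator first accumulates and then collapses once the repressor builds up, the same rise-and-fall logic that gives per and tim their night-time peaks in the real clock.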

Because shifting the fasting period to the daytime abolished the benefits, the authors checked whether genes activated during fasting are also linked to the biological clock. In fact, Dr. Ulgherait and colleagues had already shown that disrupting the tim and per genes in the gut, where food is processed, causes an increase in lifespan. But iTRF includes periods of starvation that could trigger different metabolic processes. Starvation induces cells to degrade and recycle their own components in a process called autophagy. Interestingly, the genes encoding two autophagy proteins, Atg1 and Atg8a, both of which have human counterparts, showed peak expression at night, with enhanced peaks in flies under iTRF. During autophagy, there is increased activity of cell organelles called lysosomes, which contain the digestive enzymes needed to break down cellular parts. The authors found that normal flies fasting under iTRF showed higher Atg1 and Atg8a expression along with more lysosomal activity, whereas per mutants did not. Using some more genetic tricks, the authors found that directly turning autophagy up or down correspondingly affected iTRF-mediated lifespan.

Finally, to explore the link between iTRF-mediated lifespan and autophagy, the authors used genetic tools to increase night-specific levels of Atg1 and Atg8a. In a surprising revelation, flies with night-specific expression of Atg1 and Atg8a showed an increase in lifespan, even when these flies did not undergo fasting and were fed ad libitum. Subjecting these genetically altered flies to iTRF did not additionally increase their lifespan, suggesting to the authors that circadian enhancement of cellular degradation under an all-access diet provides the same beneficial effects as fasting done under the stricter regimen of iTRF. Flies with night-specific enhanced autophagy also showed better neuromuscular and gut health on an all-access diet. Therefore, clock-dependent enhancement of the biological recycling machinery can mimic the lifespan extension mediated by iTRF.


Of course, large genetic manipulations are not yet a consideration in humans, but this study points to a potentially powerful yet simple change in dietary strategy that could slow down aging. Aging increases the risk of mortality and disease, but imagine a food-intake regimen, translated from this study to humans, that could help improve overall neuromuscular and gut health. So, while technology has made it easier than ever to have food at our doorstep in a few phone taps in the middle of the night, perhaps restricting the hours when we eat can really help us live healthier lives. This study now makes me reconsider the famous quote by Woody Allen in the context of food: “You can live to be a hundred if you give up all the things that make you want to live to be a hundred”.

Dr. Matt Ulgherait is a postdoctoral researcher in the lab of Dr. Mimi Shirasu-Hiza in the department of Genetics & Development at Columbia University. Dr. Ulgherait and his colleagues also recently showed that removing the expression of the period gene from the gut tissue was sufficient to cause an increase in lifespan.

Magic under the microscope

Researchers design an accessible, straightforward technique to characterize moiré systems – a class of materials built by placing slightly misaligned atomic monolayers on top of each other. Under certain conditions, such moiré structures exhibit exotic physical phenomena absent in the individual units that compose them.

A moiré pattern is an interference effect that arises when two grids are superimposed. It can be observed in the wrinkles of a mesh shirt and it is responsible for the fringes that appear when taking a picture of a computer screen. Moiré patterns are present in art and fashion, and in the last few years their effect in two-dimensional materials has entailed a revolution in physics.

Two-dimensional materials are those that are less than a nanometer thick. The first one to be isolated was graphene, a single layer of carbon atoms (see Fig. 1a). This discovery opened a whole new field of research, and many labs around the world started making their own stacks – structures with two-dimensional materials placed on top of each other. If one of those layers is placed slightly misaligned with the one below, a moiré pattern emerges. This interference effect can be visualized in Fig. 1b. The small circles represent the carbon atoms, which form a crystalline lattice (an ordered structure) on each graphene layer. The top layer is rotated with respect to the bottom one and, as a consequence, a periodicity larger than the atomic lattice emerges, as highlighted in Fig. 1b.
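For two identical lattices, the size of this emergent periodicity follows a standard small-angle formula, L = a / (2 sin(θ/2)), where a is the lattice constant and θ the twist angle. A quick sketch (the formula is textbook geometry; the graphene lattice constant of roughly 0.246 nm is an approximate value):

```python
import math

def moire_period(lattice_const_nm, twist_deg):
    """Moiré superlattice period of two identical lattices twisted by a small angle."""
    theta = math.radians(twist_deg)
    return lattice_const_nm / (2.0 * math.sin(theta / 2.0))

# Graphene (a ≈ 0.246 nm) at the ~1.1° magic angle: a moiré cell of ~13 nm,
# roughly fifty times larger than the atomic spacing.
magic = moire_period(0.246, 1.1)
```

The inverse dependence on θ explains why tiny misalignments matter so much: halving the twist angle roughly doubles the moiré period.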

In 2018, the field of condensed matter physics was stirred up: such moiré materials, at a very specific misalignment value called the magic angle, exhibit electronic states of matter that are not present in the individual layers, such as superconductivity or magnetism. The emergence of these electronic phases is a consequence of the moiré pattern, and visualizing it directly is critical for their understanding. A few techniques allow for this, including transmission electron microscopy and scanning tunneling microscopy, but they require complex setups that do not necessarily work for every material, which has significantly slowed progress in the field. McGilly and colleagues demonstrate a new and simple technique, based on piezoresponse force microscopy, to visualize moiré patterns.

A piezoresponse force microscope consists of a sharp metallic tip brought into contact with the material under study –  in this case, the moiré system (see Fig. 1c). Piezoresponsive materials are those that undergo a mechanical deformation in the presence of an electric field. In the microscope, the sample moves a small amount when a voltage is applied across it and the tip follows the motion. Such tip motion is measured as a voltage which is amplified to detectable values. The tip is then moved around the sample and the process is repeated on every pixel of a selected region, producing a map of the sample’s deformation.

a. Graphene atomic lattice. Each ball represents a carbon atom. b. Twisted graphene bilayers. The three main stacking configurations are shown (AA, AB and domain wall). The moiré unit cell is highlighted. c. Microscope tip in contact with the graphene bilayer. d. The strain on the graphene layer bends the chemical bonds between the atoms from in-plane (left) to a mixed in-plane/out-of-plane character (right).

In principle, it was not obvious that a moiré pattern would be detectable with the microscope. When a moiré pattern forms, it creates a repetitive set of individual units called unit cells (highlighted region in Fig. 1b). Each unit cell is formed by regions with different three-dimensional atomic configurations, called sites. In the case of graphene, those sites are called AA and AB, which refers to how the atoms of each layer lie on top of each other (see insets in Fig. 1b). The AB regions (also called domains) are separated by domain walls, as highlighted in Fig. 1b. McGilly and colleagues show that the voltage signal detected with the microscope is localized on the domain walls.

When the moiré pattern forms, the atomic layers relax to accommodate it and the layer wrinkles along the domain wall (see right panel in Fig. 1d). Since the microscope is not sensitive to such small deformation, the origin of the detected signal must be electronic. Flat graphene layers have planar bonds, as shown in the left panel of Fig. 1d. However, the curvature of the wrinkle bends the atomic bonds on the graphene layer, which in turn causes an asymmetric distribution of the charge in the vertical direction and gives rise to an out-of-plane polarization (P), which is responsible for the signal measured in the microscope.

The technique designed by McGilly and colleagues has proven extremely useful for the advancement of the field due to the simplicity of the method and the fact that it allows imaging of any moiré pattern, independently of the nature of the individual units that compose it – that is, whether they are metals, semiconductors, or insulators. Being able to image moiré patterns with such an accessible technique will help improve the fabrication process. Uniform samples are critical, since strain gradients can significantly alter the states of matter that emerge in moiré materials.


Dr. Leo McGilly is a Postdoctoral Research Fellow in the Physics Department at Columbia University.

How Bouldering keeps urban communities in shape

So, you too enjoy this amazing sport, where people climb over comparatively short distances without any tools, such as ropes or harnesses? Amazing! But, to quote a famous British ensemble, now for something completely different. Today we want to talk about a more serious and urgent topic: flood risk. The recent flooding in the New York City area showed all too convincingly the risk that floods pose to (highly) populated urban areas. Climate change and socioeconomic developments continue to increase this risk.

Flooding in NYC
Figure 1: Strong rain in New York City transformed parts of it into Venice’s little brother, with less romance and at least 43 deaths.

The United Nations, in its 2030 Agenda for Sustainable Development, formulated 17 goals to “[…] stimulate action over the next fifteen years in areas of critical importance for humanity and the planet”. Goal number 11 is “sustainable cities and communities”. But to properly address a risk, it is necessary to analyze and describe it adequately. Current approaches to urban risk analysis mostly fall short on two important counts. First, they are mainly qualitative, not quantitative: they describe the what of a risk but not the how much. We can probably all agree that the statement that the biggest crocodile ever found was longer than the tallest giraffe was tall is far more impressive than the statement that crocodiles can get really big. That is why quantitative statements matter.

The second problem they do not address properly is the prediction of urban development. They project city growth rather arbitrarily, seldom incorporating the geographical, social, or economic factors associated with urbanization. While predictions are difficult, especially about the future, some information exists that can guide us toward the most probable development scenarios.

Dr. Mona Hemmati and colleagues tackled both problems by developing a framework for understanding the interactions between urbanization and flood risk. To do so, they combined four main components: an urban growth module, a hazard module, a risk assessment module, and a policy implementation module. The urban growth module is used to achieve a more realistic prediction of urban development, and the hazard module to generate floodplains. The risk assessment module combines the two previous modules, while the policy implementation module is used to implement nonstructural strategies, such as development zones or taxation variations.

The City of Boulder, Colorado, was chosen as a testbed for developing the framework. Various data, such as the size, shape, surrounding area, and density distribution of the city, were gathered from different sources and used as input parameters for the model.

Their urban growth model has four key features used to predict the urbanization process, divided into residential, industrial, and commercial & mixed-use occupation. They divide the urban area and its surroundings into equally sized cells, the so-called cell space, creating a 2D spatial grid. Each cell has a cell state, which describes whether the cell is developed or not. The neighbourhood of a cell is a factor that can have either an attractive or a repulsive effect on the surrounding cells, and the transition potential represents the probability that a cell changes state in the next time step, defined by different development factors. For the hazard module, a tool developed by the Federal Emergency Management Agency was used, with which floodplain characteristics can be calculated for various flood scenarios, such as 5-year, 10-year, etc. return periods. The risk assessment module measures the damage to physical infrastructure, together with that caused by economic and social disruptions, as expected annual damage (EAD) in US$. Last, the policy implementation module takes into account nonstructural flood mitigation measures. Structural measures, such as dams, aim at controlling the hazard and keeping the flood out, while nonstructural measures, such as land acquisition or public awareness programmes, aim at reducing exposure to the hazard.
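The cell space, cell states, neighbourhood, and transition potential described above are the ingredients of a cellular automaton, whose update rule can be sketched in a few lines. The toy version below is our illustration, not the authors’ model: it collapses all development factors into a single neighbourhood-attractiveness weight, and every name and number in it is hypothetical.

```python
import random

def step(grid, attractiveness=0.2, seed=None):
    """One time step of a toy urban-growth cellular automaton.

    grid: 2D list of cell states, 0 (undeveloped) or 1 (developed).
    Each undeveloped cell's transition potential grows with the number of
    developed neighbours; the cell then develops with that probability.
    """
    rng = random.Random(seed)
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]  # next cell states; update synchronously
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == 1:
                continue  # already developed
            developed_neighbours = sum(
                grid[rr][cc]
                for rr in range(max(r - 1, 0), min(r + 2, rows))
                for cc in range(max(c - 1, 0), min(c + 2, cols))
                if (rr, cc) != (r, c)
            )
            transition_potential = attractiveness * developed_neighbours
            if rng.random() < transition_potential:
                new[r][c] = 1
    return new
```

Iterating step on a grid seeded with the existing city footprint grows development preferentially around already-developed cells; a repulsive factor (for example, for floodplain cells) would enter as a negative contribution to the transition potential.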

Using this framework, they tested two different policies against both the city’s current development policy and no policy at all. For the first policy, they defined low-risk and high-risk zones and disallowed development in the high-risk zones, while for the second they defined socioeconomic incentives, such as placing schools and places of entertainment in low-risk zones. The interesting result was that of the four tested cases, Boulder’s current development policy showed the worst outcome in terms of growth inside the floodplains and therefore long-term costs. Even uncontrolled development was better, while the best policy was the zoning policy, closely followed by the incentive policy.

In summary, while their model still contains many educated guesses and assumptions, and for example neglects the influence of the growth module on the hazard module, it is a huge step forward compared to purely qualitative models based on random development. The Boulder testbed showed that it can be directly applied by community planners to assist in mitigating the risks of future hazards, bringing the science out of the ivory tower and into the heart of modern society: the city itself.

Dr. Mona Hemmati is a Postdoctoral Research Scientist in the department for Ocean and Climate Physics at the Lamont-Doherty Earth Observatory (LDEO) of Columbia University.

The Different Perceptions of Cultural Appropriation

The term cultural appropriation is by now a familiar one. It is defined as a situation in which a person associated with one group uses cultural elements from another group. These elements can include cultural items like “symbols, genres, expressions, technology and artifacts”. While the term is widely used, actual empirical data on the perception of cultural appropriation are limited. In a recent publication, Dr. Ariel Mosley, a Columbia postdoc, and Dr. Biernat venture into the perception of cultural appropriation. To understand how cultural appropriation is perceived by different groups, Dr. Mosley compares a majority and a minority group from the same community and identifies how each group views different actions as cultural appropriation.

This study uses multiple examples of cultural appropriation (Figure) to identify the perception of appropriation (whether the example is actually cultural appropriation), perception of harm (whether the appropriation can be harmful to the group the cultural aspect was borrowed from), perception of intent (whether the appropriation was done purposefully), and distinctiveness threat (whether the appropriation threatened cultural aspects that allow the minority group to be distinct from the majority group).

To fully identify the perception of cultural appropriation, the study was divided into five sub-studies. Studies one through three focused on the perception of appropriation, harm, and intent; study four focused on manipulating distinctiveness threat; and study five focused on fully crossing the race of the actor with the cultural origin of the product. The authors recruited equal numbers of adults who identified as Black or White Americans, with White Americans representing the majority group and Black Americans the minority group. For studies one through three, the authors set out to answer whether Black Americans or White Americans would have higher perceptions of appropriation, harm, intent, and distinctiveness threat. Participants read scenarios of potential cultural appropriation, adapted from social media and news clips, in which the perpetrator, the person doing the appropriating, could be either White or Black. The participants were asked to review six possible cases of cultural appropriation (Figure). Throughout the three studies, the authors found that Black participants perceived more cases of appropriation than White participants when the perpetrator was White. In a similar pattern, Black participants saw the scenarios as more harmful and more intentional when the perpetrator was White. When the perpetrator was Black, neither White nor Black participants saw the scenario as appropriation. Black participants also felt a greater distinctiveness threat overall than White participants. These findings supported Dr. Mosley and Dr. Biernat’s original hypothesis that cultural appropriation is more likely to be perceived when the perceivers are members of the minority group.

Since Black participants felt an increased distinctiveness threat in studies one through three, Dr. Mosley and Dr. Biernat wanted to see whether increased distinctiveness threat in particular could alter the perception of cultural appropriation. To test this, the authors primed participants in a fourth study for increased distinctiveness threat and focused on one scenario category, “hairstyle” (Figure). Participants were primed either for increased distinctiveness threat by reading “The Disappearing Color Line in America” or for normal distinctiveness threat by reading “The Geography and Climate in America”. Black participants were largely unaffected by the priming, with results mimicking studies one through three, but White participants primed for distinctiveness threat did see the White perpetrator’s actions as cultural appropriation. These results indicate that the level of distinctiveness threat experienced increases the perception of cultural appropriation.

Figure: Detailed depiction of the study designs and categories of cultural appropriation.

Then, in study five, to confirm their results, the authors paired a perpetrator with a product that was distinctly part of the participant’s own culture. The previous four studies used an item that was outside the perpetrator’s culture, but not necessarily one belonging to the participant’s culture. Here, the item was explicitly part of the participant’s culture: the perpetrator was either a White waiter serving culturally Black cuisine or a Black waiter serving culturally White cuisine. Mimicking the previous studies, Black participants were more likely to see cultural appropriation when the waiter was White.

Overall, the study indicates that majority and minority groups perceive cultural appropriation differently, with the minority group being more sensitive to actions that can be perceived as appropriative. The authors also found that perceptions of harm and intent correlated with perceptions of appropriation, leading them to conclude that both are part of the appropriation construct. These findings supported their initial hypothesis that power relations and social constructs affect the perception of cultural appropriation, and added empirical data to a topic often discussed but still understudied.

While Dr. Mosley and Dr. Biernat have added a significant amount of empirical information on how cultural appropriation is perceived, there is still more to explore. Future studies could examine how cultural appropriation affects other groups, including individuals across different races, sexual orientations, and genders, as well as individuals with disabilities.


Dr. Ariel Mosley is a Postdoctoral Research Scientist in the Department of Psychology at Columbia University. Her research focuses on social cognition, social identity, and intergroup biases. More information about Dr. Mosley can be found on her website.