Over the brainbow with a new PAL

Brains come in a variety of sizes. Several orders of magnitude separate the convoluted human brain from that of the fruit fly. But no matter the configuration, neuroscientists strive to study neurons in ever finer detail, ideally at the single-cell level. However, consistently identifying individual neurons is no easy endeavor; in most organisms there are so many neurons that, even when labelled with colored tags, we can’t see the forest for the trees.

Enter Caenorhabditis elegans, a worm with only 302 neurons. This tiny transparent worm comes in handy for scientists because they can easily follow the fate of any of its cells from the embryonic stage to adulthood. However, despite the somewhat predetermined developmental paths, the exact spatial location of each cell is slightly variable. Therefore, studies examining individual cell identities are doomed if they rely exclusively on relative position within each group of neurons.

In their recent paper, Dr. Eviatar Yemini and colleagues introduce a solution to this problem in the form of deterministic fluorescent labelling of all 302 neurons of the worm C. elegans. Their approach, NeuroPAL (a neuronal polychromatic atlas of landmarks), produces the same pattern of colors across worms, a fundamental improvement over earlier similar techniques like Brainbow, which produce random patterns.

The first challenge they faced was determining how many distinguishable colors they needed to correctly identify all neurons. Fortunately, the neurons of this worm are dispersed over its whole body, grouped in 11 different clusters, so they didn’t need all 302 to be labelled with distinct colors. The biggest of these ganglia contains roughly 30 neurons, so aiming for a range of colors around that number was enough. How do you get those colors? One can achieve a sizable palette with just a few different fluorescent markers, or “fluorophores”, that are detectable at different intensities. However, selecting the final fluorophores wouldn’t have happened without a great example of scientific collaboration. Dr. Yemini had been struggling with the colors blending together until he contacted a colleague who told him about a new fluorophore, which turned out to be the missing puzzle piece he needed to make all the fluorophores distinguishable. Once they had carefully selected the three distinct markers, a clever trick of imaging them in red, green and blue allowed them to obtain a whole RGB palette of colors.
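As a back-of-the-envelope illustration of why a few fluorophores suffice (the specific numbers here are illustrative, not the paper’s exact design): three fluorophores, each detectable at a handful of intensity levels, already combine into far more than the ~30 distinguishable colors needed for the largest ganglion.

```python
from itertools import product

def palette_size(n_fluorophores: int, intensity_levels: int) -> int:
    """Count the distinct color combinations available when each
    fluorophore can be expressed at any one of several intensity levels."""
    return len(set(product(range(intensity_levels), repeat=n_fluorophores)))

# Three fluorophores (mapped to R, G, B channels) at just 4 intensity
# levels each comfortably exceed the ~30 colors needed.
print(palette_size(3, 4))  # 64 combinations
```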

The next step was to achieve the different levels of each fluorophore for each neuron in a consistent way across worms by changing the signal driving the fluorophores’ expression. Starting from a list of 133 previously published genes with different patterns of neuronal expression, Dr. Yemini tried them all, painstakingly narrowing the list down to 41 winners through iterative trial and error, checking at each step the resulting color combinations and whether the neurons could be distinguished. This process alone spanned more than two years, and required deep dives into the literature and some expert judgement calls: “in one rather desperate case, I guessed the expression from the behavioral phenotype and, very luckily, was approximately right,” Dr. Yemini says.

Once they achieved the final combination of colors, they had created a genetically modified worm that could pass on this colorful and robust pattern for many generations.

A fluorescent image of a NeuroPAL worm with distinct labelling of each neuron
A NeuroPAL worm with deterministic fluorescent labelling of its 302 neurons. Notice that the head, on the left, has a much higher density of neurons, which had previously complicated the task of identifying them. Courtesy of Dr. Yemini.

NeuroPAL is not only a technical feat in and of itself, but was also a creative outlet for Dr. Yemini, who enjoyed the highly collaborative art-meets-science project:

“In high school, I had to choose between applying to art school or following what I thought was a more traditional route. I loved the artistic process, I’d taken many art classes and even managed to score a scholarship for an after-school art program at SUNY Purchase. But I let luck guide my fate and ended up taking the non-artistic route. I really miss that part of me. The process of making NeuroPAL has felt like a taste of a part of me that I’d lost.”

So what do you do after you create a tool that allows you to unambiguously identify all neurons? You use it to explore more questions! Dr. Yemini and his colleagues applied their shiny new worms to study many old questions in the field. For example, they leveraged the individual cell identification to refine whole-brain activity imaging, where scientists had previously been unable to determine neuronal identity. They succeeded in recording responses to different chemical stimuli, both attractive and repulsive, confirming previous results and adding new neurons to the response pathways, thus unraveling more complex neuronal networks. On the whole, they show that this new tool can be used for exploring a variety of questions in C. elegans.

Sped-up video of a NeuroPAL worm responding to a stimulus. The cells that are activated by the stimulus shine more brightly and are identifiable by their underlying color.

You might be thinking, “This is all very cool, but what does a tiny transparent worm have to do with me?” While studying C. elegans in itself can shed light on some basic biological processes, it can also open the door to discoveries in more complex organisms and those more similar in neural organization to humans. Indeed, the authors suggest that this approach to unequivocally label cells could be translated to other models that are widely used in biomedical research, such as the fruit fly, fish and even mice. We still have a long way to go before we can create an entire rodent with a consistent pattern of shiny cells, but local labelling may be a more attainable goal. And eventually, this research will help us distinguish the trees from the forest.

Dr. Eviatar Yemini is an Adjunct Associate Research Scientist in the Department of Biological Sciences at Columbia University (Hobert lab). He will be starting his own lab at the University of Massachusetts Medical School in January 2022. Reach out to him for exciting job opportunities!

The hungry algorithm: machine learning to ease the “what’s for dinner?” decision

When Dr. Jaan Altosaar heard that food deprivation increases stem cell regeneration and immune system activity in rats, he did what many would not dare: he decided to try it himself and fasted for five days. Thoughts of food started to take over his mind and, with what can only be described as a superhuman ability to think with low blood sugar, he went on a scientific tangent and channeled them into tackling the complicated task of improving food recommendation systems, which led to publishing a research article about it.

Dr. Altosaar wanted help in making decisions because choosing is hard. When faced with an excessive number of options, we fall victim to decision fatigue and tend to prefer familiar things. Companies know this, and many have developed personalized recommendations for many facets of our lives: Facebook’s posts on your timeline, potential partners on dating apps, or suggested products on Amazon. But Jaan had a clear favorite: Spotify’s Discover Weekly algorithm. The music app gathers information on co-occurrence of artists in playlists and compares the representation of you as a listener to the couple billion playlists it has at its disposal to suggest songs you might enjoy. Since Dr. Altosaar’s problem was similar, he framed it as feeding the algorithm a user’s favorite recipes (“playlists”), which are made of a list of ingredients (“songs”). Would the algorithm then cook up suggestions of complementary meals based on the ingredients in them?

A meal consumed by a user (hamburger) is made up of ingredients (bread, lettuce, tomato, cheese, meat). This information is given to the machine learning algorithm, which will use learnt information about that user to provide a recommendation likely to be eaten by them.

Meal recommendation in an app is challenging on several fronts. First, a food tracking app might record eating the same meal in many different ways or with unique variations (such as a sandwich with homemade hot sauce or omitting pickles). This means that any specific meal is typically only logged by a small number of users. Further, the database of all possible meals a user might track is enormous, and each meal only contains a few ingredients. 

In traditional recommender systems such as those used by Netflix, solving this problem might mean first translating the data into a large matrix where users are rows and items (e.g. movies or meals) are columns. The values in the matrix are ones or zeros depending on whether the user consumed the item or not. Modern versions of recommender systems, including the one in Dr. Altosaar’s paper, also incorporate item attributes (ingredients, availability, popularity) and use them as additional information to better tailor recommendations. An outstanding issue, however, is striking a balance between flexibility, to account for the fact that we are not all like Joey Tribbiani and might not like custard, jam and beef all together (even if we like them separately), and scalability, since an increasing number of attributes takes a toll on computing time. Additionally, these machine learning algorithms are not always trained the same way they are later evaluated for performance.

A matrix with a representation of users and items
A sparse matrix representing whether a user “u” consumed an item “m” (coded as a one). If the user did not consume the item, there is a zero. Note that most entries in the matrix are zeros, so it carries little actual information (hence “sparse”).
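The user-item matrix described above can be sketched with a toy example (the users, meals and consumption log below are invented for illustration):

```python
# Toy consumption log: which (user, meal) pairs were observed.
consumed = {
    ("alice", "burger"), ("alice", "salad"),
    ("bob", "burger"),
}
users = ["alice", "bob"]
meals = ["burger", "salad", "soup"]

# Dense 0/1 matrix: rows = users, columns = meals.
matrix = [[1 if (u, m) in consumed else 0 for m in meals] for u in users]

# Most entries are zero -- the matrix is sparse, which is why real
# systems store only the nonzero (user, item) pairs instead.
n_cells = len(users) * len(meals)
n_ones = sum(sum(row) for row in matrix)
print(matrix)             # [[1, 1, 0], [1, 0, 0]]
print(n_ones / n_cells)   # 0.5 here; a tiny fraction in practice
```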

The new type of model Dr. Altosaar and colleagues propose, RankFromSets, frames the problem as a binary classification. This means that it learns to assign a zero to meals unlikely to be consumed by a user, and a one to those that are likely to be consumed. When faced with giving a user a set of potential meals (say five), it strives to maximize the number of meals the user will actually eat from those five recommended to them. To leverage the power of incorporating the meal’s ingredients, the algorithm uses a technique from natural language processing to learn embeddings. These are a way to compress data while preserving the relevant information you care about for solving your problem; in this case, learning about patterns useful for predicting which ingredients tip the balance for someone to consume a meal. This allows for a numerical representation for each meal based on its constituent foods, and the patterns in how those foods are consumed across all users.
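One common way to turn a variable-length ingredient list into a fixed-size meal representation is to average the ingredient embeddings. This is a simplified sketch, not the paper’s exact architecture, and the two-dimensional vectors below are made up:

```python
# Invented 2-dimensional ingredient embeddings (learned from data
# in a real system).
ingredient_emb = {
    "bread":   [0.9, 0.1],
    "lettuce": [0.1, 0.8],
    "tomato":  [0.2, 0.7],
}

def meal_embedding(ingredients):
    """Fixed-size meal vector: the mean of its ingredient embeddings."""
    dim = len(next(iter(ingredient_emb.values())))
    total = [0.0] * dim
    for ing in ingredients:
        for i, v in enumerate(ingredient_emb[ing]):
            total[i] += v
    return [t / len(ingredients) for t in total]

print(meal_embedding(["bread", "lettuce", "tomato"]))  # roughly [0.4, 0.53]
```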

The RankFromSets classification model incorporates several components. There are embeddings for representing user preferences alongside the embeddings corresponding to a meal a user might consume. The classifier is spiced up with additional user-independent information about the meal’s popularity and its availability. These components are used by the model to learn the probability that a particular meal will be consumed by a user. Potential meals a user might enjoy – or that might be healthier options – are then ranked, and the top meals will be recommended to the user. For example, if you have had avocados in every one of your meals, they are in season, and all those Millennials are logging their avocado toast, you are very likely to receive recommendations that include avocados in the future.
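Putting those components together, a toy version of the scoring-and-ranking step might look like the following. The dot-product-plus-bias form is a simplification of the model, and every number here is invented; in the real system the embeddings and the popularity/availability terms are learned from data:

```python
import math

def score(user_emb, meal_emb, popularity, availability):
    """Probability-like score that a user consumes a meal:
    a preference match (dot product of embeddings) plus the
    user-independent popularity and availability terms,
    squashed through a sigmoid."""
    logit = sum(u * m for u, m in zip(user_emb, meal_emb))
    logit += popularity + availability
    return 1 / (1 + math.exp(-logit))

# Invented user embedding and candidate meals:
# meal -> (meal embedding, popularity, availability)
user = [0.6, -0.2]
candidates = {
    "avocado toast": ([0.7, -0.1], 1.0, 0.5),
    "custard-jam-beef trifle": ([-0.8, 0.9], -0.5, 0.1),
}

# Rank candidates by predicted consumption probability.
ranked = sorted(candidates,
                key=lambda m: score(user, *candidates[m]),
                reverse=True)
print(ranked[0])  # the better-matched, more popular meal ranks first
```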

As a proof of concept, the authors tested their method not only on food data, which they got from the LoseIt! weight loss app, but also on a dataset unrelated to meal choices. For this independent data set, the authors used reading choices and behavior among users of arXiv, a preprint server. They trained the model on past user behavior data and evaluated performance (accuracy of paper suggestions) on a previously separated portion of that same data (so they knew whether the user had actually read the paper, but this information was hidden from the algorithm for evaluation). This is a typical way to assess the performance of machine learning systems, and their method outperformed previously-developed recommender systems. The better performance and translatability to tasks other than meal recommendation is indicative of the potential of this tool to be applied in other contexts.
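The evaluation protocol described (hide part of each user’s history, recommend, then check the hits) can be sketched as a precision-at-k computation. The held-out items and recommendations below are toy data for illustration:

```python
def precision_at_k(recommended, actually_consumed, k=5):
    """Fraction of the top-k recommendations that the user truly
    consumed according to the held-out (hidden) data."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in actually_consumed)
    return hits / k

# Held-out meals the user actually ate (hidden from the model):
held_out = {"salad", "soup", "avocado toast"}
# The model's ranked suggestions:
recs = ["avocado toast", "burger", "soup", "pizza", "salad"]

print(precision_at_k(recs, held_out, k=5))  # 3 of 5 hits -> 0.6
```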

This new recommender system could be applied to recipe recommendation apps, or even to an app that suggests to first-time customers of a restaurant the menu items they are most likely to enjoy based on their preferences. The system also has the potential to incorporate additional information beyond whether a user consumed (and liked) a particular ingredient or meal. Sometimes the method of cooking determines whether a food is appealing or not (Brussels sprouts, I’m looking at you). Additionally, classifying ingredients by flavor might also be helpful in suggesting similar (and even healthier) alternatives. Therefore, adding those tags as extra layers to the user-meal intersection will certainly provide better recommendations and opportunities to cook outside of the box. Dr. Altosaar’s fast might or might not have gotten him a boost in his stem cells, but he certainly succeeded in helping everyone else worry a bit less about what’s for dinner tonight.

Dr. Jaan Altosaar is a Postdoctoral Research Scientist in the Department of Biomedical Informatics and an active participant in CUPS. He publishes an awesome blog about machine learning and its implications.

Plasticity inception in a nutshell

Have you ever realized that you remember experiences associated with strong emotions more vividly? For example, you probably remember what you ate at your (or a close friend’s) wedding, but not last Tuesday. However, these persistent memories are not always pleasant. People exposed to actual or threatened death, serious injury, or sexual violence can develop Post-Traumatic Stress Disorder (PTSD), which involves recurring memories or dreams of the traumatic event, bodily reactions to reminders and active avoidance of those reminders. Treatment for PTSD combines psychotherapy and medication, and it aims at enabling the person to understand their trauma and detach the triggers from the responses.

The area in your brain responsible for the formation of such emotional memories is called the amygdala (from the Greek word for almond, due to its shape, Fig. 1). It can modify the way it will respond to similar stimuli in the future, and it can also affect how other brain areas, like the medial prefrontal cortex or the hippocampus, do as well. This ability to change and adapt is called plasticity, and it can start with something as “simple” as a synaptic connection becoming stronger or weaker. There are higher levels of plasticity, though. If changes alter the potential response of a region to a future challenge, this plasticity of plasticity is called metaplasticity.

Human and rodent brain with highlighted amygdala, medial prefrontal cortex and hippocampus.
Fig. 1. Depiction of a human and a rodent brain. Highlighted areas are responsible for establishing emotional memories, fear conditioning and extinction. Modified from Sokolowski and Corbin 2012.

In the recent review “Intra-Amygdala Metaplasticity Modulation of Fear Extinction Learning”, CUIMC postdoc Dr. Rinki Saha and colleagues provide a comprehensive account of recent literature on metaplasticity in the amygdala in the context of fear conditioning, and how it may lead to plasticity in other connected brain regions.

Fear conditioning is a classic rodent model in neuroscience research that allows scientists to study the mechanisms that lead to associations between neutral stimuli and unpleasant stimuli. The general experimental layout is as follows: first, a neutral stimulus (a light or a tone, for example) is consistently paired to precede an aversive stimulus (like an electric foot shock). After this exposure, animals learn that the neutral stimulus (called conditioned stimulus) predicts the aversive one (called unconditioned stimulus) and they develop a fear response which they perform right after the neutral stimulus (like freezing in place). The experiment can continue to study how they learn to dissociate them once the stimuli stop being paired. For this second part, called fear extinction learning, the neutral stimulus is presented by itself (without pairing it to the aversive one), and researchers measure the time it takes the animal to stop performing the fear response.

In order to study the amygdala’s role in fear extinction, scientists can inject different drugs into it with very fine syringes (in a procedure called stereotaxic surgery, Fig. 2). By either activating or inhibiting different signaling pathways, they can elucidate what roles those molecules play in the fear extinction process. In addition, experiences like stress and trauma can interfere with this extinction learning, as evidenced in people who suffer from PTSD and in rodent models exposed to different stressful situations, both acute and chronic.

Depiction of a stereotaxic surgery in a rodent. Detail of injection in the amygdala.
Fig. 2. Depiction of a stereotaxic surgery in a rodent. The anesthetized animal is fixed on the frame of the stereotaxic instrument, which has very accurate rulers for the three dimensions. A very fine syringe is introduced through the skull into the brain to administer the drug or virus in a very precise way.
Made with BioRender.

This paradigm has been used by many to study metaplasticity, where the change that occurs is not a modification of the baseline response but rather of the response to a subsequent plasticity-inducing stimulation. For example, Dr. Saha herself showed that it is possible to alter fear extinction learning by injecting into a subregion of the amygdala a virus that disrupts inhibitory synapses. Importantly, this happened without modifying the initial fear conditioning or the anxiety level of the animals. In addition, they also showed that those alterations in inhibitory synapses in the amygdala led to independent changes in the medial prefrontal cortex, hindering its intrinsic plasticity. The same intervention caused increased resilience to acute trauma and improved the performance of a task dependent on another brain region, the hippocampus. Hence, a very targeted intervention in the amygdala can cause an array of effects across multiple brain areas.

This body of research has tremendous implications in our understanding of the brain and how to treat its diseases. On a pragmatic level, it should serve as a cautionary tale for researchers to consider the potential for “undesired” plasticity in more than one place as a response to certain interventions. But more importantly, it opens up potential therapeutic strategies for trauma-related disorders like PTSD, stress or fear. Changes in one small region can lead to widespread effects through its connections to other brain areas. Hopefully, we are a little bit closer to tricking the brain into equating those traumatic memories with what you ate last Tuesday.

Dr. Rinki Saha is a Postdoctoral Research Fellow in the Department of Psychiatry researching stress, and one of CUPS’ social media managers.
