Shining a Light on Microscale Innovations

What comes to mind when you think of light? Perhaps the vision of yourself basking in the sun on a California beach. Or Michelle Obama’s recent book, “The Light We Carry”. Or even the metaphorical light we’re not supposed to follow when our time comes. Here, we are going to talk about a concept even more fundamental: the physics of light.

Light, with its dual nature as both a wave and a stream of particles known as photons, exists in different “forms”, each characterized by its unique wavelength or color. For instance, the wavelength of visible light, the electromagnetic radiation detectable by the human eye, ranges from 400 to 700 nanometers.

Image credits: Chiara Trovatello

Fascinatingly, we can manipulate light and change its wavelength using a powerful tool: nonlinear optics. Inside nonlinear materials, light behaves in a “nonlinear” manner, meaning that its response is not directly proportional to the input. In such systems, the properties of light change as it interacts with the material, resulting in nonlinear phenomena. For example, inside a nonlinear material two photons can combine, with a certain probability, into a single new photon with twice the frequency (and half the wavelength) of the originals, in a process called Second Harmonic Generation (SHG). Another phenomenon, even more intriguing, is Spontaneous Parametric Down-Conversion (SPDC), in which one photon of higher energy is converted into a pair of entangled photons of lower energy. What makes these entangled photons unique is their intrinsic interconnection, even when they are separated by vast distances, or even placed at opposite corners of the universe. Measuring the state of one of them instantly determines the state of the other photon.
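Both processes obey simple energy bookkeeping: a photon’s energy is inversely proportional to its wavelength, so doubling the frequency halves the wavelength, and the energies of the two SPDC photons must add up to the pump photon’s energy. A minimal Python sketch, with illustrative pump wavelengths chosen for this example (not taken from the study):

```python
# Energy bookkeeping for SHG and degenerate SPDC (illustrative values).
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photon_energy(wavelength_nm):
    """Photon energy in joules for a given wavelength in nanometers."""
    return H * C / (wavelength_nm * 1e-9)

# SHG: two pump photons combine into one photon at twice the frequency,
# i.e. half the wavelength.
pump_nm = 1040.0                 # assumed near-infrared pump
shg_nm = pump_nm / 2             # 520 nm: green light

# Degenerate SPDC: one pump photon splits into two entangled photons,
# each at half the pump frequency (twice the wavelength).
spdc_pump_nm = 405.0             # assumed violet pump
signal_nm = idler_nm = spdc_pump_nm * 2   # 810 nm each

# Energy is conserved in both processes.
assert abs(2 * photon_energy(pump_nm) - photon_energy(shg_nm)) < 1e-25
assert abs(photon_energy(spdc_pump_nm)
           - (photon_energy(signal_nm) + photon_energy(idler_nm))) < 1e-25
```

With these example numbers, a 1040 nm infrared pump would produce green 520 nm light via SHG, while a 405 nm violet pump could split into a pair of entangled 810 nm infrared photons via degenerate SPDC.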

Nowadays, SPDC is especially relevant because it is at the heart of many applications, from quantum cryptography for secure communications to tests of the fundamental laws of quantum mechanics. Inducing these nonlinear processes typically requires thick crystals, on the order of a few millimeters. This requirement stems from the relatively low nonlinearity of these materials: when the nonlinearity of a material is low, a larger volume of material is needed to achieve the same conversion efficiency. This limitation has been overcome thanks to the innovative work of Chiara Trovatello, a Columbia postdoc in the Schuck Lab. She and her colleagues have successfully developed a method for generating nonlinear processes, such as SHG and SPDC, in highly nonlinear layered materials with remarkable efficiency, down to thicknesses as small as 1 micrometer – a thousand times thinner than a millimeter! To put this in perspective, the materials that Chiara and her colleagues have realized are 10-100 times thinner than standard nonlinear materials with similar performance.

While materials with a high conversion efficiency were already well known in the field, achieving macroscopic efficiencies over microscopic thicknesses was still an open challenge. The critical step was finding a way to achieve the so-called phase matching condition in microscopic crystals with superior nonlinearity. Phase matching ensures efficient conversion of light by keeping the interacting light waves in phase as they propagate inside the nonlinear crystal. To this end, Chiara and her colleagues designed and realized stacks of crystalline sheets with alternating dipole orientation. With this procedure, they were able to change the way the material responds to light, meeting the phase matching condition over microscopic thicknesses. This methodology provides high efficiency in converting light at the nanoscale and unlocks the possibility of embedding miniaturized entangled-photon sources on-chip. It paves the way for novel technologies and more compact optical devices for applications in quantum optics. Projecting ourselves into the future, it’s not hard to envision a world where everyone will carry a sophisticated quantum device in their pockets, this time embracing the physical light rather than running away from it.
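The alternating-orientation trick can be illustrated with a toy one-dimensional model. When the interacting waves drift out of phase, the converted field oscillates and never accumulates; flipping the sign of the nonlinearity every coherence length (as the alternating layer orientation effectively does, a scheme known in the field as quasi-phase-matching) lets it keep growing. All numbers below are arbitrary illustrative units, not parameters from the study:

```python
import numpy as np

# Toy 1-D model of second-harmonic growth with a phase mismatch dk.
# Without correction, the converted field oscillates and never builds up;
# flipping the sign of the nonlinearity every coherence length (pi / dk),
# as the alternating layer orientation does, lets it grow steadily.
dk = 2 * np.pi / 10.0            # assumed phase mismatch (arbitrary units)
z = np.linspace(0, 50, 5000)     # propagation distance
dz = z[1] - z[0]
Lc = np.pi / dk                  # coherence length

def shg_amplitude(flip_sign):
    """Accumulated second-harmonic amplitude along the crystal."""
    field = 0.0 + 0.0j
    amplitudes = []
    for zi in z:
        chi = 1.0
        if flip_sign and int(zi // Lc) % 2 == 1:
            chi = -1.0           # alternating dipole orientation
        field += chi * np.exp(1j * dk * zi) * dz
        amplitudes.append(abs(field))
    return np.array(amplitudes)

no_pm = shg_amplitude(flip_sign=False)   # oscillates near zero
qpm = shg_amplitude(flip_sign=True)      # grows roughly linearly

print(f"final amplitude without flips: {no_pm[-1]:.2f}")
print(f"final amplitude with flips:    {qpm[-1]:.2f}")
```

Running this, the amplitude without sign flips stays bounded by a small oscillation, while the quasi-phase-matched amplitude grows by roughly a factor of ten over the same distance, which is the essence of why the layered stacks convert light so efficiently.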

Reviewed by: Trang Nguyen

When Numbers Play Tricks: Unraveling The Brain’s Biases

Imagine living in a dark cave, with your entire understanding of the world based on shadows on the wall. Sounds unrealistic and terrifying, right? However, this allegory presented by Plato is an apt metaphor for our brain’s perception of the real world. While we might believe we perceive reality in its entirety, our brain can only provide a shadowy representation of the variables in our environment, and our decisions are based on these shadowy representations.

A comforting thought might be that numbers are a universal language, at least for those who use Arabic numerals. For instance, when the price of a good is marked as $14, it conveys an unambiguous and specific value, meaning that one would unequivocally expect to pay exactly that amount. However, experiments show that people make mistakes when dealing with Arabic numerals. For instance, under time constraints, the closer two numbers are in value, the more challenging it becomes to rapidly and accurately pinpoint the larger one. These mistakes are very similar to those made in psychophysics tasks involving physical stimuli, such as comparing the lengths of segments or averaging the orientations of tilted lines. These results, together with neurobiological studies, suggest the existence of a representation system for numbers that is similar to the one we use to interpret physical stimuli.
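This numerical distance effect falls out naturally if we assume each number is internally represented with a bit of Gaussian noise. A toy simulation, with an arbitrary noise level chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy illustration of the numerical distance effect: if each number is
# internally represented with a bit of Gaussian noise, mistakes in "which
# is larger?" become more frequent as the two numbers get closer in value.
NOISE = 3.0   # assumed encoding noise, in number units

def error_rate(a, b, n=20000):
    """Fraction of noisy comparisons that pick the wrong (smaller) number."""
    a_hat = a + rng.normal(0.0, NOISE, n)
    b_hat = b + rng.normal(0.0, NOISE, n)
    return np.mean((a_hat > b_hat) != (a > b))

print(f"comparing 40 vs 60: {error_rate(40, 60):.3f} errors")
print(f"comparing 49 vs 51: {error_rate(49, 51):.3f} errors")
```

Comparing 40 against 60 almost never produces an error, while comparing 49 against 51 does so on a sizable fraction of trials, just as in the timed experiments.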

A popular idea in theoretical neuroscience is that while the brain’s computational abilities have inherent limits, leading to imprecise representations, these representations are optimal within those constraints. This theory, called “efficient coding”, suggests that our brain’s perceptions are influenced by how often these magnitudes are encountered (i.e., the prior). For example, vertical and horizontal orientations are perceived with more clarity than oblique ones, likely because they’re more common in our environment.

A recent study, led by Columbia postdoc Arthur Prat-Carrabin and published in Nature Human Behaviour, delves into whether our brain treats numbers the same way it treats physical stimuli. In their experiment, participants were asked to determine which series of numbers, red or green, had a higher average value. For instance, as shown in Figure 1, the number 79.60 would flash in red on the computer screen, followed by 44.92 in green, and so forth. Participants were tasked with rapidly and intuitively calculating the average of the red and green numbers to determine which sequence had the greater average. To investigate how the frequency at which numbers were encountered impacts their representation, numbers were drawn from different distributions: one, in which smaller numbers were more frequent (Downward prior); another, in which all numbers had equal chances (Uniform prior); and a third, in which larger numbers were more frequent (Upward prior).

Figure 1: Example trial of the task. Participants were presented with 5 red and 5 green numbers and had to choose the color with the higher average.

To analyze the participants’ decisions, the researchers compared their answers with multiple computational models characterized by two components: first, whether or not the numbers are encoded with a bias that depends on the value of the number, and second, whether or not the imprecision (the noise) with which numbers are represented varies with the number. Their results showed that the model that best aligned with the participants’ answers had to include both components. Notably, less common numbers are perceived more fuzzily: with the Downward prior, bigger numbers are encoded with more noise, while with the Upward prior, the smaller numbers are the noisiest.
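A toy simulation conveys the flavor of this model family. The sketch below is loosely inspired by the study’s setup but is not the paper’s fitted model: it simply assumes that the encoding noise for a number shrinks where that number is frequent under the prior, and measures how often the noisy averages still pick the correct color:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch of value-dependent encoding noise, loosely inspired by the
# study's model family (NOT the paper's fitted model): noise is smaller
# for numbers that are frequent under the prior.
def noise_std(x, prior):
    """Encoding noise (in number units) for a value x in [0, 100]."""
    if prior == "Downward":      # small numbers common -> precisely encoded
        return 5.0 + 30.0 * (x / 100.0)
    if prior == "Upward":        # large numbers common -> precisely encoded
        return 5.0 + 30.0 * (1.0 - x / 100.0)
    return 20.0                  # Uniform prior: constant noise

def run_trials(prior, n_trials=2000):
    """Fraction of trials where the noisy averages pick the truly larger side."""
    correct = 0
    for _ in range(n_trials):
        red = rng.uniform(0.0, 100.0, 5)
        green = rng.uniform(0.0, 100.0, 5)
        red_hat = red + rng.normal(0.0, [noise_std(v, prior) for v in red])
        green_hat = green + rng.normal(0.0, [noise_std(v, prior) for v in green])
        if (red_hat.mean() > green_hat.mean()) == (red.mean() > green.mean()):
            correct += 1
    return correct / n_trials

for prior in ("Downward", "Uniform", "Upward"):
    print(f"{prior:8s} prior: {run_trials(prior):.2f} accuracy")
```

In this sketch the simulated observer performs well above chance but still makes errors, and which numbers drive those errors depends on the prior, echoing the pattern the researchers found in their participants.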

This discovery not only supports the “efficient coding” theory, which posits that the brain encodes and represents information in the most efficient way possible, but also showcases its applicability beyond just physical perceptions. Whether we’re assessing the speed of a car, the talent of a dancer, or the sweetness of a cake, our brain might use a universal mechanism to represent these variables. This mechanism dynamically adjusts to the statistical distribution of numbers that are expected or experienced, which implies that our understanding of numbers and magnitudes isn’t static but can be influenced by our prior experiences and expectations. In the near future, we might be able to design environments that help people refine their perceptions (such as by crafting digital games to enhance the consumer’s responsiveness to certain prices), allowing them to better discern specific value ranges and improve their decision-making.

Reviewed by: Emily Hokett, Trang Nguyen, Martina Proietti Onori
