Targeting toxic protein chains could slow neurodegenerative disease

For decades, researchers have worked to untangle the biological causes of neurodegenerative conditions such as Alzheimer’s disease. A primary focus has been the accumulation of misfolded proteins that clump together in the brain and damage neurons. A new study reveals that specific repetitive chains of amino acids, known as polyserine domains, can damage brain cells and worsen the accumulation of toxic protein clumps associated with these diseases.

The findings suggest that these repetitive chains may be a driver of neurological decline. The research was published in the Proceedings of the National Academy of Sciences.

To understand this study, it helps to first know about a protein called tau. In healthy brains, tau serves as a stabilizer for the internal skeleton of nerve cells. It helps maintain the tracks used to transport nutrients and molecules within the cell. In diseases collectively known as tauopathies, which include Alzheimer’s, tau molecules detach from this structure. They then chemically change and stick together. These sticky clumps, or aggregates, form tangles that choke the cell and eventually kill it.

Researchers are working to identify what causes tau to transition from a helpful stabilizer to a toxic clump. Previous investigations have observed that certain other proteins often appear alongside tau tangles in the brains of patients. These accompanying proteins often contain long, repetitive strings of the amino acid serine. Scientists call these strings polyserine domains.

Additionally, these polyserine chains are produced in specific genetic disorders. Diseases such as Huntington’s disease and spinocerebellar ataxia type 8 are caused by errors in the genetic code where a small segment of DNA repeats itself too many times. These genetic stutters can result in the production of toxic repetitive proteins, including those rich in serine.

Meaghan Van Alstyne, a researcher at the University of Colorado Boulder, led the study to determine if these polyserine domains are merely bystanders or active participants in brain disease. She worked with senior author Roy Parker, a distinguished professor of biochemistry at the same university. The team sought to answer whether the presence of polyserine alone is enough to harm a mammalian brain. They also wanted to know if it accelerates the problems caused by tau.

To investigate this, the team used a common laboratory tool known as an adeno-associated virus serotype 9. This virus is modified so that it cannot cause disease. Instead, it acts as a delivery vehicle to transport specific genetic instructions into cells. The researchers injected newborn mice with this viral carrier. The virus delivered instructions to brain cells to produce a protein containing a long tail of 42 serine molecules.

The researchers first observed the effects of this polyserine on normal, wild-type mice. As the mice aged, those producing the polyserine developed clear physical and behavioral problems. They weighed less than the control group. They also displayed difficulties with movement and coordination.

The team tested the motor skills of the mice using a rotarod assay. This test involves placing a mouse on a horizontal rotating rod that spins faster over time. The mice must keep walking to avoid falling off. It is similar to a lumberjack balancing on a rolling log. From four to six months of age, the mice expressing polyserine fell off the rod much sooner than the control mice.

Behavioral changes also emerged. The researchers placed the mice in a maze that is elevated above the ground. The maze has two enclosed arms and two open arms. Mice naturally prefer enclosed spaces because they feel safer. The mice with polyserine spent more time in the open arms. This behavior suggests a reduction in anxiety or a lack of natural caution.

The team also tested memory using a fear conditioning assay. In this test, mice learn to associate a specific sound or environment with a mild foot shock. When placed back in that environment later, a mouse with normal memory will freeze in anticipation. The polyserine mice froze much less often. This indicates they had severe deficits in learning and memory.

To find the biological cause of these behaviors, Van Alstyne and her colleagues examined the brains of the mice. They found a dramatic loss of a specific type of neuron called a Purkinje cell. These are large, distinctively shaped neurons located in the cerebellum. The cerebellum is the part of the brain responsible for coordinating voluntary movements.

The viral delivery system used in the study is known to be particularly effective at targeting Purkinje cells. In the mice receiving the polyserine gene, these cells were largely wiped out. The loss of these cells likely explains the coordination problems observed in the rotarod test.

Alongside the cell death, the researchers observed signs of gliosis. This is a reaction where support cells in the brain, known as glia, become overactive. It is a sign of inflammation and damage. The brain was reacting to the polyserine as a toxic presence.

The researchers then investigated where the polyserine went inside the surviving neurons. They found that the protein did not stay in the main body of the cell. Instead, it accumulated inside the nucleus. The nucleus is the control center of the cell that holds its DNA. The polyserine formed large clumps within the nucleus. These clumps were tagged with ubiquitin, a small molecule the cell uses to mark garbage for disposal. This suggests the cells were trying, and failing, to clear the toxic protein.

After establishing that polyserine is toxic on its own, the researchers tested its effect on tau. They used a specific strain of mice genetically engineered to produce a mutant form of human tau. These mice naturally develop tau tangles and neurodegeneration as they age.

The team injected these tau-prone mice with the polyserine-producing virus. The results showed that polyserine acts like fuel for the fire. The mice expressing both the mutant tau and the polyserine died significantly younger than those expressing only the mutant tau.

When the researchers analyzed the brain tissue of these mice, they found elevated levels of disease markers. There was an increase in phosphorylated tau. Phosphorylation is a chemical change that promotes aggregation. The study also found more insoluble tau, which refers to the hardened tangles that cannot be dissolved.

Furthermore, the team measured the “seeding” capacity of the tau. In disease states, misfolded tau can act like a template. It corrupts normal tau and causes it to misfold, spreading the pathology from cell to cell. Brain extracts from the mice with polyserine showed a higher ability to induce clumping in test cells. This indicates that polyserine makes the tau pathology more aggressive and transmissible.

Finally, the researchers asked if this effect was unique to serine. They compared it to other repetitive amino acid chains often found in genetic diseases, such as polyglutamine and polyalanine. They introduced these different chains into human neurons grown in a dish.

The results showed a high level of specificity. Only the polyserine chains recruited tau molecules into their clusters. The polyglutamine and polyalanine chains did not. This physical interaction between polyserine and tau appears to be the mechanism that accelerates the formation of toxic tau seeds.

There are caveats to consider in this research. The study used a virus to force the cells to make high levels of polyserine. This might result in higher concentrations of the protein than would naturally occur in a human disease. Future research will need to determine if lower, natural levels of polyserine cause the same degree of harm over a human lifespan.

The authors also noted that while they saw massive cell death in the cerebellum, other brain areas like the hippocampus seemed more resistant to cell loss, despite containing the protein. Understanding why some neurons die while others survive could offer clues for protection.

This study provides evidence that polyserine is not just a passive marker of disease. It suggests that these repetitive domains are active toxins that can kill neurons and worsen tauopathies. This opens a potential new avenue for therapy. If scientists can block the interaction between polyserine and tau, they might be able to slow the progression of diseases like Alzheimer’s.

“If we really want to treat Alzheimer’s and many of these other diseases, we have to block tau as early as possible,” said Parker. “These studies are an important step forward in understanding why tau aggregates in cells and how we can intervene.”

The study, “Polyserine domains are toxic and exacerbate tau pathology in mice,” was authored by Meaghan Van Alstyne, Vanessa L. Nguyen, Charles A. Hoeffer, and Roy Parker.

Exercise rivals therapy and medication for treating depression and anxiety

A new, comprehensive analysis confirms that physical activity is a highly effective treatment for depression and anxiety, offering benefits comparable to therapy or medication. The research suggests that specific types of exercise, such as group activities for depression or short-term programs for anxiety, can be tailored to maximize mental health benefits for different people. These findings were recently published in the British Journal of Sports Medicine.

Mental health disorders are a growing concern across the globe. Depression and anxiety affect a vast number of people, disrupting daily life and physical health. While antidepressants and psychotherapy are standard treatments, they are not always sufficient for every patient. Rates of these conditions continue to rise despite the availability of traditional care.

Health experts have explored exercise as an alternative or add-on treatment for many years. However, previous attempts to summarize the evidence have faced challenges. Earlier reviews often mixed data from healthy individuals with data from patients suffering from chronic physical illnesses. This made it difficult to determine if mental improvements were due to exercise itself or simply a result of better physical health.

To address this uncertainty, a team of researchers conducted a “meta-meta-analysis,” also known as an umbrella review. This is a highly rigorous study design that sits at the top of the evidence hierarchy. Instead of running a new experiment on people, the researchers analyzed data from existing meta-analyses.

A meta-analysis pools the results of many individual scientific experiments to find a common truth. This umbrella review went a step further by pooling the results of those pools. The goal was to provide the most precise estimate possible of how exercise impacts mental health.
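To make the pooling idea concrete, here is a minimal sketch of fixed-effect inverse-variance pooling, the arithmetic at the heart of most meta-analyses. The effect sizes and standard errors are invented for illustration, not the study’s data; applying the same function twice shows the “pool of pools” logic of a meta-meta-analysis.

```python
import numpy as np

def pool_effects(effects, std_errors):
    """Fixed-effect inverse-variance pooling: weight each effect
    by the precision (1 / SE^2) of the study that produced it."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# Step 1: pool individual trials into meta-analytic estimates (toy numbers).
meta1 = pool_effects([-0.5, -0.7, -0.4], [0.20, 0.25, 0.15])
meta2 = pool_effects([-0.6, -0.3], [0.30, 0.18])

# Step 2: pool those pooled estimates again -- the "meta-meta" step.
overall = pool_effects([meta1[0], meta2[0]], [meta1[1], meta2[1]])
print(overall)
```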

The research team was led by Neil Richard Munro from James Cook University in Queensland, Australia. He collaborated with colleagues from institutions in Australia and the United States. Their primary aim was to isolate the effect of exercise on mental health by excluding studies involving participants with pre-existing chronic physiological conditions.

This exclusion was a key part of their methodology. By removing data related to conditions like heart disease or cancer, the team removed potential confounding factors. They wanted to ensure that any observed benefits were due to the direct impact of exercise on the brain and psychological state.

The researchers searched five major electronic databases for relevant literature. They gathered data from studies published up to July 2025. The scope of their search was massive, covering children, adults, and older adults.

The final dataset included 63 umbrella reviews. These reviews encompassed 81 specific meta-analyses. In total, the analysis represented data from 1,079 individual studies and involved 79,551 participants.

The sheer volume of data allowed the researchers to look for subtle patterns. They examined different types of exercise, such as aerobic activities, resistance training, and mind-body practices like yoga. They also analyzed variables like intensity, duration, and whether the exercise was performed alone or in a group.

The overarching finding was clear and positive. Exercise reduced symptoms of both depression and anxiety across all population groups. The magnitude of the benefit was described as medium for depression and small-to-medium for anxiety.

For depression, the study found that all types of exercise were beneficial. However, aerobic exercise—activities that get the heart rate up, like running or cycling—showed the strongest impact. This suggests that cardiovascular engagement may trigger biological pathways that fight depressive symptoms.

The social context of the physical activity also appeared to matter greatly for depression. The data indicated that exercising in a group setting was more effective than exercising alone. Similarly, programs that were supervised by a professional yielded better results than unsupervised routines.

These findings regarding group and supervised settings point to the importance of social support. The shared experience of a class or team environment may provide a psychological sense of belonging. This social connection likely acts as an additional antidepressant mechanism alongside the physical exertion.

The study identified specific demographic groups that responded particularly well to exercise. “Emerging adults,” defined as individuals aged 18 to 30, saw the greatest benefits for depression. This is a critical age range, as it often coincides with the onset of many mental health challenges.

Another group that saw substantial benefits was women in the postnatal period. Postpartum depression is a severe and common condition. The finding that exercise is a highly effective intervention for this group offers a promising, non-pharmaceutical tool for maternal mental health.

When analyzing anxiety, the researchers found slightly different patterns. While aerobic exercise was still the most effective mode, all forms of movement helped reduce symptoms. This included resistance training and mind-body exercises like yoga or tai chi.

The optimal parameters for anxiety relief were notably different from those for depression. The data suggested that shorter programs were highly effective. Interventions lasting up to eight weeks showed the strongest impact on anxiety symptoms.

Regarding intensity, the findings for anxiety were somewhat counterintuitive. Lower intensity exercise appeared to be more effective than high-intensity workouts. This could be because high-intensity exertion mimics some physical symptoms of anxiety, such as a racing heart, which might be uncomfortable for some patients.

The researchers compared the effects of exercise to traditional treatments. They found that the benefits of physical activity were comparable to those provided by psychotherapy and medications. This positions exercise not just as a lifestyle choice, but as a legitimate clinical intervention.

Despite the strength of these findings, the authors noted several caveats. The definitions of exercise intensity varied across the original studies, making it hard to set precise boundaries. What one study considers “moderate” might be “vigorous” in another.

There was also a potential sign of publication bias in the anxiety studies. This refers to the tendency for scientific journals to publish positive results more often than negative ones. However, the sheer number of studies analyzed provides a buffer against this potential distortion.

Another limitation was the overlap of participants in some of the underlying reviews. The researchers used a statistical method to check for this duplication. While some overlap existed, particularly in studies of youth and perinatal women, the overall quality of the evidence remained high.

The authors emphasized that motivation remains a hurdle. Knowing exercise helps is different from actually doing it. Future research needs to focus on how to help people with depression and anxiety stick to an exercise routine.

The study supports a shift in how mental health is treated clinically. The authors argue that health professionals should prescribe exercise with the same confidence as they prescribe pills. It is a cost-effective, accessible option with few side effects.

For public health policy, the implications are broad. The study suggests that guidelines should explicitly recommend exercise as a first-line treatment. This is especially relevant for young adults and new mothers, who showed the strongest responses.

Tailoring the prescription is key. A “one size fits all” approach does not apply to mental health. A depressed patient might benefit most from a running group, while an anxious patient might prefer a gentle, short-term yoga program.

The authors concluded that the evidence is now undeniable. Exercise is a potent medicine for the mind. The challenge now lies in integration and implementation within healthcare systems.

Mental health professionals can use these findings to offer evidence-based advice. They can move beyond vague recommendations to “be more active.” Instead, they can suggest specific formats, like group classes for depression, based on rigorous data.

Ultimately, this study serves as a comprehensive validation of movement as therapy. It strips away the noise of co-occurring physical diseases to show that exercise heals the brain. It offers a hopeful, empowering path for millions struggling with mental health issues.

The study, “Effect of exercise on depression and anxiety symptoms: systematic umbrella review with meta-meta-analysis,” was authored by Neil Richard Munro, Samantha Teague, Klaire Somoray, Aaron Simpson, Timothy Budden, Ben Jackson, Amanda Rebar, and James Dimmock.

Daily soda consumption linked to cognitive difficulties in teens

New research indicates that daily consumption of sodas and sports drinks may hinder the cognitive abilities of adolescents. A recent analysis suggests that these sugary beverages disrupt sleep patterns, which in turn leads to difficulties with memory, concentration, and decision-making. These findings were published in the journal Nutritional Neuroscience.

The adolescent brain undergoes a period of rapid development and reorganization. This phase is characterized by changes in the prefrontal cortex, the area of the brain responsible for planning and impulse control. Because the brain is still maturing, it is particularly sensitive to dietary inputs and environmental factors.

Researchers have previously identified links between high sugar intake and various health issues. However, the specific relationship between different types of sugary drinks and mental clarity in teenagers has remained less defined. Shuo Feng, a researcher at the Department of Health Behavior at Texas A&M University, sought to clarify this connection.

Feng designed the study to look beyond a simple direct link between sugar and brain function. The investigation aimed to determine if sleep duration acts as a “mediator.” A mediator is a variable that explains the process through which two other variables are related. In this case, the question was whether sugary drinks cause poor sleep, which then causes cognitive trouble.
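The standard way to quantify such a mediator is the product-of-coefficients approach: estimate the path from exposure to mediator (a), the path from mediator to outcome while controlling for the exposure (b), and multiply. The sketch below simulates data under the hypothesized chain; every coefficient is invented, and the actual study used more elaborate survey-weighted models.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000

# Simulated data under the hypothesized chain (all coefficients invented):
soda = rng.binomial(1, 0.3, n)                     # daily soda drinker
sleep = 8 - 0.5 * soda + rng.normal(0, 1, n)       # soda shortens sleep
cognitive = 2 - 0.4 * sleep + rng.normal(0, 1, n)  # less sleep, more difficulty

# Path a: exposure -> mediator
a = sm.OLS(sleep, sm.add_constant(soda)).fit().params[1]

# Path b: mediator -> outcome, controlling for the exposure
X = sm.add_constant(np.column_stack([sleep, soda]))
b = sm.OLS(cognitive, X).fit().params[1]

# Positive product: soda raises cognitive difficulty via reduced sleep
print("indirect (mediated) effect a*b:", a * b)
```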

The study utilized data from the 2021 Youth Risk Behavior Surveillance Survey (YRBS). This is a large-scale, national survey administered by the Centers for Disease Control and Prevention (CDC). It monitors health behaviors contributing to the leading causes of death and disability among youth.

The final dataset included responses from 8,229 high school students across the United States. The survey asked students to report how often they consumed soda and sports drinks over the past week. It also asked them to estimate their average nightly sleep duration.

To measure cognitive difficulties, the survey included a specific question regarding mental clarity. Students were asked if physical, mental, or emotional problems caused them “serious difficulty concentrating, remembering, or making decisions.” Feng used statistical models to analyze the relationships between these variables while accounting for factors like age, gender, and physical activity.

The analysis revealed distinct patterns based on the type of beverage and the sex of the student. Daily consumption of soda showed a strong association with cognitive difficulties for both boys and girls. Compared to non-drinkers, adolescents who drank soda every day had higher odds of reporting serious trouble with memory and concentration.

The results for sports drinks appeared slightly different. Daily consumption of sports drinks was linked to cognitive difficulties in girls. This association was not statistically significant for boys in the same daily consumption category.

A major component of the findings focused on the role of sleep. The data showed that higher intake of sugar-sweetened beverages correlated with fewer hours of rest. This reduction in sleep served as a pathway linking the drinks to cognitive struggles.

For both boys and girls, sleep duration mediated the relationship between soda intake and cognitive difficulties. This means that part of the reason soda drinkers struggle with focus is likely because they are not sleeping enough. A similar mediation effect was found regarding sports drinks.

The biological mechanisms behind these findings involve the brain’s chemical signaling systems. Many sugar-sweetened beverages contain caffeine. Caffeine acts as an antagonist to adenosine, a brain chemical that promotes sleepiness. By blocking adenosine receptors, caffeine increases alertness temporarily but disrupts the body’s natural drive for sleep.

Sugar itself also impacts the brain’s reward system. Consuming high amounts of sugar stimulates the release of dopamine. This is a neurotransmitter associated with pleasure and motivation.

Chronic overstimulation of this reward system during adolescence can alter gene expression in the hypothalamus. This brain region regulates various bodily functions, including sleep cycles and memory. Over time, these chemical changes may increase vulnerability to cognitive dysregulation.

The study also touched upon the concept of synaptic plasticity. This term refers to the brain’s ability to strengthen or weaken connections between neurons. Estrogens, particularly estradiol, play a role in enhancing this plasticity and promoting blood flow in the brain.

Biological differences in how males and females process these chemicals may explain the variation in results. For instance, the study notes that sex-specific mechanisms could influence how sugar affects the brain. This might shed light on why sports drinks showed a stronger negative association with cognitive function in girls than in boys.

The sugar content in sports drinks is generally lower than that of sodas. A typical 20-ounce sports drink contains about 34 grams of sugar. In contrast, a similar amount of soda may contain nearly double that amount.

This difference in sugar load might result in less stimulation of the dopamine reward system for sports drink consumers. Additionally, sports drinks are often consumed in the context of physical exercise. Exercise is known to improve metabolism and hormonal regulation.

Improved metabolism from exercise might help the body process unhealthy ingredients more rapidly. This could potentially buffer some of the negative effects on the brain. However, the study suggests that for girls consuming these drinks daily, the negative cognitive outcomes persist.

The researcher pointed out that socioeconomic factors often influence dietary choices. Marketing for sugary beverages frequently targets younger demographics. The availability of these drinks in schools and communities remains high.

There are limitations to this study that require consideration. The data comes from a cross-sectional survey. This means it captures a snapshot in time rather than following individuals over years.

Because of this design, the study cannot definitively prove that sugary drinks cause cognitive decline. It can only show that the two are statistically linked. It is possible that students with cognitive difficulties are more prone to drinking sugary beverages, rather than the other way around.

Another limitation is the reliance on self-reported data. Students might not accurately remember how many drinks they consumed in the past week. They might also struggle to estimate their average sleep duration precisely.

The measurement of cognitive difficulties relied on a single, broad question. This question combined memory, concentration, and decision-making into one category. Future research would benefit from using more granular tests to measure these specific mental functions separately.

The study also had to exclude a number of participants due to missing data. A sensitivity analysis showed that the final group of students was slightly older and more racially diverse than those excluded. This could potentially introduce selection bias into the final results.

Despite these caveats, the research offers evidence supporting public health interventions. Reducing the intake of sugar-sweetened beverages could be a practical strategy to improve youth health. Such a reduction may lead to better sleep duration and improved academic performance.

Educators and health professionals might consider emphasizing sleep hygiene as part of nutritional counseling. Addressing the consumption of caffeine and sugar, particularly in the evening, could help restore natural sleep cycles. This is vital for the developing adolescent brain.

Future studies should aim to replicate these findings using objective measures. Wearable technology could provide more accurate data on sleep duration and quality. Controlled trials could also help isolate the effects of specific ingredients like high-fructose corn syrup or caffeine.

The study highlights a clear intersection between diet, rest, and mental function. It suggests that what teenagers drink has consequences that extend beyond physical weight or dental health. The impact reaches into the classroom and their daily ability to process information.

The study, “The association of sugar-sweetened beverages consumption with cognitive difficulties among U.S. adolescents: a mediation effect of sleep using Youth Risk Behavior Surveillance Survey 2021,” was authored by Shuo Feng.

Scientists use machine learning to control specific brain circuits

A team of researchers in Japan has developed an artificial intelligence tool called YORU that can identify specific animal behaviors in real time and immediately interact with the animals’ brain circuits. This open-source software, described in a study published in Science Advances, allows biologists to study social interactions with greater speed and precision than previously possible. By treating complex actions as distinct visual objects, the system enables computers to “watch” behaviors like courtship or food sharing and respond within milliseconds.

Biologists have struggled for years to automate the analysis of how animals interact. Social behaviors such as courtship or aggression involve dynamic movements where individuals often touch or obscure one another from the camera’s view. Previous software solutions typically relied on a method called pose estimation. This technique tracks specific body points like a joint, a knee, or a wing tip across many video frames to calculate movement.

These older methods often fail when animals get too close to one another. When two insects overlap, the computer frequently loses track of which leg belongs to which individual. This confusion makes it difficult to trigger experiments at the exact moment a behavior occurs. To solve this, a team including Hayato M. Yamanouchi and Ryosuke F. Takeuchi sought a different approach. They worked under the guidance of senior author Azusa Kamikouchi at Nagoya University.

The group aimed to build a system capable of “closed-loop” feedback. This term refers to an experimental setup where a computer watches an animal and instantly creates a stimulus in response. For example, a computer might turn on a light the moment a fly extends its wing. Achieving this requires software that processes video data faster than the animal moves.
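In code terms, a closed-loop experiment is a tight loop of capture, detect, and trigger. The sketch below shows that skeleton, assuming a webcam read through OpenCV; the detector here is a trivial brightness test standing in for YORU’s trained model, and the stimulus function is a placeholder for real hardware output.

```python
import cv2

def detect_behavior(frame):
    # Stand-in for a trained object detector: a trivial brightness
    # test. YORU runs a deep-learning model on the frame instead.
    return frame.mean() > 128

def fire_stimulus():
    # Placeholder for hardware output, e.g., toggling an LED driver.
    print("stimulus on")

cap = cv2.VideoCapture(0)  # live camera feed (device 0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    if detect_behavior(frame):
        fire_stimulus()  # stimulus follows detection in the same loop pass
cap.release()
```

The whole loop must complete faster than the behavior unfolds, which is why end-to-end latency is the figure of merit for systems like this.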

The researchers built their system using a deep learning algorithm known as object detection. Unlike pose estimation, this method analyzes the entire shape of an animal in a single video frame. The team named their software YORU. This acronym stands for Your Optimal Recognition Utility.

YORU identifies a specific action as a distinct “behavior object.” The software recognizes the visual pattern of two ants sharing food or a male fly vibrating its wing. This approach allows the computer to classify social interactions even when the animals are touching. By viewing the behavior as a unified object rather than a collection of points, the system bypasses the confusion caused by overlapping limbs.

The team tested YORU on several different species to verify its versatility. They recorded videos of fruit flies courting, ants engaging in mouth-to-mouth food transfer—a behavior known as trophallaxis—and zebrafish orienting toward one another. The system achieved detection accuracy rates ranging from roughly 90 to 98 percent compared to human observation.

The software also proved effective at analyzing brain activity in mice. The researchers placed mice on a treadmill within a virtual reality setup. YORU accurately identified behaviors such as running, grooming, and whisker movements. The system matched these physical actions with simultaneous recordings of neural activity in the mouse cortex. This confirmed that the AI could reliably link visible movements to the invisible firing of neurons.

The most advanced test involved a technique called optogenetics. This method allows scientists to switch specific neurons on or off using light. The team genetically modified male fruit flies so that the neurons responsible for their courtship song would be silenced by green light. These neurons are known as pIP10 descending neurons.

YORU watched the flies in real time. When the system detected a male extending his wing to sing, it triggered a green light within milliseconds. The male fly immediately stopped his courtship song. This interruption caused a statistically significant decrease in mating success.

Hayato M. Yamanouchi, co-first author from Nagoya University’s Graduate School of Science, highlighted the difference in their approach. He noted, “Instead of tracking body points over time, YORU recognizes entire behaviors from their appearance in a single video frame. It spotted behaviors in flies, ants, and zebrafish with 90-98% accuracy and ran 30% faster than competing tools.”

The researchers then took the experiment a step further by using a projector. They wanted to manipulate only one animal in a pair without affecting the other. They genetically modified female flies to have light-sensitive hearing neurons. Specifically, they targeted neurons in the Johnston’s organ, which is the fly’s equivalent of an ear.

When the male fly extended his wing, YORU calculated the female’s exact position. The system then projected a small circle of light onto her thorax. This light silenced her hearing neurons exactly when the male tried to sing. The female ignored the male’s advances because she could not hear him.

This experiment confirmed the software’s ability to target individuals in a group. Azusa Kamikouchi explained the significance of this precision. “We can silence fly courtship neurons the instant YORU detects wing extension. In a separate experiment, we used targeted light that followed individual flies and blocked just one fly’s hearing neurons while others moved freely nearby.”

The speed of the system was a primary focus for the researchers. They benchmarked YORU against SLEAP, a popular pose-estimation tool. YORU exhibited a mean latency—the delay between seeing an action and reacting to it—of approximately 31 milliseconds. This was roughly 30 percent faster than the alternative method. Such speed is necessary for studying neural circuits, which operate on extremely fast timescales.

The system is also designed to be user-friendly for biologists who may not be experts in computer programming. It includes a graphical user interface that allows researchers to label behaviors and train the AI without writing code. The team has made the software open-source, allowing laboratories worldwide to download and adapt it for their own specific animal models.

While the system offers speed and precision, it relies on the appearance of behavior in a single frame. This design means YORU cannot easily identify behaviors that depend on a sequence of events over time. For example, distinguishing between the beginning and end of a foraging run might require additional analysis. The software excels at spotting “states” of being rather than complex narratives.

The current version also does not automatically track the identity of individual animals over long periods. If two animals look identical and swap places, the software might not distinguish between them without supplementary tools. Researchers may need to combine YORU with other tracking software for studies requiring long-term individual histories.

Hardware limitations present another challenge for the projector-based system. If the projector has a slight delay, fast-moving animals might exit the illuminated area before the light arrives. Future updates could incorporate predictive algorithms to anticipate where an animal will be millisecond by millisecond.

Despite these limitations, YORU represents a new way to interrogate the brain. By allowing computers to recognize social behaviors as they happen, scientists can now ask questions about how the brain navigates the complex social world. The ability to turn specific senses on and off during social exchanges opens new avenues for understanding the neural basis of communication.

The study, “YORU: Animal behavior detection with object-based approach for real-time closed-loop feedback,” was authored by Hayato M. Yamanouchi, Ryosuke F. Takeuchi, Naoya Chiba, Koichi Hashimoto, Takashi Shimizu, Fumitaka Osakada, Ryoya Tanaka, and Azusa Kamikouchi.

Virtual parenting games may boost desire for real children, study finds

Declining birth rates present a demographic challenge for nations across the globe, particularly in East Asia. A new study published in Frontiers in Psychology suggests that playing life simulation video games may influence a player’s desire to have children in the real world. The research indicates that the emotional bonds formed with virtual characters can serve as a psychological pathway to shaping reproductive attitudes.

Societies such as China are currently experiencing a transition marked by persistently low fertility rates. Young adults aged 18 to 35 often report a reluctance to marry and bear children. This hesitation is frequently attributed to high economic costs associated with housing and education. It is also linked to a phenomenon researchers call “risk consciousness.” This mindset involves anxiety regarding the potential loss of personal freedom and the financial burdens of parenthood.

In this environment, digital entertainment has become a primary venue for social interaction and relaxation. Some scholars have argued that online activities might replace real-world relationships. This substitution could theoretically weaken the motivation to start a family. However, other researchers contend that specific types of games might offer a different outcome.

The researchers leading this study are Yuan Qi of Anhui Normal University and Gao Jie of Nanjing University. They collaborated with colleagues to investigate the psychological impact of life simulation games. They focused specifically on a popular game titled Chinese Parents. This game allows players to simulate the experience of raising a child from birth to adulthood. It incorporates culturally specific elements such as academic pressure and intergenerational expectations.

The team sought to understand if the virtual experience of raising a digital child could translate into a real-world desire for parenthood. To do this, they relied on two primary psychological concepts. The first is attachment theory, which typically describes the bonds between humans. The second is the concept of parasocial relationships.

Parasocial relationships refer to one-sided psychological connections that media users form with characters. While the user knows the character is fictional, the feelings of friendship, empathy, or affection feel real. The researchers hypothesized that these virtual bonds might act as a buffer against real-world anxieties. They proposed an “Emotional Compensation Hypothesis.” This hypothesis suggests that the safety of a virtual environment allows young people to experience the emotional rewards of parenting without the immediate financial or social risks.

To test their model, the researchers conducted a survey of 612 gamers who played Chinese Parents. The participants ranged in age from 18 to 35 years old. This age bracket represents the primary demographic for marriage and childbearing decisions. The group was recruited from online gaming communities and university campuses in China.

The survey utilized a statistical approach known as Partial Least Squares Structural Equation Modeling. This method allows scientists to identify complex relationships between different variables. The researchers measured several specific psychological factors.

The first factor was game concentration. This refers to the depth of immersion a player feels. It is a state of flow where the player becomes absorbed in the virtual world. The second factor was identification friendship. This measures the degree to which a player views the virtual character as a friend or an extension of themselves.

The researchers then looked at parasocial relationships, which they divided into two distinct categories. The first category is parasocial cognition. This involves thinking about the character’s motivations and understanding their perspective intellectually. The second category is parasocial emotions. This involves feeling empathy, warmth, and affection toward the character. Finally, the researchers measured fertility desire, which is the self-reported intention to have children in the real world.
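The hypothesized chain, concentration to identification friendship to parasocial emotion to fertility desire, can be illustrated with a toy serial-mediation computation. Real PLS-SEM estimates latent constructs from multiple survey items; this simplified sketch treats each construct as a single observed score and uses simulated data with invented path strengths.

```python
import numpy as np

def std_beta(x, y):
    """Standardized simple-regression slope (equals Pearson r)."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return (x * y).mean()

rng = np.random.default_rng(1)
n = 612  # matches the sample size; the data themselves are simulated

concentration = rng.normal(size=n)
friendship    = 0.5 * concentration + rng.normal(size=n)
emotion       = 0.6 * friendship + rng.normal(size=n)
fertility     = 0.3 * emotion + rng.normal(size=n)

# Serial indirect effect: product of the path coefficients along the chain
paths = [std_beta(concentration, friendship),
         std_beta(friendship, emotion),
         std_beta(emotion, fertility)]
print("indirect effect of concentration on fertility desire:", np.prod(paths))
```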

The analysis revealed a specific psychological pathway. The researchers found that game concentration did not directly change a player’s desire to have children. Simply being immersed in the game was not enough to alter real-world life planning.

Instead, the results showed that immersion acted as a catalyst for other feelings. High levels of concentration led players to develop a sense of identification friendship with their virtual characters. Players began to see these digital figures as distinct social entities worthy of care.

This sense of friendship then triggered the critical component of the model: parasocial emotions. Players reported feeling genuine empathy and support for their virtual children. The data showed that these emotional connections were the bridge to real-world attitudes. When players formed strong emotional attachments to their in-game characters, they reported a higher desire to have children in real life.

The researchers found that the emotional pathway was the only successful route to influencing fertility desire. The study examined a cognitive pathway, where players intellectually analyzed the character’s situation. The results for this path were not statistically significant regarding the final outcome. Understanding the logic of the character did not correlate with a desire for parenthood. Only the emotional experience of caring for the character had an association with real-world reproductive goals.

The findings support the researchers’ “Emotional Compensation Hypothesis.” In a high-pressure society, simulation games provide a low-stakes environment. Players can satisfy their innate need for caregiving and intimacy through the game. Rather than replacing the desire for real children, this virtual fulfillment appears to keep the positive idea of parenthood alive. The game functions as a “secure base.” It allows individuals to practice the emotions of parenting without the fear of real-world consequences.

There are several limitations to this study that contextualize the findings. The research used a cross-sectional design. This means the data represents a snapshot in time. It shows a correlation between playing the game and wanting children, but it cannot definitively prove that playing the game caused the desire. It is possible that people who already want children are more likely to play parenting simulation games.

The data relied on self-reported questionnaires. This method depends on the honesty and self-awareness of the participants. Additionally, the study focused on a specific game within a specific cultural context. Chinese Parents is deeply rooted in Chinese social norms. The results might not apply to gamers in other countries or players of different genres of simulation games.

The researchers suggest that future studies should employ longitudinal designs. Tracking players over a long period would help determine if these virtual desires translate into actual decisions to have children years later. They also recommend expanding the research to include different cultural backgrounds.

Future investigations could also explore the potential of using such games as psychological tools. If these simulations can provide a safe space for emotional expression, they might help individuals with anxiety regarding family planning. The study opens a conversation about how digital experiences in the modern age intersect with fundamental biological and social motivations.

The study, “From virtual attachments to real-world fertility desires: emotional pathways in game character attachment and parasocial relationships,” was authored by Yuan Qi, Gao Jie, Du Yun, and Ding Yi Zhuo.

Strong ADHD symptoms may boost creative problem-solving through sudden insight

New research suggests that the distinctive cognitive traits associated with Attention-Deficit/Hyperactivity Disorder, or ADHD, may provide a specific advantage in how people tackle creative challenges. A study conducted by psychologists found that individuals reporting high levels of ADHD symptoms are more likely to solve problems through sudden bursts of insight rather than through methodical analysis.

These findings indicate that while ADHD is often defined by its deficits, the condition may also facilitate a unique style of thinking that bypasses conscious logic to reach a solution. The results were published in the journal Personality and Individual Differences.

Attention-Deficit/Hyperactivity Disorder is a neurodevelopmental condition typically characterized by difficulty maintaining focus, impulsive behavior, and hyperactivity. These symptoms are often viewed through the lens of executive function deficits. Executive function refers to the brain’s management system. It acts like an air traffic controller that directs attention, filters out distractions, and keeps mental processes organized.

When this system works efficiently, a person can focus on a specific task and block out irrelevant information. However, researchers have long hypothesized that a “leaky” attention filter might have a hidden upside. If the brain does not filter out irrelevant information efficiently, it may allow remote ideas and associations to enter conscious awareness. This broader associative net could theoretically help a person connect seemingly unrelated concepts.

To test this theory, a team of researchers led by Hannah Maisano and Christine Chesebrough, along with senior author John Kounios, designed an experiment to measure problem-solving styles. Maisano is a doctoral student at Drexel University, and Chesebrough is a researcher at the Feinstein Institutes for Biomedical Research. They collaborated with Fengqing Zhang and Brian Daly of Drexel University and Mark Beeman of Northwestern University.

The researchers recruited 299 undergraduate students to participate in an online study. The team did not limit the study to individuals with a formal medical diagnosis. Instead, they asked all participants to complete the Adult ADHD Self-Report Scale. This is a standard survey used to measure the frequency and severity of symptoms such as inattention and hyperactivity. This approach allowed the scientists to examine the effects of these traits across a full spectrum of severity.

The core of the experiment involved a test known as the Compound Remote Associates task. Psychologists frequently use this task to measure convergent thinking, which is the ability to find a single correct answer to a problem. In this test, participants view three words that appear unrelated. Their goal is to find a fourth word that creates a familiar compound word or phrase with each of the three.

For example, a participant might see the words “pine,” “crab,” and “sauce.” The correct answer is “apple,” which forms “pineapple,” “crabapple,” and “applesauce.” The participants attempted to solve sixty of these puzzles.
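The scoring logic of the task is simple to express in code. The sketch below checks whether a guessed word forms a known compound with each of the three cues; the tiny vocabulary is a stand-in for the normed item set researchers actually use.

```python
# A minimal checker for compound-remote-associates answers: a guess
# "solves" the triad if it forms a familiar compound with every cue.
COMPOUNDS = {"pineapple", "crabapple", "applesauce",
             "cottage cheese", "swiss cheese", "cheesecake"}

def solves(triad, guess):
    def joins(cue):
        return any(form in COMPOUNDS for form in
                   (cue + guess, guess + cue,
                    cue + " " + guess, guess + " " + cue))
    return all(joins(cue) for cue in triad)

print(solves(("pine", "crab", "sauce"), "apple"))     # True
print(solves(("cottage", "swiss", "cake"), "cheese")) # True
```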

After each successful solution, the participants reported how they arrived at the answer. They had to choose between two distinct cognitive styles. The first style is analysis. This involves a deliberate, step-by-step search for the answer. It is a conscious and slow process. The second style is insight. This is often described as an “Aha!” moment. It occurs when the solution pops into awareness suddenly and surprisingly, often after the person has stopped actively trying to force a result.

The data revealed a distinct pattern in how different groups approached the puzzles. Participants who reported the highest levels of ADHD symptoms relied heavily on insight. They were significantly more likely to solve the problems through sudden realization than through step-by-step logic.

In contrast, the participants with the lowest levels of ADHD symptoms displayed a different profile. This group used a balance of both insight and analysis to find the answers. They did not favor one method overwhelmingly over the other.

“We found that individuals reporting the strongest ADHD symptoms relied significantly more on insight to solve problems,” said Maisano. “They appear to favor unconscious, associative processing that can produce sudden creative breakthroughs.”

The researchers also analyzed the total number of problems solved correctly by each group. This analysis produced an unexpected U-shaped curve. The group with the highest symptoms and the group with the lowest symptoms both performed very well. They solved the most puzzles overall. However, the participants in the middle of the spectrum performed the worst.
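A U-shaped relationship of this kind is typically confirmed by fitting a quadratic term: a positive coefficient on the squared predictor indicates that performance rises at both extremes. The sketch below demonstrates the idea on simulated data, not the study’s own.

```python
import numpy as np

rng = np.random.default_rng(2)
symptoms = rng.uniform(0, 1, 299)  # normalized symptom scores

# Simulated U-shape: both extremes solve more puzzles than the middle
solved = 40 * (symptoms - 0.5) ** 2 + 30 + rng.normal(0, 2, 299)

# np.polyfit returns coefficients highest-degree first
c2, c1, c0 = np.polyfit(symptoms, solved, deg=2)
print(f"quadratic term: {c2:.2f} (positive -> U-shape)")
```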

This U-shaped result suggests that high and low levels of executive control lead to success through different routes. People with high executive control can effectively use analytical strategies. They can systematically test words until they find a match. People with low executive control, such as those with high ADHD symptoms, struggle with that systematic approach. However, their tendency toward unfocused thought allows their brains to stumble upon the answer unconsciously.

The individuals in the middle appear to be at a disadvantage in this specific context. They may not have enough executive control to be highly effective at analysis. Simultaneously, they may have too much control to allow their minds to wander freely enough for frequent insight.

Kounios explained the implication of this finding. “Our results show that having strong ADHD symptoms can mean being a better creative problem-solver than most people, that is, than people who have low to moderate ADHD symptoms.”

The study aligns with the concept of dual-process theories of thought. Psychologists often distinguish between Type 1 and Type 2 processing. Type 1 processing is fast, automatic, and unconscious. It is the engine behind intuitive insight. Type 2 processing is slow, effortful, and conscious. It drives analytical reasoning.

ADHD symptoms are generally associated with a weakness in Type 2 processing. The effort required to maintain focus and manipulate information in working memory is often impaired. The researchers argue that this deficit in Type 2 processing forces—or perhaps allows—individuals with ADHD symptoms to rely on Type 1 processing.

This reliance on Type 1 processing is not merely a compensation strategy. It appears to be a robust pathway to solution in its own right. The high-symptom group did not just fail to analyze; they succeeded through insight. The regression analyses performed by the team showed that as ADHD symptoms increased, the probability of using analysis dropped, while the probability of using insight rose.
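Regression analyses of that kind can be pictured as a logistic model predicting the probability that a given solution came via insight. The sketch below simulates trial-level data with an invented slope and recovers it; the authors’ actual models were fit to real responses.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5000  # solution events, pooled across simulated participants

symptoms = rng.uniform(0, 1, n)
# Invented generative rule: higher symptoms -> higher odds of insight
p_insight = 1 / (1 + np.exp(-(-0.5 + 2.0 * symptoms)))
used_insight = rng.binomial(1, p_insight)

model = sm.Logit(used_insight, sm.add_constant(symptoms)).fit(disp=0)
print(model.params)  # positive slope: insight grows more likely with symptoms
```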

“Being both very high or very low in executive control can be beneficial for creative problem-solving, but you get to the right answer in different ways,” said Chesebrough.

Kounios and his colleagues emphasize that these findings challenge the traditional view of ADHD as purely a disorder of deficits. While the condition certainly presents challenges in environments that require rigid focus and organization, it offers advantages in situations that demand creative connections.

The study does have limitations. It relied on a sample of university students rather than a broader slice of the general population. Additionally, the study used self-reported symptoms rather than clinical diagnoses confirmed by a physician. It is possible that other undiagnosed conditions could have influenced the results.

The researchers also note that they excluded participants who reported poor sleep or substance use, as these factors can impair cognitive performance. Future research will need to replicate these findings with larger groups and formally diagnosed clinical populations to confirm the robustness of the U-shaped performance curve.

Despite these caveats, the research offers a new perspective on neurodiversity in the context of problem-solving. It suggests that the cognitive profile associated with ADHD is not simply a broken version of “normal” cognition. Instead, it represents a different functional organization of the brain. This organization favors spontaneous processing over deliberate control.

Understanding this strength could help educators and employers create environments that harness the natural abilities of individuals with ADHD. Rather than forcing these individuals to adopt analytical strategies that do not fit their cognitive style, it may be more effective to encourage their intuitive approaches.

“Understanding these strengths could help people harness their natural problem-solving style in school, work and everyday life,” said Kounios.

The study, “ADHD symptom magnitude predicts creative problem-solving performance and insight versus analysis solving modes,” was authored by Hannah Maisano, Christine Chesebrough, Fengqing Zhang, Brian Daly, Mark Beeman, and John Kounios.

New research links childhood inactivity to depression in a vicious cycle

New research suggests a bidirectional relationship exists between how much time children spend sitting and their mental health, creating a cycle where inactivity feeds feelings of depression and vice versa. This dynamic appears to extend beyond the individual child, as a child’s mood and inactivity levels can eventually influence their parent’s mental well-being. These results were published in the journal Mental Health and Physical Activity.

For decades, health experts have recognized that humans spend a large portion of their waking hours in sedentary behaviors. This term refers to any waking behavior characterized by an energy expenditure of 1.5 metabolic equivalents or less while in a sitting, reclining, or lying posture. Common examples include watching television, playing video games while seated, or sitting in a classroom. While the physical health consequences of this inactivity are well documented, the impact on mental health is a growing area of concern.

In recent years, screen time has risen considerably among adolescents. This increase has prompted researchers to question how these behaviors interact with mood disorders such as depression. Most prior studies examining this link have focused on adults. When studies do involve younger populations, they often rely on the participants to report their own activity levels. Self-reported data is frequently inaccurate, as people struggle to recall exactly how many minutes they spent sitting days or weeks ago.

There is also a gap in understanding how these behaviors function within a family unit. Parents and children do not exist in isolation. They form a “dyad,” or a two-person group wherein the behavior and emotions of one person can impact the other. To address these gaps, a team of researchers led by Maria Siwa from the SWPS University in Poland investigated these associations using objective measurement tools. The researchers aimed to see if depression leads to more sitting, or if sitting leads to more depression. They also sought to understand if these effects spill over from child to parent.

The research team recruited 203 parent-child dyads to participate in the study. The children ranged in age from 9 to 15 years old. The parents involved were predominantly mothers, accounting for nearly 87 percent of the adult participants. The study was longitudinal, meaning the researchers tracked the participants over an extended period to observe changes. Data collection occurred at three specific points: the beginning of the study (Time 1), an eight-month follow-up (Time 2), and a 14-month follow-up (Time 3).

To ensure accuracy, the researchers did not rely solely on questionnaires for activity data. Instead, they asked participants to wear accelerometers. These are small devices worn on the hip that measure movement intensity and frequency. Participants wore these devices for six consecutive days during waking hours. This provided a precise, objective record of how much time each parent and child spent being sedentary versus being active.
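Processing accelerometer output typically means counting minutes below an activity threshold. A cutoff near 100 counts per minute is a common sedentary convention for hip-worn devices, though the exact value varies by device and protocol; the sketch below applies that rule to one simulated hour.

```python
import numpy as np

def sedentary_minutes(counts_per_min, cutoff=100):
    """Count minutes below an activity-count cutoff. Roughly 100
    counts/min is a common sedentary threshold for hip-worn
    accelerometers, though protocols differ."""
    counts_per_min = np.asarray(counts_per_min)
    return int((counts_per_min < cutoff).sum())

# One simulated hour of minute-by-minute counts: mostly sitting,
# with a short active break in the middle.
hour = np.concatenate([np.full(25, 30), np.full(10, 1500), np.full(25, 40)])
print(sedentary_minutes(hour), "of", hour.size, "minutes sedentary")
```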

For the assessment of mental health, the researchers used the Patient Health Questionnaire. This is a standard screening tool used to identify the presence and severity of depressive symptoms. It asks individuals to rate the frequency of specific symptoms over the past two weeks. The study took place in the context of a healthy lifestyle education program. Between the first and second measurement points, all families received education on the health consequences of sedentary behaviors and strategies to interrupt long periods of sitting.

The analysis of the data revealed a reciprocal relationship within the children. Children who spent more time being sedentary at the start of the study displayed higher levels of depressive symptoms eight months later. This supports the theory that physical inactivity can contribute to the development of poor mood. Proposed biological mechanisms for this include changes in inflammation markers or neurobiological pathways that affect how the brain regulates emotion.

However, the reverse was also true. Children who exhibited higher levels of depressive symptoms at the start of the study spent more time being sedentary at the eight-month mark. This suggests a “vicious cycle” where symptoms of depression, such as low energy or withdrawal, lead to less movement. The lack of movement then potentially exacerbates the depressive symptoms. This bidirectional pattern highlights how difficult it can be to break the cycle of inactivity and low mood.

The study also identified an effect that crossed from one person to the other. High levels of depressive symptoms in a child at the start of the study predicted increased sedentary time for that child eight months later. This increase in the child’s sedentary behavior was then linked to higher levels of depressive symptoms in the parent at the 14-month mark.

This “across-person” finding suggests a domino effect within the family. A child’s mental health struggles may lead them to withdraw into sedentary activities. Observing this behavior and potentially feeling ineffective in helping the child change their habits may then take a toll on the parent’s mental health. This aligns with psychological theories regarding parental stress. Parents often feel distress when they perceive their parenting strategies as ineffective, especially when trying to manage a child’s health behaviors.

One particular finding was unexpected. Children who reported lower levels of depressive symptoms at the eight-month mark actually spent more time sitting at the final 14-month check-in. The researchers hypothesize that this might be due to a sense of complacency. If adolescents feel mentally well, they may not feel a pressing need to follow the program’s advice to reduce sitting time. They might associate their current well-being with their current lifestyle, leading to less motivation to become more active.

The researchers controlled for moderate-to-vigorous physical activity in their statistical models. This ensures that the results specifically reflect the impact of sedentary time rather than a mere lack of exercise. Even when accounting for exercise, the links between sitting and depression persisted along specific pathways.

There are caveats to consider when interpreting these results. The sample consisted largely of families with higher education levels and average or above-average economic status. This limits how well the findings apply to the general population or to families facing economic hardship. Additionally, the study was conducted in Poland, and cultural factors regarding parenting and leisure time could influence the results.

Another limitation is the nature of the device used. While accelerometers are excellent for measuring stillness versus movement, they cannot distinguish between different types of sedentary behavior. They cannot tell the difference between sitting while doing homework, reading a book, or mindlessly scrolling through social media. Different types of sedentary behavior might have different psychological impacts.

The study also focused on a community sample rather than a clinical one. Most participants reported mild to moderate symptoms rather than severe clinical depression. The associations might look different in a population with diagnosed major depressive disorder. Furthermore, while the study found links over time, the observed effects were relatively small. Many other factors likely contribute to both depression and sedentary behavior that were not measured in this specific analysis.

Despite these limitations, the implications for public health are clear. Interventions aimed at improving youth mental health should not ignore physical behavior. Conversely, programs designed to get kids moving should address mental health barriers. The findings support the use of family-based interventions. Treating the child in isolation may miss the important dynamic where the child’s behavior impacts the parent’s well-being.

Future research should investigate the specific mechanisms that drive these connections. For example, it would be beneficial to study whether parental beliefs about their own efficacy mediate the link between a child’s inactivity and the parent’s mood. Researchers should also look at different types of sedentary behavior to see if screen time is more harmful than other forms of sitting. Understanding these nuances could lead to better guidance for families trying to navigate the complex relationship between physical habits and emotional health.

The study, “Associations between depressive symptoms and sedentary behaviors in parent-child dyads: Longitudinal effects within- and across-person,” was authored by Maria Siwa, Dominika Wietrzykowska, Zofia Szczuka, Ewa Kulis, Monika Boberska, Anna Banik, Hanna Zaleskiewicz, Paulina Krzywicka, Nina Knoll, Anita DeLongis, Bärbel Knäuper, and Aleksandra Luszczynska.

No association found between COVID-19 shots during pregnancy and autism or behavioral issues

Recent research provides new evidence regarding the safety of COVID-19 vaccinations during pregnancy. The study, presented at the Society for Maternal-Fetal Medicine (SMFM) 2026 Pregnancy Meeting, indicates that receiving an mRNA vaccine while pregnant does not negatively impact a toddler’s brain development. The findings suggest that children born to vaccinated mothers show no difference in reaching developmental milestones compared to those born to unvaccinated mothers.

The question of vaccine safety during pregnancy has been a primary concern for expectant parents since the introduction of COVID-19 immunizations. Messenger RNA, or mRNA, vaccines function by introducing a genetic sequence that instructs the body’s cells to produce a specific protein. This protein triggers the immune system to create antibodies against the virus.

While health organizations have recommended these vaccines to prevent severe maternal illness, data regarding the longer-term effects on infants has been accumulating slowly. Parents often worry that the immune activation in the mother could theoretically alter the delicate process of fetal brain formation.

To address these specific concerns, a team of researchers investigated the neurodevelopmental outcomes of children aged 18 to 30 months. The study was led by George R. Saade from Eastern Virginia Medical School at Old Dominion University and Brenna L. Hughes from Duke University School of Medicine. They conducted this work as part of the Maternal-Fetal Medicine Units Network. This network is a collaboration of research centers funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

The researchers designed a prospective observational study. This type of study follows a group of participants over time to observe outcomes rather than intervening or experimenting on them. The team identified women who had received at least one dose of an mRNA SARS-CoV-2 vaccine. To be included in the exposed group, the mothers must have received the vaccine either during their pregnancy or within the 30 days prior to becoming pregnant.

The research team compared these women to a control group of mothers who did not receive the vaccine during that same period. To ensure the comparison was scientifically valid, the researchers used a technique called matching. Each vaccinated mother was paired with an unvaccinated mother who shared key characteristics.

These characteristics included the specific medical site where they delivered the baby and the date of the delivery. They also matched participants based on their insurance status and their race. This matching process is essential in observational research. It helps rule out other variables, such as access to healthcare or socioeconomic status, which could independently influence a child’s development.

The study applied strict exclusion criteria to isolate the effect of the vaccine. The researchers did not include women who delivered their babies before 37 weeks of gestation. This decision was necessary because preterm birth is a known cause of developmental delays. Including premature infants could have obscured the results. The team also excluded multifetal pregnancies, such as twins or triplets, and children born with major congenital malformations.

Ultimately, the study analyzed 217 matched pairs, resulting in a total of 434 children. The primary tool used to measure development was the Ages and Stages Questionnaire, Third Edition, often referred to as the ASQ-3. This is a standardized screening tool widely used in pediatrics. It relies on parents to observe and report their child’s abilities in five distinct developmental areas.

The first area is communication, which looks at how a child understands language and speaks. The second is gross motor skills, involving large movements like walking or jumping. The third is fine motor skills, which involves smaller movements like using fingers to pick up tiny objects. The fourth is problem-solving, and the fifth is personal-social interaction, covering how the child plays and interacts with others.

The researchers analyzed the data by looking for statistical equivalence. They established a specific margin of 10 points on the ASQ-3 scale. If the difference between the average scores of the vaccinated and unvaccinated groups was less than 10 points, the outcomes were considered practically identical.

The results demonstrated that the neurodevelopmental outcomes were indeed equivalent. The median total ASQ-3 score for the vaccinated group was 255. The median score for the unvaccinated group was 260. After adjusting for other factors, the difference was calculated to be -3.4 points. This falls well within the 10-point margin of equivalence, meaning there was no meaningful difference in development between the two groups.
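The equivalence logic can be made concrete: a difference is declared equivalent only if its entire confidence interval falls inside the pre-specified margin, here plus or minus 10 points. In the sketch below, the -3.4-point estimate is the study's reported adjusted difference, but the interval bounds passed to the function are placeholders, not reported values.

```python
# Minimal sketch of an equivalence check: the adjusted difference is
# "equivalent" if its whole confidence interval sits inside the margin.
def within_margin(ci_low: float, ci_high: float, margin: float = 10.0) -> bool:
    """True if the entire confidence interval lies inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

adjusted_diff = -3.4               # reported adjusted difference in ASQ-3 points
print(within_margin(-8.0, 1.2))    # hypothetical interval -> equivalent
print(within_margin(-12.0, 1.2))   # hypothetical interval -> not equivalent
```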

Beyond the general developmental scores, the researchers utilized several secondary screening tools to check for specific conditions. They employed the Modified Checklist for Autism in Toddlers to assess the risk of autism spectrum disorder. The findings showed no statistical difference in risk levels.

Approximately 5 percent of the children in the vaccinated group screened positive for potential autism risk. This was comparable to the 6 percent observed in the unvaccinated group. These percentages suggest that vaccination status did not influence the likelihood of screening positive for autism risk.

The team also used the Child Behavior Checklist. This tool evaluates various behavioral and emotional challenges. It looks at internalizing behaviors, such as anxiety, withdrawal, or sadness. It also examines externalizing behaviors, such as aggression or rule-breaking.

The scores for both internalizing and externalizing behaviors were nearly identical between the two groups. For example, 93 percent of children in the vaccinated group fell within the normal range for total behavioral problems, the same percentage found in the unvaccinated group.

Finally, the researchers assessed temperament using the Early Childhood Behavior Questionnaire. This measures traits such as “surgency,” which relates to positive emotional reactivity and high energy. It also measures “effortful control,” which is the ability to focus attention and inhibit impulses. Across all these psychological domains, the study found no association between maternal vaccination and negative outcomes.

The demographics of the two groups were largely similar due to the matching process. However, one difference remained. Mothers in the vaccinated group were more likely to be nulliparous. This is a medical term indicating that the woman had never given birth before the pregnancy in question.

Additionally, the children in the vaccinated group were slightly younger at the time of the assessment. Their median age was 25.4 months, compared to 25.9 months for the unvaccinated group. The researchers used statistical models to adjust for these slight variations. Even after these adjustments, the conclusion remained that the developmental outcomes were equivalent.

“Neurodevelopment outcomes in children born to mothers who received the COVID-19 vaccine during or shortly before pregnancy did not differ from those born to mothers who did not receive the vaccine,” said Saade.

While the findings are positive, there are context and limitations to consider. The study was observational, meaning it cannot prove causation as definitively as a randomized controlled trial. However, randomized trials are rarely feasible for widely recommended vaccines due to ethical considerations.

Another factor is the reliance on parent-reported data. Tools like the ASQ-3 depend on the accuracy of the parents’ observations, which can introduce some subjectivity. Furthermore, the study followed children only up to 30 months of age. Some subtle neurodevelopmental issues may not manifest until children are older and face the demands of school.

Despite these limitations, the rigorous matching and the use of multiple standardized screening tools provide a high level of confidence in the results for the toddler age group. The study fills a knowledge gap regarding the safety of mRNA technology for the next generation.

“This study, conducted through a rigorous scientific process in an NIH clinical trials network, demonstrates reassuring findings regarding the long-term health of children whose mothers received COVID-19 vaccination during pregnancy,” said Hughes.

The study, “Association Between SARS-CoV-2 Vaccine in Pregnancy and Child Neurodevelopment at 18–30 Months,” was authored by George R. Saade and Brenna L. Hughes, and will be published in the February 2026 issue of PREGNANCY.

Ultra-processed foods in early childhood linked to lower IQ scores

Toddlers who consume a diet high in processed meats, sugary snacks, and soft drinks may have lower intelligence scores by the time they reach early school age. A new study published in the British Journal of Nutrition suggests that this negative association is even stronger for children who faced physical growth delays in infancy. These findings add to the growing body of evidence linking early childhood nutrition to long-term brain development.

The first few years of human life represent a biological window of rapid change. The brain grows quickly during this time and builds the neural connections necessary for learning and memory. This process requires a steady supply of specific nutrients to work correctly. Without enough iron, zinc, or healthy fats, the brain might not develop to its full capacity.

Recent trends in global nutrition show that families are increasingly relying on ultra-processed foods. These are industrial products that often contain high levels of sugar, fat, and artificial additives but very few essential vitamins. Researchers are concerned that these foods might displace nutrient-rich options. They also worry that the additives or high sugar content could directly harm biological systems.

Researchers from the Federal University of Pelotas in Brazil and the University of Illinois Urbana-Champaign investigated this issue. The lead author is Glaucia Treichel Heller, a researcher in the Postgraduate Program in Epidemiology in Pelotas. She worked alongside colleagues including Thaynã Ramos Flores and Pedro Hallal to analyze data from thousands of children. The team wanted to determine if eating habits established at age two could predict cognitive abilities years later.

The researchers used data from the 2015 Pelotas Birth Cohort. This is a large, long-term project that tracks the health of children born in the city of Pelotas, Brazil. The team analyzed information from more than 3,400 children. When the children were two years old, their parents answered questions about what the toddlers usually ate.

The scientists did not just look at single foods like apples or candy. Instead, they used a statistical method called principal component analysis. This technique allows researchers to find general dietary patterns based on which foods are typically eaten together. They identified two main types of eating habits in this population.
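To illustrate how principal component analysis extracts dietary patterns, the sketch below runs PCA on a respondents-by-foods frequency matrix and reports which foods load most strongly on each component. The food names, intake frequencies, and matrix shape are invented for illustration; the cohort's actual questionnaire and processing differ.

```python
# Minimal sketch of deriving dietary patterns with PCA, assuming a
# respondents-by-foods matrix of intake frequencies. All data here are
# hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

foods = ["beans", "fruit", "vegetables", "instant_noodles", "sausage", "soda"]
rng = np.random.default_rng(1)
X = rng.integers(0, 8, size=(3400, len(foods)))  # weekly intake frequencies

Z = StandardScaler().fit_transform(X)    # standardize each food item
pca = PCA(n_components=2).fit(Z)         # retain two patterns, as in the study
for name, loadings in zip(["pattern 1", "pattern 2"], pca.components_):
    top = sorted(zip(foods, loadings), key=lambda t: -abs(t[1]))[:3]
    print(name, top)  # foods loading most strongly on each component
```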

One pattern was labeled “healthy” by the researchers. This diet included regular consumption of beans, fruits, vegetables, and natural fruit juices. The other pattern was labeled “unhealthy.” This diet featured instant noodles, sausages, soft drinks, packaged snacks, and sweets.

When the children reached six or seven years of age, trained psychologists assessed their intelligence. They used a standard test called the Wechsler Intelligence Scale for Children. This test measures different mental skills to generate an IQ score. The researchers then looked for a statistical link between the diet at age two and the test results four years later.

The analysis showed a clear connection between the unhealthy dietary pattern and lower cognitive scores. Children who frequently ate processed and sugary foods at age two tended to have lower IQ scores at school age. This link remained even when the researchers accounted for other factors that influence intelligence. They adjusted the data for the mother’s education, family income, and how much mental stimulation the child received at home.

The researchers faced a challenge in isolating the effect of diet. Many factors can shape a child’s development. For example, a family with more money might buy healthier food and also buy more books. To manage this, the team identified potential confounding factors. Thaynã Ramos Flores, one of the study authors, noted, “The covariates were identified as potential confounding factors based on a literature review and the construction of a directed acyclic graph.”

The team used these adjustments to ensure the results were not simply reflecting the family’s socioeconomic status. Even with these controls, the negative association between processed foods and IQ persisted. The findings suggest that diet quality itself plays a specific role.

The negative impact appeared to be worse for children who were already biologically vulnerable. The study looked at children who had early-life deficits. These were defined as having low weight, height, or head circumference for their age during their first two years.

For these children, a diet high in processed foods was linked to a drop of nearly 5 points in IQ. This is a substantial difference that could affect school performance. For children without these early physical growth problems, the decline was smaller but still present. In those cases, the reduction was about 2 points.

This finding points to a concept known as cumulative disadvantage. It appears that biological vulnerability and environmental exposures like poor diet interact with each other. A child who is already struggling physically may be less resilient to the harms of a poor diet.

The researchers also looked at the impact of the healthy dietary pattern. They did not find a statistical link between eating healthy foods and higher IQ scores. This result might seem counterintuitive, as fruits and vegetables are known to be good for the brain. However, the authors explain that this result is likely due to the specific population studied.

Most children in the Pelotas cohort ate beans, fruits, and vegetables regularly. Because almost everyone ate the healthy foods, there was not enough difference between the children to show a statistical effect. Flores explained, “The lack of association observed for the healthy dietary pattern can be largely explained by its lower variability.” She added that “approximately 92% of children habitually consumed four or more of the foods that characterize the healthy pattern.”

The study suggests potential biological mechanisms for why the unhealthy diet lowers IQ. One theory involves the gut-brain axis. The human gut contains trillions of bacteria that communicate with the brain. Diets high in sugar and processed additives can alter this bacterial community. These changes might lead to systemic inflammation that affects brain function.

Another possibility involves oxidative stress. Ultra-processed foods often lack the antioxidants found in fresh produce. Without these protective compounds, brain cells might be more susceptible to damage during development. The rapid growth of the brain in early childhood makes it highly sensitive to these physiological stressors.

There are limitations to this type of research. The study is observational, which means it cannot prove that the food directly caused the lower scores. Other factors that the researchers could not measure might explain the difference. For example, the study relied on parents to report what their children ate. Parents might not always remember or report this accurately.

Additionally, the study did not measure the parents’ IQ scores. Parental intelligence is a strong predictor of a child’s intelligence. However, the researchers used maternal education and home stimulation scores as proxies. These measures help account for the intellectual environment of the home.

The findings have implications for public health policy. The results suggest that officials need to focus on reducing the intake of processed foods in early childhood. Merely encouraging fruit and vegetable intake may not be enough if children are still consuming high amounts of processed items. This is particularly important for children who have already shown signs of growth delays.

Future studies could look at how these dietary habits change as children become teenagers. It would also be helpful to see if these results are similar in countries with different food cultures. The team notes that early nutrition is a specific window of opportunity for supporting brain health.

The study, “Dietary patterns at age 2 and cognitive performance at ages 6-7: an analysis of the 2015 Pelotas Birth Cohort (Brazil),” was authored by Glaucia Treichel Heller, Thaynã Ramos Flores, Marina Xavier Carpena, Pedro Curi Hallal, Marlos Rodrigues Domingues, and Andréa Dâmaso Bertoldi.

Childhood trauma and genetics drive alcoholism at different life stages

New research suggests that the path to alcohol dependence may differ depending on when the condition begins. A study published in Drug and Alcohol Dependence identifies distinct roles for genetic variations and childhood experiences in the development of Alcohol Use Disorder (AUD). The findings indicate that severe early-life trauma accelerates the onset of the disease, whereas specific genetic factors are more closely linked to alcoholism that develops later in adulthood. This separation of causes provides a more nuanced view of a condition that affects millions of people globally.

Alcohol Use Disorder is a chronic medical condition characterized by an inability to stop or control alcohol use despite adverse consequences. Researchers understand that the risk of developing this condition stems from a combination of biological and environmental factors. Genetic predisposition accounts for approximately half of the risk. The remaining risk comes from life experiences, particularly those occurring during formative years. However, the specific ways these factors interact have remained a subject of debate.

One specific gene of interest produces a protein called Brain-Derived Neurotrophic Factor, or BDNF. This protein acts much like a fertilizer for the brain. It supports the survival of existing neurons and encourages the growth of new connections and synapses. This process is essential for neuroplasticity, which is the brain’s ability to reorganize itself by forming new neural connections.

Variations in the BDNF gene can alter how the brain adapts to stress and foreign substances. Because alcohol consumption changes the brain’s structure, the gene that regulates brain plasticity is a prime suspect in the search for biological causes of addiction.

Yi-Wei Yeh and San-Yuan Huang, researchers from the Tri-Service General Hospital and National Defense Medical University in Taiwan, led the investigation. They aimed to untangle how BDNF gene variants, childhood trauma, and family dysfunction contribute to alcoholism. They specifically wanted to determine if these factors worked alone or if they amplified each other. For example, they sought to answer whether a person with a specific genetic variant would be more susceptible to the damaging effects of a difficult childhood.

The team recruited 1,085 participants from the Han Chinese population in Taiwan. After excluding individuals with incomplete data or DNA issues, the final analysis compared 518 patients diagnosed with Alcohol Use Disorder against 548 healthy control subjects.

The researchers categorized the patients based on when their drinking became a disorder. They defined early-onset as occurring at or before age 25 and late-onset as occurring after age 25. This distinction allowed them to see if different drivers were behind the addiction at different life stages.

To analyze the biological factors, the researchers collected blood samples from all participants. They extracted DNA to examine four distinct locations on the BDNF gene. These specific locations are known as single-nucleotide polymorphisms. They represent single-letter changes in the genetic code that can alter how the gene functions. The team looked for patterns in these variations to see if any were more common in the group with alcoholism.

Participants also completed detailed psychological assessments. The Childhood Trauma Questionnaire asked about physical, emotional, and sexual abuse, as well as physical and emotional neglect. A second survey measured Adverse Childhood Experiences (ACEs), which covers a broader range of household challenges such as divorce or incarcerated family members. A third tool, the Family APGAR, assessed how well the participants’ families functioned in terms of emotional support, communication, and adaptability.

The genetic analysis revealed a specific pattern of DNA variations associated with the disorder. This pattern, known as a haplotype, appeared more frequently in patients with Alcohol Use Disorder. A deeper look at the data showed that this genetic link was specific to late-onset alcoholism. This category includes individuals who developed the condition after the age of 25. This was a somewhat unexpected finding, as earlier research has often linked strong genetic factors to early-onset disease. The authors suggest that genetic influences on brain plasticity might become more pronounced as the brain ages.

The results regarding childhood experiences painted a different picture. Patients with Alcohol Use Disorder reported much higher rates of childhood trauma compared to the healthy control group. This included higher scores for physical abuse, emotional abuse, and neglect. The study found a clear mathematical relationship between trauma and age. The more severe the childhood trauma, the younger the patient was when they developed a dependency on alcohol. This supports the theory that some individuals use alcohol to self-medicate the emotional pain of early abuse.

The impact of Adverse Childhood Experiences (ACEs) was particularly stark. The data showed a compounding risk. Individuals with one or more adverse experiences were roughly 3.5 times more likely to develop the disorder than those with none. For individuals with two or more adverse experiences, the likelihood skyrocketed. They were 48 times more likely to develop Alcohol Use Disorder. This suggests that there may be a tipping point where the cumulative burden of stress overwhelms a young person’s coping mechanisms.
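Figures like “3.5 times more likely” are typically odds ratios computed from a two-by-two exposure-outcome table. The worked example below shows the arithmetic only; every cell count is invented, and the study's reported ratios also involve covariate adjustment.

```python
# Worked example of the odds-ratio arithmetic behind "X times more
# likely" figures. All cell counts are hypothetical.
def odds_ratio(exposed_cases, exposed_controls,
               unexposed_cases, unexposed_controls):
    """OR = (a/b) / (c/d) for a standard 2x2 exposure-outcome table."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Hypothetical: among people with one or more ACEs, 70 have AUD and 40
# do not; among people with no ACEs, 30 have AUD and 60 do not.
print(round(odds_ratio(70, 40, 30, 60), 2))  # -> 3.5
```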

The researchers uncovered distinct differences between men and women regarding trauma. Men with the disorder reported higher rates of physical abuse in childhood compared to female patients. Women with the disorder reported higher rates of sexual abuse compared to males. The data suggested that for women, a history of sexual abuse was associated with developing alcoholism seven to ten years earlier than those without such history. This highlights a critical need for gender-specific approaches when addressing trauma in addiction treatment.

Family environment played a major role across the board. Patients with the disorder consistently reported lower family functioning compared to healthy individuals. This dysfunction was present regardless of whether the alcoholism started early or late in life. It appears that a lack of family support is a general risk factor rather than a specific trigger for a certain type of the disease. A supportive family acts as a buffer against stress. When that buffer is missing, the risk of maladaptive coping strategies increases.

The team tested the hypothesis that trauma might change how the BDNF gene affects a person. The analysis did not support this idea. The genetic risks and the environmental risks appeared to operate independently of one another. The gene variants did not make the trauma worse, and the trauma did not activate the gene in a specific way. This suggests that while both factors lead to the same outcome, they may travel along parallel biological pathways to get there.

There are limitations to this study that affect how the results should be interpreted. The participants were all Han Chinese, so the genetic findings might not apply to other ethnic populations. Genetic variations often differ by ancestry, and what is true for one group may not hold for another.

The study also relied on adults remembering their childhoods. This retrospective approach can introduce errors, as memory is not always a perfect record of the past. Additionally, the number of female participants was relatively small compared to males, which mirrors the prevalence of the disorder but limits statistical power for that subgroup.

The study also noted high rates of nicotine use among the alcohol-dependent group. Approximately 85 percent of the patients used nicotine. Since smoking can also affect brain biology, it adds another layer of complexity to the genetic analysis. The researchers attempted to control for this, but it remains a variable to consider.

Despite these caveats, the research offers a valuable perspective for clinicians. It suggests that patients who develop alcoholism early in life are likely driven by environmental trauma. Treatment for these individuals might prioritize trauma-informed therapy and psychological processing of past events. In contrast, patients who develop the disorder later in life might be grappling with a genetic vulnerability that becomes relevant as the brain ages. This could point toward different biological targets for medication or different behavioral strategies.

The authors recommend that future research should focus on replicating these findings in larger and more diverse groups. They also suggest using brain imaging technologies. Seeing how these gene variants affect the physical structure of the brain could explain why they predispose older adults to addiction.

Understanding the distinct mechanisms of early versus late-onset alcoholism is a step toward personalized medicine in psychiatry. By identifying whether a patient is fighting a genetic predisposition or the ghosts of a traumatic past, doctors may eventually be able to tailor treatments that address the root cause of the addiction.

The study, “Childhood trauma, family functioning, and the BDNF gene may affect the development of alcohol use disorder,” was authored by Yi-Wei Yeh, Catherine Shin Huey Chen, Shin-Chang Kuo, Chun-Yen Chen, Yu-Chieh Huang, Jyun-Teng Huang, You-Ping Yang, Jhih-Syuan Huang, Kuo-Hsing Ma, and San-Yuan Huang.

Most Americans experience passionate love only twice in a lifetime, study finds

Most adults in the United States experience the intense rush of passionate love only about twice throughout their lives, according to a recent large-scale survey. The study, published in the journal Interpersona, suggests that while this emotional state is a staple of human romance, it remains a relatively rare occurrence for many individuals. The findings provide a new lens through which to view the frequency of deep romantic attachment across the entire adult lifespan.

The framework for this research relies on a classic model where love consists of three parts: passion, intimacy, and commitment. Passion is described as the physical attraction and intense longing that often defines the start of a romantic connection. Amanda N. Gesselman, a researcher at the Kinsey Institute at Indiana University, led the team of scientists who conducted this work.

The research team set out to quantify how often this specific type of love happens because earlier theories suggest passion is high at the start of a relationship but fades as couples become more comfortable. As a relationship matures, it often shifts toward companionate love, which is defined by deep affection and entwined lives rather than obsessive longing. Because this intense feeling is often fleeting, it might happen several times as people move through different stages of life.

The researchers wanted to see if social factors like age, gender, or sexual orientation influenced how often someone falls in love. Some earlier studies on university students suggested that most young people fall in love at least once by the end of high school. However, very little data existed regarding how these experiences accumulate for adults as they reach middle age or later life.

To find these answers, the team analyzed data from more than 10,000 single adults in the U.S. between the ages of 18 and 99. Participants were recruited to match the general demographic makeup of the country based on census data. This large group allowed the researchers to look at a wide variety of life histories and romantic backgrounds.

Participants were asked to provide a specific number representing how many times they had ever been passionately in love during their lives. On average, the respondents reported experiencing this intense feeling 2.05 times. This number suggests that, for the average person, passionate love is a rare event that occurs only a couple of times over the course of a lifetime.

A specific portion of the group, about 14 percent, stated they had never felt passionate love at all. About 28 percent had felt it once, while 30 percent reported two experiences. Another 17 percent had three experiences, and about 11 percent reported four or more. These figures show that while the experience is common, it is certainly not a daily or even a yearly occurrence for most.
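As a quick consistency check, these rounded percentages line up with the reported average: treating the “four or more” bucket as exactly four yields a lower bound on the mean, which should sit at or below 2.05. The short calculation below is illustrative arithmetic only.

```python
# Consistency check on the reported distribution: capping the top
# bucket at four gives a lower bound on the mean number of experiences.
buckets = {0: 0.14, 1: 0.28, 2: 0.30, 3: 0.17, 4: 0.11}  # share of sample
lower_bound = sum(times * share for times, share in buckets.items())
print(round(lower_bound, 2))  # -> 1.83, consistent with a reported mean of 2.05
```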

The study also looked at how these numbers varied based on the specific characteristics of the participants. Age showed a small link to the number of experiences, meaning older adults reported slightly more instances than younger ones. This result is likely because older people have had more years and more opportunities to encounter potential partners.

The increase with age was quite small, which suggests that people do not necessarily keep falling in love at a high rate as they get older. One reason for this might be biological, as the brain systems involved in reward and excitement are often most active during late adolescence and early adulthood. As people transition into mature adulthood, their responsibilities and self-reflection might change how they perceive or pursue new romantic passion.

Gender differences were present in the data, with men reporting slightly more experiences than women. This difference was specifically found among heterosexual participants, where heterosexual men reported more instances of passionate love than heterosexual women. This finding aligns with some previous research suggesting that men may be socialized to fall in love or express those feelings earlier in a relationship.

Among gay, lesbian, and bisexual participants, the number of experiences did not differ by gender. The researchers did not find that sexual orientation on its own created any differences in how many times a person fell in love. For example, the difference between heterosexual and bisexual participants was not statistically significant.

The researchers believe these results have important applications for how people view their own romantic lives. Many people feel pressure from movies, songs, and social media to constantly chase a state of high passion. Knowing that the average person only feels this a couple of times may help people feel more normal if they are not currently in a state of intense romance.

In a clinical or counseling setting, these findings could help people who feel they are behind in their romantic development. If someone has never been passionately in love, they are part of a group that includes more than one in ten adults. Seeing this as a common variation in human experience rather than a problem can reduce feelings of shame.

The researchers also noted that people might use a process called retrospective cognitive discounting. This happens when a person looks back at their past and views old relationships through a different lens based on their current feelings. An older person might look back at a past “crush” and decide it was not true passionate love, which would lower their total count.

This type of self-reflection might help people stay resilient after a breakup. By reinterpreting a past relationship as something other than passionate love, they might remain more open to finding a new connection in the future. This mental flexibility is part of how humans navigate the ups and downs of their romantic histories.

There are some limitations to the study that should be considered. Because the researchers only surveyed single people, the results might be different if they had included people who are currently married or in long-term partnerships. People who are in stable relationships might have different ways of remembering their past experiences compared to those who are currently unattached.

The study also relied on people remembering their entire lives accurately, which can be a challenge for older participants. Future research could follow the same group of people over many years to see how their feelings change as they happen. This would remove the need for participants to rely solely on their memories of the distant past.

The participants were all located in the United States, so these findings might not apply to people in other cultures. Different societies have different rules about how people meet, how they express emotion, and what they consider to be love. A global study would be needed to see if the “twice in a lifetime” average holds true in other parts of the world.

Additionally, the survey did not provide a specific definition of passionate love for the participants. Each person might have used their own personal standard for what counts as being passionately in love. Using a more standardized definition in future studies could help ensure that everyone is answering the question in the same way.

The researchers also mentioned that they did not account for individual personality traits or attachment styles. Some people are naturally more prone to falling in love quickly, while others are more cautious or reserved. These internal traits likely play a role in how many times someone experiences passion throughout their life.

Finally, the study did not include a large enough number of people with diverse gender identities beyond the categories of men and women. Expanding the research to include more gender-diverse individuals would provide a more complete picture of the human experience. Despite these gaps, the current study provides a foundation for understanding the frequency of one of life’s most intense emotions.

The study, “Twice in a lifetime: quantifying passionate love in U.S. single adults,” was authored by Amanda N. Gesselman, Margaret Bennett-Brown, Jessica T. Campbell, Malia Piazza, Zoe Moscovici, Ellen M. Kaufman, Melissa Blundell Osorio, Olivia R. Adams, Simon Dubé, Jessica J. Hille, Lee Y. S. Weeks, and Justin R. Garcia.

Blue light exposure may counteract anxiety caused by chronic vibration

Living in a modern environment often means enduring a constant hum of background noise and physical vibration. From the rumble of heavy traffic to the oscillation of industrial machinery, these invisible stressors can gradually erode mental well-being.

A new study suggests that a specific color of light might offer a simple way to counter the anxiety caused by this chronic environmental agitation. The research indicates that blue light exposure can calm the nervous system even when the physical stress of vibration continues. These findings were published in the journal Physiology & Behavior.

Anxiety disorders are among the most common mental health challenges globally. They typically arise from a complicated mix of biological traits and social pressures. Environmental factors are playing an increasingly large role in this equation. Chronic exposure to low-frequency noise and vibration is known to disrupt the body’s hormonal balance. This disruption frequently leads to psychological symptoms such as irritability, fatigue, and persistent anxiety.

Doctors often prescribe medication to manage these conditions once a diagnosis is clear. These drugs usually work by altering the chemical signals in the brain to inhibit anxious feelings. However, pharmaceutical interventions are not always the best first step for early-stage anxiety. There is a growing demand for therapies that are accessible and carry fewer side effects. This has led scientists to investigate light therapy as a promising alternative.

Light does more than allow us to see. It also regulates our internal biological clocks and influences our mood. Specialized cells in the eyes detect light and send signals directly to the brain regions that control hormones. This pathway allows light to modulate the release of neurotransmitters associated with emotional well-being.

Despite this general knowledge, there has been little research on how specific light wavelengths might combat anxiety caused specifically by vibration. A team of researchers decided to fill this gap using zebrafish as a model organism. Zebrafish are small, tropical freshwater fish that are widely used in neuroscience. Their brain chemistry and genetic structure share many similarities with humans.

The study was led by Longfei Huo and senior author Muqing Liu from the School of Information Science and Technology at Fudan University in China. They aimed to identify if light could serve as a preventative measure against vibration-induced stress. The team designed a controlled experiment to first establish which vibrations caused the most stress. They subsequently tested whether light could reverse that stress.

The researchers began by separating the zebrafish into different groups. Each group was exposed to a specific frequency of vibration for one hour daily. The frequencies tested were 30, 50, and 100 Hertz. To ensure consistency, the acceleration of the vibration was kept constant across all groups. This phase of the experiment lasted for one week.

To measure anxiety in fish, the scientists relied on established behavioral patterns. When zebrafish are comfortable, they swim freely throughout their tank. When they are anxious, they tend to sink to the bottom. They also exhibit “thigmotaxis,” which is a tendency to hug the walls of the tank rather than exploring open water.

The team utilized a “novel tank test” to observe these behaviors. They placed the fish in a new environment and recorded how much time they spent in the lower half. The results showed that daily exposure to vibration made the fish act more anxious. The effect was strongest in the group exposed to 100 Hertz. These fish spent significantly more time at the bottom of the tank than unstressed controls.

The researchers also used a “light-dark box test.” In this setup, half the tank is illuminated and the other half is dark. Anxious fish prefer to hide in the dark. The fish exposed to 100 Hertz vibration spent much more time in the dark zones compared to the control group. This confirmed that the vibration was inducing a strong anxiety-like state.
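Tests like these are usually scored from video tracking: each frame is classified by the fish's position, and time in a zone is the frame count divided by the frame rate. The sketch below shows that general scoring logic; the frame rate, tank dimensions, and coordinates are all invented, not taken from the paper.

```python
# Minimal sketch of scoring a time-in-zone metric (e.g., bottom-dwelling
# in the novel tank test) from tracked positions. All values hypothetical.
import numpy as np

rng = np.random.default_rng(2)
FPS = 30                      # assumed video frame rate
TANK_HEIGHT_CM = 15.0         # assumed tank height
y = rng.uniform(0, TANK_HEIGHT_CM, size=FPS * 300)  # 5 min of y-positions

bottom_frames = np.sum(y < TANK_HEIGHT_CM / 2)  # frames in the lower half
bottom_seconds = bottom_frames / FPS
print(f"time in bottom half: {bottom_seconds:.1f} s of 300 s")
```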

After establishing that 100 Hertz vibration caused the most stress, the researchers moved to the second phase of the study. They wanted to see if light color could mitigate this effect. They repeated the vibration exposure but added a light therapy component. While the fish underwent vibration, they were bathed in either red, green, blue, or white light.

The blue light used in the experiment had a wavelength of 455 nanometers. The red light was 654 nanometers, and the green was 512 nanometers. The light exposure lasted for two hours each day. The researchers then ran a comprehensive battery of behavioral tests to see if the light made a difference.

The team found that the color of the light had a profound impact on the mental state of the fish. Zebrafish exposed to the blue light showed much less anxiety than those in the other groups. In the novel tank test, the blue-light group spent less time at the bottom. They explored the upper regions of the water almost as much as fish that had never been vibrated at all.

In contrast, the red light appeared to offer no benefit. In some metrics, the red light seemed to make the anxiety slightly worse. Fish under red light spent the longest time hiding in the dark during the light-dark box test. This suggests that the calming effect is specific to the wavelength of the light and not just the brightness.

The researchers also introduced two innovative testing methods to validate their results. One was a “social interaction test.” Zebrafish are social animals and usually prefer to be near others. Stress often causes them to withdraw. The researchers placed a group of fish inside a transparent cylinder within the tank. They then measured how much time the test fish spent near this cylinder.

Fish exposed to vibration and white light avoided the group. However, the fish treated with blue light spent a large amount of time near their peers. This indicated that their social anxiety had been alleviated. The blue light restored their natural desire to interact with others.

The second new method was a “pipeline swimming test.” This involved placing the fish in a tube with a gentle current. The setup allowed the scientists to easily measure swimming distance and smoothness of movement. Stressed fish tended to swim erratically or struggle against the flow. The blue-light group swam longer distances with smoother trajectories.

To understand the biological mechanism behind these behavioral changes, the scientists analyzed the fish’s brain chemistry. They measured the levels of three key chemicals: cortisol, norepinephrine, and serotonin. Cortisol is the primary stress hormone in both fish and humans. High levels of cortisol are a hallmark of physiological stress.

The analysis revealed that vibration exposure caused a spike in cortisol and norepinephrine. This hormonal surge matched the anxious behavior observed in the tanks. However, the application of blue light blocked this increase. The fish treated with blue light had cortisol levels comparable to the unstressed control group.

Even more striking was the effect on serotonin. Serotonin is a neurotransmitter that helps regulate mood and promotes feelings of well-being. The study found that 455 nm blue light specifically boosted serotonin levels in the fish. This suggests that blue light works by simultaneously lowering stress hormones and enhancing mood-regulating chemicals.

The authors propose that the blue light activates specific cells in the retina. These cells, known as intrinsically photosensitive retinal ganglion cells, contain a pigment called melanopsin. Melanopsin is highly sensitive to blue wavelengths. When activated, these cells send calming signals to the brain’s emotional centers.

There are some limitations to this study that must be considered. The research focused heavily on specific frequencies and wavelengths. It is possible that other combinations of light and vibration could yield different results. The study also did not investigate potential interaction effects between the light and vibration in a full factorial design.

Additionally, while zebrafish are a good model, they are not humans. The neural pathways are similar, but the complexity of human anxiety involves higher-level cognitive processes. Future research will need to replicate these findings in mammals. Scientists will also need to determine the optimal intensity and duration of light exposure for therapeutic use.

The study opens up new possibilities for managing environmental stress. It suggests that modifying our lighting environments could protect against the invisible toll of noise and vibration. For those living or working in industrial areas, blue light therapy could become a simple, non-invasive tool for mental health.

The study, “Blue light exposure mitigates vibration noise-induced anxiety by enhancing serotonin levels,” was authored by Longfei Huo, Xiaojing Miao, Yi Ren, Xuran Zhang, Qiqi Fu, Jiali Yang, and Muqing Liu.
