
New research links childhood inactivity to depression in a vicious cycle

New research suggests a bidirectional relationship exists between how much time children spend sitting and their mental health, creating a cycle where inactivity feeds feelings of depression and vice versa. This dynamic appears to extend beyond the individual child, as a child’s mood and inactivity levels can eventually influence their parent’s mental well-being. These results were published in the journal Mental Health and Physical Activity.

For decades, health experts have recognized that humans spend a large portion of their waking hours in sedentary behaviors. This term refers to any waking behavior characterized by an energy expenditure of 1.5 metabolic equivalents or less while in a sitting, reclining, or lying posture. Common examples include watching television, playing video games while seated, or sitting in a classroom. While the physical health consequences of this inactivity are well documented, the impact on mental health is a growing area of concern.

In recent years, screen time has risen considerably among adolescents. This increase has prompted researchers to question how these behaviors interact with mood disorders such as depression. Most prior studies examining this link have focused on adults. When studies do involve younger populations, they often rely on the participants to report their own activity levels. Self-reported data is frequently inaccurate, as people struggle to recall exactly how many minutes they spent sitting days or weeks ago.

There is also a gap in understanding how these behaviors function within a family unit. Parents and children do not exist in isolation. They form a “dyad,” or a two-person group wherein the behavior and emotions of one person can impact the other. To address these gaps, a team of researchers led by Maria Siwa from the SWPS University in Poland investigated these associations using objective measurement tools. The researchers aimed to see if depression leads to more sitting, or if sitting leads to more depression. They also sought to understand if these effects spill over from child to parent.

The research team recruited 203 parent-child dyads to participate in the study. The children ranged in age from 9 to 15 years old. The parents involved were predominantly mothers, accounting for nearly 87 percent of the adult participants. The study was longitudinal, meaning the researchers tracked the participants over an extended period to observe changes. Data collection occurred at three specific points: the beginning of the study (Time 1), an eight-month follow-up (Time 2), and a 14-month follow-up (Time 3).

To ensure accuracy, the researchers did not rely solely on questionnaires for activity data. Instead, they asked participants to wear accelerometers. These are small devices worn on the hip that measure movement intensity and frequency. Participants wore these devices for six consecutive days during waking hours. This provided a precise, objective record of how much time each parent and child spent being sedentary versus being active.
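
The article does not describe the exact data-processing pipeline, but accelerometer output is commonly reduced to sedentary and active minutes by applying a count threshold to each minute of wear time. The sketch below illustrates that general idea with a hypothetical cut-point; the study's actual thresholds and software are not reported in this summary.

```python
# Illustrative sketch: turning per-minute accelerometer counts into sedentary
# versus active minutes. The cut-point is a placeholder assumption, not a value
# taken from the study.

SEDENTARY_CUTPOINT = 100  # counts per minute; hypothetical threshold

def summarize_wear_day(counts_per_minute):
    """Return (sedentary_minutes, active_minutes) for one day of wear time."""
    sedentary = sum(1 for c in counts_per_minute if c < SEDENTARY_CUTPOINT)
    active = len(counts_per_minute) - sedentary
    return sedentary, active

# Example: a 14-hour wear day recorded as one count value per minute.
day = [0] * 500 + [350] * 200 + [50] * 140   # 840 minutes of wear time
print(summarize_wear_day(day))               # -> (640, 200)
```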

For the assessment of mental health, the researchers used the Patient Health Questionnaire. This is a standard screening tool used to identify the presence and severity of depressive symptoms. It asks individuals to rate the frequency of specific symptoms over the past two weeks. The study took place in the context of a healthy lifestyle education program. Between the first and second measurement points, all families received education on the health consequences of sedentary behaviors and strategies to interrupt long periods of sitting.

The analysis of the data revealed a reciprocal relationship within the children. Children who spent more time being sedentary at the start of the study displayed higher levels of depressive symptoms eight months later. This supports the theory that physical inactivity can contribute to the development of poor mood. Proposed biological mechanisms for this include changes in inflammation markers or neurobiological pathways that affect how the brain regulates emotion.

However, the reverse was also true. Children who exhibited higher levels of depressive symptoms at the start of the study spent more time being sedentary at the eight-month mark. This suggests a “vicious cycle” where symptoms of depression, such as low energy or withdrawal, lead to less movement. The lack of movement then potentially exacerbates the depressive symptoms. This bidirectional pattern highlights how difficult it can be to break the cycle of inactivity and low mood.

The study also identified an effect that crossed from one person to the other. High levels of depressive symptoms in a child at the start of the study predicted increased sedentary time for that child eight months later. This increase in the child’s sedentary behavior was then linked to higher levels of depressive symptoms in the parent at the 14-month mark.

This “across-person” finding suggests a domino effect within the family. A child’s mental health struggles may lead them to withdraw into sedentary activities. Observing this behavior and potentially feeling ineffective in helping the child change their habits may then take a toll on the parent’s mental health. This aligns with psychological theories regarding parental stress. Parents often feel distress when they perceive their parenting strategies as ineffective, especially when trying to manage a child’s health behaviors.

One particular finding was unexpected. Children who reported lower levels of depressive symptoms at the eight-month mark actually spent more time sitting at the final 14-month check-in. The researchers hypothesize that this might be due to a sense of complacency. If adolescents feel mentally well, they may not feel a pressing need to follow the program’s advice to reduce sitting time. They might associate their current well-being with their current lifestyle, leading to less motivation to become more active.

The researchers controlled for moderate-to-vigorous physical activity in their statistical models. This ensures that the results specifically reflect the impact of sedentary time, rather than just a lack of exercise. Even when accounting for exercise, the links between sitting and depression remained relevant in specific pathways.

There are caveats to consider when interpreting these results. The sample consisted largely of families with higher education levels and average or above-average economic status. This limits how well the findings apply to the general population or to families facing economic hardship. Additionally, the study was conducted in Poland, and cultural factors regarding parenting and leisure time could influence the results.

Another limitation is the nature of the device used. While accelerometers are excellent for measuring stillness versus movement, they cannot distinguish between different types of sedentary behavior. They cannot tell the difference between sitting while doing homework, reading a book, or mindlessly scrolling through social media. Different types of sedentary behavior might have different psychological impacts.

The study also focused on a community sample rather than a clinical one. Most participants reported mild to moderate symptoms rather than severe clinical depression. The associations might look different in a population with diagnosed major depressive disorder. Furthermore, while the study found links over time, the observed effects were relatively small. Many other factors likely contribute to both depression and sedentary behavior that were not measured in this specific analysis.

Despite these limitations, the implications for public health are clear. Interventions aimed at improving youth mental health should not ignore physical behavior. Conversely, programs designed to get kids moving should address mental health barriers. The findings support the use of family-based interventions. Treating the child in isolation may miss the important dynamic where the child’s behavior impacts the parent’s well-being.

Future research should investigate the specific mechanisms that drive these connections. For example, it would be beneficial to study whether parental beliefs about their own efficacy mediate the link between a child’s inactivity and the parent’s mood. Researchers should also look at different types of sedentary behavior to see if screen time is more harmful than other forms of sitting. Understanding these nuances could lead to better guidance for families trying to navigate the complex relationship between physical habits and emotional health.

The study, “Associations between depressive symptoms and sedentary behaviors in parent-child Dyads: Longitudinal effects within- and across- person,” was authored by Maria Siwa, Dominika Wietrzykowska, Zofia Szczuka, Ewa Kulis, Monika Boberska, Anna Banik, Hanna Zaleskiewicz, Paulina Krzywicka, Nina Knoll, Anita DeLongis, Bärbel Knäuper, and Aleksandra Luszczynska.

No association found between COVID-19 shots during pregnancy and autism or behavioral issues

Recent research provides new evidence regarding the safety of COVID-19 vaccinations during pregnancy. The study, presented at the Society for Maternal-Fetal Medicine (SMFM) 2026 Pregnancy Meeting, indicates that receiving an mRNA vaccine while pregnant does not negatively impact a toddler’s brain development. The findings suggest that children born to vaccinated mothers show no difference in reaching developmental milestones compared to those born to unvaccinated mothers.

The question of vaccine safety during pregnancy has been a primary concern for expectant parents since the introduction of COVID-19 immunizations. Messenger RNA, or mRNA, vaccines function by introducing a genetic sequence that instructs the body’s cells to produce a specific protein. This protein triggers the immune system to create antibodies against the virus.

While health organizations have recommended these vaccines to prevent severe maternal illness, data regarding the longer-term effects on infants has been accumulating slowly. Parents often worry that the immune activation in the mother could theoretically alter the delicate process of fetal brain formation.

To address these specific concerns, a team of researchers investigated the neurodevelopmental outcomes of children aged 18 to 30 months. The study was led by George R. Saade from Eastern Virginia Medical School at Old Dominion University and Brenna L. Hughes from Duke University School of Medicine. They conducted this work as part of the Maternal-Fetal Medicine Units Network. This network is a collaboration of research centers funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

The researchers designed a prospective observational study. This type of study follows a group of participants over time to observe outcomes rather than intervening or experimenting on them. The team identified women who had received at least one dose of an mRNA SARS-CoV-2 vaccine. To be included in the exposed group, the mothers must have received the vaccine either during their pregnancy or within the 30 days prior to becoming pregnant.

The research team compared these women to a control group of mothers who did not receive the vaccine during that same period. To ensure the comparison was scientifically valid, the researchers used a technique called matching. Each vaccinated mother was paired with an unvaccinated mother who shared key characteristics.

These characteristics included the specific medical site where they delivered the baby and the date of the delivery. They also matched participants based on their insurance status and their race. This matching process is essential in observational research. It helps rule out other variables, such as access to healthcare or socioeconomic status, which could independently influence a child’s development.
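
As a rough illustration of how such pairing can work, the sketch below matches each vaccinated mother to a not-yet-matched control who shares the listed characteristics. The field names and the 30-day delivery-date window are assumptions made for the example; the study's actual matching procedure is not detailed in this summary.

```python
# Minimal sketch of pair matching on delivery site, delivery date, insurance,
# and race. Field names and the date window are illustrative assumptions.
from datetime import timedelta

def find_match(vaccinated, unvaccinated_pool, date_window_days=30):
    """Return the first unmatched control sharing site, insurance, and race,
    delivered within the allowed window of the exposed mother's delivery date."""
    for control in unvaccinated_pool:
        if (control["site"] == vaccinated["site"]
                and control["insurance"] == vaccinated["insurance"]
                and control["race"] == vaccinated["race"]
                and abs(control["delivery_date"] - vaccinated["delivery_date"])
                    <= timedelta(days=date_window_days)
                and not control.get("matched")):
            control["matched"] = True
            return control
    return None  # no eligible control found for this exposed mother
```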

The study applied strict exclusion criteria to isolate the effect of the vaccine. The researchers did not include women who delivered their babies before 37 weeks of gestation. This decision was necessary because preterm birth is a known cause of developmental delays. Including premature infants could have obscured the results. The team also excluded multifetal pregnancies, such as twins or triplets, and children born with major congenital malformations.

Ultimately, the study analyzed 217 matched pairs, resulting in a total of 434 children. The primary tool used to measure development was the Ages and Stages Questionnaire, Third Edition, often referred to as the ASQ-3. This is a standardized screening tool widely used in pediatrics. It relies on parents to observe and report their child’s abilities in five distinct developmental areas.

The first area is communication, which looks at how a child understands language and speaks. The second is gross motor skills, involving large movements like walking or jumping. The third is fine motor skills, which involves smaller movements like using fingers to pick up tiny objects. The fourth is problem-solving, and the fifth is personal-social interaction, covering how the child plays and interacts with others.

The researchers analyzed the data by looking for statistical equivalence. They established a specific margin of 10 points on the ASQ-3 scale. If the difference between the average scores of the vaccinated and unvaccinated groups was less than 10 points, the outcomes were considered practically identical.

The results demonstrated that the neurodevelopmental outcomes were indeed equivalent. The median total ASQ-3 score for the vaccinated group was 255. The median score for the unvaccinated group was 260. After adjusting for other factors, the difference was calculated to be -3.4 points. This falls well within the 10-point margin of equivalence, meaning there was no meaningful difference in development between the two groups.
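
The logic of that equivalence check can be shown with the figures reported above. The sketch below only compares the adjusted point estimate against the prespecified 10-point margin; a formal equivalence test would also require the confidence interval of that difference, which is not given in this summary.

```python
# Sketch of the equivalence logic, using the numbers reported in the article.

EQUIVALENCE_MARGIN = 10.0      # points on the ASQ-3 total score
adjusted_difference = -3.4     # vaccinated minus unvaccinated, after adjustment

def within_margin(diff, margin=EQUIVALENCE_MARGIN):
    return abs(diff) < margin

print(within_margin(adjusted_difference))  # True: -3.4 lies inside the +/-10 band
```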

Beyond the general developmental scores, the researchers utilized several secondary screening tools to check for specific conditions. They employed the Modified Checklist for Autism in Toddlers to assess the risk of autism spectrum disorder. The findings showed no statistical difference in risk levels.

Approximately 5 percent of the children in the vaccinated group screened positive for potential autism risk. This was comparable to the 6 percent observed in the unvaccinated group. These percentages suggest that vaccination status did not influence the likelihood of screening positive for autism risk.

The team also used the Child Behavior Checklist. This tool evaluates various behavioral and emotional challenges. It looks at internalizing behaviors, such as anxiety, withdrawal, or sadness. It also examines externalizing behaviors, such as aggression or rule-breaking.

The scores for both internalizing and externalizing behaviors were nearly identical between the two groups. For example, 93 percent of children in the vaccinated group fell within the normal range for total behavioral problems. This was the exact same percentage found in the unvaccinated group.

Finally, the researchers assessed temperament using the Early Childhood Behavior Questionnaire. This measures traits such as “surgency,” which relates to positive emotional reactivity and high energy. It also measures “effortful control,” which is the ability to focus attention and inhibit impulses. Across all these psychological domains, the study found no association between maternal vaccination and negative outcomes.

The demographics of the two groups were largely similar due to the matching process. However, one difference remained. Mothers in the vaccinated group were more likely to be nulliparous. This is a medical term indicating that the woman had never given birth before the pregnancy in question.

Additionally, the children in the vaccinated group were slightly younger at the time of the assessment. Their median age was 25.4 months, compared to 25.9 months for the unvaccinated group. The researchers used statistical models to adjust for these slight variations. Even after these adjustments, the conclusion remained that the developmental outcomes were equivalent.

“Neurodevelopment outcomes in children born to mothers who received the COVID-19 vaccine during or shortly before pregnancy did not differ from those born to mothers who did not receive the vaccine,” said Saade.

While the findings are positive, there are context and limitations to consider. The study was observational, meaning it cannot prove causation as definitively as a randomized controlled trial. However, randomized trials are rarely feasible for widely recommended vaccines due to ethical considerations.

Another factor is the reliance on parent-reported data. Tools like the ASQ-3 depend on the accuracy of the parents’ observations, which can introduce some subjectivity. Furthermore, the study followed children only up to 30 months of age. Some subtle neurodevelopmental issues may not manifest until children are older and face the demands of school.

Despite these limitations, the rigorous matching and the use of multiple standardized screening tools provide a high level of confidence in the results for the toddler age group. The study fills a knowledge gap regarding the safety of mRNA technology for the next generation.

“This study, conducted through a rigorous scientific process in an NIH clinical trials network, demonstrates reassuring findings regarding the long-term health of children whose mothers received COVID-19 vaccination during pregnancy,” said Hughes.

The study, “Association Between SARS-CoV-2 Vaccine in Pregnancy and Child Neurodevelopment at 18–30 Months,” was authored by George R. Saade and Brenna L. Hughes, and will be published in the February 2026 issue of PREGNANCY.

Ultra-processed foods in early childhood linked to lower IQ scores

Toddlers who consume a diet high in processed meats, sugary snacks, and soft drinks may have lower intelligence scores by the time they reach early school age. A new study published in the British Journal of Nutrition suggests that this negative association is even stronger for children who faced physical growth delays in infancy. These findings add to the growing body of evidence linking early childhood nutrition to long-term brain development.

The first few years of human life represent a biological window of rapid change. The brain grows quickly during this time and builds the neural connections necessary for learning and memory. This process requires a steady supply of specific nutrients to work correctly. Without enough iron, zinc, or healthy fats, the brain might not develop to its full capacity.

Recent trends in global nutrition show that families are increasingly relying on ultra-processed foods. These are industrial products that often contain high levels of sugar, fat, and artificial additives but very few essential vitamins. Researchers are concerned that these foods might displace nutrient-rich options. They also worry that the additives or high sugar content could directly harm biological systems.

Researchers from the Federal University of Pelotas in Brazil and the University of Illinois Urbana-Champaign investigated this issue. The lead author is Glaucia Treichel Heller, a researcher in the Postgraduate Program in Epidemiology in Pelotas. She worked alongside colleagues including Thaynã Ramos Flores and Pedro Hallal to analyze data from thousands of children. The team wanted to determine if eating habits established at age two could predict cognitive abilities years later.

The researchers used data from the 2015 Pelotas Birth Cohort. This is a large, long-term project that tracks the health of children born in the city of Pelotas, Brazil. The team analyzed information from more than 3,400 children. When the children were two years old, their parents answered questions about what the toddlers usually ate.

The scientists did not just look at single foods like apples or candy. Instead, they used a statistical method called principal component analysis. This technique allows researchers to find general dietary patterns based on which foods are typically eaten together. They identified two main types of eating habits in this population.

One pattern was labeled “healthy” by the researchers. This diet included regular consumption of beans, fruits, vegetables, and natural fruit juices. The other pattern was labeled “unhealthy.” This diet featured instant noodles, sausages, soft drinks, packaged snacks, and sweets.
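
For readers curious about the mechanics, the sketch below shows how principal component analysis can pull such patterns out of food-frequency data, scoring each child on the resulting components. The food list follows the article, but the toy data, coding, and number of components are assumptions; the cohort's actual analysis may have made different choices.

```python
# Hedged sketch of the dietary-pattern approach: PCA applied to food-frequency
# indicators, with each child scored on the resulting components.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

foods = ["beans", "fruits", "vegetables", "fruit_juice",
         "instant_noodles", "sausages", "soft_drinks", "snacks", "sweets"]

# Toy consumption-frequency matrix: rows are children, columns are foods.
rng = np.random.default_rng(0)
X = rng.integers(0, 8, size=(200, len(foods))).astype(float)

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_std)

# Loadings show which foods cluster together (the "healthy" and "unhealthy"
# patterns in the study); scores place each child on those patterns.
loadings = pca.components_          # shape (2, n_foods)
scores = pca.transform(X_std)       # shape (n_children, 2)
print(dict(zip(foods, loadings[0].round(2))))
```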

When the children reached six or seven years of age, trained psychologists assessed their intelligence. They used a standard test called the Wechsler Intelligence Scale for Children. This test measures different mental skills to generate an IQ score. The researchers then looked for a statistical link between the diet at age two and the test results four years later.

The analysis showed a clear connection between the unhealthy dietary pattern and lower cognitive scores. Children who frequently ate processed and sugary foods at age two tended to have lower IQ scores at school age. This link remained even when the researchers accounted for other factors that influence intelligence. They adjusted the data for the mother’s education, family income, and how much mental stimulation the child received at home.

The researchers faced a challenge in isolating the effect of diet. Many factors can shape a child’s development. For example, a family with more money might buy healthier food and also buy more books. To manage this, the team identified potential confounding factors. Thaynã Ramos Flores, one of the study authors, noted, “The covariates were identified as potential confounding factors based on a literature review and the construction of a directed acyclic graph.”

The team used these adjustments to ensure the results were not simply reflecting the family’s socioeconomic status. Even with these controls, the negative association between processed foods and IQ persisted. The findings suggest that diet quality itself plays a specific role.

The negative impact appeared to be worse for children who were already biologically vulnerable. The study looked at children who had early-life deficits. These were defined as having low weight, height, or head circumference for their age during their first two years.

For these children, a diet high in processed foods was linked to a drop of nearly 5 points in IQ. This is a substantial difference that could affect school performance. For children without these early physical growth problems, the decline was smaller but still present. In those cases, the reduction was about 2 points.

This finding points to a concept known as cumulative disadvantage. It appears that biological vulnerability and environmental exposures like poor diet interact with each other. A child who is already struggling physically may be less resilient to the harms of a poor diet.

The researchers also looked at the impact of the healthy dietary pattern. They did not find a statistical link between eating healthy foods and higher IQ scores. This result might seem counterintuitive, as fruits and vegetables are known to be good for the brain. However, the authors explain that this result is likely due to the specific population studied.

Most children in the Pelotas cohort ate beans, fruits, and vegetables regularly. Because almost everyone ate the healthy foods, there was not enough difference between the children to show a statistical effect. Flores explained, “The lack of association observed for the healthy dietary pattern can be largely explained by its lower variability.” She added that “approximately 92% of children habitually consumed four or more of the foods that characterize the healthy pattern.”

The study suggests potential biological mechanisms for why the unhealthy diet lowers IQ. One theory involves the gut-brain axis. The human gut contains trillions of bacteria that communicate with the brain. Diets high in sugar and processed additives can alter this bacterial community. These changes might lead to systemic inflammation that affects brain function.

Another possibility involves oxidative stress. Ultra-processed foods often lack the antioxidants found in fresh produce. Without these protective compounds, brain cells might be more susceptible to damage during development. The rapid growth of the brain in early childhood makes it highly sensitive to these physiological stressors.

There are limitations to this type of research. The study is observational, which means it cannot prove that the food directly caused the lower scores. Other factors that the researchers could not measure might explain the difference. For example, the study relied on parents to report what their children ate. Parents might not always remember or report this accurately.

Additionally, the study did not measure the parents’ IQ scores. Parental intelligence is a strong predictor of a child’s intelligence. However, the researchers used maternal education and home stimulation scores as proxies. These measures help account for the intellectual environment of the home.

The findings have implications for public health policy. The results suggest that officials need to focus on reducing the intake of processed foods in early childhood. Merely encouraging fruit and vegetable intake may not be enough if children are still consuming high amounts of processed items. This is particularly important for children who have already shown signs of growth delays.

Future studies could look at how these dietary habits change as children become teenagers. It would also be helpful to see if these results are similar in countries with different food cultures. The team notes that early nutrition is a specific window of opportunity for supporting brain health.

The study, “Dietary patterns at age 2 and cognitive performance at ages 6-7: an analysis of the 2015 Pelotas Birth Cohort (Brazil),” was authored by Glaucia Treichel Heller, Thaynã Ramos Flores, Marina Xavier Carpena, Pedro Curi Hallal, Marlos Rodrigues Domingues, and Andréa Dâmaso Bertoldi.

Childhood trauma and genetics drive alcoholism at different life stages

New research suggests that the path to alcohol dependence may differ depending on when the condition begins. A study published in Drug and Alcohol Dependence identifies distinct roles for genetic variations and childhood experiences in the development of Alcohol Use Disorder (AUD). The findings indicate that severe early-life trauma accelerates the onset of the disease, whereas specific genetic factors are more closely linked to alcoholism that develops later in adulthood. This separation of causes provides a more nuanced view of a condition that affects millions of people globally.

Alcohol Use Disorder is a chronic medical condition characterized by an inability to stop or control alcohol use despite adverse consequences. Researchers understand that the risk of developing this condition stems from a combination of biological and environmental factors. Genetic predisposition accounts for approximately half of the risk. The remaining risk comes from life experiences, particularly those occurring during formative years. However, the specific ways these factors interact have remained a subject of debate.

One specific gene of interest produces a protein called Brain-Derived Neurotrophic Factor, or BDNF. This protein acts much like a fertilizer for the brain. It supports the survival of existing neurons and encourages the growth of new connections and synapses. This process is essential for neuroplasticity, which is the brain’s ability to reorganize itself by forming new neural connections.

Variations in the BDNF gene can alter how the brain adapts to stress and foreign substances. Because alcohol consumption changes the brain’s structure, the gene that regulates brain plasticity is a prime suspect in the search for biological causes of addiction.

Yi-Wei Yeh and San-Yuan Huang, researchers from the Tri-Service General Hospital and National Defense Medical University in Taiwan, led the investigation. They aimed to untangle how BDNF gene variants, childhood trauma, and family dysfunction contribute to alcoholism. They specifically wanted to determine if these factors worked alone or if they amplified each other. For example, they sought to answer whether a person with a specific genetic variant would be more susceptible to the damaging effects of a difficult childhood.

The team recruited 1,085 participants from the Han Chinese population in Taiwan. After excluding individuals with incomplete data or DNA issues, the final analysis compared 518 patients diagnosed with Alcohol Use Disorder against 548 healthy control subjects.

The researchers categorized the patients based on when their drinking became a disorder. They defined early-onset as occurring at or before age 25 and late-onset as occurring after age 25. This distinction allowed them to see if different drivers were behind the addiction at different life stages.

To analyze the biological factors, the researchers collected blood samples from all participants. They extracted DNA to examine four distinct locations on the BDNF gene. These specific locations are known as single-nucleotide polymorphisms. They represent single-letter changes in the genetic code that can alter how the gene functions. The team looked for patterns in these variations to see if any were more common in the group with alcoholism.

Participants also completed detailed psychological assessments. The Childhood Trauma Questionnaire asked about physical, emotional, and sexual abuse, as well as physical and emotional neglect. A second survey measured Adverse Childhood Experiences (ACEs), which covers a broader range of household challenges such as divorce or incarcerated family members. A third tool, the Family APGAR, assessed how well the participants’ families functioned in terms of emotional support, communication, and adaptability.

The genetic analysis revealed a specific pattern of DNA variations associated with the disorder. This pattern, known as a haplotype, appeared more frequently in patients with Alcohol Use Disorder. A deeper look at the data showed that this genetic link was specific to late-onset alcoholism. This category includes individuals who developed the condition after the age of 25. This was a somewhat unexpected finding, as earlier research has often linked strong genetic factors to early-onset disease. The authors suggest that genetic influences on brain plasticity might become more pronounced as the brain ages.

The results regarding childhood experiences painted a different picture. Patients with Alcohol Use Disorder reported much higher rates of childhood trauma compared to the healthy control group. This included higher scores for physical abuse, emotional abuse, and neglect. The study found a clear mathematical relationship between trauma and age. The more severe the childhood trauma, the younger the patient was when they developed a dependency on alcohol. This supports the theory that some individuals use alcohol to self-medicate the emotional pain of early abuse.

The impact of Adverse Childhood Experiences (ACEs) was particularly stark. The data showed a compounding risk. Individuals with one or more adverse experiences were roughly 3.5 times more likely to develop the disorder than those with none. For individuals with two or more adverse experiences, the likelihood skyrocketed. They were 48 times more likely to develop Alcohol Use Disorder. This suggests that there may be a tipping point where the cumulative burden of stress overwhelms a young person’s coping mechanisms.

The researchers uncovered distinct differences between men and women regarding trauma. Men with the disorder reported higher rates of physical abuse in childhood compared to female patients. Women with the disorder reported higher rates of sexual abuse compared to males. The data suggested that for women, a history of sexual abuse was associated with developing alcoholism seven to ten years earlier than those without such history. This highlights a critical need for gender-specific approaches when addressing trauma in addiction treatment.

Family environment played a major role across the board. Patients with the disorder consistently reported lower family functioning compared to healthy individuals. This dysfunction was present regardless of whether the alcoholism started early or late in life. It appears that a lack of family support is a general risk factor rather than a specific trigger for a certain type of the disease. A supportive family acts as a buffer against stress. When that buffer is missing, the risk of maladaptive coping strategies increases.

The team tested the hypothesis that trauma might change how the BDNF gene affects a person. The analysis did not support this idea. The genetic risks and the environmental risks appeared to operate independently of one another. The gene variants did not make the trauma worse, and the trauma did not activate the gene in a specific way. This suggests that while both factors lead to the same outcome, they may travel along parallel biological pathways to get there.

There are limitations to this study that affect how the results should be interpreted. The participants were all Han Chinese, so the genetic findings might not apply to other ethnic populations. Genetic variations often differ by ancestry, and what is true for one group may not hold for another.

The study also relied on adults remembering their childhoods. This retrospective approach can introduce errors, as memory is not always a perfect record of the past. Additionally, the number of female participants was relatively small compared to males, which mirrors the prevalence of the disorder but limits statistical power for that subgroup.

The study also noted high rates of nicotine use among the alcohol-dependent group. Approximately 85 percent of the patients used nicotine. Since smoking can also affect brain biology, it adds another layer of complexity to the genetic analysis. The researchers attempted to control for this, but it remains a variable to consider.

Despite these caveats, the research offers a valuable perspective for clinicians. It suggests that patients who develop alcoholism early in life are likely driven by environmental trauma. Treatment for these individuals might prioritize trauma-informed therapy and psychological processing of past events. In contrast, patients who develop the disorder later in life might be grappling with a genetic vulnerability that becomes relevant as the brain ages. This could point toward different biological targets for medication or different behavioral strategies.

The authors recommend that future research should focus on replicating these findings in larger and more diverse groups. They also suggest using brain imaging technologies. Seeing how these gene variants affect the physical structure of the brain could explain why they predispose older adults to addiction.

Understanding the distinct mechanisms of early versus late-onset alcoholism is a step toward personalized medicine in psychiatry. By identifying whether a patient is fighting a genetic predisposition or the ghosts of a traumatic past, doctors may eventually be able to tailor treatments that address the root cause of the addiction.

The study, “Childhood trauma, family functioning, and the BDNF gene may affect the development of alcohol use disorder,” was authored by Yi-Wei Yeh, Catherine Shin Huey Chen, Shin-Chang Kuo, Chun-Yen Chen, Yu-Chieh Huang, Jyun-Teng Huang, You-Ping Yang, Jhih-Syuan Huang, Kuo-Hsing Ma, and San-Yuan Huang.

Most Americans experience passionate love only twice in a lifetime, study finds

Most adults in the United States experience the intense rush of passionate love only about twice throughout their lives, according to a recent large-scale survey. The study, published in the journal Interpersona, suggests that while this emotional state is a staple of human romance, it remains a relatively rare occurrence for many individuals. The findings provide a new lens through which to view the frequency of deep romantic attachment across the entire adult lifespan.

The framework for this research relies on a classic model where love consists of three parts: passion, intimacy, and commitment. Passion is described as the physical attraction and intense longing that often defines the start of a romantic connection. Amanda N. Gesselman, a researcher at the Kinsey Institute at Indiana University, led the team of scientists who conducted this work.

The research team set out to quantify how often this specific type of love happens because earlier theories suggest passion is high at the start of a relationship but fades as couples become more comfortable. As a relationship matures, it often shifts toward companionate love, which is defined by deep affection and entwined lives rather than obsessive longing. Because this intense feeling is often fleeting, it might happen several times as people move through different stages of life.

The researchers wanted to see if social factors like age, gender, or sexual orientation influenced how often someone falls in love. Some earlier studies on university students suggested that most young people fall in love at least once by the end of high school. However, very little data existed regarding how these experiences accumulate for adults as they reach middle age or later life.

To find these answers, the team analyzed data from more than 10,000 single adults in the U.S. between the ages of 18 and 99. Participants were recruited to match the general demographic makeup of the country based on census data. This large group allowed the researchers to look at a wide variety of life histories and romantic backgrounds.

Participants were asked to provide a specific number representing how many times they had ever been passionately in love during their lives. On average, the respondents reported experiencing this intense feeling 2.05 times. This number suggests that for the average person, passionate love is a rare event that happens only a handful of times across an entire lifetime.

A specific portion of the group, about 14 percent, stated they had never felt passionate love at all. About 28 percent had felt it once, while 30 percent reported two experiences. Another 17 percent had three experiences, and about 11 percent reported four or more. These figures show that while the experience is common, it is certainly not a daily or even a yearly occurrence for most.

The study also looked at how these numbers varied based on the specific characteristics of the participants. Age showed a small link to the number of experiences, meaning older adults reported slightly more instances than younger ones. This result is likely because older people have had more years and more opportunities to encounter potential partners.

The increase with age was quite small, which suggests that people do not necessarily keep falling in love at a high rate as they get older. One reason for this might be biological, as the brain systems involved in reward and excitement are often most active during late adolescence and early adulthood. As people transition into mature adulthood, their responsibilities and self-reflection might change how they perceive or pursue new romantic passion.

Gender differences were present in the data, with men reporting slightly more experiences than women. This difference was specifically found among heterosexual participants, where heterosexual men reported more instances of passionate love than heterosexual women. This finding aligns with some previous research suggesting that men may be socialized to fall in love or express those feelings earlier in a relationship.

Among gay, lesbian, and bisexual participants, the number of experiences did not differ by gender. The researchers did not find that sexual orientation on its own created any differences in how many times a person fell in love. For example, the difference between heterosexual and bisexual participants was not statistically significant.

The researchers believe these results have important applications for how people view their own romantic lives. Many people feel pressure from movies, songs, and social media to constantly chase a state of high passion. Knowing that the average person only feels this a couple of times may help people feel more normal if they are not currently in a state of intense romance.

In a clinical or counseling setting, these findings could help people who feel they are behind in their romantic development. If someone has never been passionately in love, they are part of a group that includes more than one in ten adults. Seeing this as a common variation in human experience rather than a problem can reduce feelings of shame.

The researchers also noted that people might use a process called retrospective cognitive discounting. This happens when a person looks back at their past and views old relationships through a different lens based on their current feelings. An older person might look back at a past “crush” and decide it was not true passionate love, which would lower their total count.

This type of self-reflection might help people stay resilient after a breakup. By reinterpreting a past relationship as something other than passionate love, they might remain more open to finding a new connection in the future. This mental flexibility is part of how humans navigate the ups and downs of their romantic histories.

There are some limitations to the study that should be considered. Because the researchers only surveyed single people, the results might be different if they had included people who are currently married or in long-term partnerships. People who are in stable relationships might have different ways of remembering their past experiences compared to those who are currently unattached.

The study also relied on people remembering their entire lives accurately, which can be a challenge for older participants. Future research could follow the same group of people over many years to see how their feelings change as they happen. This would remove the need for participants to rely solely on their memories of the distant past.

The participants were all located in the United States, so these findings might not apply to people in other cultures. Different societies have different rules about how people meet, how they express emotion, and what they consider to be love. A global study would be needed to see if the “twice in a lifetime” average holds true in other parts of the world.

Additionally, the survey did not provide a specific definition of passionate love for the participants. Each person might have used their own personal standard for what counts as being passionately in love. Using a more standardized definition in future studies could help ensure that everyone is answering the question in the same way.

The researchers also mentioned that they did not account for individual personality traits or attachment styles. Some people are naturally more prone to falling in love quickly, while others are more cautious or reserved. These internal traits likely play a role in how many times someone experiences passion throughout their life.

Finally, the study did not include a large enough number of people with diverse gender identities beyond the categories of men and women. Expanding the research to include more gender-diverse individuals would provide a more complete picture of the human experience. Despite these gaps, the current study provides a foundation for understanding the frequency of one of life’s most intense emotions.

The study, “Twice in a lifetime: quantifying passionate love in U.S. single adults,” was authored by Amanda N. Gesselman, Margaret Bennett-Brown, Jessica T. Campbell, Malia Piazza, Zoe Moscovici, Ellen M. Kaufman, Melissa Blundell Osorio, Olivia R. Adams, Simon Dubé, Jessica J. Hille, Lee Y. S. Weeks, and Justin R. Garcia.

Blue light exposure may counteract anxiety caused by chronic vibration

Living in a modern environment often means enduring a constant hum of background noise and physical vibration. From the rumble of heavy traffic to the oscillation of industrial machinery, these invisible stressors can gradually erode mental well-being.

A new study suggests that a specific color of light might offer a simple way to counter the anxiety caused by this chronic environmental agitation. The research indicates that blue light exposure can calm the nervous system even when the physical stress of vibration continues. These findings were published in the journal Physiology & Behavior.

Anxiety disorders are among the most common mental health challenges globally. They typically arise from a complicated mix of biological traits and social pressures. Environmental factors are playing an increasingly large role in this equation. Chronic exposure to low-frequency noise and vibration is known to disrupt the body’s hormonal balance. This disruption frequently leads to psychological symptoms such as irritability, fatigue, and persistent anxiety.

Doctors often prescribe medication to manage these conditions once a diagnosis is clear. These drugs usually work by altering the chemical signals in the brain to inhibit anxious feelings. However, pharmaceutical interventions are not always the best first step for early-stage anxiety. There is a growing demand for therapies that are accessible and carry fewer side effects. This has led scientists to investigate light therapy as a promising alternative.

Light does more than allow us to see. It also regulates our internal biological clocks and influences our mood. Specialized cells in the eyes detect light and send signals directly to the brain regions that control hormones. This pathway allows light to modulate the release of neurotransmitters associated with emotional well-being.

Despite this general knowledge, there has been little research on how specific light wavelengths might combat anxiety caused specifically by vibration. A team of researchers decided to fill this gap using zebrafish as a model organism. Zebrafish are small, tropical freshwater fish that are widely used in neuroscience. Their brain chemistry and genetic structure share many similarities with humans.

The study was led by Longfei Huo and senior author Muqing Liu from the School of Information Science and Technology at Fudan University in China. They aimed to identify if light could serve as a preventative measure against vibration-induced stress. The team designed a controlled experiment to first establish which vibrations caused the most stress. They subsequently tested whether light could reverse that stress.

The researchers began by separating the zebrafish into different groups. Each group was exposed to a specific frequency of vibration for one hour daily. The frequencies tested were 30, 50, and 100 Hertz. To ensure consistency, the acceleration of the vibration was kept constant across all groups. This phase of the experiment lasted for one week.

To measure anxiety in fish, the scientists relied on established behavioral patterns. When zebrafish are comfortable, they swim freely throughout their tank. When they are anxious, they tend to sink to the bottom. They also exhibit “thigmotaxis,” which is a tendency to hug the walls of the tank rather than exploring open water.

The team utilized a “novel tank test” to observe these behaviors. They placed the fish in a new environment and recorded how much time they spent in the lower half. The results showed that daily exposure to vibration made the fish act more anxious. The effect was strongest in the group exposed to 100 Hertz. These fish spent significantly more time at the bottom of the tank than unexposed controls.

The researchers also used a “light-dark box test.” In this setup, half the tank is illuminated and the other half is dark. Anxious fish prefer to hide in the dark. The fish exposed to 100 Hertz vibration spent much more time in the dark zones compared to the control group. This confirmed that the vibration was inducing a strong anxiety-like state.
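
Both tests boil down to zone-based scoring of tracked positions, such as the fraction of a trial a fish spends in the lower half of the tank or the dark half of the box. The sketch below illustrates that calculation; the coordinate system and sampling details are assumptions, since the study's tracking setup is not described in this summary.

```python
# Illustrative zone-based scoring behind the two behavioral tests described
# above. Coordinates and sampling rate are assumptions for the example.

def fraction_in_zone(y_positions, boundary, below=True):
    """Fraction of tracked frames spent below (or above) a boundary, e.g. the
    lower half of a novel tank or the dark half of a light-dark box."""
    if below:
        hits = sum(1 for y in y_positions if y < boundary)
    else:
        hits = sum(1 for y in y_positions if y >= boundary)
    return hits / len(y_positions)

# Example: positions sampled over a trial, tank midline at y = 0.5.
trace = [0.2, 0.3, 0.1, 0.6, 0.4, 0.2, 0.7, 0.1]
print(fraction_in_zone(trace, boundary=0.5))  # 0.75 -> mostly bottom-dwelling
```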

After establishing that 100 Hertz vibration caused the most stress, the researchers moved to the second phase of the study. They wanted to see if light color could mitigate this effect. They repeated the vibration exposure but added a light therapy component. While the fish underwent vibration, they were bathed in either red, green, blue, or white light.

The blue light used in the experiment had a wavelength of 455 nanometers. The red light was 654 nanometers, and the green was 512 nanometers. The light exposure lasted for two hours each day. The researchers then ran a comprehensive battery of behavioral tests to see if the light made a difference.

The team found that the color of the light had a profound impact on the mental state of the fish. Zebrafish exposed to the blue light showed much less anxiety than those in the other groups. In the novel tank test, the blue-light group spent less time at the bottom. They explored the upper regions of the water almost as much as fish that had never been vibrated at all.

In contrast, the red light appeared to offer no benefit. In some metrics, the red light seemed to make the anxiety slightly worse. Fish under red light spent the longest time hiding in the dark during the light-dark box test. This suggests that the calming effect is specific to the wavelength of the light and not just the brightness.

The researchers also introduced two innovative testing methods to validate their results. One was a “social interaction test.” Zebrafish are social animals and usually prefer to be near others. Stress often causes them to withdraw. The researchers placed a group of fish inside a transparent cylinder within the tank. They then measured how much time the test fish spent near this cylinder.

Fish exposed to vibration and white light avoided the group. However, the fish treated with blue light spent a large amount of time near their peers. This indicated that their social anxiety had been alleviated. The blue light restored their natural desire to interact with others.

The second new method was a “pipeline swimming test.” This involved placing the fish in a tube with a gentle current. The setup allowed the scientists to easily measure swimming distance and smoothness of movement. Stressed fish tended to swim erratically or struggle against the flow. The blue-light group swam longer distances with smoother trajectories.

To understand the biological mechanism behind these behavioral changes, the scientists analyzed the fish’s brain chemistry. They measured the levels of three key chemicals: cortisol, norepinephrine, and serotonin. Cortisol is the primary stress hormone in both fish and humans. High levels of cortisol are a hallmark of physiological stress.

The analysis revealed that vibration exposure caused a spike in cortisol and norepinephrine. This hormonal surge matched the anxious behavior observed in the tanks. However, the application of blue light blocked this increase. The fish treated with blue light had cortisol levels comparable to the unstressed control group.

Even more striking was the effect on serotonin. Serotonin is a neurotransmitter that helps regulate mood and promotes feelings of well-being. The study found that 455 nm blue light specifically boosted serotonin levels in the fish. This suggests that blue light works by simultaneously lowering stress hormones and enhancing mood-regulating chemicals.

The authors propose that the blue light activates specific cells in the retina. These cells, known as intrinsically photosensitive retinal ganglion cells, contain a pigment called melanopsin. Melanopsin is highly sensitive to blue wavelengths. When activated, these cells send calming signals to the brain’s emotional centers.

There are some limitations to this study that must be considered. The research focused heavily on specific frequencies and wavelengths. It is possible that other combinations of light and vibration could yield different results. The study also did not investigate potential interaction effects between the light and vibration in a full factorial design.

Additionally, while zebrafish are a good model, they are not humans. The neural pathways are similar, but the complexity of human anxiety involves higher-level cognitive processes. Future research will need to replicate these findings in mammals. Scientists will also need to determine the optimal intensity and duration of light exposure for therapeutic use.

The study opens up new possibilities for managing environmental stress. It suggests that modifying our lighting environments could protect against the invisible toll of noise and vibration. For those living or working in industrial areas, blue light therapy could become a simple, non-invasive tool for mental health.

The study, “Blue light exposure mitigates vibration noise-induced anxiety by enhancing serotonin levels,” was authored by Longfei Huo, Xiaojing Miao, Yi Ren, Xuran Zhang, Qiqi Fu, Jiali Yang, and Muqing Liu.

Specific brain training regimen linked to lower dementia risk in 20-year study

A specific regimen of computer-based brain exercises focused on visual processing speed may lower the long-term risk of receiving a dementia diagnosis. A new analysis of data spanning two decades suggests that older adults who engaged in this adaptive training, provided they participated in follow-up sessions, were approximately 25 percent less likely to be diagnosed with dementia compared to a control group. These results were published in the journal Alzheimer’s & Dementia: Translational Research & Clinical Interventions.

The search for effective ways to prevent or delay Alzheimer’s disease and related dementias is a primary focus of modern medical research. While physical exercise and diet are frequently cited as potential protective factors, the role of specific cognitive training remains a subject of intense debate. Many commercial products promise to sharpen the mind, yet scientific evidence supporting their ability to prevent disease has been inconsistent. To address this uncertainty, researchers revisited data from a gold-standard clinical trial to see if specific interventions had lasting effects on brain health.

The research was led by Norma B. Coe, a professor at the Perelman School of Medicine at the University of Pennsylvania. Coe and her colleagues sought to understand if the benefits of cognitive training could be detected in medical records twenty years after the training took place. They focused on whether different types of mental exercises had varying impacts on the likelihood of a patient developing dementia as they aged into their eighties and nineties.

The team utilized data from the Advanced Cognitive Training for Independent and Vital Elderly study. Known as the ACTIVE study, this large-scale project began in the late 1990s. It was designed as a randomized controlled trial, which is widely considered the most rigorous method for determining cause and effect in science. The original trial enrolled nearly 3,000 healthy adults over the age of 65 living in the community.

Participants in the ACTIVE study were randomly assigned to one of four groups. The first group received memory training. This instruction focused on teaching strategies for remembering word lists and sequences of items. The second group received reasoning training. These sessions involved identifying patterns in number series and solving problems related to daily living. The third group received speed of processing training. The fourth group served as a control and received no training.

The speed of processing intervention was distinct from the other two. It involved a computer-based task designed to improve the user’s visual attention. Participants were asked to identify an object in the center of the screen while simultaneously locating a target in the periphery. As the user improved, the program became faster and the tasks became more difficult. This made the training “adaptive,” meaning it constantly pushed the participant to the limit of their ability.
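
A common way to implement that kind of adaptivity is a staircase that speeds the task up after correct responses and slows it down after errors. The sketch below illustrates the general idea; it is not the ACTIVE program's actual schedule, which is not described in this summary.

```python
# Hedged sketch of an adaptive staircase for a speeded visual task: display
# time shrinks after correct responses and grows after errors, holding the
# task near the limit of the participant's ability.

def next_display_ms(current_ms, was_correct, step=0.9, floor=17, ceiling=500):
    """Return the display duration for the next trial."""
    if was_correct:
        current_ms *= step          # harder: show the stimuli more briefly
    else:
        current_ms /= step          # easier: give the participant more time
    return max(floor, min(ceiling, current_ms))

ms = 500.0
for correct in [True, True, True, False, True]:
    ms = next_display_ms(ms, correct)
print(round(ms))  # duration converges toward the participant's threshold
```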

The initial training period lasted for five to six weeks. Researchers offered a subset of participants “booster” sessions. These additional training blocks occurred one year and three years after the initial enrollment. The goal of these boosters was to reinforce the skills learned during the first phase.

To determine long-term outcomes, Coe and her team linked the original study data with Medicare claims records spanning from 1999 to 2019. This allowed the researchers to track the participants for up to 20 years. They looked for diagnostic codes indicating Alzheimer’s disease or other forms of dementia. By using insurance claims, the team could identify diagnoses made by doctors in real-world clinical settings, even for participants who had stopped communicating with the original study organizers.

The analysis included 2,021 of the original participants. The results revealed a specific and isolated benefit. Participants who underwent the speed of processing training and attended at least one booster session showed a reduced risk of diagnosed dementia. The hazard ratio was 0.75, indicating a 25 percent lower risk compared to the control group.
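
For readers unfamiliar with hazard ratios, the short sketch below shows how the reported 0.75 figure maps onto the "25 percent lower risk" wording; the baseline rate used is an assumption for the example, not a number from the study.

```python
# Illustrative only: translating a hazard ratio into a relative risk reduction.
# The baseline hazard below is hypothetical, not taken from the ACTIVE data.

hazard_control = 0.020   # assumed yearly rate of dementia diagnosis in the control group
hazard_ratio = 0.75      # reported for speed training plus at least one booster session

hazard_trained = hazard_ratio * hazard_control
relative_reduction = 1 - hazard_ratio

print(f"Trained-group hazard: {hazard_trained:.4f} per year")
print(f"Relative reduction:   {relative_reduction:.0%}")  # -> 25%
```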

The study did not find similar benefits for the other groups. Participants who received memory training or reasoning training did not show a statistically significant difference in dementia diagnosis rates compared to the control group. This was true even if they attended booster sessions. Additionally, individuals in the speed training group who did not attend the booster sessions showed no reduction in risk. The protective effect appeared to depend on the combination of the specific visual speed task and the reinforcement provided by the follow-up sessions.

The researchers propose several reasons why the speed training might have yielded different results than the memory or reasoning exercises. One hypothesis centers on the type of memory engaged. The memory and reasoning interventions relied on “declarative memory.” This involves learning explicit strategies and conscious techniques to solve problems. In contrast, the speed training engaged “procedural memory.” This type of learning becomes automatic and unconscious through repetition, similar to riding a bike.

Another key difference was the adaptive nature of the speed task. The computer program adjusted the difficulty in real-time. This ensured that participants were always challenged, potentially stimulating the brain more effectively than the static strategies taught in the other groups. The authors suggest that this intense, adaptive engagement of the brain’s processing systems might facilitate neuroplasticity, or the brain’s ability to rewire itself.

The findings align with previous, shorter-term analyses of the ACTIVE study, which had hinted at cognitive benefits for the speed training group. However, this is the first analysis to use Medicare claims to confirm a reduction in diagnosed disease over such an extended timeframe.

“This work conveys a clear message but also leads us to ask many new questions. We are keen to dig deeper to understand the underlying mechanisms at play here, but ultimately this is a great problem to have,” said Marilyn Albert, the corresponding study author and director of the Johns Hopkins Alzheimer’s Disease Research Center at the Johns Hopkins School of Medicine.

There are limitations to the study that provide context for the results. The analysis relied on administrative billing codes rather than direct neurological examinations of every participant. This means a diagnosis would only be recorded if a participant visited a doctor and the doctor coded the visit correctly. It is possible that some participants developed dementia but were never formally diagnosed.

The study also excluded participants who were enrolled in Medicare Advantage plans because complete claims data were not available for them. If the population in Medicare Advantage plans differs in health or socioeconomic status from those in traditional Medicare, it could influence the generalizability of the findings. Additionally, the researchers noted that individuals with higher education levels or better access to healthcare are often more likely to receive a dementia diagnosis, which could introduce bias into the claims data.

Despite these caveats, the results offer a potential avenue for preventative intervention. “The findings reported here suggest that moderate cognitive training could delay the onset of dementia over subsequent years,” said Richard Hodes, director of the National Institute on Aging, in a press release. “There is still more research to be done to determine how this works, but this promising lead may move the field further into developing effective interventions to delay or prevent onset of dementia.”

Future research will likely focus on isolating the specific mechanisms that made the speed training effective. Scientists need to understand if the benefit comes from the visual aspect of the task, the speed component, or the adaptive difficulty. Understanding why the memory and reasoning strategies failed to prevent disease diagnosis is equally important for designing future public health programs.

The study also raises questions about the optimal “dose” of training. Since the benefit was only seen in those who received booster sessions, it suggests that brain training may be like physical exercise: it requires maintenance to remain effective.

“This study shows that simple brain training, done for just weeks, may help people stay mentally healthy for years longer,” said Jay Bhattacharya, the director of the National Institutes of Health. “That’s a powerful idea — that practical, affordable tools could help delay dementia and help older adults keep their independence and quality of life.”

The study, “Impact of cognitive training on claims-based diagnosed dementia over 20 years: evidence from the ACTIVE study,” was authored by Norma B. Coe, Katherine E. M. Miller, Chuxuan Sun, Elizabeth Taggert, Alden L. Gross, Richard N. Jones, Cynthia Felix, Marilyn S. Albert, George W. Rebok, Michael Marsiske, Karlene K. Ball, and Sherry L. Willis.

Hippocampal neurons shift their activity backward in time to anticipate rewards

Recent experimental findings suggest that the hippocampus, the brain region primarily associated with memory and navigation, actively reorganizes its neural patterns to anticipate future events. Researchers observed that as mice learned to navigate a complex task, the neural signals associated with a reward shifted backward in time to predict the outcome before it happened. These results were published in the journal Nature.

The hippocampus is a seahorse-shaped structure located deep within the temporal lobes of the brain. Neuroscientists have recognized for decades that this region is essential for forming new memories. It is also responsible for creating a cognitive map. This internal representation allows an organism to visualize its environment and navigate through space.

Biologists have traditionally viewed the cognitive map as a relatively static record of the environment. Under this view, the hippocampus encodes features such as landmarks, borders, and the location of resources. However, survival requires more than just a record of the past. An animal must use its prior experiences to predict where food or safety will be located in the future.

This necessity leads to the theory of predictive coding. This theory suggests that the brain is constantly generating models of the world to estimate future outcomes. When an outcome matches the prediction, the brain learns that its model is correct. When an outcome is unexpected, the brain must update the model.

While this theory is widely accepted in computational neuroscience, observing the physical reorganization of cells in the hippocampus over long periods has been a technical challenge. Most neural recording technologies can only track brain activity for short durations. This limitation makes it difficult to see how internal maps evolve as learning consolidates over weeks.

Mohammad Yaghoubi, a researcher at McGill University, aimed to bridge this gap. Working with senior author Mark Brandon at the Douglas Research Centre, Yaghoubi designed an experiment to track specific neurons across an extended timeframe. They sought to determine if the hippocampal map restructures itself to prioritize the prediction of rewards.

The research team employed a sophisticated imaging technique known as calcium imaging. They injected a modified virus into the brains of mice. This virus caused neurons to express a fluorescent protein that glows when calcium enters the cell, which happens when a neuron fires.

The researchers then implanted a gradient refractive index lens, a tiny microscope component, above the hippocampus. This setup allowed them to attach a miniature camera, weighing only a few grams, to the head of the mouse. The camera recorded the fluorescence of hundreds of individual neurons while the animal moved freely.

Because this method relies on optical imaging rather than physical electrodes, it is less invasive to the tissue over time. This stability allowed Yaghoubi and his colleagues to identify and monitor the exact same neurons day after day for several weeks. They could then correlate specific cellular activity with the animal’s behavior during learning.

The mice were trained to perform a task known as “delayed nonmatching-to-location” inside an automated chamber. The apparatus featured a touch-sensitive screen at one end and a reward dispenser at the other. The task required the mouse to initiate a trial and then observe a sample location lighting up on the screen.

After a short delay, the screen displayed the original location alongside a new, novel location. To receive a reward, the mouse had to ignore the familiar spot and touch the new location. The reward was a small amount of strawberry milkshake delivered at the opposite end of the chamber. This task is cognitively demanding because it requires the animal to hold information in working memory and apply a specific rule.

At the beginning of the training, the researchers noted that a distinct population of hippocampal neurons fired vigorously when the mouse received the milkshake. These cells appeared to be tuned specifically to the experience of consuming the reward. The neural map at this stage was heavily focused on the outcome itself.

As the mice repeated the task over weeks and their performance improved, the neural patterns began to change. The researchers observed a phenomenon described as backpropagation of neural tuning. The cells that originally fired only upon receiving the reward began to fire earlier in the sequence of events.

“What we found was surprising,” said Brandon. “Neural activity that initially peaked at the reward gradually shifted to earlier moments, eventually appearing before mice reached the reward.”

By the time the mice had mastered the task, these specific neurons were firing while the animal was still approaching the reward port. In some instances, the firing shifted all the way back to the moment the mouse made the correct choice on the touchscreen. The cells had transformed from sensors of the present reward into predictors of the future reward.

The study also analyzed the activity of the neuronal population as a whole. In the early stages of learning, a large percentage of the recorded cells were dedicated to encoding the reward location. This resulted in an over-representation of the reward site in the mouse’s mental map.

As the weeks passed, the proportion of neurons tuned to the reward itself decreased. Simultaneously, the number of neurons encoding the approach and the choice period increased. The brain appeared to be efficient. Once the reward was predictable, fewer resources were needed to represent it. The cognitive effort shifted toward the actions required to obtain it.

This reorganization supports the idea that the hippocampus acts as a predictive device. The backward shift in timing allows the brain to signal an upcoming event based on the current context. This predictive signal likely helps guide the animal’s behavior, reinforcing the actions that lead to a positive outcome.
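
One way to build intuition for this backward shift is a textbook temporal-difference learning toy, sketched below. It is a generic illustration under simplified assumptions, not the model used by the researchers: with repeated trials, the value signal associated with a reward at the end of a path gradually propagates to earlier positions, echoing the earlier-and-earlier firing described above.

```python
# Generic temporal-difference (TD) learning toy, included only to illustrate
# how a reward-predictive signal can migrate to earlier steps with repeated
# experience. This is a textbook-style sketch, not the paper's model.

import numpy as np

n_steps = 10                     # positions between trial start and reward port
alpha, gamma = 0.1, 0.95         # learning rate and discount factor
values = np.zeros(n_steps + 1)   # value estimate at each position (last entry is terminal)

for episode in range(200):
    for pos in range(n_steps):
        reward = 1.0 if pos == n_steps - 1 else 0.0   # reward only at the final position
        # TD(0) update: nudge the estimate toward reward + discounted next value
        values[pos] += alpha * (reward + gamma * values[pos + 1] - values[pos])
    if episode in (0, 19, 199):
        print(f"episode {episode + 1:3d}:", np.round(values[:n_steps], 2))

# Early on, only the position next to the reward carries a strong signal;
# after many episodes, earlier positions also come to predict the outcome.
```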

The researchers confirmed that this shift was not due to simple changes in the animal’s speed or position. They used statistical controls to ensure that the change in firing timing was a true remapping of the cognitive representation. The consistency of the findings across multiple animals suggests a fundamental biological mechanism.

“The hippocampus is often described as the brain’s internal model of the world,” said Brandon. “What we are seeing is that this model is not static; it is updated day by day as the brain learns from prediction errors. As outcomes become expected, hippocampal neurons start to respond earlier as they learn what will happen next.”

There are limitations to the study that warrant mention. The research was conducted on mice, and while the hippocampus is evolutionarily conserved, human cognition involves additional layers of complexity. Further research is necessary to confirm if identical cellular mechanisms drive predictive learning in the human brain.

Additionally, the study focused on a reward-based task. It remains to be seen if the hippocampus utilizes the same predictive backpropagation for negative or aversive outcomes. Future experiments will likely investigate whether the brain rewires itself similarly to predict threats or punishments.

The findings may have implications for understanding neurodegenerative disorders. Individuals with Alzheimer’s disease often exhibit disorientation and difficulty learning from new experiences. If the predictive coding mechanism in the hippocampus is disrupted, it could explain why patients struggle to anticipate consequences or navigate familiar environments.

By demonstrating that memory circuits are dynamic and predictive, this study offers a new perspective on how the brain interacts with time. The hippocampus does not merely archive the past. It actively reconstructs it to prepare for the future.

The study, “Predictive Coding of Reward in the Hippocampus,” was authored by Mohammad Yaghoubi, Andres Nieto-Posadas, Coralie-Anne Mosser, Thomas Gisiger, Émmanuel Wilson, Sylvain Williams, and Mark P. Brandon.

Staying off social media isn’t always a sign of a healthy social life

New research suggests that the way adolescents use social media is not a uniform experience but rather splits into distinct personality-driven profiles that yield varying social results. The findings indicate that digital platforms largely reinforce existing friendships rather than helping isolated youth build new connections. These results were published in the journal Computers in Human Behavior.

For years, psychologists have debated whether apps like Instagram, TikTok, or Snapchat help or harm adolescent development. Some theories propose that these platforms simulate meaningful connection and allow young people to practice social skills. Other perspectives argue that digital interactions replace face-to-face communication with superficial scrolling, leading to isolation.

However, most previous inquiries looked at average behaviors across large groups or focused on simple metrics like screen time. This approach often misses the nuance of individual habits. Real-world usage is rarely just about logging on or logging off. It involves a mix of browsing, posting, liking, and chatting.

Federica Angelini, the lead author from the Department of Developmental and Social Psychology at the University of Padova in Italy, worked with colleagues to move beyond these binary categories. They wanted to understand how specific combinations of online behaviors cluster together. They also sought to determine if a teenager’s underlying social motivations drive these habits.

The research team recognized that early adolescence is a formative period for social and emotional growth. During these years, close relationships with peers become central to a young person’s identity. Because these interactions now occur simultaneously in physical and digital spaces, the authors argued that science needs better models to capture this complexity.

To achieve this, the team tracked 1,211 Dutch students between the ages of 10 and 15 over the course of three years. They used surveys to measure how often students looked at content, posted about themselves, interacted with others, and shared personal feelings. The researchers also assessed the students’ psychological motivations, such as the fear of missing out or a desire for popularity.

Using a statistical technique called latent profile analysis, the investigators identified four distinct types of users. The largest group, comprising about 54 percent of the participants, was labeled “All-round users.” These teens engaged in a moderate amount of all activities, from scrolling to posting.

The study found that All-round users generally maintained moderate-to-high quality friendships throughout the three-year period. Their digital habits appeared to be an extension of a healthy offline social life. They used these platforms to keep in touch and share experiences with friends they already saw in person.

The second largest group, making up roughly 30 percent, was identified as “Low users.” These individuals rarely engaged with social media in any form, whether passive scrolling or active posting. While it might seem beneficial to be less dependent on screens, the data showed a different story for this specific group.

These Low users reported lower quality friendships at the start of the research compared to their peers. Their lack of online engagement appeared to mirror a lack of connection in the real world. Without a strong peer group to interact with, they had little motivation to log on. The data suggests they were not simply opting out of technology but were missing out on the social reinforcement that happens online.

A smaller group, about 8 percent, was termed “High self-disclosing users.” These adolescents frequently used digital platforms to share personal feelings, secrets, and emotional updates. They tended to prefer online communication over face-to-face talk.

This group scored higher on measures of anxiety and depression. The researchers suggest these teens might use the internet to compensate for difficulties in offline social situations. The reduced pressure of online chat, which lacks nonverbal cues like eye contact, may make it easier for them to open up. Despite their emotional struggles, this group maintained high-quality friendships, suggesting their vulnerability online helped sustain their bonds.

The final group, labeled “High self-oriented users,” made up roughly 7 percent of the sample. These teens focused heavily on posting content about themselves but showed less interest in what peers were doing. They were driven by a desire for status and attention.

Unlike the other groups, High self-oriented users were less concerned with the fear of missing out. Their primary goal appeared to be self-promotion rather than connection. Notably, this was the only group that saw a decline in the quality of their close friendships over the three years. Their focus on gaining an audience rather than engaging in reciprocal friendship likely failed to deepen their personal relationships.
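
To give a rough sense of how profile analyses like this one identify such groups, the sketch below approximates latent profile analysis with a closely related Gaussian mixture model fitted to simulated scores. The variable names and data are hypothetical stand-ins, not the study's survey measures.

```python
# Rough sketch of the profile-finding idea using a Gaussian mixture model,
# a close relative of latent profile analysis. The data are simulated and
# unstructured; real survey data with distinct clusters would pull the BIC
# toward more than one profile.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# hypothetical columns: browsing, self-posting, interacting, self-disclosure (z-scores)
usage = rng.normal(size=(1211, 4))

# Fit candidate models with 1 to 6 profiles and keep the one with the lowest BIC
fits = [GaussianMixture(n_components=k, random_state=0).fit(usage) for k in range(1, 7)]
best = min(fits, key=lambda m: m.bic(usage))

print("profiles selected:", best.n_components)
print("profile sizes:", np.bincount(best.predict(usage)))
```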

The analysis revealed that social media generally acts as an amplifier of offline social dynamics. Teens with strong existing friendships used the platforms to maintain those bonds. Those with weaker connections did not seem to benefit from the technology.

This supports the idea that the benefits of social media rely heavily on pre-existing relationships. Adolescents who struggle socially in person may find it difficult to use these tools to build meaningful relationships from scratch. Instead of bridging the gap, the technology might leave them further behind.

The study also highlighted the role of motivation. Teens who used social media to seek status were more likely to fall into the self-oriented or self-disclosing categories. Those who simply wanted to stay in the loop tended to be All-round users.

There are limitations to consider regarding this research. The data relied on self-reported surveys, which can sometimes be inaccurate as people may not remember their habits perfectly. Additionally, the study was conducted in the Netherlands, so the results might not apply universally to adolescents in other cultural contexts.

The researchers noted that some participants dropped out of the study over the three years, which is common in longitudinal work. The study also did not strictly differentiate between friends met online versus friends met offline, though most participants indicated they communicated with people they knew in real life.

Future research could benefit from using objective measures, such as tracking app usage data directly from smartphones. It would also be beneficial to investigate how these profiles evolve as teens move into young adulthood. Understanding these patterns could help parents and educators tailor their advice, rather than giving generic warnings about screen time.

The study, “Adolescent social media use profiles: A longitudinal study of friendship quality and socio-motivational factors,” was authored by Federica Angelini, Ina M. Koning, Gianluca Gini, Claudia Marino, and Regina J.J.M. van den Eijnden.

Moderate coffee and tea consumption linked to lower risk of dementia

A new analysis of long-term dietary habits suggests that your daily cup of coffee or tea might do more than just provide a morning jolt. Researchers have determined that moderate consumption of caffeinated beverages is linked to a lower risk of dementia and better physical brain function over time. These results were published in the journal JAMA.

Dementia and Alzheimer’s disease represent a growing health challenge as the global population ages. Current medical treatments offer limited benefits once symptoms appear, and they cannot reverse the condition. This reality has prompted medical experts to look for lifestyle habits that might delay the onset of cognitive decline. Diet is a primary area of focus because it is a factor that individuals can control in their daily lives.

Coffee and tea are of particular interest to nutritional scientists. These beverages contain chemical compounds that may protect brain cells from damage. These include caffeine and polyphenols, which are plant-based micronutrients with antioxidant properties.

Prior attempts to measure this potential benefit have yielded mixed results. Some earlier inquiries relied on participants remembering their dietary habits from the distant past. Others checked in with participants only once, failing to capture how habits change over a lifetime. To address these limitations, a team led by Yu Zhang and Daniel Wang from the Harvard T.H. Chan School of Public Health and Mass General Brigham undertook a more expansive approach.

The investigators analyzed data from two massive, long-running groups of medical professionals. The study included over 130,000 female nurses and male health professionals who provided updates on their health and diet for up to forty-three years. Unlike smaller snapshots of time, this project tracked dietary habits repeatedly. Participants filled out detailed questionnaires about what they ate and drank every two to four years.

This distinct method allowed the researchers to reduce errors associated with memory. It also helped them calculate a cumulative average of caffeine intake over decades. The team looked for associations between these drinking habits and three specific outcomes: the clinical diagnosis of dementia, self-reported memory problems, and performance on objective cognitive tests.

The data revealed a distinct pattern regarding the consumption of caffeinated beverages. Individuals who drank caffeinated coffee had a lower chance of developing dementia compared to those who avoided it. The relationship followed a specific curve rather than a straight line.

The greatest reduction in risk appeared among people who drank approximately two to three cups of caffeinated coffee per day. Consuming more than this amount did not result in additional benefits, but it also did not appear to cause harm. This finding contradicts some earlier fears that high caffeine intake might be detrimental to the aging brain.
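
The sketch below illustrates, on fabricated numbers, why a curved fit captures this kind of flattening pattern better than a straight line; it is not the cohort analysis itself.

```python
# Illustration of fitting a curved dose-response relationship rather than a
# straight line. The values are fabricated for the example; the real analysis
# used far more sophisticated models on repeated cohort measurements.

import numpy as np

cups = np.array([0, 1, 2, 3, 4, 5, 6], dtype=float)                    # cups of coffee per day
relative_risk = np.array([1.00, 0.90, 0.82, 0.81, 0.83, 0.84, 0.84])   # made-up values

linear = np.polyfit(cups, relative_risk, deg=1)
quadratic = np.polyfit(cups, relative_risk, deg=2)

print("linear fit coefficients:   ", np.round(linear, 3))
print("quadratic fit coefficients:", np.round(quadratic, 3))
# A quadratic (or spline) term captures the flattening beyond two to three
# cups that a straight line cannot represent.
```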

Tea drinkers saw similar benefits. Consuming one to two cups of tea daily was linked to a lower likelihood of dementia diagnosis. In contrast, the researchers found no statistically significant association among those who drank decaffeinated coffee. This distinction suggests that caffeine itself may play a central role in the observed neuroprotection.

The study also looked at how well participants could think and remember as they aged. In a subset of the participants who underwent telephone-based testing, higher caffeinated coffee intake tracked with better scores on performance tasks. These tests measured verbal memory, attention, and executive function.

The difference in scores was roughly equivalent to being several months younger in terms of brain aging. Even among people who carried genes that usually increase the risk of Alzheimer’s, the link between caffeine and better brain health remained consistent. The researchers also assessed “subjective cognitive decline.” This is a stage where individuals feel they are having memory slips before a doctor can detect them. Higher caffeine intake was associated with fewer reports of these subjective problems.

These results add weight to a growing body of evidence linking caffeine to neurological health, though the findings do not align perfectly with every previous study. Recent analyses of the UK Biobank database, for instance, also found that coffee drinkers had a lower risk of neurodegenerative conditions, but that research highlighted unsweetened coffee as the most beneficial.

The UK Biobank findings differed slightly regarding decaffeinated coffee. While the Harvard team found no link between decaf and dementia risk, the UK study suggested decaf might still offer some protection. This discrepancy implies that other compounds in coffee besides caffeine might play a role, or that different populations metabolize these beverages differently.

Other research utilizing brain imaging has offered clues about why this might happen. A study from the Australian Imaging, Biomarkers and Lifestyle study of aging found that higher coffee consumption was associated with a slower buildup of amyloid proteins in the brain. These proteins are the sticky clumps associated with Alzheimer’s disease.

The new Harvard study aligns with the theory that caffeine helps maintain neural networks. It supports the idea that moderate stimulation of the brain’s chemical receptors might reduce inflammation. Caffeine blocks specific receptors in the brain known as adenosine receptors. When these receptors are blocked, it affects the release of neurotransmitters and may reduce the stress on brain cells.

Researchers have also observed in animal models that caffeine can suppress the enzymes that create amyloid plaques. It appears to enhance the function of mitochondria, which are the power plants of the cell. By improving how brain cells use energy, caffeine might help them survive longer in the face of aging.

Additional context comes from the National Health and Nutrition Examination Survey in the United States. That separate analysis found that older adults who consumed more caffeine performed better on tests of processing speed and attention. The consistency of these findings across different populations strengthens the argument that caffeine has a measurable effect on cognition.

Despite the large sample size of the new Harvard analysis, the study has limitations inherent to observational research. It demonstrates an association but cannot definitively prove that coffee causes the reduction in dementia cases. It is possible that people who start to experience subtle cognitive decline naturally stop drinking coffee before they are diagnosed. This phenomenon is often called reverse causation.

The researchers attempted to account for this by conducting sensitivity analyses. They looked at the data in ways that excluded the years immediately preceding a diagnosis. The protective link remained, suggesting that reverse causation does not fully explain the results.

The participants in this study were primarily white medical professionals. This fact means the results might not apply perfectly to the general population or to other racial and ethnic groups. Additionally, the questionnaires did not distinguish between different preparation methods. The study could not separate the effects of espresso versus drip coffee, or green tea versus black tea.

Unmeasured factors could also be at play. Coffee drinkers might share other lifestyle habits that protect the brain, such as higher levels of social activity or different dietary patterns. The researchers used statistical models to adjust for smoking, exercise, and overall diet quality. However, observational studies can never fully eliminate the possibility of residual confounding variables.

Future science needs to clarify the biological mechanisms at play. Researchers must determine if caffeine is acting alone or in concert with other antioxidants found in these plants. Clinical trials that assign specific amounts of caffeine to participants could help confirm these observational findings.

The senior author of the study, Daniel Wang, noted the perspective needed when interpreting these results. “While our results are encouraging, it’s important to remember that the effect size is small and there are lots of important ways to protect cognitive function as we age,” Wang said. “Our study suggests that caffeinated coffee or tea consumption can be one piece of that puzzle.”

For now, the data suggests that a moderate coffee or tea habit is a generally healthy choice for the aging brain. It appears that consumption of about two to three cups of coffee or one to two cups of tea provides the maximum potential benefit. This study provides reassurance that this common daily ritual does not harm cognitive function and may help preserve it.

The study, “Coffee and Tea Intake, Dementia Risk, and Cognitive Function,” was authored by Yu Zhang, Yuxi Liu, Yanping Li, Yuhan Li, Xiao Gu, Jae H. Kang, A. Heather Eliassen, Molin Wang, Eric B. Rimm, Walter C. Willett, Frank B. Hu, Meir J. Stampfer, and Dong D. Wang.

New research connects the size of the beauty market to male parenting effort

New research suggests that the size of a country’s cosmetics industry may be directly linked to how much fathers contribute to childcare and the level of economic inequality within that society. The findings propose that in cultures where men are active parents or where the gap between the rich and poor is wide, women are more likely to invest in their appearance to compete for partners. These results were published in the journal Evolution and Human Behavior.

Charles Darwin originally proposed the theory of sexual selection to explain why males of many species possess exaggerated physical traits. He observed that peafowl are sexually dimorphic, meaning the males and females look different. The peacock displays a massive, colorful tail to attract a mate, while the peahen remains relatively plain.

This dynamic typically arises from the biological costs of reproduction. In most species, females expend more biological energy through the production of eggs, gestation, and lactation. Because their investment in each offspring is higher, females tend to be the choosier sex. Males must consequently compete with one another to be selected.

Humans, however, do not always fit neatly into this classical model. Human females often utilize conspicuous traits or cultural enhancements, such as makeup, to increase their attractiveness. Jun-Hong Kim, a researcher at the Pohang University of Science and Technology in the Republic of Korea, sought to explain this exception.

Kim aimed to determine if human mating follows a “revised” sexual selection theory. This framework suggests that the direction of mate choice depends on which partner contributes more resources to the relationship. If males provide substantial care and support, they become a limited and sought-after resource.

When men invest heavily in parenting, the cost of reproduction becomes high for them as well. The theory predicts that under these conditions, men will become more discriminating in their choice of partner. Consequently, women may compete for these high-investment males by enhancing their physical appearance.

The researcher also considered the role of economic environment. In societies with high economic inequality, a partner with resources can provide a substantial advantage in survival and reproductive success. This suggests that financial stratification might also intensify female competition for high-status mates.

To test these hypotheses, Kim conducted a cross-cultural analysis involving data from up to 55 countries. The study used the total financial size of the cosmetics industry in each nation as a proxy for female ornamentation and male choice. This data was sourced from Euromonitor, excluding baby products and men’s grooming items.

The researcher needed a way to measure how much fathers contribute to family life across different cultures. Kim utilized data from the OECD regarding the ratio of unpaid domestic work and childcare hours performed by women versus men. A lower ratio indicates that men are doing a larger share of the domestic work.

Economic inequality was measured using income inequality data from the CIA and a social mobility index from the World Economic Forum. These metrics helped determine how difficult it is to move between economic classes. The study also controlled for factors like urbanization and Gross Domestic Product per capita.

Kim’s analysis revealed a strong association between paternal effort and the beauty market. In countries where men performed a higher proportion of childcare and domestic labor, per capita spending on cosmetics was higher. This supports the idea that when men are active caregivers, they become “prizes” that warrant increased mating effort from women.

The study quantified this relationship with specific monetary figures. The data indicated that for every hour increase in paternal investment relative to maternal investment, per capita spending on cosmetics rose by roughly $2.17. This trend held true even when accounting for the general wealth of the nation.
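
To show how a coefficient of that kind is read, the sketch below runs a toy cross-country regression on invented data while holding a GDP-style control constant; none of the numbers or variable names come from the study itself.

```python
# Toy cross-country regression to show how a coefficient like "$2.17 per
# additional hour of relative paternal investment" is interpreted. Data and
# variable names are invented; this is not the study's dataset or model.

import numpy as np

rng = np.random.default_rng(1)
n = 55
gdp_per_capita = rng.uniform(5, 60, n)      # thousands of USD, hypothetical
paternal_hours = rng.uniform(0.5, 3.0, n)   # paternal hours relative to maternal hours
cosmetics_spend = 10 + 2.17 * paternal_hours + 0.4 * gdp_per_capita + rng.normal(0, 2, n)

# Ordinary least squares with an intercept and a GDP control
X = np.column_stack([np.ones(n), paternal_hours, gdp_per_capita])
coef, *_ = np.linalg.lstsq(X, cosmetics_spend, rcond=None)

print("intercept, paternal-hours effect, GDP effect:", np.round(coef, 2))
# The paternal-hours coefficient is the predicted change in per capita
# cosmetics spending for a one-hour increase, holding GDP constant.
```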

Economic disparity also emerged as a strong predictor of beauty spending. The analysis showed that as income inequality and social mobility scores increased, so did the size of the cosmetics industry. This suggests that in stratified societies, women may invest more in their appearance to attract partners who can offer financial security.

The study posits that this behavior is a form of mutual mate choice. Unlike many mammals where one sex is clearly the chooser and the other is the competitor, humans appear to engage in a bidirectional assessment. Men evaluate potential partners based on cues of fitness and fertility, which cosmetics can highlight.

Kim also tested other variables that frequently appear in evolutionary psychology literature. One such variable was the operational sex ratio, which compares the number of marriageable men to women. Previous theories suggested that a surplus of women would lead to higher competition and beauty spending.

However, the results for sex ratio were not statistically significant in this model. The density of the population also failed to predict variations in cosmetics use. The primary drivers remained paternal investment and economic stratification.

The researcher checked for geographic clustering to ensure the results were not simply due to neighboring countries acting similarly. Visualizing the data on maps showed no distinct regional patterns that would skew the statistics. This suggests the link between parenting, economics, and cosmetics is not merely a byproduct of shared regional culture.

There are limitations to this type of cross-cultural research. The study relies on observational data, which can identify associations but cannot definitively prove causation. It is possible that other unmeasured cultural factors influence both how men parent and how women spend money.

The measurement of paternal investment was also restricted by data availability. Because the study relied on OECD time-use surveys, the analysis regarding childcare was limited to developed nations. This reduces the ability to generalize the findings to non-industrialized or developing societies.

Kim also notes that unpaid work hours are an imperfect proxy for total paternal investment. This metric does not capture the quality of care or the emotional support provided by fathers. It focuses strictly on the time spent on domestic tasks.

Future research could address these gaps by using more direct measures of parenting effort. Kim suggests that standardized surveys across a wider range of cultures could provide granular detail on how fathers contribute. This would allow for a more robust test of the revised sexual selection theory.

The study provides a new lens through which to view the multi-billion dollar beauty industry. Rather than seeing cosmetics solely as a product of modern marketing, the research frames them as tools in an ancient biological strategy. It highlights how economic structures and family dynamics shape human behavior.

This perspective challenges the stereotype that sexual selection is always male-driven. It underscores that in humans, the high cost of raising children makes distinct demands on both parents. When men step up as caregivers, the dynamics of attraction and competition appear to shift in measurable ways.

The study, “Paternal investment and economic inequality predict cross-cultural variation in male choice,” was authored by Jun-Hong Kim.

Unexpected study results complicate the use of brain stimulation for anxiety

A new study suggests that a promising noninvasive brain stimulation technique may not function exactly as psychiatrists had hoped for patients with combined depression and anxiety. Researchers found that while electrical stimulation of the brain’s frontal cortex improved mental focus and reaction times, it also unexpectedly heightened sensitivity to potential threats.

These findings indicate that the treatment might wake up the brain’s alertness systems rather than simply calming down fear responses. The results were published in the journal Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.

Major depressive disorder is one of the world’s most persistent public health burdens. It becomes even harder to treat when accompanied by anxiety. This combination is common. Patients with both conditions often experience more severe symptoms and are less likely to respond to standard antidepressants or talk therapy. This resistance to treatment has led scientists to look for biological causes within the brain’s circuitry.

Neuroscientists have identified specific patterns of brain activity in people with anxious depression. Typically, the prefrontal cortex shows lower than average activity. This area sits just behind the forehead. It is responsible for planning, decision-making, and regulating emotions. At the same time, the amygdala often shows hyperactivity. The amygdala is a deep brain structure that acts as the body’s alarm system for danger. In a healthy brain, the prefrontal cortex helps quiet the amygdala when a threat is not real. In anxious depression, this regulatory system often fails.

Researchers have been exploring transcranial direct current stimulation as a way to correct this imbalance. This technique involves placing electrodes on the scalp to deliver a weak electrical current. The goal is to encourage neurons in the prefrontal cortex to fire more readily. Theoretically, boosting the “thinking” part of the brain should help it exert better control over the “feeling” alarm system.

A team led by Tate Poplin and senior author Maria Ironside at the Laureate Institute for Brain Research in Tulsa, Oklahoma, sought to test this theory in a large clinical sample. They recruited 101 adults who were currently experiencing a major depressive episode and high levels of anxiety. The researchers wanted to see if a single session of stimulation could alter the way these patients processed threats.

The study was designed as a double-blind, randomized trial. This is the gold standard for clinical research. The participants were divided into two groups. One group received thirty minutes of active stimulation to the dorsolateral prefrontal cortex. The other group received a sham, or placebo, stimulation. The sham version mimicked the physical sensations of the device but did not deliver the therapeutic current. This ensured that neither the patients nor the staff knew who was receiving the real treatment.

The researchers administered the stimulation while the participants lay inside a magnetic resonance imaging scanner. This allowed the team to observe changes in blood flow within the brain in real time. During the scan, the participants completed a cognitive task. They viewed pictures of faces with fearful or neutral expressions. Letters were superimposed over the faces. The participants had to identify the letters.

This task was designed to measure “attentional load.” Some rounds were easy and required little mental effort. Other rounds were difficult and demanded intense focus. This design allowed the researchers to see how the brain prioritized information. They wanted to know if the stimulation would help the brain ignore the fearful faces and focus on the letters.

After the brain scans, the participants underwent a physical test of their anxiety levels. This involved measuring the startle reflex. The researchers placed sensors on the participants’ faces to detect eye blinks. The participants then listened to bursts of white noise. Sometimes the noise signaled a predictable electric shock. Other times, the shock was unpredictable.

This distinction is important in psychology. Reacting to a known danger is considered fear. Reacting to an unknown or unpredictable threat is considered anxiety. By measuring how hard the participants blinked in anticipation of the shock, the researchers could physically quantify their threat sensitivity.

The findings painted a complex picture of how the stimulation affected the brain. On one hand, the treatment appeared to improve cognitive performance. The group that received active stimulation was more accurate at identifying the letters than the placebo group. They also reacted faster.

The brain scans supported this behavioral improvement. When the task was difficult, the active group showed increased activity in the inferior frontal gyrus and the parietal cortex. These regions are heavily involved in attention and executive control. This suggests the stimulation successfully engaged the brain’s command centers.

However, the results regarding emotional regulation contradicted the team’s original predictions. The researchers hypothesized that the stimulation would reduce the amygdala’s reaction to the fearful faces. Instead, the opposite occurred during the easy version of the task. The amygdala showed greater activation in the active group compared to the placebo group.

The startle test revealed a similar pattern. The researchers found that active stimulation did not calm the participants’ physical reflexes. In fact, it made them jumpier. The active group showed a stronger startle response during the unpredictable threat condition. They also reported feeling higher levels of anxiety during these moments of uncertainty.

Ironside noted the dual nature of these results. “Compared to the sham stimulation, frontal tDCS increased the activation of the bilateral inferior frontal gyrus… when the task was more cognitively demanding and, unexpectedly, increased amygdala… response when the task was less cognitively demanding,” she said.

Ironside also highlighted the physical findings. “We also observed that tDCS increased eyeblink startle response under conditions of unpredictable threat.”

These results suggest that transcranial direct current stimulation does not act as a simple tranquilizer for the brain. Instead, it may function as a general amplifier of arousal and engagement. By boosting the excitability of the frontal cortex, the treatment might make the brain more alert to everything. This includes both the task at hand and potential threats in the environment.

The increase in startle response might reflect a state of heightened vigilance. When the brain is more engaged, it may process all incoming signals more intensely. This interpretation aligns with the improved reaction times on the cognitive task. The participants were “sharper,” but this sharpness came with a cost of increased sensitivity to anxiety-provoking stimuli.

There are several important caveats to consider regarding this study. First, the participants only received a single session of stimulation. Clinical treatments for depression typically involve daily sessions over several weeks. It is possible that the cumulative effect of repeated stimulation is different from the acute effect of a single dose. Long-term changes in brain plasticity might take time to develop.

Second, the environment may have influenced the results. Undergoing a brain scan can be stressful. The MRI machine is loud and confining. For people who already suffer from high anxiety, this environment might have heightened their baseline stress levels. Receiving electrical stimulation in such a high-stress context could have interacted with their anxiety in unique ways.

The researchers also noted that the demographics of the study leaned heavily toward women. While this reflects the higher prevalence of depression and anxiety in women, it means the results might not fully generalize to men.

Despite the unexpected increase in threat sensitivity, the authors believe the findings offer a path forward. The clear improvement in task engagement and frontal brain activity is a positive signal. It suggests that the stimulation is effectively reaching the target brain regions and altering their function.

The failure to reduce anxiety might be due to the passive nature of the treatment. In this study, participants received stimulation while resting or doing a simple task. The researchers suggest that future trials should explore “context-dependent” stimulation.

This approach would involve pairing the brain stimulation with active therapy. For example, if a patient is undergoing exposure therapy to face their fears, the stimulation might help them engage more fully with the therapeutic exercises. If the stimulation boosts the brain’s ability to focus and learn, it could act as a catalyst for psychological interventions.

The study, “Frontal Cortex Stimulation Modulates Attentional Circuits and Increases Anxiety-Potentiated Startle in Anxious Depression,” was authored by Tate Poplin, Rayus Kuplicki, Ebony A. Walker, Kyle Goldman, Cheldyn Ramsey, Nicholas Balderston, Robin L. Aupperle, Martin P. Paulus, and Maria Ironside.
