
Today — 14 February 2026

New research links childhood inactivity to depression in a vicious cycle

14 February 2026 at 01:00

New research suggests a bidirectional relationship exists between how much time children spend sitting and their mental health, creating a cycle where inactivity feeds feelings of depression and vice versa. This dynamic appears to extend beyond the individual child, as a child’s mood and inactivity levels can eventually influence their parent’s mental well-being. These results were published in the journal Mental Health and Physical Activity.

For decades, health experts have recognized that humans spend a large portion of their waking hours in sedentary behaviors. This term refers to any waking behavior characterized by an energy expenditure of 1.5 metabolic equivalents or less while in a sitting, reclining, or lying posture. Common examples include watching television, playing video games while seated, or sitting in a classroom. While the physical health consequences of this inactivity are well documented, the impact on mental health is a growing area of concern.

In recent years, screen time has risen considerably among adolescents. This increase has prompted researchers to question how these behaviors interact with mood disorders such as depression. Most prior studies examining this link have focused on adults. When studies do involve younger populations, they often rely on the participants to report their own activity levels. Self-reported data is frequently inaccurate, as people struggle to recall exactly how many minutes they spent sitting days or weeks ago.

There is also a gap in understanding how these behaviors function within a family unit. Parents and children do not exist in isolation. They form a “dyad,” or a two-person group wherein the behavior and emotions of one person can impact the other. To address these gaps, a team of researchers led by Maria Siwa from the SWPS University in Poland investigated these associations using objective measurement tools. The researchers aimed to see if depression leads to more sitting, or if sitting leads to more depression. They also sought to understand if these effects spill over from child to parent.

The research team recruited 203 parent-child dyads to participate in the study. The children ranged in age from 9 to 15 years old. The parents involved were predominantly mothers, accounting for nearly 87 percent of the adult participants. The study was longitudinal, meaning the researchers tracked the participants over an extended period to observe changes. Data collection occurred at three specific points: the beginning of the study (Time 1), an eight-month follow-up (Time 2), and a 14-month follow-up (Time 3).

To ensure accuracy, the researchers did not rely solely on questionnaires for activity data. Instead, they asked participants to wear accelerometers. These are small devices worn on the hip that measure movement intensity and frequency. Participants wore these devices for six consecutive days during waking hours. This provided a precise, objective record of how much time each parent and child spent being sedentary versus being active.
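To illustrate the kind of processing such accelerometer data typically undergoes, the sketch below sums daily sedentary minutes from minute-by-minute activity counts. The column names and the count cut-point are illustrative assumptions rather than the study's actual pipeline; cut-points for children vary by protocol.

```python
import pandas as pd

# Minimal sketch: summing daily sedentary minutes from accelerometer epochs.
# Column names and the cut-point are illustrative assumptions, not the study's
# actual processing pipeline (youth cut-points differ across protocols).
SEDENTARY_CUTPOINT = 100  # counts per minute; a commonly cited adult threshold

def daily_sedentary_minutes(epochs: pd.DataFrame) -> pd.Series:
    """epochs: one row per 60-second epoch, with columns
    ['participant_id', 'date', 'counts_per_min', 'worn'] (worn = device on body)."""
    worn = epochs[epochs["worn"]]
    sedentary = worn["counts_per_min"] <= SEDENTARY_CUTPOINT
    return (
        worn.assign(sedentary=sedentary)
            .groupby(["participant_id", "date"])["sedentary"]
            .sum()  # each epoch is one minute, so the count equals minutes
    )
```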

For the assessment of mental health, the researchers used the Patient Health Questionnaire. This is a standard screening tool used to identify the presence and severity of depressive symptoms. It asks individuals to rate the frequency of specific symptoms over the past two weeks. The study took place in the context of a healthy lifestyle education program. Between the first and second measurement points, all families received education on the health consequences of sedentary behaviors and strategies to interrupt long periods of sitting.

The analysis of the data revealed a reciprocal relationship within the children. Children who spent more time being sedentary at the start of the study displayed higher levels of depressive symptoms eight months later. This supports the theory that physical inactivity can contribute to the development of poor mood. Proposed biological mechanisms for this include changes in inflammation markers or neurobiological pathways that affect how the brain regulates emotion.

However, the reverse was also true. Children who exhibited higher levels of depressive symptoms at the start of the study spent more time being sedentary at the eight-month mark. This suggests a “vicious cycle” where symptoms of depression, such as low energy or withdrawal, lead to less movement. The lack of movement then potentially exacerbates the depressive symptoms. This bidirectional pattern highlights how difficult it can be to break the cycle of inactivity and low mood.

The study also identified an effect that crossed from one person to the other. High levels of depressive symptoms in a child at the start of the study predicted increased sedentary time for that child eight months later. This increase in the child’s sedentary behavior was then linked to higher levels of depressive symptoms in the parent at the 14-month mark.

This “across-person” finding suggests a domino effect within the family. A child’s mental health struggles may lead them to withdraw into sedentary activities. Observing this behavior and potentially feeling ineffective in helping the child change their habits may then take a toll on the parent’s mental health. This aligns with psychological theories regarding parental stress. Parents often feel distress when they perceive their parenting strategies as ineffective, especially when trying to manage a child’s health behaviors.

One particular finding was unexpected. Children who reported lower levels of depressive symptoms at the eight-month mark actually spent more time sitting at the final 14-month check-in. The researchers hypothesize that this might be due to a sense of complacency. If adolescents feel mentally well, they may not feel a pressing need to follow the program’s advice to reduce sitting time. They might associate their current well-being with their current lifestyle, leading to less motivation to become more active.

The researchers controlled for moderate-to-vigorous physical activity in their statistical models. This ensures that the results specifically reflect the impact of sedentary time, rather than just a lack of exercise. Even when accounting for exercise, the links between sitting and depression remained relevant in specific pathways.
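As a rough illustration of this kind of covariate-adjusted, cross-lagged analysis, the sketch below regresses later depressive symptoms on earlier sedentary time (and the reverse) while adjusting for earlier symptoms and moderate-to-vigorous physical activity. The variable names are hypothetical, and the authors' actual models were more elaborate, handling dyadic within- and across-person effects.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Minimal sketch of a cross-lagged, covariate-adjusted regression in the spirit
# of the analysis described. Variable names are hypothetical; the authors' actual
# models were dyadic and separated within- and across-person effects.
def fit_cross_lagged(df: pd.DataFrame):
    """df: one row per child with Time 1 (t1) and Time 2 (t2) measures."""
    # Does T1 sedentary time predict T2 depressive symptoms, over and above
    # T1 symptoms and T1 moderate-to-vigorous physical activity (MVPA)?
    m_forward = smf.ols(
        "depression_t2 ~ sedentary_t1 + depression_t1 + mvpa_t1", data=df
    ).fit()
    # And the reverse path: T1 symptoms predicting T2 sedentary time.
    m_reverse = smf.ols(
        "sedentary_t2 ~ depression_t1 + sedentary_t1 + mvpa_t1", data=df
    ).fit()
    return m_forward, m_reverse
```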

There are caveats to consider when interpreting these results. The sample consisted largely of families with higher education levels and average or above-average economic status. This limits how well the findings apply to the general population or to families facing economic hardship. Additionally, the study was conducted in Poland, and cultural factors regarding parenting and leisure time could influence the results.

Another limitation is the nature of the device used. While accelerometers are excellent for measuring stillness versus movement, they cannot distinguish between different types of sedentary behavior. They cannot tell the difference between sitting while doing homework, reading a book, or mindlessly scrolling through social media. Different types of sedentary behavior might have different psychological impacts.

The study also focused on a community sample rather than a clinical one. Most participants reported mild to moderate symptoms rather than severe clinical depression. The associations might look different in a population with diagnosed major depressive disorder. Furthermore, while the study found links over time, the observed effects were relatively small. Many other factors likely contribute to both depression and sedentary behavior that were not measured in this specific analysis.

Despite these limitations, the implications for public health are clear. Interventions aimed at improving youth mental health should not ignore physical behavior. Conversely, programs designed to get kids moving should address mental health barriers. The findings support the use of family-based interventions. Treating the child in isolation may miss the important dynamic where the child’s behavior impacts the parent’s well-being.

Future research should investigate the specific mechanisms that drive these connections. For example, it would be beneficial to study whether parental beliefs about their own efficacy mediate the link between a child’s inactivity and the parent’s mood. Researchers should also look at different types of sedentary behavior to see if screen time is more harmful than other forms of sitting. Understanding these nuances could lead to better guidance for families trying to navigate the complex relationship between physical habits and emotional health.

The study, “Associations between depressive symptoms and sedentary behaviors in parent-child Dyads: Longitudinal effects within- and across-person,” was authored by Maria Siwa, Dominika Wietrzykowska, Zofia Szczuka, Ewa Kulis, Monika Boberska, Anna Banik, Hanna Zaleskiewicz, Paulina Krzywicka, Nina Knoll, Anita DeLongis, Bärbel Knäuper, and Aleksandra Luszczynska.

Feelings of entrapment and powerlessness link job uncertainty to suicidality

13 February 2026 at 23:00

A qualitative study in Scotland examined the links between financial instability, employment insecurity, and suicidality. Results indicated that financial stressors create a cycle of unmet basic needs, powerlessness, and social isolation. Job precarity and lack of support further exacerbate these relationships, contributing to suicidal ideation. The research was published in Death Studies.

Suicide is the act of intentionally causing one’s own death. World Health Organization statistics indicate that 700,000 people die by suicide every year worldwide, making it a significant global public health issue. Although major religions have historically condemned suicide, contemporary public health and psychological perspectives view it as a preventable outcome arising from complex interactions rather than a moral failing. Suicide rarely has a single cause; instead, it reflects the intersection of personal, relational, community, and societal factors.

Economic instability, job insecurity, and financial distress are consistently linked to higher suicide risk, with those in insecure employment disproportionately affected. Evidence from the U.K. and Scotland shows particularly high vulnerability among working-age adults, even as poverty increasingly affects households where someone is employed.

Precarious work conditions—such as low income, unpredictable hours, limited rights, and low job autonomy—contribute to chronic stress and poorer mental health. Furthermore, stigma surrounding financial hardship and job insecurity can deter help-seeking, increasing isolation and risk.

Study author Nicola Cogan and her colleagues wanted to explore how insecure employment and financial instability are perceived to contribute toward suicidal thoughts and behaviors among adults living in Scotland. They also sought to identify risk and protective factors associated with the mental health impacts of economic insecurity and offer policy recommendations for improving mental health support for people facing economic precarity.

The study included 24 individuals from Scotland who reported being paid less than the living wage or below the minimum income standard, were on zero-hours contracts, working in the gig economy, were job-seeking long term, or had experience with Universal Credit (the UK’s main welfare benefit system). Sixteen participants were men. The participants’ average age was 30 years. On average, participants reported that their last suicidal thoughts or behaviors occurred more than six months prior. Individuals who were currently suicidal were not included in the study.

Participants took part in semi-structured interviews focusing on the interplay between employment status, financial instability, and experiences of suicidal ideation or behavior. They received a £20 voucher for their participation. The researchers transcribed the interviews and conducted reflexive thematic analysis with the goal of identifying the key themes within the narratives.

Analysis of the interviews identified six key themes. The first theme was the “struggle to meet basic needs and the vicious cycle.” When participants experienced financial instability, it created a struggle to meet basic needs like food, housing, and healthcare. This battle degraded their mental health. Diminished mental health, in turn, reduced their ability to improve their financial situation, creating a vicious cycle.

The second theme was “feeling trapped and powerless.” Participants reported that feelings of entrapment intersected with suicidal thoughts and behaviors, as they struggled to envision any escape from the situation. Theme three was the “stigma of financial instability.” Feeling financially unstable negatively impacted participants’ self-worth and self-esteem, making them feel inadequate and helpless. Theme four was “thinking about suicide and acting on such thoughts.” During these times, many of them imagined suicide to be the only way out of their struggles.

The fifth theme was the “need for hope and support from supportive others.” For many participants, hope and support from friends, family, and other individuals fostered resilience and prevented them from acting on suicidal thoughts.

The sixth theme was “active help-seeking and gaining a sense of control.” For many participants, actively seeking help was a turning point in managing the intersecting challenges of financial instability and mental health distress. This enabled them to regain a sense of control over their circumstances.

“Reflexive thematic analysis identified key themes, highlighting how financial stressors create a cycle of unmet basic needs, powerlessness, and social isolation, which exacerbates suicidal distress. Workplace conditions, including job precarity and lack of support, further intensified these experiences, while protective factors included supportive relationships and proactive help-seeking,” the study authors concluded.

The study contributes to the scientific understanding of the mental health effects of financial instability. However, the study deliberately excluded prospective participants currently experiencing suicidality. Because of this, it did not fully capture the perspectives of individuals at the highest risk of suicide. Additionally, the collected data were based on the recall of past hardships, leaving room for recall and reporting biases to have affected the results.

The paper, “’It feels like the world is falling on your head’: Exploring the link between financial instability, employment insecurity, and suicidality,” was authored by Nicola Cogan, Susan Rasmussen, Kirsten Russell, Dan Heap, Heather Archbold, Lucy Milligan, Scott Thomson, Spence Whittaker, Dave Morris, and Danielle Rowley.

Yesterday — 13 February 2026

No association found between COVID-19 shots during pregnancy and autism or behavioral issues

13 February 2026 at 21:00

Recent research provides new evidence regarding the safety of COVID-19 vaccinations during pregnancy. The study, presented at the Society for Maternal-Fetal Medicine (SMFM) 2026 Pregnancy Meeting, indicates that receiving an mRNA vaccine while pregnant does not negatively impact a toddler’s brain development. The findings suggest that children born to vaccinated mothers show no difference in reaching developmental milestones compared to those born to unvaccinated mothers.

The question of vaccine safety during pregnancy has been a primary concern for expectant parents since the introduction of COVID-19 immunizations. Messenger RNA, or mRNA, vaccines function by introducing a genetic sequence that instructs the body’s cells to produce a specific protein. This protein triggers the immune system to create antibodies against the virus.

While health organizations have recommended these vaccines to prevent severe maternal illness, data regarding the longer-term effects on infants has been accumulating slowly. Parents often worry that the immune activation in the mother could theoretically alter the delicate process of fetal brain formation.

To address these specific concerns, a team of researchers investigated the neurodevelopmental outcomes of children aged 18 to 30 months. The study was led by George R. Saade from Eastern Virginia Medical School at Old Dominion University and Brenna L. Hughes from Duke University School of Medicine. They conducted this work as part of the Maternal-Fetal Medicine Units Network. This network is a collaboration of research centers funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

The researchers designed a prospective observational study. This type of study follows a group of participants over time to observe outcomes rather than intervening or experimenting on them. The team identified women who had received at least one dose of an mRNA SARS-CoV-2 vaccine. To be included in the exposed group, the mothers must have received the vaccine either during their pregnancy or within the 30 days prior to becoming pregnant.

The research team compared these women to a control group of mothers who did not receive the vaccine during that same period. To ensure the comparison was scientifically valid, the researchers used a technique called matching. Each vaccinated mother was paired with an unvaccinated mother who shared key characteristics.

These characteristics included the specific medical site where they delivered the baby and the date of the delivery. They also matched participants based on their insurance status and their race. This matching process is essential in observational research. It helps rule out other variables, such as access to healthcare or socioeconomic status, which could independently influence a child’s development.
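A minimal sketch of what such exact matching can look like is shown below, pairing each vaccinated mother with one unvaccinated mother who shares the same key characteristics. The column names are illustrative, and the delivery date is coarsened to a month here purely for demonstration; this is not the study's actual matching code.

```python
import pandas as pd

# Minimal sketch of exact 1:1 matching on shared characteristics. Column names
# are illustrative; the study matched on delivery site, delivery date,
# insurance status, and race.
MATCH_KEYS = ["site", "delivery_month", "insurance", "race"]

def match_pairs(vaccinated: pd.DataFrame, unvaccinated: pd.DataFrame) -> pd.DataFrame:
    """Pair each vaccinated mother with one unvaccinated mother sharing MATCH_KEYS."""
    pairs = []
    pool = unvaccinated.copy()
    for _, row in vaccinated.iterrows():
        mask = (pool[MATCH_KEYS] == row[MATCH_KEYS]).all(axis=1)
        candidates = pool[mask]
        if candidates.empty:
            continue  # exposed mothers without a match are dropped
        control = candidates.iloc[0]
        pairs.append({"exposed_id": row["id"], "control_id": control["id"]})
        pool = pool.drop(control.name)  # each control is used only once
    return pd.DataFrame(pairs)
```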

The study applied strict exclusion criteria to isolate the effect of the vaccine. The researchers did not include women who delivered their babies before 37 weeks of gestation. This decision was necessary because preterm birth is a known cause of developmental delays. Including premature infants could have obscured the results. The team also excluded multifetal pregnancies, such as twins or triplets, and children born with major congenital malformations.

Ultimately, the study analyzed 217 matched pairs, resulting in a total of 434 children. The primary tool used to measure development was the Ages and Stages Questionnaire, Third Edition, often referred to as the ASQ-3. This is a standardized screening tool widely used in pediatrics. It relies on parents to observe and report their child’s abilities in five distinct developmental areas.

The first area is communication, which looks at how a child understands language and speaks. The second is gross motor skills, involving large movements like walking or jumping. The third is fine motor skills, which involves smaller movements like using fingers to pick up tiny objects. The fourth is problem-solving, and the fifth is personal-social interaction, covering how the child plays and interacts with others.

The researchers analyzed the data by looking for statistical equivalence. They established a specific margin of 10 points on the ASQ-3 scale. If the difference between the average scores of the vaccinated and unvaccinated groups was less than 10 points, the outcomes were considered practically identical.

The results demonstrated that the neurodevelopmental outcomes were indeed equivalent. The median total ASQ-3 score for the vaccinated group was 255. The median score for the unvaccinated group was 260. After adjusting for other factors, the difference was calculated to be -3.4 points. This falls well within the 10-point margin of equivalence, meaning there was no meaningful difference in development between the two groups.
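The sketch below makes this equivalence logic concrete: a difference is treated as equivalent when it falls inside the pre-specified margin of plus or minus 10 points. The confidence-interval check is the usual formal version of such a test; the interval bounds used in the example are placeholders, not figures from the study.

```python
# Minimal sketch of the equivalence check described: the adjusted difference is
# compared against a pre-specified margin of +/-10 ASQ-3 points. In formal
# equivalence testing the entire confidence interval must lie inside the margin;
# the bounds below are illustrative placeholders, not study results.
MARGIN = 10.0  # ASQ-3 points

def point_within_margin(diff: float, margin: float = MARGIN) -> bool:
    return abs(diff) < margin

def ci_within_margin(lower: float, upper: float, margin: float = MARGIN) -> bool:
    return -margin < lower and upper < margin

print(point_within_margin(-3.4))    # True: the reported adjusted difference
print(ci_within_margin(-8.0, 1.0))  # True for these illustrative interval bounds
```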

Beyond the general developmental scores, the researchers utilized several secondary screening tools to check for specific conditions. They employed the Modified Checklist for Autism in Toddlers to assess the risk of autism spectrum disorder. The findings showed no statistical difference in risk levels.

Approximately 5 percent of the children in the vaccinated group screened positive for potential autism risk. This was comparable to the 6 percent observed in the unvaccinated group. These percentages suggest that vaccination status did not influence the likelihood of screening positive for autism risk.

The team also used the Child Behavior Checklist. This tool evaluates various behavioral and emotional challenges. It looks at internalizing behaviors, such as anxiety, withdrawal, or sadness. It also examines externalizing behaviors, such as aggression or rule-breaking.

The scores for both internalizing and externalizing behaviors were nearly identical between the two groups. For example, 93 percent of children in the vaccinated group fell within the normal range for total behavioral problems. This was the exact same percentage found in the unvaccinated group.

Finally, the researchers assessed temperament using the Early Childhood Behavior Questionnaire. This measures traits such as “surgency,” which relates to positive emotional reactivity and high energy. It also measures “effortful control,” which is the ability to focus attention and inhibit impulses. Across all these psychological domains, the study found no association between maternal vaccination and negative outcomes.

The demographics of the two groups were largely similar due to the matching process. However, one difference remained. Mothers in the vaccinated group were more likely to be nulliparous. This is a medical term indicating that the woman had never given birth before the pregnancy in question.

Additionally, the children in the vaccinated group were slightly younger at the time of the assessment. Their median age was 25.4 months, compared to 25.9 months for the unvaccinated group. The researchers used statistical models to adjust for these slight variations. Even after these adjustments, the conclusion remained that the developmental outcomes were equivalent.

“Neurodevelopment outcomes in children born to mothers who received the COVID-19 vaccine during or shortly before pregnancy did not differ from those born to mothers who did not receive the vaccine,” said Saade.

While the findings are positive, there are context and limitations to consider. The study was observational, meaning it cannot prove causation as definitively as a randomized controlled trial. However, randomized trials are rarely feasible for widely recommended vaccines due to ethical considerations.

Another factor is the reliance on parent-reported data. Tools like the ASQ-3 depend on the accuracy of the parents’ observations, which can introduce some subjectivity. Furthermore, the study followed children only up to 30 months of age. Some subtle neurodevelopmental issues may not manifest until children are older and face the demands of school.

Despite these limitations, the rigorous matching and the use of multiple standardized screening tools provide a high level of confidence in the results for the toddler age group. The study fills a knowledge gap regarding the safety of mRNA technology for the next generation.

“This study, conducted through a rigorous scientific process in an NIH clinical trials network, demonstrates reassuring findings regarding the long-term health of children whose mothers received COVID-19 vaccination during pregnancy,” said Hughes.

The study, “Association Between SARS-CoV-2 Vaccine in Pregnancy and Child Neurodevelopment at 18–30 Months,” was authored by George R. Saade and Brenna L. Hughes, and will be published in the February 2026 issue of PREGNANCY.

Your attachment style predicts which activities boost romantic satisfaction

13 February 2026 at 19:00

New research provides evidence that the best way to spend time with a romantic partner depends on their specific emotional needs. A study published in Social Psychological and Personality Science suggests that people with avoidant attachment styles feel more satisfied when engaging in novel and exciting activities, while those with anxious attachment styles benefit more from familiar and comfortable shared experiences.

Psychological science identifies attachment insecurity as a significant barrier to relationship satisfaction. Individuals high in attachment avoidance often fear intimacy and prioritize independence, while those high in attachment anxiety fear abandonment and frequently seek reassurance.

Previous studies have shown that partners can mitigate these insecurities by adjusting their behavior, such as offering autonomy to avoidant partners or reassurance to anxious ones. However, less is known about how specific types of shared leisure activities function in this dynamic.

“This study was motivated by two main gaps. One was a gap in the attachment literature. Although attachment insecurity reliably predicts lower relationship satisfaction, these effects can be buffered, and most prior work has focused on partner behaviors. We wanted to know whether shared, everyday experiences could play a similar role,” said study author Amy Muise, a professor and York Research Chair in the Department of Psychology and director of the Sexual Health and Relationships (SHaRe) Lab at York University.

“We were also interested in testing the idea that novelty and excitement are universally good for relationships. Instead, we asked whether different types of shared experiences are more or less beneficial depending on people’s attachment-related needs.”

To explore these dynamics, the scientists conducted a meta-analysis across three separate daily diary studies. The total sample consisted of 390 couples from Canada and the United States. Participants were required to be in a committed relationship and living together or seeing each other frequently. The average relationship length varied slightly by study but ranged generally from seven to eight years.

For a period of 21 days, each partner independently completed nightly surveys. They reported their daily relationship satisfaction and the types of activities they shared with their partner that day. The researchers measured two distinct types of shared experiences. “Novel and exciting” experiences were defined as activities that felt new, challenging, or expanding, such as learning a skill or trying a new restaurant.

“Familiar and comfortable” experiences involved routine, calming, and predictable activities. Examples included watching a favorite TV show, cooking a standard meal together, or simply relaxing at home. The participants also rated their levels of attachment avoidance and anxiety at the beginning of the study. This design allowed the researchers to track how fluctuations in daily activities related to fluctuations in relationship satisfaction.
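Analyses of this kind typically separate each person's day-to-day fluctuations from their stable average, so a minimal person-mean-centering sketch is shown below. The column names are illustrative assumptions, and the authors' actual analyses relied on multilevel dyadic models rather than this simplified preprocessing step.

```python
import pandas as pd

# Minimal sketch of separating within-person daily fluctuations from stable
# between-person differences, the logic behind daily diary analyses like this one.
# Column names are illustrative; the authors used multilevel (dyadic) models.
def person_mean_center(diary: pd.DataFrame, col: str) -> pd.DataFrame:
    """diary: one row per person per day, with columns ['person_id', col, ...]."""
    out = diary.copy()
    person_mean = out.groupby("person_id")[col].transform("mean")
    out[f"{col}_between"] = person_mean            # stable individual differences
    out[f"{col}_within"] = out[col] - person_mean  # today's deviation from one's own average
    return out

# Usage: person_mean_center(diary, "novel_exciting") adds predictors that let a
# model ask whether days with more novelty than usual track higher satisfaction.
```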

The data revealed that, in general, both types of shared experiences were linked to higher daily relationship satisfaction. “The effects are modest in size, which is typical for daily experience research because they reflect within-person changes in everyday life,” Muise told PsyPost. “These are not dramatic shifts in relationship quality, but small day-to-day effects that may accumulate over time.”

“Overall, both novel and familiar shared experiences were linked to greater relationship satisfaction, but the effect of familiar, comfortable experiences was larger (roughly two to three times larger) than that of novel experiences overall.”

Importantly, the benefits differed depending on a person’s attachment style. For individuals high in attachment avoidance, engaging in novel and exciting activities provided a specific benefit.

On days when avoidant individuals reported more novelty and excitement than usual, the typical link between their avoidant style and lower relationship satisfaction was weakened. The researchers found that these exciting activities increased perceptions of “relational reward.” This means the avoidant partners felt a sense of intimacy and connection that did not feel threatening or smothering. Familiar and comfortable activities did not provide this same buffering effect for avoidant individuals.

In contrast, individuals high in attachment anxiety derived the most benefit from familiar and comfortable experiences. On days marked by high levels of familiarity and comfort, the usual association between attachment anxiety and lower relationship satisfaction disappeared entirely. The study suggests that these low-stakes, comforting interactions help reduce negative emotions for anxiously attached people.

Novel and exciting activities did not consistently buffer the relationship satisfaction of anxiously attached individuals. The researchers noted that while novelty is generally positive, it does not address the specific need for security that defines attachment anxiety. The calming nature of routine appears to be the key ingredient for soothing these specific fears.

“One thing that surprised us was how familiar and comfortable activities seemed to help people who are more anxiously attached,” Muise said. “We expected these experiences to work by lowering worries about rejection or judgment, but that wasn’t what we found. Instead, they seemed to help by lowering people’s overall negative mood.”

“This made us think more carefully about what comfort and routine might actually be doing emotionally. It’s possible that for people higher in attachment anxiety, familiar and comfortable time together helps them feel more secure, and that sense of security is what supports relationship satisfaction. We weren’t able to test that directly in this study, but it’s an important direction for future work.”

The researchers also examined how one person’s attachment style affected their partner’s satisfaction. The results showed that when a person had a highly avoidant partner, they reported higher satisfaction on days they shared novel and exciting experiences. Conversely, when a person had a highly anxious partner, they reported higher satisfaction on days filled with familiar and comfortable activities. This indicates that tailoring activities benefits both the insecure individual and their romantic partner.

“The main takeaway is that there is no single ‘right’ way to spend time together that works for all couples,” Muise explained. “What matters is whether shared experiences align with people’s emotional needs. For people who are more avoidantly attached, doing something novel or exciting together (something that feels new and fun rather than overtly intimate) can make the relationship feel more rewarding and satisfying.”

“For people who are more anxiously attached, familiar and comfortable time together seems especially important for maintaining satisfaction. These findings suggest that tailoring shared time, rather than maximizing novelty or excitement per se, may be a more effective way to support relationship well-being.”

While the findings offer practical insights, the study has certain limitations. The research relied on daily diary entries, which are correlational. This means that while the researchers can observe a link between specific activities and higher satisfaction, they cannot definitively prove that the activities caused the satisfaction. It is possible that feeling satisfied makes a couple more likely to engage in fun or comfortable activities.

“Another potential misinterpretation is that novelty is ‘bad’ for anxiously attached people or that comfort is ‘bad’ for avoidantly attached people,” Muise clarified. “That is not what we found. Both types of experiences were generally associated with higher satisfaction; the difference lies in when they are most helpful for buffering insecurity, not whether they are beneficial at all.”

Future research is needed to determine if these daily buffering effects lead to long-term improvements in attachment security. The scientists also hope to investigate who initiates these activities and whether the motivation behind them impacts their effectiveness. For now, the data suggests that checking in on a partner’s emotional needs might be the best guide for planning the next date night.

“One long-term goal is to understand whether these day-to-day buffering effects can lead to longer-term changes in attachment security,” Muise said. “If people repeatedly engage in the ‘right’ kinds of shared experiences, could that have implications for how attachment insecurity evolves over time?”

“Another direction is to examine how these experiences are initiated. Who suggests the activity, and whether it feels voluntary or pressured, might matter for whether certain experiences are associated with satisfaction.”

“One thing I really appreciate about this study is that it allowed us to look at both partners’ experiences,” Muise added. “The partner effects suggest that tailoring shared experiences doesn’t only benefit the person who is more insecure; it is also associated with how their partner feels about the relationship. Overall, engaging in shared experiences aligned with one partner’s attachment needs has benefits for both partners.”

The study, “Novel and Exciting or Tried and True? Tailoring Shared Relationship Experiences to Insecurely Attached Partners,” was authored by Kristina M. Schrage, Emily A. Impett, Mustafa Anil Topal, Cheryl Harasymchuk, and Amy Muise.

Ultra-processed foods in early childhood linked to lower IQ scores

13 February 2026 at 17:00

Toddlers who consume a diet high in processed meats, sugary snacks, and soft drinks may have lower intelligence scores by the time they reach early school age. A new study published in the British Journal of Nutrition suggests that this negative association is even stronger for children who faced physical growth delays in infancy. These findings add to the growing body of evidence linking early childhood nutrition to long-term brain development.

The first few years of human life represent a biological window of rapid change. The brain grows quickly during this time and builds the neural connections necessary for learning and memory. This process requires a steady supply of specific nutrients to work correctly. Without enough iron, zinc, or healthy fats, the brain might not develop to its full capacity.

Recent trends in global nutrition show that families are increasingly relying on ultra-processed foods. These are industrial products that often contain high levels of sugar, fat, and artificial additives but very few essential vitamins. Researchers are concerned that these foods might displace nutrient-rich options. They also worry that the additives or high sugar content could directly harm biological systems.

Researchers from the Federal University of Pelotas in Brazil and the University of Illinois Urbana-Champaign investigated this issue. The lead author is Glaucia Treichel Heller, a researcher in the Postgraduate Program in Epidemiology in Pelotas. She worked alongside colleagues including Thaynã Ramos Flores and Pedro Hallal to analyze data from thousands of children. The team wanted to determine if eating habits established at age two could predict cognitive abilities years later.

The researchers used data from the 2015 Pelotas Birth Cohort. This is a large, long-term project that tracks the health of children born in the city of Pelotas, Brazil. The team analyzed information from more than 3,400 children. When the children were two years old, their parents answered questions about what the toddlers usually ate.

The scientists did not just look at single foods like apples or candy. Instead, they used a statistical method called principal component analysis. This technique allows researchers to find general dietary patterns based on which foods are typically eaten together. They identified two main types of eating habits in this population.

One pattern was labeled “healthy” by the researchers. This diet included regular consumption of beans, fruits, vegetables, and natural fruit juices. The other pattern was labeled “unhealthy.” This diet featured instant noodles, sausages, soft drinks, packaged snacks, and sweets.
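A minimal sketch of how principal component analysis can extract such patterns from food-frequency data is shown below. The food items and column names are illustrative stand-ins, not the cohort's actual variables or the authors' analysis code.

```python
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Minimal sketch of deriving dietary patterns with principal component analysis.
# Food items and column names are illustrative stand-ins, not the study's variables.
FOOD_COLS = ["beans", "fruit", "vegetables", "natural_juice",
             "instant_noodles", "sausages", "soft_drinks", "packaged_snacks", "sweets"]

def dietary_patterns(ffq: pd.DataFrame, n_patterns: int = 2):
    """ffq: one row per child, columns = consumption frequencies for FOOD_COLS."""
    X = StandardScaler().fit_transform(ffq[FOOD_COLS])
    pca = PCA(n_components=n_patterns)
    scores = pca.fit_transform(X)  # each child's score on each derived pattern
    loadings = pd.DataFrame(pca.components_.T, index=FOOD_COLS,
                            columns=[f"pattern_{i + 1}" for i in range(n_patterns)])
    # Foods that load together on a component define a pattern (e.g., one dominated
    # by beans/fruit/vegetables, another by noodles/sausages/soft drinks/sweets).
    return scores, loadings
```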

When the children reached six or seven years of age, trained psychologists assessed their intelligence. They used a standard test called the Wechsler Intelligence Scale for Children. This test measures different mental skills to generate an IQ score. The researchers then looked for a statistical link between the diet at age two and the test results four years later.

The analysis showed a clear connection between the unhealthy dietary pattern and lower cognitive scores. Children who frequently ate processed and sugary foods at age two tended to have lower IQ scores at school age. This link remained even when the researchers accounted for other factors that influence intelligence. They adjusted the data for the mother’s education, family income, and how much mental stimulation the child received at home.

The researchers faced a challenge in isolating the effect of diet. Many factors can shape a child’s development. For example, a family with more money might buy healthier food and also buy more books. To manage this, the team identified potential confounding factors. Thaynã Ramos Flores, one of the study authors, noted, “The covariates were identified as potential confounding factors based on a literature review and the construction of a directed acyclic graph.”

The team used these adjustments to ensure the results were not simply reflecting the family’s socioeconomic status. Even with these controls, the negative association between processed foods and IQ persisted. The findings suggest that diet quality itself plays a specific role.

The negative impact appeared to be worse for children who were already biologically vulnerable. The study looked at children who had early-life deficits. These were defined as having low weight, height, or head circumference for their age during their first two years.

For these children, a diet high in processed foods was linked to a drop of nearly 5 points in IQ. This is a substantial difference that could affect school performance. For children without these early physical growth problems, the decline was smaller but still present. In those cases, the reduction was about 2 points.

This finding points to a concept known as cumulative disadvantage. It appears that biological vulnerability and environmental exposures like poor diet interact with each other. A child who is already struggling physically may be less resilient to the harms of a poor diet.

The researchers also looked at the impact of the healthy dietary pattern. They did not find a statistical link between eating healthy foods and higher IQ scores. This result might seem counterintuitive, as fruits and vegetables are known to be good for the brain. However, the authors explain that this result is likely due to the specific population studied.

Most children in the Pelotas cohort ate beans, fruits, and vegetables regularly. Because almost everyone ate the healthy foods, there was not enough difference between the children to show a statistical effect. Flores explained, “The lack of association observed for the healthy dietary pattern can be largely explained by its lower variability.” She added that “approximately 92% of children habitually consumed four or more of the foods that characterize the healthy pattern.”

The study suggests potential biological mechanisms that could explain how an unhealthy diet might impair cognitive development. One theory involves the gut-brain axis. The human gut contains trillions of bacteria that communicate with the brain. Diets high in sugar and processed additives can alter this bacterial community. These changes might lead to systemic inflammation that affects brain function.

Another possibility involves oxidative stress. Ultra-processed foods often lack the antioxidants found in fresh produce. Without these protective compounds, brain cells might be more susceptible to damage during development. The rapid growth of the brain in early childhood makes it highly sensitive to these physiological stressors.

There are limitations to this type of research. The study is observational, which means it cannot prove that the food directly caused the lower scores. Other factors that the researchers could not measure might explain the difference. For example, the study relied on parents to report what their children ate. Parents might not always remember or report this accurately.

Additionally, the study did not measure the parents’ IQ scores. Parental intelligence is a strong predictor of a child’s intelligence. However, the researchers used maternal education and home stimulation scores as proxies. These measures help account for the intellectual environment of the home.

The findings have implications for public health policy. The results suggest that officials need to focus on reducing the intake of processed foods in early childhood. Merely encouraging fruit and vegetable intake may not be enough if children are still consuming high amounts of processed items. This is particularly important for children who have already shown signs of growth delays.

Future studies could look at how these dietary habits change as children become teenagers. It would also be helpful to see if these results are similar in countries with different food cultures. The team notes that early nutrition is a specific window of opportunity for supporting brain health.

The study, “Dietary patterns at age 2 and cognitive performance at ages 6-7: an analysis of the 2015 Pelotas Birth Cohort (Brazil),” was authored by Glaucia Treichel Heller, Thaynã Ramos Flores, Marina Xavier Carpena, Pedro Curi Hallal, Marlos Rodrigues Domingues, and Andréa Dâmaso Bertoldi.

Bias against AI art is so deep it changes how viewers perceive color and brightness

13 February 2026 at 15:00

New research suggests that simply labeling an artwork as created by artificial intelligence can reduce how much people enjoy and value it. This bias appears to affect not just how viewers interpret the meaning of the art, but even how they process basic visual features like color and brightness. The findings were published in the Psychology of Aesthetics, Creativity, and the Arts.

Artificial intelligence has rapidly become a common tool for visual artists. Artists use technologies ranging from text-to-image generators to robotic arms to produce new forms of imagery. Despite this widespread adoption, audiences often react negatively when they learn technology was involved in the creative process.

Alwin de Rooij, an assistant professor at Tilburg University and associate professor at Avans University of Applied Sciences, sought to understand the consistency of this negative reaction. De Rooij aimed to determine if this bias occurs across different psychological systems involved in viewing art. The researcher also wanted to see if this negative reaction is a permanent structural phenomenon or if it varies by context.

“AI-generated images can now be nearly indistinguishable from art made without AI, yet both public debate and scientific studies suggest that people may respond differently once they are told AI was involved,” de Rooij told PsyPost. “These reactions resemble earlier anxieties around new technologies in art, such as the introduction of photography in the nineteenth century, which is now a fully established art form. This raised the question of how consistent bias against AI in visual art is, and whether it might already be changing.”

To examine this, De Rooij conducted a meta-analysis. This statistical technique combines data from multiple independent studies to find overall trends that a single experiment might miss. The researcher performed a systematic search for experiments published between January 2017 and September 2024.

The analysis included studies where participants viewed visual art and were told it was made by AI. These responses were compared to responses for art labeled as human-made or art presented with no label. The researcher extracted 191 distinct effect sizes from the selected studies.
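To give a sense of how such effect sizes are typically combined, the sketch below pools study-level effects with a standard random-effects (DerSimonian-Laird) model. The input numbers are made-up placeholders, not the 191 effects extracted in this meta-analysis.

```python
import numpy as np

# Minimal sketch of pooling effect sizes with a random-effects model
# (DerSimonian-Laird). Inputs are placeholders, not the meta-analysis data.
def random_effects_pool(effects: np.ndarray, variances: np.ndarray):
    w = 1.0 / variances                      # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)   # heterogeneity statistic Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # estimated between-study variance
    w_re = 1.0 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    return pooled, se, tau2

# Example with hypothetical study-level effects (e.g., Hedges' g) and variances:
g = np.array([-0.30, -0.15, -0.45, -0.10])
v = np.array([0.02, 0.05, 0.03, 0.04])
print(random_effects_pool(g, v))
```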

De Rooij categorized these measurements using a framework known as the Aesthetic Triad model. This model organizes the art experience into three specific systems. The first is the sensory-motor system, which deals with basic visual processing. The second is the knowledge-meaning system, which involves interpretation and context. The third is the emotion-valuation system, which covers subjective feelings and personal preferences.

The investigation revealed that knowing AI was used generally diminishes the aesthetic experience. A small but significant negative effect appeared within the sensory-motor system. This system involves the initial processing of visual features such as color, shape, and spatial relationships. When viewers believed an image was AI-generated, they tended to perceive these basic qualities less favorably.

A moderate negative effect appeared in the knowledge-meaning system. This aspect of the aesthetic experience relates to how people interpret an artwork’s intent. It also includes judgments about the skill required to make the piece. Participants consistently attributed less profundity and creativity to works labeled as artificial intelligence.

The researcher also found a small negative effect in the emotion-valuation system. This system governs subjective feelings of beauty, awe, and liking. Viewers tended to report lower emotional connection when they thought AI was responsible for the work. They also rated these works as less beautiful compared to identical works labeled as human-made.

“The main takeaway is that knowing AI was involved in making an artwork can change how we experience it, even when the artwork itself is identical,” de Rooij explained. “People tend to attribute less meaning and value to art once it is labeled as AI-made, not because it looks worse, but because it is interpreted differently. In some cases, this bias even feeds into basic visual judgments, such as how colorful or vivid an image appears. This shows that bias against AI is not just an abstract opinion about technology. It can deeply shape the aesthetic experience itself.”

But these negative responses were not uniform across all people. The researcher identified age as a significant factor in the severity of the bias. Older participants demonstrated a stronger negative reaction to AI art. Younger audiences showed much weaker negative effects.

This difference suggests a possible generational shift in how people perceive technology in art. Younger viewers may be less troubled by the integration of algorithms in the creative process. The style of the artwork also influenced viewer reactions.

Representational art, which depicts recognizable objects, reduced the negative bias regarding meaning compared to abstract art. However, representational art worsened the bias regarding emotional connection. The setting of the study mattered as well. Experiments conducted online produced stronger evidence of bias than those conducted in laboratories or real-world galleries.

“Another surprising finding was how unstable the bias is,” de Rooij said. “Rather than being a fixed reaction, it varies across audiences and contexts. As mentioned earlier, the bias tends to be stronger among older populations, but the results show it is also influenced by the style of the artworks and by how and where they are presented. In some settings, the bias becomes very weak or nearly disappears. This further supports the observation that, much like earlier reactions to new technologies in art, resistance to AI may be transitional rather than permanent.”

A key limitation involves how previous experiments presented artificial intelligence. Many studies framed the technology as an autonomous agent that created art independently. This description often conflicts with real-world artistic practice.

“The practical significance of these findings needs to be critically examined,” de Rooij noted. “Many of the studies included in the meta-analysis frame AI as if it were an autonomous artist, which does not reflect artistic practice, where AI is typically used as a responsive material. The AI-as-artist framing evokes dystopian imaginaries about AI replacing human artists or threatening the humanity in art. As a result, some studies may elicit stronger negative responses to AI, but in a way that has no clear real-world counterpart.”

Future research should investigate the role of invisible human involvement in AI art. De Rooij plans to conduct follow-up studies.

“The next step is to study bias against AI in art in more realistic settings, such as galleries or museums, and in ways that better reflect how artists actually use AI in their creative practice,” de Rooij said. “This is a reaction to the finding that bias against AI seemed particularly strong in online studies, which merits verification of the bias in real-world settings. This proposed follow-up research has recently received funding from the Dutch Research Council, and the first results are expected in late 2026. We are excited about moving this work forward!”

The study, “Bias against artificial intelligence in visual art: A meta-analysis,” was authored by Alwin de Rooij.

Why oversharing might be the smartest move for your career and relationships

13 February 2026 at 06:15

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

In a recent episode of the Hidden Brain podcast titled “Coming Clean,” released on Monday, February 9, experts discussed the surprising power of vulnerability. Between the five- and fifteen-minute marks of the broadcast, host Shankar Vedantam spoke with Harvard Business School psychologist Leslie John. They examined why admitting to our failures often yields better results than hiding them.

John described a common psychological phenomenon she calls the “disclosure hangover.” This is the sinking feeling of regret or anxiety that settles in the morning after you share a personal, embarrassing, or vulnerable story with colleagues. While many people worry that these moments destroy their professional image, John argues that these fears are often misplaced.

Research conducted by John indicates that calculated vulnerability can actually boost a leader’s standing. In one study involving a Google executive, the leader recorded a video introduction where he admitted he applied for roughly twenty jobs before landing his current role. Viewers trusted him more and expressed a greater willingness to work for him compared to when he hid this past failure.

The most significant finding from this experiment was that the executive’s perceived competence remained stable. Employees did not think he was less capable of doing his job simply because he struggled in the past. This evidence challenges the common belief that leaders must appear perfect to command respect.

The episode also highlighted the experience of Dr. Anna Lembke, a psychiatrist at Stanford University who treats addiction. Lembke publicly shared her own personal struggle with a compulsive habit of reading graphic romance novels. Despite her fears that this would ruin her reputation, the admission made her appear more confident and relatable to her audience.

Beyond social benefits, there is a biological reason humans feel the urge to share personal details. John cited research by scientist Diana Tamir showing that self-disclosure activates the brain’s reward centers. Talking about oneself generates a neurological response similar to the pleasure derived from eating good food.

This biological drive aligns with a deep psychological need to be truly understood by others. The discussion noted that individuals, particularly those with low self-esteem, feel more secure when partners see them accurately rather than through an overly positive lens. Being known for who you really are provides a profound sense of relief.

While society often warns against sharing “too much information,” John suggests we should worry more about sharing too little. Authentic self-expression acts as a powerful tool for building trust. By letting down their guard, professionals and partners alike can foster stronger connections.

Younger women find men with beards less attractive than older women do

13 February 2026 at 05:00

A new study published in Adaptive Human Behavior and Physiology suggests that a woman’s age and reproductive status may influence her preferences for male physical traits. The research indicates that postmenopausal women perceive certain masculine characteristics, such as body shape and facial features, differently than women who are still in their reproductive years. These findings offer evidence that biological shifts associated with menopause might alter the criteria women use to evaluate potential partners.

Scientists have recognized that physical features act as powerful biological signals in human communication. Secondary sexual characteristics are traits that appear during puberty and visually distinguish men from women. These include features such as broad shoulders, facial hair, jawline definition, and muscle mass.

Evolutionary psychology suggests that these traits serve as indicators of health and genetic quality. For instance, a muscular physique or a strong jawline often signals high testosterone levels and physical strength. Women of reproductive age typically prioritize these markers because they imply that a potential partner possesses “good genes” that could be passed to offspring.

However, researchers have historically focused most of their attention on the preferences of young women. Less is known about how these preferences might change as women age and lose their reproductive capability. The biological transition of menopause involves significant hormonal changes, including a decrease in estrogen levels.

This hormonal shift may correspond to a change in mating strategies. The “Grandmother Hypothesis” proposes that older women shift their focus from reproduction to investing in their existing family line. Consequently, they may no longer prioritize high-testosterone traits, which can be associated with aggression or short-term mating.

Instead, older women might prioritize traits that signal cooperation, reliability, and long-term companionship. To test this theory, a team of researchers from Poland designed a study to compare the preferences of women at different stages of life. The research team included Aurelia Starzyńska and Łukasz Pawelec from the Wroclaw University of Environmental and Life Sciences and the University of Warsaw, alongside Maja Pietras from Wroclaw Medical University and the University of Wroclaw.

The researchers recruited 122 Polish women to participate in an online survey. The participants ranged in age from 19 to 70 years old. Based on their survey responses regarding menstrual regularity and history, the researchers categorized the women into three groups.

The first group was premenopausal, consisting of women with regular reproductive functions. The second group was perimenopausal, including women experiencing the onset of menopausal symptoms and irregular cycles. The third group was postmenopausal, defined as women whose menstrual cycles had ceased for at least one year.

To assess preferences, the researchers created a specific set of visual stimuli. They started with photographs of a single 22-year-old male model. Using photo-editing applications, they digitally manipulated the images to create distinct variations in appearance.

The researchers modified the model’s face to appear either more feminized, intermediate, or heavily masculinized. They also altered the model’s facial hair to show a clean-shaven look, light stubble, or a full beard.

Body shape was another variable manipulated in the study. The scientists adjusted the hip-to-shoulder ratio to create three silhouette types: V-shaped, H-shaped, and A-shaped. Finally, they modified the model’s musculature to display non-muscular, moderately muscular, or strongly muscular builds.

Participants viewed these twelve modified images and rated them on a scale from one to ten. They evaluated the man in the photos based on three specific criteria. The first criterion was physical attractiveness.

The second and third criteria involved personality assessments. The women rated how aggressive they perceived the man to be. They also rated the man’s perceived level of social dominance.

The results showed that a woman’s reproductive status does influence her perception of attractiveness. One significant finding related to the shape of the male torso. Postmenopausal women rated the V-shaped body, which is typically characterized by broad shoulders and narrow hips, as less attractive than other shapes.

This contrasts with general evolutionary expectations where the V-shape is a classic indicator of male fitness. The data suggests that as women exit their reproductive years, the appeal of this strong biological signal may diminish.

Age also played a distinct role in how women viewed facial hair. The study found that older women rated men with medium to full beards as more attractive than younger women did, and this preference for beards grew stronger with the age of the participant.

The researchers suggest that beards might signal maturity and social status rather than just raw genetic fitness. Younger women in the study showed a lower preference for beards. This might occur because facial hair can mask other facial features that young women use to assess mate quality.

The study produced complex results regarding facial masculinity. Chronological age showed a slight positive association with finding feminized faces attractive. This aligns with the idea that older women might prefer “softer” features associated with cooperation.

However, when the researchers isolated the specific biological factor of menopause, the results shifted. Postmenopausal women rated feminized faces as less attractive than premenopausal women did. This indicates that chronological age and menopausal status do not push facial preferences in the same direction, so the relationship between aging and facial preference is not a simple linear one.

Perceptions of aggression also varied by group. Postmenopausal women rated men with medium muscularity as more aggressive than men with other body types. This association was not present in the younger groups.

The researchers propose that older women might view visible musculature as a signal of potential threat rather than protection. Younger women, who are more likely to seek a partner for reproduction, may view muscles as a positive sign of health and defense.

Interestingly, the study found no significant connection between the physical traits and perceived social dominance. Neither the age of the women nor their menopausal status affected how they rated a man’s dominance. This suggests that while attractiveness and aggression are linked to physical cues, dominance might be evaluated through other means not captured in static photos.

The study, like all research, has limitations. One issue involved the method used to find participants, known as snowball sampling. In this process, existing participants recruit future subjects from among their own acquaintances. This method may have resulted in a sample that is not fully representative of the general population.

Reliance on online surveys also introduces a technology bias. Older women who are less comfortable with the internet may have been excluded from the study. This could skew the results for the postmenopausal group.

Another limitation involved the stimuli used. The photographs were all based on a single 22-year-old male model. This young age might not be relevant or appealing to women in their 50s, 60s, or 70s. Postmenopausal women might naturally prefer older men, and evaluating a man in his early twenties could introduce an age-appropriateness bias. The researchers acknowledge that future studies should use models of various ages to ensure more accurate ratings.

Despite these limitations, the study provides evidence that biological changes in women influence social perception. The findings support the concept that mating psychology evolves across the lifespan. As the biological need for “good genes” fades, women appear to adjust their criteria for what makes a man attractive.

The study, “The Perception of Women of Different Ages of Men’s Physical attractiveness, Aggression and Social Dominance Based on Male Secondary Sexual Characteristics,” was authored by Aurelia Starzyńska, Maja Pietras, and Łukasz Pawelec.

Genetic risk for depression predicts financial struggles, but the cause isn’t what scientists thought

13 February 2026 at 05:00

A new study published in the Journal of Psychopathology and Clinical Science offers a nuanced look at how genetic risk for depression interacts with social and economic life circumstances to influence mental health over time. The findings indicate that while people with a higher genetic liability for depression often experience financial and educational challenges, these challenges may not be directly caused by the genetic risk itself.

Scientists conducted the study to better understand the developmental pathways that lead to depressive symptoms. A major theory in psychology, known as the bioecological model, proposes that genetic predispositions do not operate in a vacuum. Instead, this model suggests that a person’s genetic makeup might shape the environments they select or experience. For example, a genetic tendency toward low mood or low energy might make it harder for an individual to complete higher education or maintain steady employment.

If this theory holds true, those missed opportunities could lead to financial strain or a lack of social resources. These environmental stressors would then feed back into the person’s life, potentially worsening their mental health. The researchers aimed to test whether this specific chain of events is supported by data. They sought to determine if genetic risk for depression predicts changes in depressive symptoms specifically by influencing socioeconomic factors like wealth, debt, and education.

To investigate these questions, the researchers utilized data from two massive, long-term projects in the United States. The first dataset came from the National Longitudinal Study of Adolescent Health, also known as Add Health. This sample included 5,690 participants who provided DNA samples. The researchers tracked these individuals from adolescence, starting around age 16, into early adulthood, ending around age 29.

The second dataset served as a replication effort to see if the findings would hold up in a different group. This sample came from the Wisconsin Longitudinal Study, or WLS, which included 8,964 participants. Unlike the younger cohort in Add Health, the WLS participants were tracked across a decade in mid-to-late life, roughly from age 53 to 64. Using two different age groups allowed the scientists to see if these patterns persisted across the lifespan.

For both groups, the researchers calculated a “polygenic index” for each participant. This is a personalized score that summarizes thousands of tiny genetic variations across the entire genome that are statistically associated with depressive symptoms. A higher score indicates a higher genetic probability of experiencing depression. The researchers then measured four specific socioeconomic resources: educational attainment, total financial assets, total debt, and access to health insurance.
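
For readers curious how such a score is built, the sketch below shows the basic idea of a polygenic index as a weighted sum of risk-allele counts; the variant weights and allele counts are invented for illustration and do not come from the study.

```python
# A toy polygenic index: a weighted sum of risk-allele counts.
# Variant weights and allele counts are invented for illustration;
# real scores sum over thousands to millions of variants and are
# then standardized within the study sample.

import numpy as np

# Hypothetical per-variant weights from a depression genome-wide study
weights = np.array([0.02, -0.01, 0.015, 0.03])

# One person's count of risk alleles (0, 1, or 2) at each variant
allele_counts = np.array([1, 2, 0, 1])

polygenic_index = float(np.dot(weights, allele_counts))
print(f"Polygenic index: {polygenic_index:.3f}")
```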

In the initial phase of the analysis, the researchers looked at the population as a whole. This is called a “between-family” analysis because it compares unrelated individuals against one another. In the Add Health sample, they found that higher genetic risk for depression was indeed associated with increases in depressive symptoms over the 12-year period.

The data showed that this link was partially explained by the socioeconomic variables. Participants with higher genetic risk tended to have lower educational attainment, fewer assets, more debt, and more difficulty maintaining health insurance. These difficult life circumstances, in turn, were associated with rising levels of depression.

The researchers then repeated this between-family analysis in the older Wisconsin cohort. The results were largely consistent. Higher genetic risk predicted increases in depression symptoms over the decade. Once again, this association appeared to be mediated by the same social factors. Specifically, participants with higher genetic risk reported lower net worth and were more likely to have gone deeply into debt or experienced healthcare difficulties.

These results initially seemed to support the idea that depression genes cause real-world problems that then cause more depression. However, the researchers took a significant additional step to test for causality. They performed a “within-family” analysis using siblings included in the Wisconsin study.

Comparing siblings provides a much stricter test of cause and effect. Siblings share roughly 50 percent of their DNA and grow up in the same household, which controls for many environmental factors like parenting style and childhood socioeconomic status. If the genetic risk for depression truly causes a person to acquire more debt or achieve less education, the sibling with the higher polygenic score should have worse economic outcomes than the sibling with the lower score.
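
The logic of that test can be illustrated with a small simulation: regress the difference between siblings in an outcome on the difference in their polygenic scores, so that anything shared by the family drops out. The data below are simulated, with debt driven only by shared family factors to mirror the pattern the authors report.

```python
# Toy within-family comparison: regress the sibling difference in an
# outcome (here, debt) on the sibling difference in polygenic score.
# Anything shared by the family cancels out of the differences.
# Data are simulated so that debt depends only on shared family
# factors, mirroring the within-family null the authors report.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_families = 500

family_effect = rng.normal(size=n_families)    # shared upbringing, SES, etc.
pgs = rng.normal(size=(n_families, 2))         # each sibling's polygenic score
debt = family_effect[:, None] + rng.normal(size=(n_families, 2))

diff_pgs = pgs[:, 0] - pgs[:, 1]
diff_debt = debt[:, 0] - debt[:, 1]

model = sm.OLS(diff_debt, sm.add_constant(diff_pgs)).fit()
print(model.params)   # slope near zero: no within-family link to debt
```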

When the researchers applied this sibling-comparison model, the findings changed. Within families, the sibling with higher genetic risk did report more depressive symptoms. This confirms that the genetic score is picking up on a real biological vulnerability. However, the link between the depression genetic score and the socioeconomic factors largely disappeared.

The sibling with higher genetic risk for depression was not significantly more likely to have lower education, less wealth, or more debt than their co-sibling. This lack of association in the sibling model suggests that the genetic risk for depression does not directly cause these negative socioeconomic outcomes. Instead, the correlation seen in the general population is likely due to other shared factors.

One potential explanation for the discrepancy involves a concept called pleiotropy, where the same genes influence multiple traits. The researchers conducted sensitivity analyses that accounted for genetic scores related to educational attainment. They found that once they controlled for the genetics of education, the apparent link between depression genes and socioeconomic status vanished.

This suggests that the same genetic variations that influence how far someone goes in school might also be correlated with depression risk. It implies that low education or financial struggle is not necessarily a downstream consequence of depression risk, but rather that both depression and socioeconomic struggles may share common genetic roots or be influenced by broader family environments.

The study has some limitations. Both datasets consisted almost entirely of individuals of European ancestry. This lack of diversity means the results may not apply to people of other racial or ethnic backgrounds. Additionally, the measures of debt and insurance were limited to the questions available in these pre-existing surveys. They may not have captured the full nuance of financial stress.

Furthermore, while sibling models help rule out family-wide environmental factors, they cannot account for every unique experience a person has. Future research is needed to explore how these genetic risks interact with specific life events, such as trauma or job loss, which were not the primary focus of this investigation. The researchers also note that debt and medical insurance difficulties are understudied in this field and deserve more detailed attention in future work.

The study, “Genotypic and Socioeconomic Risks for Depressive Symptoms in Two U.S. Cohorts Spanning Early to Older Adulthood,” was authored by David A. Sbarra, Sam Trejo, K. Paige Harden, Jeffrey C. Oliver, and Yann C. Klimentidis.

The biology of bonding: Andrew Huberman explains attachment and desire

13 February 2026 at 04:17

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

In a recent episode of the Huberman Lab podcast, released on Thursday, February 12, Dr. Andrew Huberman explores the biological and psychological roots of human connection. The episode, titled “Essentials: The Science of Love, Desire & Attachment,” examines how early life experiences and specific brain functions create the feelings of romance. Huberman breaks down the complex science behind why humans bond with certain people and how relationships either succeed or fail over time.

During the first five minutes of the broadcast, Huberman explains that adult romantic styles often mirror the emotional bond a person had with their caregivers as a toddler. He references the famous “Strange Situation Task” developed by psychologist Mary Ainsworth in the 1970s. In this experiment, researchers observed how children reacted when their parents left a room and subsequently returned.

Based on these reactions, researchers categorized children into groups such as securely attached or anxious-avoidant. Huberman notes that these early classifications are strong predictors of how individuals will behave in romantic partnerships later in life. However, he emphasizes that these emotional templates are not permanent and can change once a person understands them.

The discussion moves beyond psychology to look at the physical brain. Huberman clarifies that there is no single area in the brain responsible for creating the feeling of love. Instead, multiple brain regions work together in a coordinated sequence to produce the states of desire and attachment.

Around the ten-minute mark, the host details the specific chemical and electrical systems involved in bonding. He corrects a common misconception about dopamine, explaining that it is primarily a chemical for motivation and craving rather than just pleasure. This chemical acts as a currency in the brain that drives the pursuit of a partner.

A major component of connection is the neural circuit for empathy, which involves the prefrontal cortex and the insula. The insula is a region of the brain that helps people sense their own internal body state, a process known as interoception. This area allows individuals to pay attention to their own feelings while simultaneously reading the emotions of others.

Huberman introduces the concept of “positive delusion” as a requirement for long-term stability. This describes a mental state where a person believes that only their specific partner can make them feel a certain way. This unique biological bias helps maintain the bond between two people over time.

Huberman reviews research from the Gottman Lab at the University of Washington regarding relationship breakdown. The researchers identified four negative behaviors that predict failure: criticism, defensiveness, stonewalling, and contempt. Stonewalling occurs when a listener withdraws from an interaction and stops responding to their partner.

Among these negative behaviors, contempt is identified as the most destructive force in a partnership. Huberman cites the researchers who describe contempt as the “sulfuric acid” of a relationship because it erodes the emotional bond. This hostility completely shuts down the empathy circuits required for connection.

Evening screen use may be more relaxing than stimulating for teenagers

13 February 2026 at 03:00

A recent study published in the Journal of Sleep Research suggests that evening screen use might not be as physically stimulating for teenagers as many parents and experts have assumed. The findings provide evidence that most digital activities actually coincide with lower heart rates compared to non-screen activities like moving around the house or playing. This indicates that the common connection between screens and poor sleep is likely driven by the timing of device use rather than a state of high physical arousal.

Adolescence is a time when establishing healthy sleep patterns is essential for mental health and growth, yet many young people fall short of the recommended eight to ten hours of sleep. While screen use has been linked to shorter sleep times, the specific reasons why this happens are not yet fully understood.

Existing research has looked at several possibilities, such as the light from screens affecting hormones or the simple fact that screens take up time that could be spent sleeping. Some experts have also worried that the excitement from social media or gaming could keep the body in an active state that prevents relaxation. The new study was designed to investigate the physical arousal theory by looking at heart rate in real-world settings rather than in a laboratory.

“In our previous research, we found that screen use in bed was linked with shorter sleep, largely because teens were falling asleep later. But that left an open question: were screens simply delaying bedtime, or were they physiologically stimulating adolescents in a way that made it harder to fall asleep?” said study author Kim Meredith-Jones, a research associate professor at the University of Otago.

“In this study, we wanted to test whether evening screen use actually increased heart rate — a marker of physiological arousal — and whether that arousal explained delays in falling asleep. In other words, is it what teens are doing on screens that matters, or just the fact that screens are replacing sleep time?”

By using objective tools to track both what teens do on their screens and how their hearts respond, the team hoped to fill gaps in existing knowledge. They aimed to see if different types of digital content, such as texting versus scrolling, had different effects on the heart. Understanding these connections is important for creating better guidelines for digital health in young people.

The research team recruited a group of 70 adolescents from Dunedin, New Zealand, who were between 11 and nearly 15 years old. This sample was designed to be diverse, featuring 31 girls and 39 boys from various backgrounds. Approximately 33 percent of the participants identified as indigenous Māori, while others came from Pacific, Asian, or European backgrounds.

To capture a detailed look at their evening habits, the researchers used a combination of wearable technology and video recordings over four different nights. Each participant wore a high-resolution camera attached to a chest harness starting three hours before their usual bedtime. This camera recorded exactly what they were doing and what screens they were viewing until they entered their beds.

Once the participants were in bed, a stationary camera continued to record their activities until they fell asleep. This allowed the researchers to see if they used devices while under the covers and exactly when they closed their eyes. The video data was then analyzed by trained coders who categorized screen use into ten specific behaviors, such as watching videos, gaming, or using social media.

The researchers also categorized activities as either passive or interactive. Passive activities included watching, listening, reading, or browsing, while interactive activities included gaming, communication, and multitasking. Social media use was analyzed separately to see its specific impact on heart rate compared to other activities.

At the same time, the participants wore a Fitbit Inspire 2 on their dominant wrist to track their heart rate every few seconds. The researchers used this information to see how the heart reacted to each specific screen activity in real time. This objective measurement provided a more accurate picture than asking the teenagers to remember how they felt or what they did.

To measure sleep quality and duration, each youth also wore a motion-sensing device on their other wrist for seven consecutive days. This tool, known as an accelerometer, provided data on when they actually fell asleep and how many times they woke up. The researchers then used statistical models to see if heart rate patterns during screen time could predict these sleep outcomes.

The data revealed that heart rates were consistently higher during periods when the teenagers were not using screens. The average heart rate during non-screen activities was approximately 93 beats per minute, which likely reflects the physical effort of moving around or doing chores. In contrast, when the participants were using their devices, their average heart rate dropped to about 83 beats per minute.

This suggests that screen use is often a sedentary behavior that allows the body to stay relatively calm. When the participants were in bed, the difference was less extreme, but screen use still tended to accompany lower heart rates than other in-bed activities. These findings indicate that digital engagement may function as a way for teenagers to wind down after a long day.

The researchers also looked at how specific types of digital content affected the heart. Social media use was associated with the lowest heart rates, especially when the teenagers were already in bed. Gaming and multitasking between different apps also showed lower heart rate readings compared to other screen-based tasks.

“We were surprised to find that heart rates were lower during social media use,” Meredith-Jones told PsyPost. “Previous research has suggested that social media can be stressful or emotionally intense for adolescents, so we expected to see higher arousal. Instead, our findings suggest that in this context, teens may have been using social media as a way to unwind or switch off. That said, how we define and measure ‘social media use’ matters, and we’re now working on more refined ways to capture the context and type of engagement.”

On the other hand, activities involving communication, such as texting or messaging, were linked to higher heart rates. This type of interaction seemed to be less conducive to relaxation than scrolling through feeds or watching videos. Even so, the heart rate differences between these various digital activities were relatively small.

When examining sleep patterns, the researchers found that heart rate earlier in the evening had a different relationship with sleep than heart rate closer to bedtime. Higher heart rates occurring more than two hours before bed were linked to falling asleep earlier in the night. This may be because higher activity levels in the early evening help the body build up a need for rest.

However, the heart rate in the two hours before bed and while in bed had the opposite effect on falling asleep. For every increase of 10 beats per minute during this window, the participants took about nine minutes longer to drift off. This provides evidence that physical excitement right before bed can delay the start of sleep.
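
As a rough back-of-the-envelope check, the reported slope works out to a little under a minute of added sleep-onset delay per beat per minute, as in the sketch below; the figures come from the article rather than any re-analysis.

```python
# Back-of-the-envelope check of the reported relationship: roughly
# nine extra minutes of sleep-onset delay per 10 bpm of pre-bed heart
# rate. Figures are taken from the article, not re-estimated here.

delay_minutes_per_bpm = 9 / 10   # about 0.9 minutes per beat per minute

# Largest heart-rate gap observed between screen activities (~10 bpm)
print(10 * delay_minutes_per_bpm)   # about 9 minutes

# Increase needed to push sleep onset back by roughly half an hour
print(30 * delay_minutes_per_bpm)   # about 27 minutes
```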

Notably, while a higher heart rate made it harder to fall asleep, it did not seem to reduce the total amount of sleep the teenagers got. It also did not affect how often they woke up during the night or the general quality of their rest. The researchers noted that a person would likely need a very large increase in heart rate to see a major impact on their sleep schedule.

“The effects were relatively small,” Meredith-Jones explained. “For example, our data suggest heart rate would need to increase by around 30 beats per minute to delay sleep onset by about 30 minutes. The largest differences we observed between screen activities were closer to 10 beats per minute, making it unlikely that typical screen use would meaningfully delay sleep through physiological arousal alone.”

“The key takeaway is that most screen use in the evening did not increase heart rate. In fact, many types of screen activity were associated with lower heart rates compared to non-screen time. Although higher heart rate before bed was linked with taking longer to fall asleep, the changes in heart rate we observed during screen use were generally small. Overall, most evening screen activities appeared more relaxing than arousing.”

One limitation of this study is that the researchers did not have a baseline heart rate for each participant while they were completely at rest. Without this information, it is difficult to say for certain if screens were actively lowering the heart rate or if the teens were just naturally calm. Individual differences in biology could account for some of the variations seen in the data.

“One strength of this study was our use of wearable cameras to objectively classify screen behaviours such as gaming, social media, and communication,” Meredith-Jones noted. “This approach provides much richer and more accurate data than self-report questionnaires or simple screen-time analytics. However, a limitation is that we did not measure each participant’s true resting heart rate, so we can’t definitively say whether higher heart rates reflected arousal above baseline or just individual differences. That’s an important area for refinement in future research.”

It is also important to note that the findings don’t imply that screens are always helpful for sleep. Even if they are not physically arousing, using a device late at night can still lead to sleep displacement. This happens when the time spent on a screen replaces time that would otherwise be spent sleeping, leading to tiredness the next day. On the other hand, one shouldn’t assume that screens always impede sleep, either.

“A common assumption is that all screen use is inherently harmful for sleep,” Meredith-Jones explained. “Our findings don’t support that blanket statement. In earlier work, we found that screen use in bed was associated with shorter sleep duration, but in this study, most screen use was not physiologically stimulating. That suggests timing and context matter, and that some forms of screen use may even serve as a wind-down activity before bed.”

Looking ahead, “we want to better distinguish between different types of screen use, for example, interactive versus passive engagement, or emotionally charged versus neutral communication,” Meredith-Jones said. “We’re also developing improved real-world measurement tools that can capture not just how long teens use screens, but what they’re doing, how they’re engaging, and in what context. That level of detail is likely to give us much clearer answers than simple ‘screen time’ totals.”

The study, “Screens, Teens, and Sleep: Is the Impact of Nighttime Screen Use on Sleep Driven by Physiological Arousal?” was authored by Kim A. Meredith-Jones, Jillian J. Haszard, Barbara C. Galland, Shay-Ruby Wickham, Bradley J. Brosnan, Takiwai Russell-Camp, and Rachael W. Taylor.

Can brain stimulation treat psychopathy?

13 February 2026 at 01:00

Scientists exploring new ways to address psychopathic traits have found that gentle electrical or magnetic stimulation of the brain may slightly improve empathy and prosocial behavior. A new study published in Progress in Neuro-Psychopharmacology and Biological Psychiatry suggests the technology shows promise—but there is currently no direct evidence it works in people with psychopathy.

Psychopathy is often associated with persistent antisocial behavior and emotional differences, such as reduced empathy, guilt, and concern for others. Traditional treatments, including therapy programs and anger-management courses, have had limited success in changing these core emotional traits.

This has led researchers to explore whether differences in brain activity might help explain psychopathy, and whether targeting the brain directly could offer new treatment possibilities.

Brain imaging studies have shown that people with psychopathic traits often have unusual activity in regions linked to emotion and decision-making. These include areas involved in recognizing fear, responding to others’ pain, and regulating behavior.

Scientists have therefore begun testing non-invasive brain stimulation, which utilizes magnets or weak electrical currents applied to the scalp, to see whether altering brain activity can influence emotional responses.

Led by Célia F. Camara from the University of Essex in the U.K., the research team behind the new study wanted to know whether these brain-stimulation techniques could change traits related to psychopathy.

Camara and colleagues conducted a large review and statistical analysis of 64 experiments involving 122 measured effects. The studies examined several forms of stimulation, including transcranial magnetic stimulation and transcranial direct current stimulation, and compared them with sham (placebo-like) conditions.

Most experiments were conducted with healthy adult volunteers rather than people diagnosed with psychopathy. Participants completed tasks or questionnaires measuring empathy, emotional reactions, or prosocial behavior before and after brain stimulation. The researchers then combined results across studies to see whether any consistent patterns emerged.
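
As a simplified picture of how such pooling works, the sketch below combines invented effect sizes with inverse-variance weights; the published meta-analysis was more elaborate and also accounted for differences between studies.

```python
# Toy fixed-effect pooling: each study's effect size is weighted by
# the inverse of its variance. Effect sizes and standard errors are
# invented; the published analysis also modeled between-study
# heterogeneity (a random-effects approach).

import numpy as np

effect_sizes = np.array([0.35, 0.10, 0.55, -0.05, 0.25])   # e.g., standardized mean differences
standard_errors = np.array([0.15, 0.20, 0.25, 0.18, 0.12])

weights = 1 / standard_errors**2
pooled = np.sum(weights * effect_sizes) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"Pooled effect: {pooled:.2f} (SE {pooled_se:.2f})")
```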

The findings demonstrated that certain types of “excitatory” brain stimulation—designed to increase activity in targeted brain regions—produced small to moderate improvements in social and emotional responses. In some cases, participants reported greater empathy, increased willingness to help others, or increased feelings of guilt. Other types of stimulation that dampen brain activity sometimes reduced these responses.

Overall, the analysis suggests that non-invasive brain stimulation can influence emotional and social processing in ways that are relevant to psychopathic traits. However, the results were mixed and varied widely depending on the type of stimulation, the brain area targeted, and how many sessions participants received.

The researchers noted that while the findings provide early proof that emotional traits can be influenced by brain stimulation, the technology is far from being a practical treatment. Notably, the review found that the only available study conducted specifically on psychopathic individuals reported null effects.

“The generalizability of our findings is limited by insufficient research on psychopathy-relevant samples. Responses to non-invasive brain stimulation in individuals with psychopathy may differ from those of non-psychopathic populations, as evidence indicates that individuals with psychopathy exhibit distinct neurobiological profiles compared with non-psychopathic cohorts,” Camara and colleagues cautioned.

Nevertheless, the results open the door to new ways of understanding and potentially addressing the emotional aspects of psychopathy.

The study, “On the possibility to modulate psychopathic traits via non-invasive brain stimulation: A systematic review and meta-analysis,” was authored by Célia F. Camara, Carmen S. Sergiou, Andrés Molero Chamizo, Alejandra Sel, Nathzidy G. Rivera Urbina, Michael A. Nitsche, and Paul H.P. Hanel.

Childhood trauma and genetics drive alcoholism at different life stages

12 February 2026 at 23:00

New research suggests that the path to alcohol dependence may differ depending on when the condition begins. A study published in Drug and Alcohol Dependence identifies distinct roles for genetic variations and childhood experiences in the development of Alcohol Use Disorder (AUD). The findings indicate that severe early-life trauma accelerates the onset of the disease, whereas specific genetic factors are more closely linked to alcoholism that develops later in adulthood. This separation of causes provides a more nuanced view of a condition that affects millions of people globally.

Alcohol Use Disorder is a chronic medical condition characterized by an inability to stop or control alcohol use despite adverse consequences. Researchers understand that the risk of developing this condition stems from a combination of biological and environmental factors. Genetic predisposition accounts for approximately half of the risk. The remaining risk comes from life experiences, particularly those occurring during formative years. However, the specific ways these factors interact have remained a subject of debate.

One specific gene of interest produces a protein called Brain-Derived Neurotrophic Factor, or BDNF. This protein acts much like a fertilizer for the brain. It supports the survival of existing neurons and encourages the growth of new connections and synapses. This process is essential for neuroplasticity, which is the brain’s ability to reorganize itself by forming new neural connections.

Variations in the BDNF gene can alter how the brain adapts to stress and foreign substances. Because alcohol consumption changes the brain’s structure, the gene that regulates brain plasticity is a prime suspect in the search for biological causes of addiction.

Yi-Wei Yeh and San-Yuan Huang, researchers from the Tri-Service General Hospital and National Defense Medical University in Taiwan, led the investigation. They aimed to untangle how BDNF gene variants, childhood trauma, and family dysfunction contribute to alcoholism. They specifically wanted to determine if these factors worked alone or if they amplified each other. For example, they sought to answer whether a person with a specific genetic variant would be more susceptible to the damaging effects of a difficult childhood.

The team recruited 1,085 participants from the Han Chinese population in Taiwan. After excluding individuals with incomplete data or DNA issues, the final analysis compared 518 patients diagnosed with Alcohol Use Disorder against 548 healthy control subjects.

The researchers categorized the patients based on when their drinking became a disorder. They defined early-onset as occurring at or before age 25 and late-onset as occurring after age 25. This distinction allowed them to see if different drivers were behind the addiction at different life stages.

To analyze the biological factors, the researchers collected blood samples from all participants. They extracted DNA to examine four distinct locations on the BDNF gene. These specific locations are known as single-nucleotide polymorphisms. They represent single-letter changes in the genetic code that can alter how the gene functions. The team looked for patterns in these variations to see if any were more common in the group with alcoholism.
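
The basic shape of that comparison can be pictured as a case-control frequency test, as in the sketch below; the carrier counts are invented, and the authors' actual haplotype analysis was more sophisticated.

```python
# Schematic case-control comparison: does a variant (or haplotype)
# appear more often in patients than in controls? Carrier counts are
# invented; the study's actual haplotype analysis was more involved.

from scipy.stats import chi2_contingency

#            carriers  non-carriers
table = [[210, 308],   # patients with Alcohol Use Disorder (n = 518)
         [180, 368]]   # healthy controls (n = 548)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")
```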

Participants also completed detailed psychological assessments. The Childhood Trauma Questionnaire asked about physical, emotional, and sexual abuse, as well as physical and emotional neglect. A second survey measured Adverse Childhood Experiences (ACEs), which covers a broader range of household challenges such as divorce or incarcerated family members. A third tool, the Family APGAR, assessed how well the participants’ families functioned in terms of emotional support, communication, and adaptability.

The genetic analysis revealed a specific pattern of DNA variations associated with the disorder. This pattern, known as a haplotype, appeared more frequently in patients with Alcohol Use Disorder. A deeper look at the data showed that this genetic link was specific to late-onset alcoholism. This category includes individuals who developed the condition after the age of 25. This was a somewhat unexpected finding, as earlier research has often linked strong genetic factors to early-onset disease. The authors suggest that genetic influences on brain plasticity might become more pronounced as the brain ages.

The results regarding childhood experiences painted a different picture. Patients with Alcohol Use Disorder reported much higher rates of childhood trauma compared to the healthy control group. This included higher scores for physical abuse, emotional abuse, and neglect. The study found a clear mathematical relationship between trauma and age. The more severe the childhood trauma, the younger the patient was when they developed a dependency on alcohol. This supports the theory that some individuals use alcohol to self-medicate the emotional pain of early abuse.

The impact of Adverse Childhood Experiences (ACEs) was particularly stark. The data showed a compounding risk. Individuals with one or more adverse experiences were roughly 3.5 times more likely to develop the disorder than those with none. For individuals with two or more adverse experiences, the likelihood skyrocketed. They were 48 times more likely to develop Alcohol Use Disorder. This suggests that there may be a tipping point where the cumulative burden of stress overwhelms a young person’s coping mechanisms.

The researchers uncovered distinct differences between men and women regarding trauma. Men with the disorder reported higher rates of physical abuse in childhood compared to female patients. Women with the disorder reported higher rates of sexual abuse compared to males. The data suggested that for women, a history of sexual abuse was associated with developing alcoholism seven to ten years earlier than those without such history. This highlights a critical need for gender-specific approaches when addressing trauma in addiction treatment.

Family environment played a major role across the board. Patients with the disorder consistently reported lower family functioning compared to healthy individuals. This dysfunction was present regardless of whether the alcoholism started early or late in life. It appears that a lack of family support is a general risk factor rather than a specific trigger for a certain type of the disease. A supportive family acts as a buffer against stress. When that buffer is missing, the risk of maladaptive coping strategies increases.

The team tested the hypothesis that trauma might change how the BDNF gene affects a person. The analysis did not support this idea. The genetic risks and the environmental risks appeared to operate independently of one another. The gene variants did not amplify the effects of the trauma, and the trauma did not appear to switch the genetic risk on. This suggests that while both factors lead to the same outcome, they may travel along parallel biological pathways to get there.

There are limitations to this study that affect how the results should be interpreted. The participants were all Han Chinese, so the genetic findings might not apply to other ethnic populations. Genetic variations often differ by ancestry, and what is true for one group may not hold for another.

The study also relied on adults remembering their childhoods. This retrospective approach can introduce errors, as memory is not always a perfect record of the past. Additionally, the number of female participants was relatively small compared to males, which mirrors the prevalence of the disorder but limits statistical power for that subgroup.

The study also noted high rates of nicotine use among the alcohol-dependent group. Approximately 85 percent of the patients used nicotine. Since smoking can also affect brain biology, it adds another layer of complexity to the genetic analysis. The researchers attempted to control for this, but it remains a variable to consider.

Despite these caveats, the research offers a valuable perspective for clinicians. It suggests that patients who develop alcoholism early in life are likely driven by environmental trauma. Treatment for these individuals might prioritize trauma-informed therapy and psychological processing of past events. In contrast, patients who develop the disorder later in life might be grappling with a genetic vulnerability that becomes relevant as the brain ages. This could point toward different biological targets for medication or different behavioral strategies.

The authors recommend that future research should focus on replicating these findings in larger and more diverse groups. They also suggest using brain imaging technologies. Seeing how these gene variants affect the physical structure of the brain could explain why they predispose older adults to addiction.

Understanding the distinct mechanisms of early versus late-onset alcoholism is a step toward personalized medicine in psychiatry. By identifying whether a patient is fighting a genetic predisposition or the ghosts of a traumatic past, doctors may eventually be able to tailor treatments that address the root cause of the addiction.

The study, “Childhood trauma, family functioning, and the BDNF gene may affect the development of alcohol use disorder,” was authored by Yi-Wei Yeh, Catherine Shin Huey Chen, Shin-Chang Kuo, Chun-Yen Chen, Yu-Chieh Huang, Jyun-Teng Huang, You-Ping Yang, Jhih-Syuan Huang, Kuo-Hsing Ma, and San-Yuan Huang.

A key personality trait is linked to the urge to cheat in unhappy men

12 February 2026 at 21:00

A study in Sexual and Relationship Therapy found that men are more open to casual sex and infidelity than women. The research also highlights a strong link between relationship dissatisfaction, the desire for uncommitted sex, and the intention to cheat.

Infidelity has long been defined as a violation of promises and commitments within a romantic relationship, reflecting a failure to uphold expectations of love, loyalty, and support. However, modern views conceptualize infidelity as physical, sexual, or emotional behaviors that violate relationship norms and cause distress and negative relationship outcomes. Exactly which behaviors constitute infidelity varies across couples, as norms regarding emotional and sexual exclusivity differ between relationships.

The most common forms of infidelity are sexual and emotional infidelity. Sexual infidelity usually involves physical sexual behaviors with someone other than one’s partner. Emotional infidelity consists of forming intimate emotional bonds with a person other than the partner that breach relationship rules agreed upon by the couple. Research indicates that sexual and emotional infidelity often co-occur; they are, most often, not independent phenomena.

A key psychological characteristic linked to infidelity is sociosexuality, a person's openness to casual sex without commitment. Individuals higher in sociosexuality are more likely to engage in both sexual and emotional infidelity, as their attitudes and desires may conflict with monogamous relationship norms.

Study author Paula Pricope and her colleagues wanted to investigate whether sociosexuality plays a mediating role in the relationship between relationship satisfaction and intentions towards infidelity. They also wanted to know whether these associations are the same in men and women. The authors hypothesized that men would be more inclined to engage in infidelity compared to women and that their sociosexuality would be higher (i.e., they would be more open to casual sex).

The study participants were 246 volunteers from Romania with an average age of 24 years. Sixty-one percent of participants were women, 72 percent were in a non-marital romantic relationship while 28 percent were married, and 68 percent came from urban areas of Romania.

Participants completed assessments of intentions towards infidelity (the Intentions Towards Infidelity Scale), relationship satisfaction (the Relationship Assessment Scale), and sociosexuality (the Sociosexual Orientation Inventory – Revised).

Results showed that individuals reporting stronger intentions towards infidelity tended to have higher sociosexuality and be less satisfied with their relationships. In other words, individuals more willing to cheat on their partners tended to be more open to uncommitted sex and less satisfied with their relationships. Men tended to report higher sociosexuality and higher intentions towards infidelity than women.

The authors tested a statistical model proposing that lower relationship satisfaction leads to higher sociosexuality, which, in turn, increases intentions to cheat. The results indicated that this pathway was significant specifically for men. For male participants, lower relationship satisfaction was linked to higher sociosexuality, which then predicted higher intentions to cheat. However, this mediation pathway was not significant for women.
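
The structure of that test can be sketched as a pair of regressions whose paths multiply into an indirect effect; the example below uses simulated data and is a generic mediation sketch, not the authors' moderated mediation model.

```python
# Generic mediation sketch on simulated data: path a (satisfaction ->
# sociosexuality) times path b (sociosexuality -> infidelity intentions,
# controlling for satisfaction) gives the indirect effect. This is not
# the authors' moderated mediation model; numbers are made up.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300

satisfaction = rng.normal(size=n)
sociosexuality = -0.4 * satisfaction + rng.normal(size=n)
intentions = 0.5 * sociosexuality - 0.1 * satisfaction + rng.normal(size=n)

a = sm.OLS(sociosexuality, sm.add_constant(satisfaction)).fit().params[1]
b = sm.OLS(intentions, sm.add_constant(
        np.column_stack([sociosexuality, satisfaction]))).fit().params[1]

print(f"Indirect effect (a * b): {a * b:.2f}")
```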

The study contributes to the scientific understanding of infidelity. However, all study data came from self-reports, leaving room for reporting bias to have affected the results. Additionally, the design of the study does not allow for causal inferences.

While it is indeed possible that lower relationship satisfaction leads to increased sociosexuality and infidelity intentions, it is also possible that higher sociosexuality and infidelity intentions reduce relationship satisfaction or make it harder for a person to be satisfied with a committed relationship. Other possibilities also remain open.

The paper, “The roles of sociosexuality and gender in the relationship between relationship satisfaction and intentions toward infidelity: a moderated mediation model,” was authored by Paula Pricope, Tudor-Daniel Huțul, Adina Karner-Huțuleac, and Andreea Huțul.

Methamphetamine increases motivation through brain processes separate from euphoria

12 February 2026 at 19:00

A study published in the journal Psychopharmacology has found that the increase in motivation people experience from methamphetamine is separate from the drug’s ability to produce a euphoric high. The findings suggest that these two common effects of stimulant drugs likely involve different underlying biological processes in the brain. This research indicates that a person might become more willing to work hard without necessarily feeling a greater sense of pleasure or well-being.

The researchers conducted the new study to clarify how stimulants affect human motivation and personal feelings. They intended to understand if the pleasurable high people experience while taking these drugs is the primary reason they become more willing to work for rewards. By separating these effects, the team aimed to gain insight into how drugs could potentially be used to treat motivation-related issues without causing addictive euphoria.

Another reason for the study was to investigate how individual differences in personality or brain chemistry change how a person responds to a stimulant. Scientists wanted to see if people who are naturally less motivated benefit more from these drugs than those who are already highly driven. The team also sought to determine if the drug makes tasks feel easier or if it simply makes the final reward seem more attractive to the user.

“Stimulant drugs like amphetamine are thought to produce ‘rewarding’ effects that contribute to abuse or dependence, by increasing levels of the neurotransmitter dopamine. Findings from animal models suggest that stimulant drugs, perhaps because of their effects on dopamine, increase motivation, or the animals’ willingness to exert effort,” explained study author Harriet de Wit, a professor at the University of Chicago.

“Findings from human studies suggest that stimulant drugs lead to repeated use because they produce subjective feelings of wellbeing. In the present study, we tested the effects of amphetamine in healthy volunteers, on both an effort task and self-reported euphoria.”

For their study, the researchers recruited a group of 96 healthy adults from the Chicago area. This group consisted of 48 men and 48 women between the ages of 18 and 35. Each volunteer underwent a rigorous screening process that included a physical exam, a heart health check, and a psychiatric interview to ensure they were healthy.

The study used a double-blind, placebo-controlled design to ensure the results were accurate and unbiased. This means that neither the participants nor the staff knew if a volunteer received the actual drug or an inactive pill on a given day. The participants attended two separate laboratory sessions where they received either 20 milligrams of methamphetamine or a placebo.

During these sessions, the participants completed a specific exercise called the Effort Expenditure for Rewards Task. This task required them to choose between an easy option for a small amount of money or a more difficult option for a larger reward. The researchers used this to measure how much physical effort a person was willing to put in to get a better payoff.

The easy task involved pressing a specific key on a keyboard 30 times with the index finger of the dominant hand within seven seconds. Successfully completing this task always resulted in a small reward of one dollar. This served as a baseline for the minimum amount of effort a person was willing to expend for a guaranteed but small gain.

The hard task required participants to press a different key 100 times using the pinky finger of their non-dominant hand within 21 seconds. The rewards for this more difficult task varied from about one dollar and 24 cents to over four dollars. This task was designed to be physically taxing and required a higher level of commitment to complete.

Before making their choice on each trial, participants were informed of the probability that they would actually receive the money if they finished the task. These probabilities were set at 12 percent, 50 percent, or 88 percent. This added a layer of risk to the decision, as a person might work hard for a reward but still receive nothing if the odds were not in their favor.

Throughout the four-hour sessions, the researchers measured the participants’ personal feelings and physical reactions at regular intervals. They used standardized questionnaires to track how much the participants liked the effects of the drug and how much euphoria they felt. They also monitored physical signs such as heart rate and blood pressure to ensure the safety of the volunteers.

Before the main sessions, the participants completed the task during an orientation to establish their natural effort levels. The researchers then divided the group in half based on these baseline scores. This allowed the team to compare people who were naturally inclined to work hard against those who were naturally less likely to choose the difficult task.

The results showed that methamphetamine increased the frequency with which people chose the hard task over the easy one across the whole group. This effect was most visible when the chances of winning the reward were in the low to medium range. The drug seemed to give participants a boost in motivation when the outcome was somewhat uncertain.

The data provides evidence that the drug had a much stronger impact on people who were naturally less motivated. Participants in the low baseline group showed a significantly larger increase in their willingness to choose the hard task compared to those in the high baseline group. For people who were already high achievers, the drug did not seem to provide much of an additional motivational boost.

To understand why the drug changed behavior, the researchers used a mathematical model to analyze the decision-making process. This model helped the team separate how much a person cares about the difficulty of a task from how much they value the reward itself. It provided a more detailed look at the internal trade-offs people make when deciding to work.
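
One common way such effort-based decisions are modeled, and a reasonable guess at the spirit of the analysis here, is to discount each reward by a weighted effort cost and pass the value difference through a softmax choice rule; the parameters and numbers below are illustrative, not the study's fitted estimates.

```python
# One common way effort-based choices are modeled: each option's value
# is the win probability times the reward minus a weighted effort cost,
# and a softmax turns the value difference into a choice probability.
# Parameters are illustrative, not the study's fitted estimates.

import numpy as np

def p_choose_hard(reward, probability, effort_cost, effort_sensitivity, temperature=1.0):
    value_hard = probability * reward - effort_sensitivity * effort_cost
    value_easy = probability * 1.00   # easy option pays $1; its effort is the baseline
    return 1 / (1 + np.exp(-(value_hard - value_easy) / temperature))

# Lowering effort sensitivity (the change attributed to the drug) makes
# choosing the hard task more likely at the same reward and probability.
print(p_choose_hard(reward=3.0, probability=0.5, effort_cost=2.0, effort_sensitivity=1.0))  # ~0.27
print(p_choose_hard(reward=3.0, probability=0.5, effort_cost=2.0, effort_sensitivity=0.5))  # ~0.50
```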

The model showed that methamphetamine specifically reduced a person’s sensitivity to the physical cost of effort. This suggests that the drug makes hard work feel less unpleasant or demanding than it normally would. Instead of making the reward seem more exciting, the drug appears to make the work itself feel less like a burden.

This change in effort sensitivity was primarily found in the participants who started with low motivation levels. For these individuals, the drug appeared to lower the mental or physical barriers that usually made them avoid the difficult option. In contrast, the drug did not significantly change the effort sensitivity of those who were already highly motivated.

Methamphetamine did not change how sensitive people were to the probability of winning the reward. This indicates that the drug affects the drive to work rather than changing how people calculate risks or perceive the odds of success. The volunteers still understood the chances of winning, but they were more willing to try anyway despite the difficulty.

As the researchers expected, the drug increased feelings of happiness and euphoria in the participants. It also caused the usual physical changes associated with stimulants, such as an increase in heart rate and blood pressure. Most participants reported that they liked the effects of the drug while they were performing the tasks.

A major finding of the study is that the boost in mood was not related to the boost in productivity. The participants who felt the highest levels of euphoria were not the same people who showed the greatest increase in hard task choices. “This suggests that different receptor actions of amphetamine mediate willingness to exert effort and feelings of wellbeing,” de Wit explained.

There was no statistical correlation between how much a person liked the drug and how much more effort they were willing to exert. This provides evidence that the brain processes that create pleasure from stimulants are distinct from those that drive motivated behavior. A person can experience the motivational benefits of a stimulant without necessarily feeling the intense pleasure that often leads to drug misuse.

The findings highlight that “drugs have numerous behavioral and cognitive actions, which may be mediated by different neurotransmitter actions,” de Wit told PsyPost. “The purpose of research in this area is to disentangle which effects are relevant to misuse or dependence liability, and which might have clinical benefits, and what brain processes underlie the effects.”

The results also highlight the importance of considering a person’s starting point when predicting how they will respond to a medication. Because the drug helped the least motivated people the most, it suggests that these treatments might be most effective for those with a clear deficit in drive.

The study, like all research, has some limitations. The participants were all healthy young adults, so it is not clear if the results would be the same for older people or those with existing health conditions. A more diverse group of volunteers would be needed to see if these patterns apply to the general population.

The study only tested a single 20-milligram dose of methamphetamine given by mouth. It is possible that different doses or different ways of taking the drug might change the relationship between mood and behavior. Using a range of doses in future studies would help researchers see if there is a point where the mood and effort effects begin to overlap.

Another limitation is that the researchers did not directly look at the chemical changes inside the participants’ brains. While they believe dopamine is involved, they did not use brain imaging technology to confirm this directly. Future research could use specialized scans to see exactly which brain regions are active when these changes in motivation occur.

“The results open the door to further studies to determine what brain mechanisms underlie the two behavioral effects,” de Wit said.

The study, “Effects of methamphetamine on human effort task performance are unrelated to its subjective effects,” was authored by Evan C. Hahn, Hanna Molla, Jessica A. Cooper, Joseph DeBrosse, and Harriet de Wit.

Most Americans experience passionate love only twice in a lifetime, study finds

12 February 2026 at 17:00

Most adults in the United States experience the intense rush of passionate love only about twice throughout their lives, according to a recent large-scale survey. The study, published in the journal Interpersona, suggests that while this emotional state is a staple of human romance, it remains a relatively rare occurrence for many individuals. The findings provide a new lens through which to view the frequency of deep romantic attachment across the entire adult lifespan.

The framework for this research relies on a classic model where love consists of three parts: passion, intimacy, and commitment. Passion is described as the physical attraction and intense longing that often defines the start of a romantic connection. Amanda N. Gesselman, a researcher at the Kinsey Institute at Indiana University, led the team of scientists who conducted this work.

The research team set out to quantify how often this specific type of love happens because earlier theories suggest passion is high at the start of a relationship but fades as couples become more comfortable. As a relationship matures, it often shifts toward companionate love, which is defined by deep affection and entwined lives rather than obsessive longing. Because this intense feeling is often fleeting, it might happen several times as people move through different stages of life.

The researchers wanted to see if social factors like age, gender, or sexual orientation influenced how often someone falls in love. Some earlier studies on university students suggested that most young people fall in love at least once by the end of high school. However, very little data existed regarding how these experiences accumulate for adults as they reach middle age or later life.

To find these answers, the team analyzed data from more than 10,000 single adults in the U.S. between the ages of 18 and 99. Participants were recruited to match the general demographic makeup of the country based on census data. This large group allowed the researchers to look at a wide variety of life histories and romantic backgrounds.

Participants were asked to provide a specific number representing how many times they had ever been passionately in love during their lives. On average, the respondents reported experiencing this intense feeling 2.05 times. This number suggests that for the average person, passionate love is a rare event, occurring only a couple of times across an entire lifetime.

A specific portion of the group, about 14 percent, stated they had never felt passionate love at all. About 28 percent had felt it once, while 30 percent reported two experiences. Another 17 percent had three experiences, and about 11 percent reported four or more. These figures indicate that while the experience is widespread, most people accumulate only a handful of episodes over a lifetime.
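
As a rough, illustrative check (not from the paper, and based on the rounded percentages above), the distribution only reaches an average of 2.05 if the “four or more” group reported considerably more than four experiences:

```python
# Illustrative arithmetic only, using the rounded percentages reported above.
shares = {0: 0.14, 1: 0.28, 2: 0.30, 3: 0.17}   # never, once, twice, three times
share_four_plus = 0.11                          # "four or more"

partial_mean = sum(count * share for count, share in shares.items())
print(partial_mean + 4 * share_four_plus)       # about 1.83 if "four or more" meant exactly four

# Average count the "four or more" group would need for the overall mean to reach 2.05
print((2.05 - partial_mean) / share_four_plus)  # roughly 6
```

This back-of-the-envelope figure suggests that a small group of repeat romantics pulls the overall average upward.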

The study also looked at how these numbers varied based on the specific characteristics of the participants. Age showed a small link to the number of experiences, meaning older adults reported slightly more instances than younger ones. This result is likely because older people have had more years and more opportunities to encounter potential partners.

The increase with age was quite small, which suggests that people do not necessarily keep falling in love at a high rate as they get older. One reason for this might be biological, as the brain systems involved in reward and excitement are often most active during late adolescence and early adulthood. As people transition into mature adulthood, their responsibilities and self-reflection might change how they perceive or pursue new romantic passion.

Gender differences were present in the data, with men reporting slightly more experiences than women. This difference was specifically found among heterosexual participants, where heterosexual men reported more instances of passionate love than heterosexual women. This finding aligns with some previous research suggesting that men may be socialized to fall in love or express those feelings earlier in a relationship.

Among gay, lesbian, and bisexual participants, the number of experiences did not differ by gender. The researchers did not find that sexual orientation on its own created any differences in how many times a person fell in love. For example, the difference between heterosexual and bisexual participants was not statistically significant.

The researchers believe these results have important applications for how people view their own romantic lives. Many people feel pressure from movies, songs, and social media to constantly chase a state of high passion. Knowing that the average person only feels this a couple of times may help people feel more normal if they are not currently in a state of intense romance.

In a clinical or counseling setting, these findings could help people who feel they are behind in their romantic development. If someone has never been passionately in love, they are part of a group that includes more than one in ten adults. Seeing this as a common variation in human experience rather than a problem can reduce feelings of shame.

The researchers also noted that people might use a process called retrospective cognitive discounting. This happens when a person looks back at their past and views old relationships through a different lens based on their current feelings. An older person might look back at a past “crush” and decide it was not true passionate love, which would lower their total count.

This type of self-reflection might help people stay resilient after a breakup. By reinterpreting a past relationship as something other than passionate love, they might remain more open to finding a new connection in the future. This mental flexibility is part of how humans navigate the ups and downs of their romantic histories.

There are some limitations to the study that should be considered. Because the researchers only surveyed single people, the results might be different if they had included people who are currently married or in long-term partnerships. People who are in stable relationships might have different ways of remembering their past experiences compared to those who are currently unattached.

The study also relied on people remembering their entire lives accurately, which can be a challenge for older participants. Future research could follow the same group of people over many years to see how their feelings change as they happen. This would remove the need for participants to rely solely on their memories of the distant past.

The participants were all located in the United States, so these findings might not apply to people in other cultures. Different societies have different rules about how people meet, how they express emotion, and what they consider to be love. A global study would be needed to see if the “twice in a lifetime” average holds true in other parts of the world.

Additionally, the survey did not provide a specific definition of passionate love for the participants. Each person might have used their own personal standard for what counts as being passionately in love. Using a more standardized definition in future studies could help ensure that everyone is answering the question in the same way.

The researchers also mentioned that they did not account for individual personality traits or attachment styles. Some people are naturally more prone to falling in love quickly, while others are more cautious or reserved. These internal traits likely play a role in how many times someone experiences passion throughout their life.

Finally, the study did not include a large enough number of people with diverse gender identities beyond the categories of men and women. Expanding the research to include more gender-diverse individuals would provide a more complete picture of the human experience. Despite these gaps, the current study provides a foundation for understanding the frequency of one of life’s most intense emotions.

The study, “Twice in a lifetime: quantifying passionate love in U.S. single adults,” was authored by Amanda N. Gesselman, Margaret Bennett-Brown, Jessica T. Campbell, Malia Piazza, Zoe Moscovici, Ellen M. Kaufman, Melissa Blundell Osorio, Olivia R. Adams, Simon Dubé, Jessica J. Hille, Lee Y. S. Weeks, and Justin R. Garcia.

AI boosts worker creativity only if they use specific thinking strategies

12 February 2026 at 15:00

A new study published in the Journal of Applied Psychology suggests that generative artificial intelligence can boost creativity among employees in professional settings. But the research indicates that these tools increase innovative output only when workers use specific mental strategies to manage their own thought processes.

Generative artificial intelligence is a type of technology that can produce new content such as text, images, or computer code. Large language models like ChatGPT or Google’s Gemini use massive datasets to predict and generate human-like responses to various prompts. Organizations often implement these tools with the expectation that they will help employees come up with novel and useful ideas. Many leaders believe that providing access to advanced technology will automatically lead to a more innovative workforce.

However, recent surveys indicate that only a small portion of workers feel that these tools actually improve their creative work. The researchers conducted the new study to see if the technology truly helps and to identify which specific factors make it effective. They also wanted to see how these tools function in a real office environment where people manage multiple projects at once. Most previous studies on this topic took place in artificial settings using only one isolated task.

“When ChatGPT was released in November 2022, generative AI quickly became part of daily conversation. Many companies rushed to integrate generative AI tools into their workflows, often expecting that this would make employees more creative and, ultimately, give organizations a competitive advantage,” said study author Shuhua Sun, who holds the Peter W. and Paul A. Callais Professorship in Entrepreneurship at Tulane University’s A. B. Freeman School of Business.

“What struck us, though, was how little direct evidence existed to support those expectations, especially in real workplaces. Early proof-of-concept studies in labs and online settings began to appear, but their results were mixed. Even more surprisingly, there were almost no randomized field experiments examining how generative AI actually affects employee creativity on the job.”

“At the same time, consulting firms started releasing large-scale surveys on generative AI adoption. These reports showed that only a small percentage of employees felt that using generative AI made them more creative. Taken together with the mixed lab/online findings, this raised a simple but important question for us: If generative AI is supposed to enhance creativity, why does it seem to help only some employees and not others? What are those employees doing differently?”

“That question shaped the core of our project. So, instead of asking simply whether generative AI boosts creativity, we wanted to understand how it does so and for whom. Driven by these questions, we developed a theory and tested it using a randomized field experiment in a real organizational setting.”

The researchers worked with a technology consulting firm in China to conduct their field experiment. This company was an ideal setting because consulting work requires employees to find unique solutions for many different clients. The study included a total of 250 nonmanagerial employees from departments such as technology, sales, and administration. These participants had an average age of about 30 years and most held university degrees.

The researchers randomly split the workers into two groups. The first group received access to ChatGPT accounts and was shown how to use the tool for their daily tasks. The second group served as a control and did not receive access to the artificial intelligence software during the study. To make sure the experiment was fair, the company told the first group that the technology was meant to assist them rather than replace them.

The experiment lasted for about one week. During this time, the researchers tracked how often the treated group used their new accounts. At the end of the week, the researchers collected data from several sources to measure the impact of the tool. They used surveys to ask employees about their work experiences and their thinking habits.

They also asked the employees’ direct supervisors to rate their creative performance. These supervisors did not know which employees were using the artificial intelligence tool. Additionally, the researchers used two external evaluators to judge specific ideas produced by the employees. These evaluators looked at how novel and useful the ideas were without knowing who wrote them.

The researchers looked at cognitive job resources, which are the tools and mental space people need to handle complex work. This includes having enough information and the ability to switch between hard and easy tasks. They also measured metacognitive strategies. This term describes how people actively monitor and adjust their own thinking to reach a goal.

A person with high metacognitive strategies might plan out their steps before starting a task. They also tend to check their own progress and change their approach if they are not making enough headway. The study suggests that the artificial intelligence tool increased the cognitive resources available to employees. The tool helped them find information quickly and allowed them to manage their mental energy more effectively.

The results show that the employees who had access to the technology generally received higher creativity ratings from their supervisors. The external evaluators also gave higher scores for novelty to the ideas produced by this group. The evidence suggests that the tool was most effective when workers already used strong metacognitive strategies. These workers were able to use the technology to fill specific gaps in their knowledge.

For employees who did not use these thinking strategies, the tool did not significantly improve their creative output. These individuals appeared to be less effective at using the technology to gain new resources. The study indicates that the tool provides the raw material for creativity, but the worker must know how to direct the process. Specifically, workers who monitored their own mental state knew when to use the tool to take a break or switch tasks.

This ability to switch tasks is important because it prevents a person from getting stuck on a single way of thinking. When the technology handled routine parts of a job, it gave workers more mental space to focus on complex problem solving. The researchers found that the positive effect of the technology became significant once a worker’s use of thinking strategies reached a certain level. Below that threshold, the tool did not provide a clear benefit for creativity.

The cognitive approach to creativity suggests that coming up with new ideas is a mental process of searching through different areas of knowledge. People must find pieces of information and then combine them in ways that have not been tried before. This process can be very demanding because people have a limited amount of time and mental energy. Researchers call this the knowledge burden.

It takes a lot of effort to find, process, and understand new information from different fields. If a person spends all their energy just gathering facts, they might not have enough capacity left to actually be creative. Artificial intelligence can help by taking over the task of searching for and summarizing information. This allows the human worker to focus on the high-level task of combining those facts into something new.

Metacognition is essentially thinking about one’s own thinking. It involves a person being aware of what they know and what they do not know. When a worker uses metacognitive strategies, they act like a coach for their own brain. They ask themselves if their current plan is working or if they need to try a different path.

The study shows that this self-awareness is what allows a person to use artificial intelligence effectively. Instead of just accepting whatever the computer says, a strategic thinker uses the tool to test specific ideas. The statistical analysis revealed that the artificial intelligence tool provided workers with more room to think. This extra mental space came from having better access to knowledge and more chances to take mental breaks.

The researchers used a specific method called multilevel analysis to account for the way employees were organized within departments and teams. This helps ensure that the findings are not skewed by the influence of a single department or manager. The researchers also checked to see if other factors like past job performance or self-confidence played a role. Even when they accounted for these variables, the link between thinking strategies and the effective use of artificial intelligence remained strong.
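
For readers curious what that looks like in practice, the following is a minimal sketch of a multilevel moderation model fit to simulated data. The variable names, the simulated dataset, and the simplified specification are assumptions for illustration and do not reproduce the authors' full model:

```python
# A hedged sketch of a multilevel (mixed-effects) moderation analysis on simulated data.
# Variable names are hypothetical; the real study included additional controls.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_teams, per_team = 25, 10
team = np.repeat(np.arange(n_teams), per_team)
ai_access = rng.integers(0, 2, n_teams * per_team)     # 1 = given access to the AI tool
metacog = rng.normal(0, 1, n_teams * per_team)         # standardized metacognitive-strategy score
team_effect = rng.normal(0, 0.5, n_teams)[team]        # shared team-level variation
creativity = (3 + 0.1 * ai_access + 0.2 * metacog
              + 0.3 * ai_access * metacog              # interaction: AI helps more at high metacognition
              + team_effect + rng.normal(0, 1, n_teams * per_team))

df = pd.DataFrame({"creativity": creativity, "ai_access": ai_access,
                   "metacog": metacog, "team": team})

# Random intercepts for teams account for employees being nested in departments and teams.
model = smf.mixedlm("creativity ~ ai_access * metacog", df, groups=df["team"]).fit()
print(model.summary())  # the ai_access:metacog coefficient captures the moderation effect
```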

The data showed that the positive impact of the tool on creativity was quite large for those who managed their thinking well. For those with low scores in that area, the tool had almost no impact on their creative performance. To test creativity specifically, the researchers asked participants to solve a real problem. They had to provide suggestions for protecting employee privacy in a digital office.

This task required at least 70 Chinese characters in response. It was designed to see if the participants could think of novel ways to prevent information leaks or excessive monitoring by leadership. The external raters then scored these responses based on how original and useful they were. This provided a more objective look at creativity than just asking a supervisor for their opinion.

“The main takeaway is that generative AI does not automatically make people more creative,” Sun told PsyPost. “Simply providing access to AI tools is not enough, and in many cases it yields little creative benefit. Our findings show that the creative value of AI depends on how people engage with it during the creative process. Individuals who actively monitor their own understanding, recognize what kind of help they need, and deliberately decide when and how to use AI are much more likely to benefit creatively.”

“In contrast, relying on AI in a more automatic or unreflective way tends to produce weaker creative outcomes. For the average person, the message is simple: AI helps creativity when it is used thoughtfully: Pausing to reflect on what you need, deciding when AI can be useful, and actively shaping its output iteratively are what distinguish creative gains from generic results.”

As with all research, there are some limitations to consider. The researchers relied on workers to report their own thinking strategies, which can sometimes be inaccurate. The study also took place in a single company within one specific country. People in different cultures might interact with artificial intelligence in different ways.

Future research could look at how long-term use of these tools affects human skills. There is a possibility that relying too much on technology could make people less independent over time. Researchers might also explore how team dynamics influence the way people use these tools. Some office environments might encourage better thinking habits than others.

It would also be helpful to see if the benefits of these tools continue to grow over several months or if they eventually level off. These questions will be important as technology continues to change the way we work. The findings suggest that simply buying new software is not enough to make a company more innovative. Organizations should also consider training their staff to be more aware of their own thinking processes.

Since the benefits of artificial intelligence depend on a worker’s thinking habits, generic software training might not be enough. Instead, programs might need to focus on how to analyze a task and how to monitor one’s own progress. These metacognitive skills are often overlooked in traditional professional development. The researchers note that these skills can be taught through short exercises. Some of these involve reflecting on past successes or practicing new ways to plan out a workday.

The study, “How and for Whom Using Generative AI Affects Creativity: A Field Experiment,” was authored by Shuhua Sun, Zhuyi Angelina Li, Maw-Der Foo, Jing Zhou, and Jackson G. Lu.

Scientists asked men to smell hundreds of different vulvar odors to test the “leaky-cue hypothesis”

12 February 2026 at 06:00

A new study published in Evolution and Human Behavior suggests that modern women may not chemically signal fertility through vulvar body odor, a trait commonly observed in other primates. The findings indicate that men are unable to detect when a woman is in the fertile phase of her menstrual cycle based solely on the scent of the vulvar region. This research challenges the idea that humans have retained these specific evolutionary mating signals.

In the animal kingdom, particularly among non-human primates like lemurs, baboons, and chimpanzees, females often broadcast their reproductive status to males. This is frequently done through olfactory signals, specifically odors from the genital region, which change chemically during the fertile window. These scents serve as information for males, helping them identify when a female is capable of conceiving. Because humans share a deep evolutionary history with these primates, scientists have debated whether modern women retain these chemical signals.

A concept known as the “leaky-cue hypothesis” proposes that women might unintentionally emit subtle physiological signs of fertility. While previous research has investigated potential signals in armpit odor, voice pitch, or facial attractiveness, results have been inconsistent.

The specific scent of the vulvar region has remained largely unexplored using modern, rigorous methods, despite its biological potential as a source of chemical communication. To address this gap, a team led by Madita Zetzsche from the Behavioural Ecology Research Group at Leipzig University and the Max Planck Institute for Evolutionary Anthropology conducted a detailed investigation.

The researchers recruited 28 women to serve as odor donors. These participants were between the ages of 20 and 30, did not use hormonal contraception, and had regular menstrual cycles. To ensure the accuracy of the fertility data, the team did not rely on simple calendar counting. Instead, they used high-sensitivity urinary tests to detect luteinizing hormone and analyzed saliva samples to measure levels of estradiol and progesterone. This allowed the scientists to pinpoint the exact day of ovulation for each participant.

To prevent external factors from altering body odor, the donors adhered to a strict lifestyle protocol. They followed a vegetarian or vegan diet and avoided foods with strong scents, such as garlic, onion, and asparagus, as well as alcohol and tobacco. The women provided samples at ten specific points during their menstrual cycle. These points were clustered around the fertile window to capture any rapid changes in odor that might occur just before or during ovulation.

The study consisted of two distinct parts: a chemical analysis and a perceptual test. For the chemical analysis, the researchers collected 146 vulvar odor samples from a subset of 16 women. They used a specialized portable pump to draw air from the vulvar region into stainless steel tubes containing polymers designed to trap volatile compounds. These are the lightweight chemical molecules that evaporate into the air and create scent.

The team analyzed these samples using gas chromatography–mass spectrometry. This is a laboratory technique that separates a mixture into its individual chemical components and identifies them. The researchers looked for changes in the chemical profile that corresponded to the women’s conception risk and hormone levels. They specifically sought to determine if the abundance of certain chemical compounds rose or fell in a pattern that tracked the menstrual cycle.

The chemical analysis revealed no consistent evidence that the overall scent profile changed in a way that would allow fertility to be tracked across the menstrual cycle. While some specific statistical models suggested a potential link between the risk of conception and levels of certain substances—such as an increase in acetic acid and a decrease in a urea-related compound—these findings were not stable. When the researchers ran robustness checks, such as excluding samples from donors who had slightly violated dietary rules, the associations disappeared. The researchers concluded that there is likely a low retention of chemical fertility cues in the vulvar odor of modern women.

In the second part of the study, 139 men participated as odor raters. To collect the scent for this experiment, the female participants wore cotton pads in their underwear overnight for approximately 12 hours. These pads were then frozen to preserve the scent and later presented to the male participants in glass vials. The men, who were unaware of the women’s fertility status, sniffed the samples and rated them on three dimensions: attractiveness, pleasantness, and intensity.

The perceptual results aligned with the chemical findings. The statistical analysis showed that the men’s ratings were not influenced by the women’s fertility status. The men did not find the odor of women in their fertile window to be more attractive or pleasant than the odor collected during non-fertile days. Neither the risk of conception nor the levels of reproductive hormones predicted how the men perceived the scents.

These null results were consistent even when the researchers looked at the data in different ways, such as examining specific hormone levels or the temporal distance to ovulation. The study implies that if humans ever possessed the ability to signal fertility through vulvar scent, this trait has likely diminished significantly over evolutionary time.

The researchers suggest several reasons for why these cues might have been lost or suppressed in humans. Unlike most primates that walk on four legs, humans walk upright. This bipedalism moves the genital region away from the nose of other individuals, potentially reducing the role of genital odor in social communication. Additionally, human cultural practices, such as wearing clothing and maintaining high levels of hygiene, may have further obscured any remaining chemical signals.

It is also possible that social odors in humans have shifted to other parts of the body, such as the armpits, although evidence for axillary fertility cues remains mixed. The researchers noted that while they found no evidence of fertility signaling in this context, it remains possible that such cues require more intimate contact or sexual arousal to be detected, conditions that were not replicated in the laboratory.

Additionally, the strict dietary and behavioral controls, while necessary for scientific rigor, might not reflect real-world conditions where diet varies. The sample size for the chemical analysis was also relatively small, which can make it difficult to detect very subtle effects.

Future research could investigate whether these cues exist in more naturalistic settings or investigate the role of the vaginal microbiome, which differs significantly between humans and non-human primates. The high levels of Lactobacillus bacteria in humans create a more acidic environment, which might alter the chemical volatility of potential fertility signals.

The study, “Understanding olfactory fertility cues in humans: chemical analysis of women’s vulvar odour and perceptual detection of these cues by men,” was authored by Madita Zetzsche, Marlen Kücklich, Brigitte M. Weiß, Julia Stern, Andrea C. Marcillo Lara, Claudia Birkemeyer, Lars Penke, and Anja Widdig.

Blue light exposure may counteract anxiety caused by chronic vibration

12 February 2026 at 05:00

Living in a modern environment often means enduring a constant hum of background noise and physical vibration. From the rumble of heavy traffic to the oscillation of industrial machinery, these invisible stressors can gradually erode mental well-being.

A new study suggests that a specific color of light might offer a simple way to counter the anxiety caused by this chronic environmental agitation. The research indicates that blue light exposure can calm the nervous system even when the physical stress of vibration continues. These findings were published in the journal Physiology & Behavior.

Anxiety disorders are among the most common mental health challenges globally. They typically arise from a complicated mix of biological traits and social pressures. Environmental factors are playing an increasingly large role in this equation. Chronic exposure to low-frequency noise and vibration is known to disrupt the body’s hormonal balance. This disruption frequently leads to psychological symptoms such as irritability, fatigue, and persistent anxiety.

Doctors often prescribe medication to manage these conditions once a diagnosis is clear. These drugs usually work by altering the chemical signals in the brain to inhibit anxious feelings. However, pharmaceutical interventions are not always the best first step for early-stage anxiety. There is a growing demand for therapies that are accessible and carry fewer side effects. This has led scientists to investigate light therapy as a promising alternative.

Light does more than allow us to see. It also regulates our internal biological clocks and influences our mood. Specialized cells in the eyes detect light and send signals directly to the brain regions that control hormones. This pathway allows light to modulate the release of neurotransmitters associated with emotional well-being.

Despite this general knowledge, there has been little research on how specific light wavelengths might combat anxiety caused specifically by vibration. A team of researchers decided to fill this gap using zebrafish as a model organism. Zebrafish are small, tropical freshwater fish that are widely used in neuroscience. Their brain chemistry and genetic structure share many similarities with humans.

The study was led by Longfei Huo and senior author Muqing Liu from the School of Information Science and Technology at Fudan University in China. They aimed to identify if light could serve as a preventative measure against vibration-induced stress. The team designed a controlled experiment to first establish which vibrations caused the most stress. They subsequently tested whether light could reverse that stress.

The researchers began by separating the zebrafish into different groups. Each group was exposed to a specific frequency of vibration for one hour daily. The frequencies tested were 30, 50, and 100 Hertz. To ensure consistency, the acceleration of the vibration was kept constant across all groups. This phase of the experiment lasted for one week.

To measure anxiety in fish, the scientists relied on established behavioral patterns. When zebrafish are comfortable, they swim freely throughout their tank. When they are anxious, they tend to sink to the bottom. They also exhibit “thigmotaxis,” which is a tendency to hug the walls of the tank rather than exploring open water.

The team utilized a “novel tank test” to observe these behaviors. They placed the fish in a new environment and recorded how much time they spent in the lower half. The results showed that daily exposure to vibration made the fish act more anxious. The effect was strongest in the group exposed to 100 Hertz, with these fish spending significantly more time at the bottom of the tank than fish in the control group.
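
For illustration, scoring a novel tank test usually comes down to simple arithmetic on tracking data. The sketch below uses made-up coordinates and is not the authors' analysis code:

```python
# Illustrative only: score a novel tank test as the fraction of tracked frames
# in which the fish sits in the bottom half of the tank (coordinates are simulated).
import numpy as np

rng = np.random.default_rng(2)
tank_height_cm = 15.0
y_positions = rng.uniform(0, tank_height_cm, size=3600)  # e.g. 6 minutes of tracking at 10 Hz

proportion_bottom = np.mean(y_positions < tank_height_cm / 2)
print(f"Proportion of time in bottom half: {proportion_bottom:.2f}")
# Higher values are read as more anxiety-like behavior in zebrafish assays.
```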

The researchers also used a “light-dark box test.” In this setup, half the tank is illuminated and the other half is dark. Anxious fish prefer to hide in the dark. The fish exposed to 100 Hertz vibration spent much more time in the dark zones compared to the control group. This confirmed that the vibration was inducing a strong anxiety-like state.

After establishing that 100 Hertz vibration caused the most stress, the researchers moved to the second phase of the study. They wanted to see if light color could mitigate this effect. They repeated the vibration exposure but added a light therapy component. While the fish underwent vibration, they were bathed in either red, green, blue, or white light.

The blue light used in the experiment had a wavelength of 455 nanometers. The red light was 654 nanometers, and the green was 512 nanometers. The light exposure lasted for two hours each day. The researchers then ran a comprehensive battery of behavioral tests to see if the light made a difference.

The team found that the color of the light had a profound impact on the mental state of the fish. Zebrafish exposed to the blue light showed much less anxiety than those in the other groups. In the novel tank test, the blue-light group spent less time at the bottom. They explored the upper regions of the water almost as much as fish that had never been vibrated at all.

In contrast, the red light appeared to offer no benefit. In some metrics, the red light seemed to make the anxiety slightly worse. Fish under red light spent the longest time hiding in the dark during the light-dark box test. This suggests that the calming effect is specific to the wavelength of the light and not just the brightness.

The researchers also introduced two innovative testing methods to validate their results. One was a “social interaction test.” Zebrafish are social animals and usually prefer to be near others. Stress often causes them to withdraw. The researchers placed a group of fish inside a transparent cylinder within the tank. They then measured how much time the test fish spent near this cylinder.

Fish exposed to vibration and white light avoided the group. However, the fish treated with blue light spent a large amount of time near their peers. This indicated that their social anxiety had been alleviated. The blue light restored their natural desire to interact with others.

The second new method was a “pipeline swimming test.” This involved placing the fish in a tube with a gentle current. The setup allowed the scientists to easily measure swimming distance and smoothness of movement. Stressed fish tended to swim erratically or struggle against the flow. The blue-light group swam longer distances with smoother trajectories.

To understand the biological mechanism behind these behavioral changes, the scientists analyzed the fish’s brain chemistry. They measured the levels of three key chemicals: cortisol, norepinephrine, and serotonin. Cortisol is the primary stress hormone in both fish and humans. High levels of cortisol are a hallmark of physiological stress.

The analysis revealed that vibration exposure caused a spike in cortisol and norepinephrine. This hormonal surge matched the anxious behavior observed in the tanks. However, the application of blue light blocked this increase. The fish treated with blue light had cortisol levels comparable to the unstressed control group.

Even more striking was the effect on serotonin. Serotonin is a neurotransmitter that helps regulate mood and promotes feelings of well-being. The study found that 455 nm blue light specifically boosted serotonin levels in the fish. This suggests that blue light works by simultaneously lowering stress hormones and enhancing mood-regulating chemicals.

The authors propose that the blue light activates specific cells in the retina. These cells, known as intrinsically photosensitive retinal ganglion cells, contain a pigment called melanopsin. Melanopsin is highly sensitive to blue wavelengths. When activated, these cells send calming signals to the brain’s emotional centers.

There are some limitations to this study that must be considered. The research focused heavily on specific frequencies and wavelengths. It is possible that other combinations of light and vibration could yield different results. The study also did not investigate potential interaction effects between the light and vibration in a full factorial design.

Additionally, while zebrafish are a good model, they are not humans. The neural pathways are similar, but the complexity of human anxiety involves higher-level cognitive processes. Future research will need to replicate these findings in mammals. Scientists will also need to determine the optimal intensity and duration of light exposure for therapeutic use.

The study opens up new possibilities for managing environmental stress. It suggests that modifying our lighting environments could protect against the invisible toll of noise and vibration. For those living or working in industrial areas, blue light therapy could become a simple, non-invasive tool for mental health.

The study, “Blue light exposure mitigates vibration noise-induced anxiety by enhancing serotonin levels,” was authored by Longfei Huo, Xiaojing Miao, Yi Ren, Xuran Zhang, Qiqi Fu, Jiali Yang, and Muqing Liu.

Relatives with lower paternity uncertainty are perceived as kinder

According to a large study published in Evolutionary Psychology, people consistently perceive family members as kinder when there is greater certainty of biological relatedness.

Humans often assume that kindness within families is driven mainly by love, shared history, or cultural expectations. Yet evolutionary theories suggest that altruism within families may also be shaped by genetic relatedness. According to kin selection theory, people are predisposed to invest more care and support in relatives who are more likely to share their genes, because such investment indirectly promotes their own genetic success.

One important factor complicating this picture is paternity uncertainty, the fact that, unlike maternity, biological fatherhood is never absolutely certain. Radim Kuba and Jaroslav Flegr examined whether this uncertainty influences how people perceive kindness among different family members.

Drawing on evolutionary psychology and prior findings on parental and grandparental investment, they asked whether relatives associated with higher paternity certainty (such as mothers or maternal grandmothers) are consistently seen as kinder than those associated with lower certainty (such as paternal grandfathers).

The researchers analyzed data from a large online survey conducted between 2016 and 2021. Participants were recruited through a Czech and Slovak Facebook-based volunteer community using a snowball sampling method, allowing the study to reach a broad internet population. Nearly 15,000 individuals began the survey, and after exclusions, 9,128 adult participants who rated at least one family member were included in the final analyses.

Participants completed an extensive questionnaire and were asked to rate the kindness of various family members, such as parents, grandparents, siblings, and step-relatives, ranging from “strongly disagree” to “strongly agree” in response to statements like whether a given relative was kinder than other people. Importantly, the concept of kindness was left intentionally broad, allowing respondents to draw on lifelong experiences, including emotional support and everyday prosocial behavior.

The findings revealed a clear and consistent pattern: perceived kindness decreased as paternity uncertainty increased. Mothers and maternal grandmothers (relatives with no paternity uncertainty) received the highest kindness ratings, followed by fathers, maternal grandfathers, and paternal grandmothers, who carry one level of uncertainty. Paternal grandfathers, associated with two layers of uncertainty, were rated lowest among biological grandparents. These differences were statistically reliable, even though their size was modest.
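
One way to see why paternal grandfathers land at the bottom of this ordering is to discount expected genetic relatedness by every paternal link in the chain. The sketch below is illustrative arithmetic, not taken from the paper, and the paternity-certainty value p is an arbitrary assumption:

```python
# Illustrative arithmetic, not from the paper: discount nominal relatedness by p
# for each paternal link, where p is an assumed probability that a presumed
# father is the biological father.
p = 0.96  # hypothetical value, chosen only for illustration

expected_relatedness = {
    "mother":               0.50,          # zero paternal links
    "father":               0.50 * p,      # one paternal link
    "maternal grandmother": 0.25,          # zero paternal links
    "maternal grandfather": 0.25 * p,      # one paternal link
    "paternal grandmother": 0.25 * p,      # one paternal link
    "paternal grandfather": 0.25 * p * p,  # two paternal links
}
for relative, r in expected_relatedness.items():
    print(f"{relative:<22} {r:.3f}")
```

Under this toy calculation, the ranking of expected relatedness mirrors the ranking of kindness ratings reported above.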

Importantly, this pattern did not appear among step-relatives. Step-family members, who share no genetic relatedness and identical levels of paternity uncertainty, were rated similarly to one another, regardless of role. This contrast strengthens the authors’ interpretation that genetic relatedness, and not just social roles or cultural stereotypes, drives the observed differences.

Additional analyses showed that daughters tended to rate their biological parents as kinder than sons did, a pattern consistent with evolutionary predictions about investment through more certain maternal lines.

Overall, this study suggests that even in modern societies, subtle evolutionary pressures linked to genetic certainty continue to shape how people perceive kindness and altruism within their families.

Of note is that the voluntary, non-representative nature of the sample, particularly its relatively high level of education, may limit the generalizability of findings. Further, kindness ratings were subjective and may reflect personal relationship quality rather than purely objective behavior.

The research, “The Evolutionary Roots of Familial Altruism: Paternity Uncertainty Shapes Patterns of Kindness,” was authored by Radim Kuba and Jaroslav Flegr.

Specific brain training regimen linked to lower dementia risk in 20-year study

12 February 2026 at 01:00

A specific regimen of computer-based brain exercises focused on visual processing speed may lower the long-term risk of receiving a dementia diagnosis. A new analysis of data spanning two decades suggests that older adults who engaged in this adaptive training, provided they participated in follow-up sessions, were approximately 25 percent less likely to be diagnosed with dementia compared to a control group. These results were published in the journal Alzheimer’s & Dementia: Translational Research & Clinical Interventions.

The search for effective ways to prevent or delay Alzheimer’s disease and related dementias is a primary focus of modern medical research. While physical exercise and diet are frequently cited as potential protective factors, the role of specific cognitive training remains a subject of intense debate. Many commercial products promise to sharpen the mind, yet scientific evidence supporting their ability to prevent disease has been inconsistent. To address this uncertainty, researchers revisited data from a gold-standard clinical trial to see if specific interventions had lasting effects on brain health.

The research was led by Norma B. Coe, a professor at the Perelman School of Medicine at the University of Pennsylvania. Coe and her colleagues sought to understand if the benefits of cognitive training could be detected in medical records twenty years after the training took place. They focused on whether different types of mental exercises had varying impacts on the likelihood of a patient developing dementia as they aged into their eighties and nineties.

The team utilized data from the Advanced Cognitive Training for Independent and Vital Elderly study. Known as the ACTIVE study, this large-scale project began in the late 1990s. It was designed as a randomized controlled trial, which is widely considered the most rigorous method for determining cause and effect in science. The original trial enrolled nearly 3,000 healthy adults over the age of 65 living in the community.

Participants in the ACTIVE study were randomly assigned to one of four groups. The first group received memory training. This instruction focused on teaching strategies for remembering word lists and sequences of items. The second group received reasoning training. These sessions involved identifying patterns in number series and solving problems related to daily living. The third group received speed of processing training. The fourth group served as a control and received no training.

The speed of processing intervention was distinct from the other two. It involved a computer-based task designed to improve the user’s visual attention. Participants were asked to identify an object in the center of the screen while simultaneously locating a target in the periphery. As the user improved, the program became faster and the tasks became more difficult. This made the training “adaptive,” meaning it constantly pushed the participant to the limit of their ability.
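
The general logic of that kind of adaptive task can be sketched as a simple staircase in which the display time shrinks after correct answers and grows after errors. This is a generic illustration, not the ACTIVE study's actual software:

```python
# Generic adaptive-staircase sketch (not the ACTIVE study's program): presentation
# time gets shorter after correct responses and longer after errors, keeping the
# task near the limit of the participant's ability.
def update_duration(duration_ms, correct, step_ms=10, floor_ms=20, ceiling_ms=500):
    """Return the next stimulus duration given the last response."""
    if correct:
        duration_ms -= step_ms        # harder: briefer presentation
    else:
        duration_ms += 3 * step_ms    # easier: longer presentation after an error
    return max(floor_ms, min(ceiling_ms, duration_ms))

# A participant who keeps answering correctly is pushed toward faster displays.
duration = 300
for correct in [True, True, True, False, True]:
    duration = update_duration(duration, correct)
    print(duration)   # 290, 280, 270, 300, 290
```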

The initial training period lasted for five to six weeks. Researchers offered a subset of participants “booster” sessions. These additional training blocks occurred one year and three years after the initial enrollment. The goal of these boosters was to reinforce the skills learned during the first phase.

To determine long-term outcomes, Coe and her team linked the original study data with Medicare claims records spanning from 1999 to 2019. This allowed the researchers to track the participants for up to 20 years. They looked for diagnostic codes indicating Alzheimer’s disease or other forms of dementia. By using insurance claims, the team could identify diagnoses made by doctors in real-world clinical settings, even for participants who had stopped communicating with the original study organizers.

The analysis included 2,021 of the original participants. The results revealed a specific and isolated benefit. Participants who underwent the speed of processing training and attended at least one booster session showed a reduced risk of diagnosed dementia. The hazard ratio was 0.75, indicating a 25 percent lower risk compared to the control group.
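
As a hedged illustration of how a hazard ratio like 0.75 is typically estimated, the sketch below fits a Cox proportional hazards model to simulated follow-up data. The lifelines package, the column names, and the simulated numbers are assumptions, not part of the study:

```python
# A minimal sketch of estimating a hazard ratio with a Cox model on simulated data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 2000
treated = rng.integers(0, 2, size=n)               # 1 = speed training plus booster sessions
hazard = 0.05 * np.where(treated == 1, 0.75, 1.0)  # build in a true hazard ratio of 0.75
time_to_dx = rng.exponential(1 / hazard)           # simulated years until a dementia diagnosis
observed = (time_to_dx <= 20).astype(int)          # follow-up capped at 20 years
years = np.minimum(time_to_dx, 20)

df = pd.DataFrame({"years": years, "dementia_dx": observed, "treated": treated})
cph = CoxPHFitter().fit(df, duration_col="years", event_col="dementia_dx")
print(cph.hazard_ratios_)                     # should recover a value near 0.75 for `treated`
print(f"Relative reduction: {1 - 0.75:.0%}")  # a ratio of 0.75 means a 25% lower hazard
```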

The study did not find similar benefits for the other groups. Participants who received memory training or reasoning training did not show a statistically distinct difference in dementia diagnosis rates compared to the control group. This was true even if they attended booster sessions. Additionally, individuals in the speed training group who did not attend the booster sessions showed no reduction in risk. The protective effect appeared to depend on the combination of the specific visual speed task and the reinforcement provided by the follow-up sessions.

The researchers propose several reasons why the speed training might have yielded different results than the memory or reasoning exercises. One hypothesis centers on the type of memory engaged. The memory and reasoning interventions relied on “declarative memory.” This involves learning explicit strategies and conscious techniques to solve problems. In contrast, the speed training engaged “procedural memory.” This type of learning becomes automatic and unconscious through repetition, similar to riding a bike.

Another key difference was the adaptive nature of the speed task. The computer program adjusted the difficulty in real-time. This ensured that participants were always challenged, potentially stimulating the brain more effectively than the static strategies taught in the other groups. The authors suggest that this intense, adaptive engagement of the brain’s processing systems might facilitate neuroplasticity, or the brain’s ability to rewire itself.

The findings align with previous, shorter-term analyses of the ACTIVE study, which had hinted at cognitive benefits for the speed training group. However, this is the first analysis to use Medicare claims to confirm a reduction in diagnosed disease over such a lengthened timeframe.

“This work conveys a clear message but also leads us to ask many new questions. We are keen to dig deeper to understand the underlying mechanisms at play here, but ultimately this is a great problem to have,” said Marilyn Albert, the corresponding study author and director of the Johns Hopkins Alzheimer’s Disease Research Center at the Johns Hopkins School of Medicine.

There are limitations to the study that provide context for the results. The analysis relied on administrative billing codes rather than direct neurological examinations of every participant. This means a diagnosis would only be recorded if a participant visited a doctor and the doctor coded the visit correctly. It is possible that some participants developed dementia but were never formally diagnosed.

The study also excluded participants who were enrolled in Medicare Advantage plans because complete claims data were not available for them. If the population in Medicare Advantage plans differs in health or socioeconomic status from those in traditional Medicare, it could influence the generalizability of the findings. Additionally, the researchers noted that individuals with higher education levels or better access to healthcare are often more likely to receive a dementia diagnosis, which could introduce bias into the claims data.

Despite these caveats, the results offer a potential avenue for preventative intervention. “The findings reported here suggest that moderate cognitive training could delay the onset of dementia over subsequent years,” said Richard Hodes, director of the National Institute on Aging, in a press release. “There is still more research to be done to determine about how this works, but this promising lead may move the field further into developing effective interventions to delay or prevent onset of dementia.”

Future research will likely focus on isolating the specific mechanisms that made the speed training effective. Scientists need to understand if the benefit comes from the visual aspect of the task, the speed component, or the adaptive difficulty. Understanding why the memory and reasoning strategies failed to prevent disease diagnosis is equally important for designing future public health programs.

The study also raises questions about the optimal “dose” of training. Since the benefit was only seen in those who received booster sessions, it suggests that brain training may be like physical exercise: it requires maintenance to remain effective.

“This study shows that simple brain training, done for just weeks, may help people stay mentally healthy for years longer,” said Jay Bhattacharya, director of the National Institutes of Health. “That’s a powerful idea — that practical, affordable tools could help delay dementia and help older adults keep their independence and quality of life.”

The study, “Impact of cognitive training on claims-based diagnosed dementia over 20 years: evidence from the ACTIVE study,” was authored by Norma B. Coe, Katherine E. M. Miller, Chuxuan Sun, Elizabeth Taggert, Alden L. Gross, Richard N. Jones, Cynthia Felix, Marilyn S. Albert, George W. Rebok, Michael Marsiske, Karlene K. Ball, and Sherry L. Willis.

Childhood trauma scores fail to predict violent misconduct in juvenile detention

11 February 2026 at 23:00

New research published in Aggression and Violent Behavior indicates that a history of childhood trauma may not effectively predict which incarcerated youth will engage in the most frequent and violent misconduct. The study suggests that while adverse childhood experiences explain why young people enter the justice system, current factors such as mental health status and gang affiliation are stronger predictors of behavior during incarceration.

Psychologists and criminologists identify childhood adversity as a primary driver of delinquency. Exposure to trauma often hinders emotional regulation and impulse control. This can lead adolescents to interpret social interactions as hostile and resort to aggression. Correctional systems frequently use the Adverse Childhood Experiences score, commonly known as the ACE score, to quantify this history. The traditional ACE score is a cumulative measure of ten specific categories of abuse, neglect, and household dysfunction.

There is a growing consensus that the original ten-item measure may be too narrow for justice-involved youth. It fails to account for systemic issues such as poverty, community violence, and discrimination. Consequently, scholars have proposed expanded measures to capture a broader range of adversities.

Despite the widespread use of these scores, little research has isolated their ability to predict the behavior of the most serious offenders. Most studies examine general misconduct across all inmates. This study aimed to determine if trauma scores could identify the small fraction of youth responsible for the vast majority of violent and disruptive incidents within state facilities.

“While research has extensively documented that adverse childhood experiences (ACEs) increase the risk of juvenile delinquency, we knew much less about whether ACEs predict the most serious forms of institutional misconduct among already-incarcerated youth,” said study author Jessica M. Craig, an associate professor of criminal justice and director of graduate programs at the University of North Texas.

“We were particularly interested in whether an expanded ACEs measure—which includes experiences like witnessing community violence, homelessness, and extreme poverty beyond the traditional 10-item scale—would better predict which youth become chronic and violent misconduct offenders during incarceration. This matters because institutional misconduct can lead to longer confinement, additional legal consequences, and reduced access to rehabilitation programs.​”

For their study, the researchers analyzed data from a cohort of 4,613 serious and violent juvenile offenders. The sample included all youth adjudicated and incarcerated in state juvenile correctional facilities in Texas between 2009 and 2013 who had completed an initial intake assessment. The participants were predominantly male. Approximately 46 percent were Hispanic and 34 percent were Black. The average age at the time of incarceration was 16 years old.

The researchers utilized the Positive Achievement Change Tool to derive two distinct trauma scores for each individual. The first was the traditional ACE score. This metric summed exposure to ten indicators: physical, emotional, and sexual abuse; physical and emotional neglect; household substance abuse; mental illness in the home; parental separation or divorce; domestic violence against a mother; and the incarceration of a household member.

The second measure was an expanded ACE score. This metric included the original ten items plus four additional variables relevant to high-risk populations. These additions included a history of foster care or shelter placements, witnessing violence in the community, experiencing homelessness, and living in a family with income below the poverty level. The average youth in the sample had a traditional ACE score of roughly 3.3 and an expanded score of nearly 4.9.

The study did not treat misconduct as a simple average. The researchers sought to identify chronic perpetrators. They calculated the rate of total misconduct incidents and violent misconduct incidents for each youth. They then separated the offenders into groups representing the top 10 percent and the top 1 percent of misconduct perpetrators. This allowed the analysis to focus specifically on the individuals who pose the greatest challenge to institutional safety.

The researchers used statistical models to test whether higher trauma scores increased the likelihood of being in these high-rate groups. These models controlled for other potential influences, including prior criminal history, offense type, age, race, and substance abuse history.
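
To make that approach concrete, here is a hedged sketch on simulated data: flag the top 10 percent and top 1 percent of misconduct rates, then test whether an ACE score predicts membership in those groups after controls. The variable names and data are invented for illustration and do not reproduce the study's models:

```python
# Hedged sketch on simulated data; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 4613
df = pd.DataFrame({
    "ace_expanded": rng.poisson(4.9, n),          # expanded ACE count (study mean was about 4.9)
    "gang_affiliated": rng.integers(0, 2, n),
    "mental_health_history": rng.integers(0, 2, n),
    "age_at_intake": rng.integers(13, 19, n),
    "misconduct_rate": rng.exponential(1.0, n),   # placeholder rate of misconduct incidents
})

# Flag chronic perpetrators: the top 10% and top 1% by misconduct rate.
df["top10"] = (df["misconduct_rate"] >= df["misconduct_rate"].quantile(0.90)).astype(int)
df["top1"] = (df["misconduct_rate"] >= df["misconduct_rate"].quantile(0.99)).astype(int)

# Logistic regression: does the ACE score predict top-10% membership after controls?
model = smf.logit(
    "top10 ~ ace_expanded + gang_affiliated + mental_health_history + age_at_intake",
    data=df,
).fit(disp=False)
print(np.exp(model.params))  # odds ratios; an OR of 2.5 would mean 150% higher odds
```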

The analysis yielded results that challenged the assumption that past trauma dictates future institutional violence. Neither the traditional ACE score nor the expanded ACE score served as a significant predictor for membership in the top 10 percent or top 1 percent of misconduct perpetrators. This finding held true for both general rule-breaking and specific acts of violence. The addition of variables like poverty and community violence to the trauma score did not improve its predictive power regarding institutional behavior.

“We were surprised that even the expanded ACEs measure—which included witnessing violence, foster care placement, homelessness, and poverty—failed to predict high-rate misconduct,” Craig told PsyPost. “Given that previous research suggested the traditional 10-item ACEs scale might underestimate adversity among justice-involved youth, we expected the expanded measure to show stronger predictive power.”​

While trauma history did not predict chronic misconduct, other personal and situational characteristics proved to be strong indicators. The most consistent predictor of violent behavior was a history of serious mental health problems. Youth with such histories had approximately 150 percent increased odds of falling into the top 1 percent of violent misconduct perpetrators compared to their peers. This effect size suggests that current psychological stability is a primary determinant of safety within the facility.

Age and social connections also played significant roles. The data indicated that older youth were substantially less likely to engage in chronic misconduct. Specifically, those who were older at the time of incarceration were about 50 to 60 percent less likely to be in the high-rate misconduct groups. Gang affiliation was another robust predictor. Youth with gang ties were significantly more likely to be among the most frequent violators of institutional rules. This points to the influence of peer dynamics and the prison social structure on individual behavior.

“These are substantively meaningful effects that have real implications for correctional programming and supervision strategies,” Craig said.

The study provides evidence that the factors driving entry into the justice system may differ from the factors driving behavior once inside. While childhood adversity sets a trajectory toward delinquency, the structured environment of a correctional facility introduces new variables. The researchers suggest that the “survival coping” mechanisms youth develop in response to trauma might manifest differently depending on their immediate environment and mental state.

“Contrary to expectations, we found that neither traditional nor expanded ACEs measures significantly predicted which youth became the most frequent perpetrators of institutional misconduct,” Craig explained. “Instead, factors like age at incarceration, gang affiliation, and mental health history were much stronger predictors.”

“This suggests that while childhood trauma remains critically important for understanding how youth enter the justice system, managing their behavior during incarceration may require greater focus on their current mental health needs, developmental stage, and institutional factors rather than trauma history alone.​”

These findings imply that correctional administrators should look beyond a cumulative trauma score when assessing risk. Screening processes that emphasize current mental health conditions and gang involvement may offer more utility for preventing violence than those focusing solely on historical adversity. Effective management of high-risk populations appears to require targeted mental health interventions and strategies to disrupt gang activity.

There are some limitations to consider. The data came from a single state, which may limit the ability to generalize the findings to other jurisdictions with different correctional cultures or demographics.

The study also relied on cumulative scores that count the presence of adverse events but do not measure their severity, frequency, or timing. It is possible that specific types of trauma, such as physical abuse, have different impacts than others, such as parental divorce. A simple sum of these events might obscure specific patterns that do predict violence.

“It’s important to emphasize that our findings don’t diminish the significance of childhood trauma in understanding juvenile justice involvement overall,” Craig said. “ACEs remain crucial for understanding pathways into the system and should absolutely be addressed through trauma-informed programming. However, when it comes to predicting institutional violence specifically among already deeply-entrenched offenders, personal characteristics and current mental health status appear more salient than historical trauma exposure.”

“Future research should examine whether specific patterns or combinations of traumatic experiences—rather than cumulative scores—might better predict institutional violence. We’d also like to investigate whether trauma-informed treatment programs, when youth actually receive them during incarceration, can reduce misconduct even when trauma history alone doesn’t predict it. Additionally, examining the timing and severity of ACEs, rather than just their presence or absence, could clarify the trauma-violence relationship.”

The study, “Looking back: The impact of childhood adversity on institutional misconduct among a cohort of serious and violent institutionalized delinquents,” was authored by Jessica M. Craig, Haley Zettler, and Chad R. Trulson.

Study finds mindfulness creates lasting improvements in visual memory

11 February 2026 at 21:00

An experimental study conducted in China found that a 5-week emotion-targeted mindfulness training improved participants’ working memory accuracy for faces displaying emotions, with the exception of faces displaying fear. The improvements continued to be present one month after the training was completed. The research was published in npj Science of Learning.

Mindfulness is the practice of intentionally paying attention to the present moment with openness and without judgment. It involves noticing thoughts, emotions, bodily sensations, and external experiences as they arise. Mindfulness has roots in Buddhist meditation traditions but is widely used today in secular psychological and health contexts. It is commonly cultivated through practices such as meditation, breathing exercises, and mindful movement.

Research shows that mindfulness can reduce stress, anxiety, and depressive symptoms. It can also improve emotional regulation and increase awareness of habitual reactions. Mindfulness helps people relate differently to difficult thoughts and feelings rather than trying to suppress or avoid them. In everyday life, it can be practiced during routine activities such as eating, walking, or listening.

Study author Hui Kou and her colleagues wanted to explore the impact of mindfulness training on working memory for faces, as well as the cognitive mechanisms underlying this effect. To do so, they conducted an experiment.

Study participants were 120 undergraduate students from a medical university in China. Ninety of them were women. Participants’ average age was 20 years. All participants were right-handed and had normal or corrected-to-normal vision.

Study authors randomly divided participants into a training and a control group. The training group underwent 5 weeks of mindfulness training based on mindfulness-based stress reduction (MBSR) and cognitive therapy. They had 2 hours of training per week.

The goal of the training was to enhance emotion perception and emotion regulation, so the content of each weekly session focused on emotions. The control group instead attended two lectures covering general principles of mindfulness; each lecture lasted 60 minutes and did not include experiential practices.

Before the training, immediately after it, and one month after it ended, participants completed an assessment of mindfulness (the Five Facet Mindfulness Questionnaire) and a cognitive test of visual working memory for faces displaying emotions. In the cognitive test, participants first viewed two faces for one second. This was followed by a two-second blank screen (the delay period), after which another face appeared.

Participants’ task was to indicate whether that final face was among the two initially shown. Each block contained 48 such trials, and there were five blocks in total. All faces within a block displayed the same emotion, and the emotion differed across blocks: happy, sad, angry, fearful, and neutral.

The results showed that the mindfulness training resulted in improved working memory accuracy for facial stimuli across all examined emotional expressions except fear. One month after the training was finished, these improvements were still present. Participants from the training group performed better than those in the control group both immediately after the training and one month later.

Statistical analyses based on the drift-diffusion model indicated that, after the training, participants processed information from the faces more efficiently when making memory decisions, regardless of the emotion displayed. The stronger this increase in processing efficiency was, the more accurate participants’ memory performance became.
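
In the drift-diffusion model, a simple decision is treated as noisy evidence accumulating toward a boundary, and the drift rate parameter is commonly interpreted as processing efficiency. The sketch below is not the authors’ model fitting; it is a toy simulation with arbitrary parameter values, illustrating the core idea that a higher drift rate yields more accurate responses:

```python
import numpy as np

def simulate_ddm_accuracy(drift, n_trials=2000, boundary=1.0, noise=1.0, dt=0.005, max_t=2.0, seed=0):
    """Simulate a two-boundary drift-diffusion process and return choice accuracy.

    Evidence starts midway between the boundaries and accumulates with the given
    drift rate plus Gaussian noise; reaching the upper boundary counts as a correct
    response, the lower boundary as an error.
    """
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_trials):
        x, t = boundary / 2.0, 0.0
        while 0.0 < x < boundary and t < max_t:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        correct += x >= boundary
    return correct / n_trials

# A higher drift rate (more efficient evidence accumulation) produces higher accuracy.
for v in (0.5, 1.5):
    print(f"drift rate {v}: accuracy ~ {simulate_ddm_accuracy(v):.2f}")
```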

“These findings demonstrate that mindfulness training induces lasting improvements in both accuracy and processing efficiency of visual working memory, independent of facial emotions, clarifying its cognitive mechanisms,” the study authors concluded.

The study contributes to the scientific knowledge on the effects of mindfulness training. However, it should be noted that the study used just a single working memory task, with a single type of stimuli. It remains unknown how much the findings would generalize to different working memory tasks and to stimuli that are not faces displaying emotions.

The paper, “Mindfulness training enhances face working memory: evidence from the drift-diffusion model,” was authored by Hui Kou, Wei Luo, Xiaodong Li, Jia Wu, Qianguo Xiao, and Taiyong Bi.

High rates of screen time linked to specific differences in toddler vocabulary

11 February 2026 at 20:00

New research published in the journal Developmental Science provides evidence that the amount of time toddlers spend watching videos is associated with the specific types of words they learn, distinct from the total number of words they know. The findings indicate that higher levels of digital media consumption are linked to a vocabulary containing a smaller proportion of body part words and a larger proportion of words related to people and furniture.

The widespread integration of digital media into family life has prompted questions about its influence on early child development. Current estimates suggest that many children under the age of two spend roughly two hours per day interacting with screens, primarily watching videos or television.

Previous research has often focused on the relationship between screen time and the overall size of a child’s vocabulary. These earlier studies generally established that high exposure to low-quality programming correlates with a lower total number of words spoken by the child.

However, language acquisition is a multifaceted process. Children do not learn all words in the same manner. The acquisition of certain types of words relies heavily on specific environmental inputs.

“There is no doubt that use of digital media by young children has been on the rise in the past few years, and growing evidence suggest that this has impacts on their language learning, especially during the first few years of life,” said study author Sarah C. Kucker, an assistant professor of psychology at Southern Methodist University.

“For instance, we know that children who watch high rates of low-quality television/videos tend to have smaller vocabularies and less advanced language skills (this is work by my own lab, but also many others such as Brushe et al., 2025; Madigan et al., 2024). However, we also know that some forms of media do not have negative effects and can, in fact, be useful for language when the media is high-quality, socially-interactive, and educational in nature (work by Sundqvist as well as Jing et al., 2024).”

“On top of this, we know that children’s language development and specifically their vocabulary learning is not an all-or-nothing, but rather that children learn different types of words at different times and in different ways – e.g. learning words for body parts is easier when you can touch the body part when named, and names for people (mama, dada) are learned earlier than most other nouns,” Kucker continued.

“When we put this together it means that we shouldn’t be looking at digital media’s influence on language as just an all-or-nothing, or blanket good-or-bad, but rather take a more nuanced look. So we did just that by looking at the types of words children are learning and the association with the time they spend with digital media.”

For their study, the researchers recruited 388 caregivers of children aged 17 to 30 months. This age range represents a period of rapid language expansion often referred to as the vocabulary spurt. Participants were recruited through online research platforms and in-person visits to a university laboratory. The researchers combined these groups into a single dataset for analysis.

Caregivers completed a comprehensive survey known as the Media Assessment Questionnaire. This instrument asked parents to report the number of minutes their child spent using various forms of technology, such as television, tablets, and video chat.

The researchers collected data for both typical weekdays and weekends. They used these reports to calculate a weighted daily average of screen time for each child. The data revealed that video and television viewing was the most common media activity. On average, the children in the sample watched videos for approximately 110 minutes per day.
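
The article does not spell out the weighting scheme, but a common convention, and the assumption in this minimal sketch, is to weight weekday reports by five days and weekend reports by two:

```python
def weighted_daily_minutes(weekday_minutes: float, weekend_minutes: float) -> float:
    """Weighted daily average of screen time, assuming a 5-weekday / 2-weekend-day split."""
    return (5 * weekday_minutes + 2 * weekend_minutes) / 7

# Hypothetical child: 90 minutes on a typical weekday, 160 minutes on a weekend day.
print(round(weighted_daily_minutes(90, 160)))  # 110 minutes per day
```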

To measure language development, caregivers completed the MacArthur-Bates Communicative Development Inventory. This is a standardized checklist containing hundreds of words commonly learned by young children. Parents marked the words their child could say.

This tool allowed the researchers to calculate the total size of each child’s noun vocabulary. It also enabled them to break down the vocabulary into specific semantic categories. These categories included animals, vehicles, toys, food and drink, clothing, body parts, small household items, furniture and rooms, outside things, places to go, and people.
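
As an illustration of how a proportion-based vocabulary measure works (the category names come from the article, but the word counts below are invented for the example):

```python
# Hypothetical CDI-style tallies for one child: nouns produced per semantic category.
produced = {
    "body parts": 6,
    "people": 10,
    "furniture and rooms": 8,
    "animals": 14,
    "food and drink": 12,
}

total_nouns = sum(produced.values())
for category, count in produced.items():
    print(f"{category}: {count / total_nouns:.0%} of the noun vocabulary")
```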

The researchers also analyzed the vocabulary data through a different lens. They classified nouns based on the features that define their categories. Specifically, they looked at shape-based nouns and material-based nouns.

Shape-based nouns usually refer to solid objects defined by their physical form, such as “ball” or “cup.” Material-based nouns often refer to nonsolid substances or items defined by what they are made of, such as “applesauce” or “chalk.” This distinction is significant in developmental psychology because physical handling of objects is thought to help children learn these concepts.

The researchers found that children with higher rates of video viewing produced a smaller proportion of body part words. In a typical toddler’s vocabulary, words like “nose,” “feet,” or “ears” are often among the first learned. However, as screen time increased, the density of these words in the child’s repertoire decreased relative to other word types.

In contrast, the researchers found a positive association between video time and words related to people. This category includes proper names, titles like “teacher” or “grandma,” and general terms like “baby.” Children who watched more videos tended to have a vocabulary composition that was more heavily weighted toward these social labels.

A similar positive association was found for the category of furniture and rooms. Heavy media users were more likely to produce words such as “couch,” “TV,” or “kitchen” relative to their peers with lower media use.

“While we expected that children with high media use would have fewer body part words in their vocabulary, we were surprised to find that children with high media knew relatively more people words and furniture words,” Kucker told PsyPost. “We suspect this may have to do with the content of the media highlighting those terms, or perhaps the physical context in which children are using media (e.g. while sitting on a couch or when working with mom), but the tools to capture this information are currently limited.”

The researchers found no significant relationship between video watching and the other semantic categories measured, such as animals, toys, or food. Additionally, the researchers found no evidence that video exposure altered the balance between shape-based and material-based nouns. The proportion of words related to solid objects versus nonsolid substances remained stable regardless of screen time habits.

The research highlights that the impact of digital media is not uniformly negative or positive. The findings suggest that screen time changes the landscape of early learning in specific ways.

“Most caregivers have heard the advice to avoid screen time with their young children,” Kucker said. “However, the reality is that that is very difficult to do 100% of the time in today’s tech-based world. What this study shows is that a high amount of low-quality videos/TV is associated with lower overall vocabulary sizes in 2-year-old children, but that videos/TV may not impact all types of words equally.”

“For instance, children with more video/TV time have fewer names for body parts, but seem to learn most other nouns at relatively equal levels, potentially because some videos/TV do a good job teaching children some basics.”

“So do try to limit children’s screen time, but don’t fret about avoiding it completely,” Kucker explained. “Instead, consider the content and context for when the media is being used and why – high-quality, educational use, or those that are social (e.g. FaceTime, Zoom), may not be detrimental as long as children are still getting rich interactive play outside of the screen.”

As with all research, there are some limitations to consider. The data relied on caregiver reports, which can introduce memory errors or bias.

The study was also cross-sectional, meaning it captured a snapshot of the children’s lives rather than following them over time. It is not possible to determine causality from this data alone. For example, it is unknown if watching videos causes the change in vocabulary or if families with different communication styles rely more on media.

“We are currently looking at more longitudinal impacts of digital media on children’s language over time as well as individual differences across children, such as considering personality and temperament,” Kucker noted.

Additionally, the study focused primarily on the duration of screen time. It did not fully capture the specific content of the videos the children watched or the nature of the interactions parents had with their children during viewing. The researchers noted that educational content and co-viewing with a parent can mitigate potential negative effects.

“Not all media is bad!” Kucker said. “Media’s effect on children is nuanced and interacts with the rest of their experiences. I always like to tell parents that if your child watches an educational show for a few minutes so you can have a few minutes of quiet, that may be helping you to then be a better parent later which will more than offset that few minutes of media time.”

“Children who get rich, social experiences are often still developing in very strong ways even if they have a bit of high-quality screen time here and there. Just considering the content and context of the media is key!”

“We have a lot of work left still to do and understand in this area, and much of the support for this work has come from various grants and foundations, such as NIH and NSF,” Kucker added. “Without those funding avenues, this work couldn’t be done.”

The study, “Videos and Vocabulary – How Digital Media Use Impacts the Types of Words Children Know,” was authored by Sarah C. Kucker, Rachel F. Barr, and Lynn K. Perry.

Hippocampal neurons shift their activity backward in time to anticipate rewards

11 February 2026 at 19:00

Recent experimental findings suggest that the hippocampus, the brain region primarily associated with memory and navigation, actively reorganizes its neural patterns to anticipate future events. Researchers observed that as mice learned to navigate a complex task, the neural signals associated with a reward shifted backward in time to predict the outcome before it happened. These results were published in the journal Nature.

The hippocampus is a seahorse-shaped structure located deep within the temporal lobes of the brain. Neuroscientists have recognized for decades that this region is essential for forming new memories. It is also responsible for creating a cognitive map. This internal representation allows an organism to visualize its environment and navigate through space.

Biologists have traditionally viewed the cognitive map as a relatively static record of the environment. Under this view, the hippocampus encodes features such as landmarks, borders, and the location of resources. However, survival requires more than just a record of the past. An animal must use its prior experiences to predict where food or safety will be located in the future.

This necessity leads to the theory of predictive coding. This theory suggests that the brain is constantly generating models of the world to estimate future outcomes. When an outcome matches the prediction, the brain learns that its model is correct. When an outcome is unexpected, the brain must update the model.

While this theory is widely accepted in computational neuroscience, observing the physical reorganization of cells in the hippocampus over long periods has been a technical challenge. Most neural recording technologies can only track brain activity for short durations. This limitation makes it difficult to see how internal maps evolve as learning consolidates over weeks.

Mohammad Yaghoubi, a researcher at McGill University, aimed to bridge this gap. Working with senior author Mark Brandon at the Douglas Research Centre, Yaghoubi designed an experiment to track specific neurons across an extended timeframe. They sought to determine if the hippocampal map restructures itself to prioritize the prediction of rewards.

The research team employed a sophisticated imaging technique known as calcium imaging. They injected a modified virus into the brains of mice. This virus caused neurons to express a fluorescent protein that glows when calcium enters the cell, which happens when a neuron fires.

The researchers then implanted a gradient refractive index lens, a tiny microscope component, above the hippocampus. This setup allowed them to attach a miniature camera, weighing only a few grams, to the head of the mouse. The camera recorded the fluorescence of hundreds of individual neurons while the animal moved freely.

Because this method relies on optical imaging rather than physical electrodes, it is less invasive to the tissue over time. This stability allowed Yaghoubi and his colleagues to identify and monitor the exact same neurons day after day for several weeks. They could then correlate specific cellular activity with the animal’s behavior during learning.

The mice were trained to perform a task known as “delayed nonmatching-to-location” inside an automated chamber. The apparatus featured a touch-sensitive screen at one end and a reward dispenser at the other. The task required the mouse to initiate a trial and then observe a sample location lighting up on the screen.

After a short delay, the screen displayed the original location alongside a new, novel location. To receive a reward, the mouse had to ignore the familiar spot and touch the new location. The reward was a small amount of strawberry milkshake delivered at the opposite end of the chamber. This task is cognitively demanding because it requires the animal to hold information in working memory and apply a specific rule.

At the beginning of the training, the researchers noted that a distinct population of hippocampal neurons fired vigorously when the mouse received the milkshake. These cells appeared to be tuned specifically to the experience of consuming the reward. The neural map at this stage was heavily focused on the outcome itself.

As the mice repeated the task over weeks and their performance improved, the neural patterns began to change. The researchers observed a phenomenon described as backpropagation of neural tuning. The cells that originally fired only upon receiving the reward began to fire earlier in the sequence of events.

“What we found was surprising,” said Brandon. “Neural activity that initially peaked at the reward gradually shifted to earlier moments, eventually appearing before mice reached the reward.”

By the time the mice had mastered the task, these specific neurons were firing while the animal was still approaching the reward port. In some instances, the firing shifted all the way back to the moment the mouse made the correct choice on the touchscreen. The cells had transformed from sensors of the present reward into predictors of the future reward.
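
The article does not describe the authors’ analysis pipeline, but the backward shift can be pictured as the peak of a neuron’s trial-averaged activity moving to earlier time bins across training sessions. A toy illustration with invented numbers:

```python
import numpy as np

# Hypothetical trial-averaged activity of one neuron in 10 time bins
# (bin 9 = reward delivery), from an early and a late training session.
time_bins = np.arange(10)
early_session = np.array([0.1, 0.1, 0.2, 0.2, 0.3, 0.4, 0.6, 1.0, 2.5, 3.0])
late_session = np.array([0.2, 0.3, 0.6, 1.2, 2.8, 2.4, 1.0, 0.6, 0.5, 0.4])

# Peak timing: the time bin with the highest average activity in each session.
early_peak = time_bins[np.argmax(early_session)]
late_peak = time_bins[np.argmax(late_session)]
print(f"Peak shifted from bin {early_peak} (at the reward) to bin {late_peak} (during the approach).")
```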

The study also analyzed the activity of the neuronal population as a whole. In the early stages of learning, a large percentage of the recorded cells were dedicated to encoding the reward location. This resulted in an over-representation of the reward site in the mouse’s mental map.

As the weeks passed, the proportion of neurons tuned to the reward itself decreased. Simultaneously, the number of neurons encoding the approach and the choice period increased. The brain appeared to be efficient. Once the reward was predictable, fewer resources were needed to represent it. The cognitive effort shifted toward the actions required to obtain it.

This reorganization supports the idea that the hippocampus acts as a predictive device. The backward shift in timing allows the brain to signal an upcoming event based on the current context. This predictive signal likely helps guide the animal’s behavior, reinforcing the actions that lead to a positive outcome.

The researchers confirmed that this shift was not due to simple changes in the animal’s speed or position. They used statistical controls to ensure that the change in firing timing was a true remapping of the cognitive representation. The consistency of the findings across multiple animals suggests a fundamental biological mechanism.

“The hippocampus is often described as the brain’s internal model of the world,” said Brandon. “What we are seeing is that this model is not static; it is updated day by day as the brain learns from prediction errors. As outcomes become expected, hippocampal neurons start to respond earlier as they learn what will happen next.”

There are limitations to the study that warrant mention. The research was conducted on mice, and while the hippocampus is evolutionarily conserved, human cognition involves additional layers of complexity. Further research is necessary to confirm if identical cellular mechanisms drive predictive learning in the human brain.

Additionally, the study focused on a reward-based task. It remains to be seen if the hippocampus utilizes the same predictive backpropagation for negative or aversive outcomes. Future experiments will likely investigate whether the brain rewires itself similarly to predict threats or punishments.

The findings may have implications for understanding neurodegenerative disorders. Individuals with Alzheimer’s disease often exhibit disorientation and difficulty learning from new experiences. If the predictive coding mechanism in the hippocampus is disrupted, it could explain why patients struggle to anticipate consequences or navigate familiar environments.

By demonstrating that memory circuits are dynamic and predictive, this study offers a new perspective on how the brain interacts with time. The hippocampus does not merely archive the past. It actively reconstructs it to prepare for the future.

The study, “Predictive Coding of Reward in the Hippocampus,” was authored by Mohammad Yaghoubi, Andres Nieto-Posadas, Coralie-Anne Mosser, Thomas Gisiger, Émmanuel Wilson, Sylvain Williams, and Mark P. Brandon.

Psychology study sheds light on the phenomenon of waifus and husbandos

11 February 2026 at 17:00

A new study published in Psychology of Popular Media suggests that human romantic attraction to fictional characters may operate through the same psychological mechanisms that drive relationships between real people. The research offers insight into how individuals form deep attachments to non-existent partners in an increasingly digital world.

The concept of falling in love with an artificial being is not a modern invention, the researchers behind the new study noted. The ancient Greek narrative of Pygmalion describes a sculptor who creates a statue so beautiful that he falls in love with it. This theme of attributing human qualities and agency to inanimate creations has persisted throughout history.

In the contemporary landscape, this phenomenon is often observed within the anime fan community. Fans of Japanese animation sometimes utilize specific terminology to describe characters they hold in special regard. The terms “waifu” and “husbando” are derived from the English words for wife and husband. These labels imply a desire for a significant, often romantic, relationship with the character if they were to exist in reality.

The researchers conducted the new study to better understand the nature of relationships with “virtual agents.” A virtual agent is any character that exists solely on a screen but projects a sense of agency or independence to the audience. As technology advances, these characters are becoming more interactive and realistic. The authors sought to determine if the reasons people connect with these characters align with evolutionary theories regarding human mating strategies.

“Given the popularity of AI agents and chatbots, we were interested in people who have attraction to fictional characters,” said study author Connor Leshner, a PhD candidate in the Department of Psychology at Trent University.

“Through years of research, we have access to a large and charitable sample of anime fans, and it is a norm within this community to have relationships (sometimes real, sometimes not) with fictional characters. We mainly wanted to understand whether a large group of people have the capacity for relationships with fictional characters, because, if they do, then a logical future study would be studying relationships with something like AI.”

To investigate this, the research team recruited a large sample of self-identified anime fans. Participants were gathered from various online platforms, including specific communities on the website Reddit. The final sample consisted of 977 individuals who indicated that they currently had a waifu or husbando.

The demographic makeup of the sample was predominantly male. Approximately 78 percent of the respondents identified as men, while the remainder identified as women. The average age of the participants was roughly 26 years old, and more than half were from the United States. This provided a snapshot of a specific, highly engaged subculture.

The researchers employed a quantitative survey to assess the participants’ feelings and motivations. They asked participants to rate their agreement with various statements on a seven-point scale. The survey measured four potential reasons for choosing a specific character. These reasons were physical appearance, personality, the character’s role in the story, and the character’s similarity to the participant.

The researchers also sought to categorize the type of connection the fan felt toward the character. The three categories measured were emotional connection, sexual attraction, and feelings of genuine love.

The results provided evidence supporting the idea that fictional attraction mirrors real-world attraction. The data showed a positive association between a character’s physical appearance and the participant’s sexual attraction to them. This suggests that visual appeal is a primary driver for sexual interest in virtual agents, much as it is in human interaction.

However, physical appearance was not the only factor at play. The researchers found that a character’s personality was a strong predictor of emotional connection. Additionally, participants who felt that a character was similar to themselves were more likely to report a deep emotional bond. This indicates that shared traits and relatable behaviors foster feelings of closeness even when the partner is not real.

A central focus of the study was the influence of gender on these connections. The analysis revealed distinct differences between how men and women engaged with their chosen characters. Men were significantly more likely to report feelings of sexual attraction toward their waifus or husbandos. This aligns with prior research on male mating strategies that emphasizes visual and sexual stimuli.

Women, in contrast, reported higher levels of emotional connection with their fictional partners. While they also valued personality, their bonds were characterized more by affection and emotional intimacy than by sexual desire. This finding supports the hypothesis that women apply criteria focused on emotional compatibility even when the relationship is entirely imagined.

The researchers also explored the concept of “genuine love” for these characters. They found that feelings of love were predicted by a combination of factors. Physical appearance, personality, and similarity to the self all contributed to the sensation of being in love. This suggests that for a fan to feel love, the character must appeal to them on multiple levels simultaneously.

“People do have the capacity for these relationships,” Leshner told PsyPost. “Sometimes they are based in physical attraction, especially for men, while others are based on platonic, personality-based attraction, especially for women. Overall, people can feel a deep, intimate connection with people who don’t exist on our plane of reality, and I think that’s neat.”

The findings were not particularly surprising. “Everything matches what you’d expect from related theories, like evolutionary mating strategy where men want physical or sexual relationships, while women find more appeal in the platonic, long-term relationship,” Leshner said. “We have ongoing research that helps contextualize these findings more, but until that’s published, we cannot say much more.”

One potential predictor that did not yield significant results was the character’s role in the media. The “mere exposure effect” suggests that people tend to like things simply because they are familiar with them. The researchers tested if characters with larger roles, such as protagonists who appear on screen frequently, were more likely to be chosen. The data did not support this link.

The specific narrative function of the character did not predict sexual attraction, emotional connection, or love. A supporting character with limited screen time appeared just as capable of inspiring deep affection as a main hero. This implies that the specific attributes of the character matter more than their prominence in the story.

These findings carry implications that extend beyond the anime community. As artificial intelligence and robotics continue to develop, human interactions with non-human entities will likely become more common. The study suggests that people are capable of forming complex, multifaceted relationships with entities that do not physically exist.

“Anime characters don’t have agency, nor do they have consciousness, so the extent to which the average person might have a serious relationship with an anime character is probably limited,” Leshner told PsyPost. “With that said, the same is true of AI, and the New York Times published a huge article on human-AI romantic relationships. So maybe these relationships are more appealing than we really capture here.”

There are limitations to the study. The research relied on cross-sectional data, which means it captured a single moment in time. This design prevents researchers from proving that specific character traits caused the attraction. It is possible that attraction causes a participant to perceive traits differently.

Additionally, the sample was heavily skewed toward Western, male participants. Cultural differences in how relationships are viewed could influence these results. The anime fandom in Japan, for instance, might exhibit different patterns of attachment than those observed in the United States. Future research would benefit from a more diverse, global pool of participants.

Despite these limitations, the study provides a foundation for understanding the future of human connection. It challenges the notion that relationships with fictional characters are fundamentally different from real relationships. The psychological needs and drives that lead someone to download a soulmate appear to be remarkably human.

“People might either find these relationships weird, or might say that AI is significantly different from what we show here,” Leshner added. “My first response is that these relationships aren’t weird, and we’ve been discussing similar relationships for centuries. The article opens with a reference to Pygmalion, which is a Greek story about a guy falling in love with a statue. At minimum, it’s a repeated idea in our culture.”

“To my second point about the similarities between AI and anime characters, I think about it like this: AI might seem more human, but it’s just Bayesian statistics with extra steps. If you watch an anime all the way through, you can spend up to hundreds of hours with characters who have their own human struggles, triumphs, loves and losses. To be drawn toward that story and character is, to me, functionally similar to talking to an AI chatbot. The only difference is that an AI chatbot can feel more responsive, and might have more options for customization.”

“I think this research is foundational to the future of relationships, but I don’t think people know enough about anime characters, or really media or parasocial relationships broadly, to see things the same way,” Leshner continued. “I’m going to keep going down this road to understand the parallels with AI and modern technologies, but I fully believe that this is an uphill battle for recognition.”

“I hope this work inspires people to look into why people might be attracted to anime characters more broadly. It feels like the average anime character is made to be conventionally attractive in a way that is not true of most animation. It might still be weird to someone with no knowledge of the field if they engage in this quick exercise, but I have the utmost confidence that the average person might say, ‘Well, although it is not for me, I can understand it better now.'”

The study, “You would not download a soulmate: Attributes of fictional characters that inspire intimate connection,” was authored by Connor Leshner, Stephen Reysen, Courtney N. Plante, Sharon E. Roberts, and Kathleen C. Gerbasi.

Scientists: A common vaccine appears to have a surprising impact on brain health

11 February 2026 at 15:00

A new scientific commentary suggests that annual influenza vaccination could serve as a practical and accessible strategy to help delay or prevent the onset of dementia in older adults. By mitigating the risk of severe cardiovascular events and reducing systemic inflammation, the seasonal flu shot may offer neurological protection that extends well beyond respiratory health. This perspective article was published in the journal Aging Clinical and Experimental Research.

Dementia poses a significant and growing challenge to aging societies worldwide, creating an urgent need for scalable prevention strategies. While controlling midlife risk factors like high blood pressure remains a primary focus, medical experts are looking for additional tools that can be easily integrated into existing healthcare routines.

Lorenzo Blandi from the Vita-Salute San Raffaele University and Marco Del Riccio from the University of Florence authored this analysis to highlight the potential of influenza vaccination as a cognitive preservation tool. They argue that the current medical understanding of the flu shot is often too limited. The researchers propose that by preventing the cascade of physical damage caused by influenza, vaccination can help maintain the brain’s vascular and cellular health.

The rationale for this perspective stems from the observation that influenza is not merely a respiratory illness. It is a systemic infection that can cause severe complications throughout the body. The authors note that influenza infection is associated with a marked increase in the risk of heart attacks and strokes in the days following illness.

These vascular events are known to contribute to cumulative brain injury. Consequently, Blandi and Del Riccio sought to synthesize existing evidence linking vaccination to improved cognitive outcomes. They posit that preventing these viral insults could modify the trajectory of dementia risk in the elderly population.

To support their argument, the authors detail evidence from four major epidemiological studies that demonstrate a link between receiving the flu shot and a lower incidence of dementia. The first piece of evidence cited is a 2023 meta-analysis. This massive review aggregated data from observational cohort studies involving approximately 2.09 million adults.

The participants in these studies were followed for periods ranging from four to thirteen years. The analysis found that individuals who received influenza vaccinations had a 31 percent lower risk of developing incident dementia compared to those who did not.

The second key study referenced was a claims-based cohort study. This research utilized propensity-score matching, a statistical technique designed to create comparable groups by accounting for various baseline characteristics. The researchers analyzed data from 935,887 matched pairs of older adults who were at least 65 years old.
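
The article summarizes the matching only briefly. As a rough illustration of how propensity-score matching builds comparable groups, here is a simplified sketch with invented covariates and one-to-one matching with replacement, not the study’s actual procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Hypothetical baseline covariates (e.g., age, comorbidity score) and a vaccination flag.
X = rng.normal(size=(10_000, 2))
vaccinated = rng.integers(0, 2, size=10_000).astype(bool)

# Step 1: estimate each person's propensity score, i.e., the probability of being
# vaccinated given their baseline characteristics.
propensity = LogisticRegression().fit(X, vaccinated).predict_proba(X)[:, 1]

# Step 2: pair each vaccinated person with the unvaccinated person whose propensity
# score is closest, producing comparable groups whose outcomes can then be contrasted.
controls = propensity[~vaccinated].reshape(-1, 1)
matcher = NearestNeighbors(n_neighbors=1).fit(controls)
_, match_idx = matcher.kneighbors(propensity[vaccinated].reshape(-1, 1))
print(f"{len(match_idx)} matched pairs formed")
```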

The results showed that those who had received an influenza vaccination had a 40 percent lower relative risk of developing Alzheimer’s disease over a follow-up period of roughly four years. The study calculated an absolute risk reduction of 3.4 percent, suggesting that for every 29 people vaccinated, one case of Alzheimer’s might be prevented during that timeframe.
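
The “one case per 29 people vaccinated” figure is simply the reciprocal of the absolute risk reduction, a quantity often called the number needed to treat. A quick check of that arithmetic:

```python
absolute_risk_reduction = 0.034  # 3.4 percent, as reported

number_needed_to_treat = 1 / absolute_risk_reduction
print(round(number_needed_to_treat))  # ~29 people vaccinated per case of Alzheimer's prevented
```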

The third study highlighted in the perspective used data from the Veterans Health Administration. This study was significant because it used time-to-event models to address potential biases related to when vaccinations occurred.

The researchers found that vaccinated older adults had a hazard ratio for dementia of 0.86. This statistic indicates a risk reduction of roughly 14 percent. The data also revealed a dose-response relationship. This means that the protective signal was strongest among participants who received multiple vaccine doses across different years and seasons, rather than just a single shot.

The fourth and final study cited was a prospective analysis of the UK Biobank. This study modeled vaccination as an exposure that varies over time, allowing for a nuanced view of cumulative effects.

The researchers observed a reduced risk for all-cause dementia, with a hazard ratio of 0.83. The reduction in risk was even more pronounced for vascular dementia, showing a hazard ratio of 0.58. Similar to the veterans’ study, this analysis supported the idea of a dose-response relationship. The accumulation of vaccinations over time appeared to correlate with better cognitive outcomes.

Blandi and Del Riccio explain several biological mechanisms that could account for these protective effects. The primary pathway involves the prevention of vascular damage. Influenza infection is a potent trigger for inflammation and blood clotting.

Research shows that the risk of acute myocardial infarction can be six times greater in the first week after a flu infection. By preventing the flu, the vaccine likely prevents these specific vascular assaults. Since vascular health is closely tied to brain health, avoiding these events helps preserve cognitive reserve. The cumulative burden of small strokes or reduced blood flow to the brain is a major predictor of cognitive decline.

In addition to vascular protection, the authors discuss the role of neuroinflammation. Studies in animal models have shown that influenza viruses can trigger activation of microglia, which are the immune cells of the brain. This activation can lead to the loss of synapses and memory decline, even if the virus itself does not enter the brain.

Systemic inflammation caused by the flu can cross into the nervous system. The authors suggest that vaccination may dampen these inflammatory surges. There is also a hypothesis known as “trained immunity,” where vaccines might program the immune system to respond more efficiently to threats, reducing off-target damage to the brain.

Based on this evidence, the authors propose several policy changes and organizational strategies. They argue that public health messaging needs to be reconceptualized. Instead of framing the flu shot solely as a way to avoid a winter cold, health officials should present it as a measure to reduce heart attacks, strokes, and potential cognitive decline. This approach addresses the priorities of older adults, who often fear dementia and loss of independence more than respiratory illness.

The authors also recommend specific clinical practices. They suggest that health systems should prioritize the use of high-dose or adjuvanted vaccines for adults over the age of 65. These formulations are designed to overcome the weaker immune response often seen in aging bodies.

Additionally, the authors advocate for making vaccination a default part of hospital discharge procedures. When an older adult is leaving the hospital after a cardiac or pulmonary event, vaccination should be a standard component of their care plan. This would help close the gap between the known benefits of the vaccine and the currently low rates of uptake in many regions.

Despite the promising data, Blandi and Del Riccio acknowledge certain limitations in the current body of evidence. The majority of the data comes from observational studies. This type of research can identify associations but cannot definitively prove causality.

There is always a possibility of “healthy user bias,” where people who choose to get vaccinated are already more health-conscious and have better lifestyle habits than those who do not. While the studies cited used advanced statistical methods to control for these factors, residual confounding can still exist.

The authors also note that studies based on medical claims data can suffer from inaccuracies in how dementia is diagnosed and recorded. Furthermore, the precise biological mechanisms remain a hypothesis that requires further validation. The authors call for future research to include pragmatic randomized trials that specifically measure cognitive endpoints. They suggest that future studies should track biological markers of neuroinflammation in vaccinated versus unvaccinated groups to confirm the proposed mechanisms.

The study, “From breath to brain: influenza vaccination as a pragmatic strategy for dementia prevention,” was authored by Lorenzo Blandi and Marco Del Riccio.

Staying off social media isn’t always a sign of a healthy social life

11 February 2026 at 03:00

New research suggests that the way adolescents use social media is not a uniform experience but rather splits into distinct personality-driven profiles that yield varying social results. The findings indicate that digital platforms largely reinforce existing friendships rather than helping isolated youth build new connections. These results were published in the journal Computers in Human Behavior.

For years, psychologists have debated whether apps like Instagram, TikTok, and Snapchat help or harm adolescent development. Some theories propose that these platforms simulate meaningful connection and allow young people to practice social skills. Other perspectives argue that digital interactions replace face-to-face communication with superficial scrolling, leading to isolation.

However, most previous inquiries looked at average behaviors across large groups or focused on simple metrics like screen time. This approach often misses the nuance of individual habits. Real-world usage is rarely just about logging on or logging off. It involves a mix of browsing, posting, liking, and chatting.

Federica Angelini, the lead author from the Department of Developmental and Social Psychology at the University of Padova in Italy, worked with colleagues to move beyond these binary categories. They wanted to understand how specific combinations of online behaviors cluster together. They also sought to determine if a teenager’s underlying social motivations drive these habits.

The research team recognized that early adolescence is a formative period for social and emotional growth. During these years, close relationships with peers become central to a young person’s identity. Because these interactions now occur simultaneously in physical and digital spaces, the authors argued that science needs better models to capture this complexity.

To achieve this, the team tracked 1,211 Dutch students between the ages of 10 and 15 over the course of three years. They used surveys to measure how often students looked at content, posted about themselves, interacted with others, and shared personal feelings. The researchers also assessed the students’ psychological motivations, such as the fear of missing out or a desire for popularity.

Using a statistical technique called latent profile analysis, the investigators identified four distinct types of users. The largest group, comprising about 54 percent of the participants, was labeled “All-round users.” These teens engaged in a moderate amount of all activities, from scrolling to posting.
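
Latent profile analysis is closely related to fitting a Gaussian mixture model to continuous indicators and assigning each person to their most probable class. The sketch below is not the authors’ analysis; it runs scikit-learn’s GaussianMixture on simulated data with made-up group structure, purely to show the mechanics, including choosing the number of profiles by BIC:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Simulated standardized scores on four usage indicators (browsing, posting,
# interacting, self-disclosure) for hypothetical teens drawn from four groups.
centers = np.array([
    [0.5, 0.5, 0.5, 0.5],      # broad, moderate use
    [-1.0, -1.0, -1.0, -1.0],  # low use
    [0.0, 0.3, 0.3, 1.5],      # high self-disclosure
    [0.2, 1.5, -0.3, 0.0],     # high self-oriented posting
])
sizes = [650, 360, 100, 90]
X = np.vstack([rng.normal(loc=c, scale=0.4, size=(n, 4)) for c, n in zip(centers, sizes)])

# Fit 1- to 6-profile solutions, compare them by BIC (lower is better), keep the best.
models = {k: GaussianMixture(n_components=k, random_state=0).fit(X) for k in range(1, 7)}
best_k = min(models, key=lambda k: models[k].bic(X))

profiles = models[best_k].predict(X)   # most likely profile for each simulated participant
print(best_k, np.bincount(profiles))   # number of profiles retained and their sizes
```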

The study found that All-round users generally maintained moderate-to-high quality friendships throughout the three-year period. Their digital habits appeared to be an extension of a healthy offline social life. They used these platforms to keep in touch and share experiences with friends they already saw in person.

The second largest group, making up roughly 30 percent, was identified as “Low users.” These individuals rarely engaged with social media in any form, whether passive scrolling or active posting. While it might seem beneficial to be less dependent on screens, the data showed a different story for this specific group.

These Low users reported lower quality friendships at the start of the research compared to their peers. Their lack of online engagement appeared to mirror a lack of connection in the real world. Without a strong peer group to interact with, they had little motivation to log on. The data suggests they were not simply opting out of technology but were missing out on the social reinforcement that happens online.

A smaller group, about 8 percent, was termed “High self-disclosing users.” These adolescents frequently used digital platforms to share personal feelings, secrets, and emotional updates. They tended to prefer online communication over face-to-face talk.

This group scored higher on measures of anxiety and depression. The researchers suggest these teens might use the internet to compensate for difficulties in offline social situations. The reduced pressure of online chat, which lacks nonverbal cues like eye contact, may make it easier for them to open up. Despite their emotional struggles, this group maintained high-quality friendships, suggesting their vulnerability online helped sustain their bonds.

The final group, labeled “High self-oriented users,” made up roughly 7 percent of the sample. These teens focused heavily on posting content about themselves but showed less interest in what peers were doing. They were driven by a desire for status and attention.

Unlike the other groups, High self-oriented users were less concerned with the fear of missing out. Their primary goal appeared to be self-promotion rather than connection. Notably, this was the only group that saw a decline in the quality of their close friendships over the three years. Their focus on gaining an audience rather than engaging in reciprocal friendship likely failed to deepen their personal relationships.

The analysis revealed that social media generally acts as an amplifier of offline social dynamics. Teens with strong existing friendships used the platforms to maintain those bonds. Those with weaker connections did not seem to benefit from the technology.

This supports the idea that the benefits of social media rely heavily on pre-existing relationships. Adolescents who struggle socially in person may find it difficult to use these tools to build meaningful relationships from scratch. Instead of bridging the gap, the technology might leave them further behind.

The study also highlighted the role of motivation. Teens who used social media to seek status were more likely to fall into the self-oriented or self-disclosing categories. Those who simply wanted to stay in the loop tended to be All-round users.

There are limitations to consider regarding this research. The data relied on self-reported surveys, which can sometimes be inaccurate as people may not remember their habits perfectly. Additionally, the study was conducted in the Netherlands, so the results might not apply universally to adolescents in other cultural contexts.

The researchers noted that some participants dropped out of the study over the three years, which is common in longitudinal work. The study also did not strictly differentiate between friends met online versus friends met offline, though most participants indicated they communicated with people they knew in real life.

Future research could benefit from using objective measures, such as tracking app usage data directly from smartphones. It would also be beneficial to investigate how these profiles evolve as teens move into young adulthood. Understanding these patterns could help parents and educators tailor their advice, rather than giving generic warnings about screen time.

The study, “Adolescent social media use profiles: A longitudinal study of friendship quality and socio-motivational factors,” was authored by Federica Angelini, Ina M. Koning, Gianluca Gini, Claudia Marino, and Regina J.J.M. van den Eijnden.

Moderate coffee and tea consumption linked to lower risk of dementia

11 February 2026 at 02:00

A new analysis of long-term dietary habits suggests that your daily cup of coffee or tea might do more than just provide a morning jolt. Researchers found that moderate consumption of caffeinated beverages is linked to a lower risk of dementia and better cognitive function over time. These results were published in the journal JAMA.

Dementia and Alzheimer’s disease represent a growing health challenge as the global population ages. Current medical treatments offer limited benefits once symptoms appear, and they cannot reverse the condition. This reality has prompted medical experts to look for lifestyle habits that might delay the onset of cognitive decline. Diet is a primary area of focus because it is a factor that individuals can control in their daily lives.

Coffee and tea are of particular interest to nutritional scientists. These beverages contain chemical compounds that may protect brain cells from damage. These include caffeine and polyphenols, which are plant-based micronutrients with antioxidant properties.

Prior attempts to measure this potential benefit have yielded mixed results. Some earlier inquiries relied on participants remembering their dietary habits from the distant past. Others checked in with participants only once, failing to capture how habits change over a lifetime. To address these limitations, a team led by Yu Zhang and Daniel Wang from the Harvard T.H. Chan School of Public Health and Mass General Brigham undertook a more expansive approach.

The investigators analyzed data from two massive, long-running groups of medical professionals. The study included over 130,000 female nurses and male health professionals who provided updates on their health and diet for up to forty-three years. Unlike smaller snapshots of time, this project tracked dietary habits repeatedly. Participants filled out detailed questionnaires about what they ate and drank every two to four years.

This repeated assessment allowed the researchers to reduce errors associated with memory. It also helped them calculate a cumulative average of caffeine intake over decades. The team looked for associations between these drinking habits and three specific outcomes: a clinical diagnosis of dementia, self-reported memory problems, and performance on objective cognitive tests.
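
A cumulative average of this kind is just the running mean of all questionnaire reports collected so far; a minimal illustration with hypothetical intake values:

```python
# Hypothetical caffeine intake (mg/day) reported on successive questionnaires.
reports = [180, 220, 200, 240, 210]

# Cumulative average at each follow-up cycle: the mean of all reports up to that point.
cumulative_averages = [sum(reports[: i + 1]) / (i + 1) for i in range(len(reports))]
print([round(value) for value in cumulative_averages])  # [180, 200, 200, 210, 210]
```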

The data revealed a distinct pattern regarding the consumption of caffeinated beverages. Individuals who drank caffeinated coffee had a lower chance of developing dementia compared to those who avoided it. The relationship followed a specific curve rather than a straight line.

The greatest reduction in risk appeared among people who drank approximately two to three cups of caffeinated coffee per day. Consuming more than this amount did not result in additional benefits, but it also did not appear to cause harm. This finding contradicts some earlier fears that high caffeine intake might be detrimental to the aging brain.

Tea drinkers saw similar benefits. Consuming one to two cups of tea daily was linked to a lower likelihood of dementia diagnosis. In contrast, the researchers found that the results were not statistically significant among those who drank decaffeinated coffee. This distinction suggests that caffeine itself may play a central role in the observed neuroprotection.

The study also looked at how well participants could think and remember as they aged. In a subset of the participants who underwent telephone-based testing, higher caffeinated coffee intake tracked with better scores on performance tasks. These tests measured verbal memory, attention, and executive function.

The difference in scores was roughly equivalent to being several months younger in terms of brain aging. Even among people who carried genes that usually increase the risk of Alzheimer’s, the link between caffeine and better brain health remained consistent. The researchers also assessed “subjective cognitive decline.” This is a stage where individuals feel they are having memory slips before a doctor can detect them. Higher caffeine intake was associated with fewer reports of these subjective problems.

These results add weight to a growing body of evidence linking caffeine to neurological health. However, the findings do not perfectly align with every previous study. For example, recent analyses of the UK Biobank database also found that coffee drinkers had a lower risk of neurodegenerative conditions. That research highlighted that unsweetened coffee seemed most beneficial.

The UK Biobank findings differed slightly regarding decaffeinated coffee. While the Harvard team found no link between decaf and dementia risk, the UK study suggested decaf might still offer some protection. This discrepancy implies that other compounds in coffee besides caffeine might play a role, or that different populations metabolize these beverages differently.

Other research utilizing brain imaging has offered clues about why this might happen. A study from the Australian Imaging, Biomarkers and Lifestyle study of aging found that higher coffee consumption was associated with a slower buildup of amyloid proteins in the brain. These proteins are the sticky clumps associated with Alzheimer’s disease.

The new Harvard study aligns with the theory that caffeine helps maintain neural networks. It supports the idea that moderate stimulation of the brain’s chemical receptors might reduce inflammation. Caffeine blocks specific receptors in the brain known as adenosine receptors. When these receptors are blocked, it affects the release of neurotransmitters and may reduce the stress on brain cells.

Researchers have also observed in animal models that caffeine can suppress the enzymes that create amyloid plaques. It appears to enhance the function of mitochondria, which are the power plants of the cell. By improving how brain cells use energy, caffeine might help them survive longer in the face of aging.

Additional context comes from the National Health and Nutrition Examination Survey in the United States. That separate analysis found that older adults who consumed more caffeine performed better on tests of processing speed and attention. The consistency of these findings across different populations strengthens the argument that caffeine has a measurable effect on cognition.

Despite the large sample size of the new Harvard analysis, the study has limitations inherent to observational research. It demonstrates an association but cannot definitively prove that coffee causes the reduction in dementia cases. It is possible that people who start to experience subtle cognitive decline naturally stop drinking coffee before they are diagnosed. This phenomenon is often called reverse causation.

The researchers attempted to account for this by conducting sensitivity analyses. They looked at the data in ways that excluded the years immediately preceding a diagnosis. The protective link remained, suggesting that reverse causation does not fully explain the results.
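
One common way to implement this kind of sensitivity check is to exclude any case diagnosed within a fixed window after the exposure assessment. The sketch below illustrates the general idea with hypothetical records and a hypothetical two-year lag; it is not the study's actual analysis code.

```python
# Hedged sketch of a lag ("reverse causation") sensitivity analysis.
# Records, field names, and the 2-year lag are hypothetical.

records = [
    {"id": 1, "exposure_year": 2000, "diagnosis_year": 2001, "case": True},
    {"id": 2, "exposure_year": 2000, "diagnosis_year": 2010, "case": True},
    {"id": 3, "exposure_year": 2000, "diagnosis_year": None, "case": False},
]

LAG_YEARS = 2  # exclude cases diagnosed within 2 years of the exposure assessment

def apply_lag(rows, lag):
    kept = []
    for row in rows:
        if row["case"] and row["diagnosis_year"] - row["exposure_year"] < lag:
            continue  # drop early diagnoses that could reflect reverse causation
        kept.append(row)
    return kept

print([row["id"] for row in apply_lag(records, LAG_YEARS)])  # [2, 3]
```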

The participants in this study were primarily white medical professionals. This fact means the results might not apply perfectly to the general population or to other racial and ethnic groups. Additionally, the questionnaires did not distinguish between different preparation methods. The study could not separate the effects of espresso versus drip coffee, or green tea versus black tea.

Unmeasured factors could also be at play. Coffee drinkers might share other lifestyle habits that protect the brain, such as higher levels of social activity or different dietary patterns. The researchers used statistical models to adjust for smoking, exercise, and overall diet quality. However, observational studies can never fully eliminate the possibility of residual confounding variables.

Future research needs to clarify the biological mechanisms at play. Researchers must determine whether caffeine is acting alone or in concert with other antioxidants found in these plants. Clinical trials that assign specific amounts of caffeine to participants could help confirm these observational findings.

The senior author of the study, Daniel Wang, emphasized the need for perspective when interpreting these results. “While our results are encouraging, it’s important to remember that the effect size is small and there are lots of important ways to protect cognitive function as we age,” Wang said. “Our study suggests that caffeinated coffee or tea consumption can be one piece of that puzzle.”

For now, the data suggest that a moderate coffee or tea habit is a generally healthy choice for the aging brain. Consuming roughly two to three cups of coffee or one to two cups of tea per day appears to provide the maximum potential benefit. This study provides reassurance that this common daily ritual does not harm cognitive function and may help preserve it.

The study, “Coffee and Tea Intake, Dementia Risk, and Cognitive Function,” was authored by Yu Zhang, Yuxi Liu, Yanping Li, Yuhan Li, Xiao Gu, Jae H. Kang, A. Heather Eliassen, Molin Wang, Eric B. Rimm, Walter C. Willett, Frank B. Hu, Meir J. Stampfer, and Dong D. Wang.

Severe teen ADHD symptoms predict lower income and higher arrest rates by age 40

11 February 2026 at 01:00

A longitudinal study in Christchurch, New Zealand found that individuals who displayed the most severe ADHD symptoms as adolescents were at an elevated risk of developing substance use disorder, depression, and suicidal ideation in early adulthood. They were also more likely to engage in crime and be unemployed. These individuals tended to have lower income and living standards, and less stable relationships. The paper was published in the British Journal of Psychiatry.

Attention-deficit/hyperactivity disorder, or ADHD, is a neurodevelopmental condition characterized by persistent patterns of inattention, hyperactivity, and/or impulsivity that interfere with daily functioning or development. It typically begins in childhood, although many individuals continue to experience symptoms into adolescence and adulthood. ADHD is most often diagnosed when a child starts school, as behaviors caused by the condition come into conflict with classroom rules.

ADHD is more commonly diagnosed in males, although females are often underdiagnosed because their symptoms tend to be less overt. Genetic factors play a major role in ADHD, and it frequently co-occurs with other conditions such as learning disorders, anxiety, depression, or oppositional behavior. Symptoms can substantially impair academic performance, work productivity, and social relationships.

Study author James A. Foulds and his colleagues used data from a 40-year longitudinal study of a birth cohort in Christchurch, New Zealand to estimate the association between ADHD symptoms in adolescence and a broad range of mental health and psychosocial outcomes in adulthood, up to 40 years of age.

Data used in this analysis came from the Christchurch Health and Development Study. This study enrolled 1,265 individuals born in Christchurch in 1977 and assessed them annually from birth to 16 years of age. After that, data were collected when participants were 18, 21, 25, 30, 35, and 40 years of age. In the final three data collection waves, 75–80% of surviving study participants provided their data.

The analysis drew on assessments of ADHD, conduct disorder, and oppositional defiant disorder symptoms collected when participants were 14–16 years of age. From the data collected between ages 16 and 40, the study authors used information on substance use disorders (alcohol and cannabis), illicit drug use, and internalizing mental health problems.

Internalizing mental health problems are psychological difficulties characterized by inwardly directed distress, such as anxiety, depression, and withdrawal. From the data collected between ages 25 and 40, the analysis used information on unemployment (lasting at least 3 months), relationship breakdowns, income, and home ownership.

Results showed that the 25% of participants with the most severe ADHD symptoms in adolescence were more likely to smoke tobacco (34% vs 15%) and to meet criteria for alcohol use disorder (26% vs 14%) and cannabis use disorder (18% vs 7%) than participants with less severe or no ADHD symptoms.

These individuals also more often met the criteria for major depression (29% vs 19%), anxiety disorders, and suicidal ideation. They were more likely to have been arrested (9% vs 3%), more likely to have engaged in both violent and property crime, and were more often unemployed. Participants who had the most severe ADHD symptoms as adolescents owned their homes less often and tended to have lower personal income. They more often reported breakdowns of relationships.
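
As an illustration of how such a top-quartile comparison can be computed, the sketch below splits a sample at the 75th percentile of symptom scores and compares an outcome rate between the two groups. The scores and outcomes are invented, not the Christchurch data.

```python
# Hedged sketch: comparing an outcome rate between the top quartile of
# adolescent symptom scores and the rest of a cohort. All values are invented.
import statistics

scores = [2, 5, 1, 9, 7, 3, 8, 4, 6, 10, 0, 12]   # hypothetical symptom scores
smokes = [0, 0, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0]     # hypothetical outcome (1 = smokes)

cutoff = statistics.quantiles(scores, n=4)[-1]     # 75th percentile cut point
top = [s for sc, s in zip(scores, smokes) if sc > cutoff]
rest = [s for sc, s in zip(scores, smokes) if sc <= cutoff]

print(f"top quartile: {sum(top) / len(top):.0%}, others: {sum(rest) / len(rest):.0%}")
```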

“Higher levels of adolescent ADHD symptoms are associated with substance use problems and criminal offending in adulthood. Long-term secondary prevention activities are needed to detect and manage coexisting problems among adults with a history of ADHD,” the study authors concluded.

The study sheds light on the links between ADHD symptoms in adolescence and key life outcomes in adulthood. However, it should be noted that the study was conducted on a group of individuals born in a single city (Christchurch) in the same year (1977), meaning that the observed associations may reflect the cultural and social circumstances of Christchurch during that period. Results in other cultures and other historical periods may differ.

The study authors also note that only five participants were prescribed stimulant medication for their ADHD. This differs from the modern situation, in which people with ADHD receive medication far more often. Finally, it remains unknown how much of the association with these outcomes is due to ADHD symptoms rather than to co-occurring conditions such as autism spectrum disorder, which was rarely diagnosed in the 1970s.

The paper, “Long-term outcomes associated with adolescent ADHD symptomatology: birth cohort study,” was authored by James A. Foulds, Joseph M. Boden, Jessica A. Kerr, Katie M. Douglas, Michaela Pettie, Jesse T. Young, Mairin R. Taylor, Katherine Donovan, and Richard Porter.

Physical distance shapes moral choices in sacrificial dilemmas

When people feel physically closer to someone who could be harmed, they are less willing to sacrifice that person for the greater good, according to a new finding reported in Cognition & Emotion.

Moral dilemmas, situations where any available option violates an important moral value, have been used to study how people balance rules like “do not harm” against outcomes like saving more lives. Classic examples such as the trolley and footbridge dilemmas show that people often reject utilitarian solutions when harm requires direct physical contact, suggesting that emotional responses play an important role in moral judgment.

The trolley dilemma is a thought experiment that asks whether it is morally permissible to pull a lever to divert a runaway train, sacrificing one person on a side track to save five people on the main line. The footbridge dilemma modifies this scenario by asking if one would physically push a large person off a bridge to stop the train, rather than using a mechanical switch.

Federica Alfeo and colleagues were motivated by an open question in this literature: is it the type of action (e.g., pushing versus pulling a lever), or the physical closeness to the victim, that drives these moral choices? Building on theories of psychological distance and prior work on emotion in decision-making, the authors set out to disentangle how proximity itself shapes moral judgments and emotional reactions.

The researchers conducted two studies using computer-based, interactive moral dilemmas modeled on the footbridge scenario. The scenarios were presented from a first-person perspective, allowing participants to experience the unfolding situation as if they themselves were at the scene.

In Study 1, 261 participants responded to scenarios that required different actions implying different levels of physical proximity to a victim: pushing someone directly, using a gun, or pulling a lever that opened a trapdoor. Participants made a forced choice between a deontological option (letting five people die) and a utilitarian option (sacrificing one person), while their response times were recorded.

After each scenario, participants estimated how physically close they felt to the victim using a visual distance scale. They also rated their emotional responses using standardized ratings, spanning negative emotions (e.g., fear, anger, sadness), moral emotions (e.g., guilt, shame, regret), and positive or neutral emotions. Importantly, emotions were assessed both for the option participants chose (factual emotions) and for the option they rejected (counterfactual emotions).

In Study 2, the researchers tested 46 additional participants to further isolate proximity. Here, the action remained constant across all scenarios (pulling a lever), while only the visual distance to the victim was manipulated. This design allowed the authors to examine whether perceived proximity alone, without changing the action, was sufficient to alter moral choices and emotional reactions.

Across both studies, participants reliably perceived the intended differences in physical distance, confirming that the proximity manipulations worked as designed. In Study 1, moral choices varied systematically with proximity. Participants were less willing to endorse the utilitarian option when the scenario required closer physical contact with the potential victim.

When harm felt more immediate and personal, participants tended to favor deontological choices, even when those choices resulted in worse overall outcomes. Scenarios implying greater distance, by contrast, were associated with a higher likelihood of sacrificing one person to save five.

Emotional responses mirrored these decision patterns. Negative emotions and moral emotions, including guilt, shame, regret, and disappointment, were strongest in high-proximity scenarios and weakest when the victim was farther away. Importantly, emotions associated with the unchosen alternative were consistently more intense than emotions linked to the chosen action.

This pattern suggests that participants anticipated the emotional consequences of both options and tended to choose the one expected to minimize emotional distress. Response times did not meaningfully differ across proximity levels, indicating that emotional intensity rather than deliberation time distinguished the scenarios.

Study 2 replicated and clarified these effects while holding the action constant. Even when participants always performed the same action, greater perceived distance increased utilitarian responding, whereas closer proximity reduced it. Emotional patterns showed a similar structure, with proximity amplifying negative and moral emotions and counterfactual emotions again exceeding factual ones.

Together, these findings show that physical closeness itself, not just the type of action, plays a central role in moral decision-making.

These findings are based on hypothetical, computer-based dilemmas, which may not fully capture how people behave in real-world moral situations involving genuine stakes and consequences.

The research, “The closer you are, the more it hurts: the impact of proximity on moral decision-making,” was authored by Federica Alfeo, Antonietta Curci, and Tiziana Lanciano.

Does sexual activity before exercise harm athletic performance?

10 February 2026 at 21:00

New research published in the journal Physiology & Behavior provides evidence that sexual activity shortly before high-intensity exercise does not harm athletic performance. The study suggests that masturbation-induced orgasm 30 minutes prior to exertion may actually enhance exercise duration and reaction time. These findings challenge long-standing beliefs regarding the necessity of sexual abstinence before athletic competition.

The motivation for the new study stems from a persistent debate in the sports world. Coaches and athletes have frequently adhered to the idea that sexual activity drains energy and reduces aggression. This belief has led to common recommendations for abstinence in the days leading up to major events. Diego Fernández-Lázaro from the University of Valladolid led a research team to investigate whether these restrictions are scientifically justified.

Previous scientific literature on this topic has been inconsistent or limited in scope. Many prior studies focused on sexual activity occurring the night before competition, leaving a gap in knowledge regarding immediate effects. Fernández-Lázaro and his colleagues aimed to examine the physiological and performance outcomes of sexual activity that occurs less than an hour before maximal effort.

To conduct the investigation, the researchers recruited 21 healthy, well-trained male athletes. The participants included basketball players, long-distance runners, and boxers. The average age of the volunteers was 22 years. The study utilized a randomized crossover design to ensure robust comparisons. This means that every participant completed both the experimental condition and the control condition.

In the control condition, participants abstained from any sexual activity for at least seven days. On the day of testing, they watched a neutral documentary film for 15 minutes before beginning the exercise assessments. In the experimental condition, the participants engaged in masturbation to orgasm in a private setting 30 minutes before the tests. They viewed a standardized erotic film to facilitate this process. Afterward, they watched the same neutral documentary to standardize the rest period.

The researchers employed two primary physical tests to measure performance. The first was an isometric handgrip strength test using a dynamometer. The second was an incremental cycling test performed on a stationary bike. The cycling test began at a set resistance and increased in difficulty every minute until the participant could no longer continue. This type of test is designed to measure aerobic capacity and time to exhaustion.

In addition to physical performance, the team collected blood samples to analyze various biomarkers. They looked for changes in hormones such as testosterone, cortisol, and luteinizing hormone. They also measured markers of muscle damage, including creatine kinase and lactate dehydrogenase. Inflammatory markers like C-reactive protein were also assessed to see if sexual activity placed additional stress on the body.

The results indicated that sexual activity did not have a negative impact on physical capabilities. The participants demonstrated a small but statistically significant increase in the total duration of the cycling test following sexual activity compared to the abstinence condition. This improvement represented a 3.2 percent increase in performance time.

The researchers also observed changes in handgrip strength. The mean strength values were slightly higher in the sexual activity condition. This suggests that the neuromuscular system remained fully functional and perhaps slightly primed for action.

Physiological monitoring revealed that heart rates were higher during the exercise sessions that followed sexual activity. This elevation in heart rate aligns with the activation of the sympathetic nervous system. This system is responsible for the “fight or flight” response that prepares the body for physical exertion.

Hormonal analysis provided further insight into the body’s response. The study found that concentrations of both testosterone and cortisol were higher after sexual activity. Testosterone is an anabolic hormone associated with strength and aggression. Cortisol is a stress hormone that helps mobilize energy stores. The simultaneous rise in both hormones indicates a state of physiological activation rather than a state of fatigue.

The study also examined markers of muscle damage to see if the combination of sex and exercise caused more tissue stress. The findings showed that levels of lactate dehydrogenase were actually lower in the sexual activity condition. This specific enzyme leaks into the blood when muscle cells are damaged or stressed. The reduction suggests that the pre-exercise sexual activity did not exacerbate muscle stress and may have had a protective or neutral effect.

Other markers of muscle damage, such as creatine kinase and myoglobin, showed no significant differences between the two conditions. Similarly, inflammatory markers like interleukin-6 remained stable. This implies that the short-term physiological stress of sexual activity does not compound the stress caused by the exercise itself.

These findings diverge from some historical perspectives and specific past studies. For example, a study by Kirecci and colleagues reported that sexual intercourse within 24 hours of exercise reduced lower limb strength. The current study contradicts that conclusion by showing maintained or improved strength. The difference may lie in the specific timing or the nature of the sexual activity, as the current study focused on masturbation rather than partnered intercourse.

The results align more closely with a body of research summarized by Zavorsky and others. Those reviews generally concluded that sexual activity the night before competition has little to no impact on performance. The current study builds on that foundation by narrowing the window to just 30 minutes. It provides evidence that even immediate pre-competition sexual activity is not detrimental.

The researchers propose that the observed effects are likely due to a “priming” mechanism. Sexual arousal activates the sympathetic nervous system and triggers the release of catecholamines. This physiological cascade resembles a warm-up. It increases heart rate and alertness, which may translate into better readiness for immediate physical exertion.

The psychological aspect of the findings is also worth noting. The participants did not report any difference in their rating of perceived exertion between the two conditions. This means the exercise did not feel harder after sexual activity, even though their heart rates were higher. This consistency suggests that motivation and psychological fatigue were not negatively affected.

There are limitations to this study that affect how the results should be interpreted. The sample consisted entirely of young, well-trained men. Consequently, the findings may not apply to female athletes, older adults, or those with lower fitness levels. The physiological responses to sexual activity can vary across these different demographics.

The study restricted sexual activity to masturbation to maintain experimental control. Partnered sexual intercourse involves different physical demands and psychological dynamics. Intercourse often requires more energy expenditure and involves oxytocin release related to bonding, which might influence sedation or relaxation differently than masturbation.

The sample size of 21 participants is relatively small, although adequate for a crossover design of this nature. Larger studies would be needed to confirm these results and explore potential nuances. The study also relied on a one-week washout period between trials. While this is standard, residual psychological effects from the first session cannot be entirely ruled out.

Future research should aim to include female participants to determine if similar hormonal and performance patterns exist. It would also be beneficial to investigate different time intervals between sexual activity and exercise. Understanding the effects of partnered sex versus masturbation remains a key area for further exploration.

The study provides evidence that the “abstinence myth” may be unfounded for many athletes. The data indicates that sexual activity 30 minutes before exercise does not induce fatigue or muscle damage. Instead, it appears to trigger a neuroendocrine response that supports physical performance. Athletes and coaches may need to reconsider strict abstinence policies based on these physiological observations.

The study, “Sexual activity before exercise influences physiological response and sports performance in high-level trained men athletes,” was authored by Diego Fernández-Lázaro, Manuel Garrosa, Gema Santamaría, Enrique Roche, José María Izquierdo, Jesús Seco-Calvo, and Juan Mielgo-Ayuso.

Neuroimaging data reveals a “common currency” for effective communication

10 February 2026 at 20:00

A new study published in PNAS Nexus has found that specific patterns of brain activity can predict the success of persuasive messages across a wide variety of contexts. By analyzing neuroimaging data from over 500 individuals, researchers identified that neural responses in regions associated with reward and social processing are consistent indicators of how effective a message will be. These findings suggest that the human brain utilizes a common set of mechanisms to evaluate persuasive content.

Diverse fields such as marketing, political science, and public health rely heavily on the ability to influence attitudes and behaviors through mass media. Practitioners and scientists have long sought to understand exactly what makes a message persuasive enough to change a mind or prompt an action.

Previous research on this topic has typically been isolated within specific disciplines, preventing the development of a unified theory that applies across different topics. This fragmentation makes it difficult to know if the psychological drivers behind a successful anti-smoking ad are the same as those driving a popular movie trailer. The authors of the current study aimed to bridge this gap by applying a standardized analytical framework to a large collection of existing datasets.

“Persuasive messages—like those used in marketing, politics, or public health campaigns—play a key role in shaping attitudes and influencing behavior. But what exactly makes these messages effective, and do the same processes apply across different contexts? We don’t fully know, because research on persuasion tends to stay within individual disciplines, with little cross-talk,” explained the corresponding authors, Christin Scholz, Hang-Yee Chan, and Emily B. Falk.

“If we could identify common processes, different fields could work together more efficiently to understand what really drives persuasion. In this study, we examine neuroimaging data collected in response to a variety of persuasive messages. MRI brain images offer a way to observe and compare patterns of brain activity across different contexts. By conducting a mega-analysis of 16 datasets, we aimed to uncover broader patterns in how the brain responds to persuasive messages—patterns that individual studies might overlook.”

The research team conducted a mega-analysis, which differs from a traditional meta-analysis by aggregating and re-processing the raw data from multiple studies rather than simply summarizing their published results. They pooled functional magnetic resonance imaging (fMRI) data from 16 distinct experiments conducted by the co-authors. This combined dataset included 572 participants who were exposed to a total of 739 different persuasive messages.

The scope of the messages was broad, covering topics such as public health promotion, crowdfunding projects, commercial products, and video, text, or image-based advertisements. The total dataset comprised 21,688 individual experimental trials. In each of the original studies, participants lay inside an MRI scanner while viewing the messages. The scanner recorded changes in blood flow to various parts of the brain, which serves as a proxy for neural activity.

After viewing the content, the participants provided their own evaluations of the messages. They typically answered survey questions about how much they liked the message or whether they intended to change their behavior. The researchers categorized these self-reported measures as “message effectiveness in individuals.”

To assess the real-world impact of the content, the team also gathered data on how independent, larger groups of people responded to the same messages. These measures were termed “message effectiveness at scale.” This category included objective behavioral metrics like click-through rates on web banners, the amount of money donated to a campaign, or total view counts on video platforms.

The researchers then used linear mixed-effects models to test if brain activity in specific regions could predict both the individuals’ ratings and the large-scale behavioral outcomes. They focused their analysis on two primary neural systems: the reward system and the mentalizing system. The reward system is involved in anticipating value and pleasure, while the mentalizing system helps individuals understand the thoughts and feelings of others.
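
As a rough illustration of this modeling step, the sketch below fits a linear mixed-effects model with random intercepts for each dataset, relating simulated region-of-interest activity to a message-effectiveness rating. The column names, simulated values, and model specification are assumptions for illustration, not the authors' actual analysis.

```python
# Hedged sketch of a mixed-effects model linking brain activity to message ratings.
# All data are simulated; only the general structure mirrors the description above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "reward": rng.normal(size=n),          # mean reward-system activation per trial
    "mentalizing": rng.normal(size=n),     # mean mentalizing-system activation
    "dataset": rng.integers(0, 16, size=n) # which of 16 hypothetical studies the trial is from
})
df["effectiveness"] = (0.2 * df["reward"] + 0.2 * df["mentalizing"]
                       + rng.normal(scale=1.0, size=n))

# Random intercept for each dataset accounts for study-level differences.
model = smf.mixedlm("effectiveness ~ reward + mentalizing",
                    data=df, groups=df["dataset"])
print(model.fit().summary())
```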

The statistical analysis revealed that activity in brain networks associated with reward processing was positively linked to message effectiveness. When participants showed higher engagement in the ventral tegmental area and nucleus accumbens, they were more likely to rate the messages as effective. These regions are deep structures in the brain that are typically involved in processing personal value and motivation. The study indicates that this neural signal of value is a consistent predictor of how well a message is received by the viewer.

The researchers also identified a strong connection between message success and activity in the brain’s mentalizing system. This network includes the medial prefrontal cortex and the temporal poles. These areas are active when people think about themselves or attempt to interpret the mental states of other people. The analysis showed that messages triggering this social processing network were more likely to be effective both for the person watching and for larger audiences.

A significant finding emerged when the researchers compared brain data to the real-world success of the messages at the population level. They found that neural activity in the mentalizing system predicted population-level outcomes, such as how often a video was shared. This predictive power held true even after accounting for the participants’ stated opinions in surveys. This suggests that the brain registers social relevance in ways that individuals may not consciously articulate.

The study refers to this phenomenon as “neuroforecasting.” This concept posits that neural activity in a small group of people can forecast the behavior of a much larger population. The findings support the idea that specific brain responses are more generalizable to the public than subjective self-reports. While people might say they like a message, their neural activity related to social processing appears to be a better indicator of whether that message will resonate with others.

“On average, the specific brain activity we tracked explained a small but robust portion of why messages were effective, roughly translating to what researchers call a small effect size (Cohen’s d = 0.22) at the population level,” the researchers told PsyPost. “We found this effect when looking at our large set of over 700 diverse messages as a whole. You could understand these neural markers as a ‘common currency’ that helps explain persuasion across many different real-world domains. However, the effect sizes also vary across message domains. Explaining that variance is an important task for the field going forward.”

“In a way, it is surprising that we were able to find any commonality in the neural processes related to message effectiveness across the messages we included. These messages did not only vary in their persuasive goals (from selling products, to recruiting volunteers, to promoting smoking cessation), but also in their format (videos, text, and more), and in the way their effectiveness was evaluated (click-through rates of online campaigns, self-report surveys, etc.).”

“This introduces a lot of noise in the analysis. Yet, we were still able to pick up on some common, underlying processes that support persuasion. This suggests that the ways in which we change our minds and behavior are, at least in part, similar across a variety of domains.”
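
The effect size quoted above (Cohen's d = 0.22) is the difference between two group means divided by their pooled standard deviation. The sketch below shows one standard way to compute it; the groups and simulated values are hypothetical, not the study's data.

```python
# Hedged sketch: Cohen's d from two samples, using the pooled standard deviation.
import numpy as np

def cohens_d(a, b):
    """Difference in means divided by the pooled standard deviation."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) \
                 / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
high_activation = rng.normal(0.22, 1.0, size=500)  # hypothetical outcomes, shifted by 0.22 SD
low_activation = rng.normal(0.00, 1.0, size=500)
print(f"Cohen's d = {cohens_d(high_activation, low_activation):.2f}")
```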

Beyond the initial hypotheses regarding reward and social processing, an exploratory review of the whole brain uncovered additional patterns. Activity in regions linked to language processing and emotion also correlated with message success at scale. This implies that successful messages tend to engage the brain’s linguistic and emotional centers more deeply than less effective content. These exploratory findings suggest that emotion may play a larger role in mass-market success than previously identified in smaller studies.

“While we hypothesized that reward and social systems would be central, we were surprised to find through exploratory analysis that language processing and emotional brain responses also played significant roles in message success,” Scholz, Chan, and Falk said.

“Interestingly, our results suggested that neural signals related to emotion were particularly strong indicators of message effectiveness at scale—meaning for large groups—rather than just for individuals. We also found it notable that social processing activity in the brain provided ‘hidden’ information about a message’s success that participants didn’t realize they were feeling or mention in their self-reports.”

As with all research, there are some limitations. Most of the data came from participants in Western, Educated, Industrialized, Rich, and Democratic societies. Cultural norms heavily influence communication and social processing, so these neural markers might differ in other populations. The study is also correlational, meaning it observes associations but cannot prove that brain activity directly causes the messages to be effective.

Technical differences between the original studies also presented challenges for the analysis. The sixteen datasets used varied scanning parameters, equipment, and experimental protocols. While the mega-analysis approach helps smooth out some noise, these inconsistencies make it difficult to identify specific factors that might strengthen or weaken the observed effects.

“These neural markers should be seen as a first step toward experimental work,” the researchers noted. “We need more work, for instance, to interpret the exact psychological and thought processes that are responsible for creating the neural patterns we observed. A brain scanner is not a ‘mind-reading’ tool.”

Future work is needed to move from prediction to explanation. The researchers propose designing experiments that specifically manipulate message content to target the identified brain regions. Such studies could verify whether activating the reward or social processing systems intentionally leads to better outcomes.

“A major goal is to move from observing these brain patterns to conducting experiments that specifically design messages to activate these reward and social mechanisms to see if they become more effective,” Scholz, Chan, and Falk explained. “We also need to diversify our samples to include a broader range of global populations to ensure our findings apply to everyone. Finally, we hope to coordinate as a field to standardize how neuroimaging data is collected across different domains to make future large-scale collaborations even more powerful.”

“This project was a massive collaborative effort involving 16 functional MRI datasets, over 500 participants, and more than 700 unique messages. Because we believe in the importance of open science, we have made our data and analysis code publicly available so other researchers can build on these findings. We hope this study serves as a bridge between neuroscience, communication, and public policy to create more effective and beneficial messaging for society.”

The study, “Brain activity explains message effectiveness: A mega-analysis of 16 neuroimaging studies,” was authored by Christin Scholz, Hang-Yee Chan, Jeesung Ahn, Maarten A. S. Boksem, Nicole Cooper, Jason C. Coronel, Bruce P. Doré, Alexander Genevsky, Richard Huskey, Yoona Kang, Brian Knutson, Matthew D. Lieberman, Matthew Brook O’Donnell, Anthony Resnick, Ale Smidts, Vinod Venkatraman, Khoi Vo, René Weber, Carolyn Yoon, and Emily B. Falk.

New research connects the size of the beauty market to male parenting effort

10 February 2026 at 19:00

New research suggests that the size of a country’s cosmetics industry may be directly linked to how much fathers contribute to childcare and the level of economic inequality within that society. The findings propose that in cultures where men are active parents or where the gap between the rich and poor is wide, women are more likely to invest in their appearance to compete for partners. These results were published in the journal Evolution and Human Behavior.

Charles Darwin originally proposed the theory of sexual selection to explain why males of many species possess exaggerated physical traits. He observed that peafowl are sexually dimorphic, meaning the males and females look different. The peacock displays a massive, colorful tail to attract a mate, while the peahen remains relatively plain.

This dynamic typically arises from the biological costs of reproduction. In most species, females expend more biological energy through the production of eggs, gestation, and lactation. Because their investment in each offspring is higher, females tend to be the choosier sex. Males must consequently compete with one another to be selected.

Humans, however, do not always fit neatly into this classical model. Human females often utilize conspicuous traits or cultural enhancements, such as makeup, to increase their attractiveness. Jun-Hong Kim, a researcher at the Pohang University of Science and Technology in the Republic of Korea, sought to explain this exception.

Kim aimed to determine if human mating follows a “revised” sexual selection theory. This framework suggests that the direction of mate choice depends on which partner contributes more resources to the relationship. If males provide substantial care and support, they become a limited and sought-after resource.

When men invest heavily in parenting, the cost of reproduction becomes high for them as well. The theory predicts that under these conditions, men will become more discriminating in their choice of partner. Consequently, women may compete for these high-investment males by enhancing their physical appearance.

The researcher also considered the role of economic environment. In societies with high economic inequality, a partner with resources can provide a substantial advantage in survival and reproductive success. This suggests that financial stratification might also intensify female competition for high-status mates.

To test these hypotheses, Kim conducted a cross-cultural analysis involving data from up to 55 countries. The study used the total financial size of the cosmetics industry in each nation as a proxy for female ornamentation and male choice. This data was sourced from Euromonitor, excluding baby products and men’s grooming items.

The researcher needed a way to measure how much fathers contribute to family life across different cultures. Kim utilized data from the OECD regarding the ratio of unpaid domestic work and childcare hours performed by women versus men. A lower ratio indicates that men are doing a larger share of the domestic work.

Economic inequality was measured using income inequality data from the CIA and a social mobility index from the World Economic Forum. These metrics helped determine how difficult it is to move between economic classes. The study also controlled for factors like urbanization and Gross Domestic Product per capita.

Kim’s analysis revealed a strong association between paternal effort and the beauty market. In countries where men performed a higher proportion of childcare and domestic labor, per capita spending on cosmetics was higher. This supports the idea that when men are active caregivers, they become “prizes” that warrant increased mating effort from women.

The study quantified this relationship with specific monetary figures. The data indicated that for every hour increase in paternal investment relative to maternal investment, per capita spending on cosmetics rose by roughly $2.17. This trend held true even when accounting for the general wealth of the nation.
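
The kind of country-level regression implied by this figure can be sketched as an ordinary least squares model of per-capita cosmetics spending on a paternal-investment measure with a control for national wealth. The variables, simulated data, and the simplified use of paternal hours (rather than the study's ratio of unpaid work) are illustrative assumptions, not the published model.

```python
# Hedged sketch of a cross-country OLS regression with a wealth control.
# All country-level data below are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_countries = 55
df = pd.DataFrame({
    # fathers' daily hours of unpaid domestic work and childcare (hypothetical)
    "paternal_hours": rng.uniform(0.5, 3.0, size=n_countries),
    # GDP per capita in thousands of USD (hypothetical)
    "gdp_per_capita": rng.uniform(10, 60, size=n_countries),
})
df["cosmetics_spend"] = (2.17 * df["paternal_hours"]   # slope set to the reported $2.17
                         + 0.5 * df["gdp_per_capita"]
                         + rng.normal(scale=2.0, size=n_countries))

fit = smf.ols("cosmetics_spend ~ paternal_hours + gdp_per_capita", data=df).fit()
print(fit.params)  # inspect the fitted coefficient on paternal_hours
```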

Economic disparity also emerged as a strong predictor of beauty spending. The analysis showed that as income inequality and social mobility scores increased, so did the size of the cosmetics industry. This suggests that in stratified societies, women may invest more in their appearance to attract partners who can offer financial security.

The study posits that this behavior is a form of mutual mate choice. Unlike many mammals where one sex is clearly the chooser and the other is the competitor, humans appear to engage in a bidirectional assessment. Men evaluate potential partners based on cues of fitness and fertility, which cosmetics can highlight.

Kim also tested other variables that frequently appear in evolutionary psychology literature. One such variable was the operational sex ratio, which compares the number of marriageable men to women. Previous theories suggested that a surplus of women would lead to higher competition and beauty spending.

However, the results for sex ratio were not statistically significant in this model. The density of the population also failed to predict variations in cosmetics use. The primary drivers remained paternal investment and economic stratification.

The researcher checked for geographic clustering to ensure the results were not simply due to neighboring countries acting similarly. Visualizing the data on maps showed no distinct regional patterns that would skew the statistics. This suggests the link between parenting, economics, and cosmetics is not merely a byproduct of shared regional culture.

There are limitations to this type of cross-cultural research. The study relies on observational data, which can identify associations but cannot definitively prove causation. It is possible that other unmeasured cultural factors influence both how men parent and how women spend money.

The measurement of paternal investment was also restricted by data availability. Because the study relied on OECD time-use surveys, the analysis regarding childcare was limited to developed nations. This reduces the ability to generalize the findings to non-industrialized or developing societies.

Kim also notes that unpaid work hours are an imperfect proxy for total paternal investment. This metric does not capture the quality of care or the emotional support provided by fathers. It focuses strictly on the time spent on domestic tasks.

Future research could address these gaps by using more direct measures of parenting effort. Kim suggests that standardized surveys across a wider range of cultures could provide granular detail on how fathers contribute. This would allow for a more robust test of the revised sexual selection theory.

The study provides a new lens through which to view the multi-billion dollar beauty industry. Rather than seeing cosmetics solely as a product of modern marketing, the research frames them as tools in an ancient biological strategy. It highlights how economic structures and family dynamics shape human behavior.

This perspective challenges the stereotype that sexual selection is always male-driven. It underscores that in humans, the high cost of raising children makes distinct demands on both parents. When men step up as caregivers, the dynamics of attraction and competition appear to shift in measurable ways.

The study, “Paternal investment and economic inequality predict cross-cultural variation in male choice,” was authored by Jun-Hong Kim.

Holding racist attitudes predicts increased psychological distress over time

10 February 2026 at 15:00

New research published in the journal Comprehensive Psychiatry challenges the common belief that mental illness is a primary driver of racist attitudes. The findings suggest that the relationship actually works in the opposite direction, with prejudiced beliefs predicting an increase in psychological distress over time. The study also highlights social connectedness as a significant factor, indicating that a lack of social connection may fuel both prejudice and mental health struggles.

Psychologists and social scientists have historically sought to understand the roots of extreme prejudice. A frequent explanation in both academic literature and media coverage is that racism is a symptom of poor mental health. This narrative often surfaces after events of mass violence, where the perpetrator’s actions are attributed to psychological instability rather than ideological conviction. For example, counterterrorism strategies frequently list mental health issues as a key risk factor for radicalization.

Tegan Cruwys, a researcher at the School of Medicine and Psychology at The Australian National University, led a team to investigate the validity of this assumption. The researchers argued that attributing racism to mental illness is problematic for several reasons. It has poor predictive power and risks stigmatizing people with mental health disorders who are not prejudiced.

The research team sought to test the reverse possibility. They wanted to see if holding racist views might actually be toxic to the person holding them. They also hypothesized that a third variable, such as social isolation, might be the true cause of both prejudiced attitudes and psychological decline.

To test these ideas, the researchers analyzed data from three separate longitudinal studies conducted in Australia. Longitudinal studies involve surveying the same group of people at multiple points in time. This design allows scientists to observe which changes occur first and provides better evidence for the direction of cause and effect than one-time surveys. Each of the three studies was large, nationally representative, and spanned a period of approximately six months.

The first study took place during the early stages of the COVID-19 pandemic in 2020. It included 2,361 adults. The researchers measured racism using an adapted scale that assessed social distancing preferences. Participants were asked how much physical distance they would prefer to keep from members of various ethnic outgroups compared to their own family or friends. They also rated their feelings toward these groups on a “warmth” thermometer.

Psychological distress was measured using a standard clinical tool that assesses symptoms of depression and anxiety. Social connectedness was evaluated by asking participants how often they felt lonely or left out.

The second study was conducted in 2023 leading up to the Australian Indigenous Voice referendum. This was a national vote on whether to recognize Aboriginal and Torres Strait Islander peoples in the constitution. The sample included 3,860 participants.

In this study, racism was measured by asking participants to rate how they believed Indigenous peoples were treated in Australia. Scores indicating a belief that Indigenous people receive “special treatment” were interpreted as indicative of prejudice. Psychological distress was measured using a five-item screening questionnaire often used to detect mental ill-health in the general population. Social connectedness was operationalized as the level of trust participants placed in institutions such as the government, police, and scientists.

The third study also occurred during the Voice referendum period and included 2,424 non-Indigenous Australians. The team measured attitudes using a specific scale designed to gauge views on Indigenous Australians. Psychological well-being was assessed using a five-item survey from the World Health Organization. In this dataset, social connectedness was defined by how strongly participants identified with various social groups, including their family, neighborhood, and country.

The results from all three studies showed a consistent pattern. When the researchers looked at the data from a single point in time, the link between racism and psychological distress was weak and inconsistent. This lack of a strong immediate connection suggests that simply having a mental health condition does not automatically make a person more likely to hold racist views.

However, the longitudinal analysis revealed a different story. In all three datasets, an increase in racist beliefs consistently preceded and predicted an increase in psychological distress. In the first study, participants whose racist attitudes intensified over the six months were more likely to experience worsening anxiety and depression. In the third study, psychological distress increased markedly over time only among those participants who held higher levels of racist attitudes.

The second study provided a nuanced view of this trend. During the timeframe of the second study, psychological distress was generally declining across the population. However, this improvement was not evenly shared. Participants who reported the lowest levels of racism showed the steepest decline in distress. In contrast, those with the highest levels of racism experienced a much more modest improvement in their mental health.

The researchers also tested the reverse pathway to see if psychological distress predicted a later increase in racism. The evidence for this was mixed. While two of the studies showed some association, it was not consistent across all contexts. In the third study, psychological distress did not predict any change in racist attitudes over time.

A key component of the study was the investigation of social connectedness. The analysis showed that social connection served as a protective factor against both racism and psychological distress. In the first study, participants who felt less socially connected over time saw increases in both racist attitudes and mental health symptoms.

In fact, when the researchers statistically accounted for the role of social connectedness, the direct link between racism and distress often disappeared or weakened. This suggests that the feeling of being excluded or alienated may be a “common cause” that drives people toward both prejudice and poor mental health.
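
A minimal sketch of this kind of lagged, covariate-adjusted analysis is shown below: later distress is regressed on earlier racist attitudes and earlier distress, with and without a social-connectedness term. The data are simulated so that connectedness acts as a common cause, an assumption made purely to illustrate the logic, not a reproduction of the authors' models.

```python
# Hedged sketch: lagged regression with and without adjusting for a common cause.
# All variables are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 2000
connectedness = rng.normal(size=n)                      # simulated common cause
racism_t1 = -0.5 * connectedness + rng.normal(size=n)
distress_t1 = -0.5 * connectedness + rng.normal(size=n)
distress_t2 = 0.5 * distress_t1 - 0.4 * connectedness + rng.normal(size=n)

df = pd.DataFrame({"racism_t1": racism_t1, "distress_t1": distress_t1,
                   "distress_t2": distress_t2, "connectedness": connectedness})

unadjusted = smf.ols("distress_t2 ~ racism_t1 + distress_t1", data=df).fit()
adjusted = smf.ols("distress_t2 ~ racism_t1 + distress_t1 + connectedness",
                   data=df).fit()

# In this simulation the racism coefficient shrinks toward zero once
# connectedness is included, mirroring the "common cause" interpretation.
print(unadjusted.params["racism_t1"], adjusted.params["racism_t1"])
```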

The researchers propose that prejudiced attitudes may be psychologically harmful because they are inherently threatening. Racism often involves viewing other groups as a danger to one’s own safety, culture, or resources. Living with this constant sense of threat can induce a state of hypervigilance and anxiety that erodes mental well-being over time.

These findings have implications for how society addresses both prejudice and mental health. The results challenge the idea that treating mental illness will automatically reduce racism or extremism. Instead, the study suggests that prejudice itself is a risk factor for mental decline. It implies that interventions designed to foster social connection and community inclusion could have a dual benefit. By helping people feel more connected to society, it may be possible to simultaneously improve mental health outcomes and reduce the prevalence of prejudiced attitudes.

There are limitations to this research that should be noted. The measures of racism varied across the three studies to fit the specific social context of the time. This makes it difficult to compare the absolute levels of prejudice between the different samples. Additionally, the study relied on self-reported data, which can be influenced by a participant’s desire to present themselves in a favorable light. The research was conducted in Australia, so the specific social dynamics may differ in other countries or cultural contexts.

It is also important to avoid interpreting these findings as an explanation for violent extremism. The study surveyed the general population rather than radicalized individuals or members of hate groups. While prejudice is a predictor of radicalization, the psychological dynamics of violent offenders may differ from those of the general public.

Future research is needed to determine if these patterns hold true for other forms of prejudice, such as sexism or homophobia. The researchers also suggest that future studies should test whether practical interventions that boost social connectedness can effectively interrupt the cycle of prejudice and distress. The study indicates that mental health is not a fixed trait but is responsive to our social attitudes and our sense of belonging.

The study, “What goes around comes around? Holding racist attitudes predicts increased psychological distress over time,” was authored by Tegan Cruwys, Olivia Evans, Michael J. Platow, Iain Walker, Katherine J. Reynolds, Christienne Javier, Catherine Haslam, S. Alexander Haslam, and Hema Preya Selvanathan.

Unexpected study results complicate the use of brain stimulation for anxiety

10 February 2026 at 05:00

A new study suggests that a promising noninvasive brain stimulation technique may not function exactly as psychiatrists had hoped for patients with combined depression and anxiety. Researchers found that while electrical stimulation of the brain’s frontal cortex improved mental focus and reaction times, it also unexpectedly heightened sensitivity to potential threats.

These findings indicate that the treatment might wake up the brain’s alertness systems rather than simply calming down fear responses. The results were published in the journal Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.

Major depressive disorder is one of the world’s most persistent public health burdens. It becomes even harder to treat when accompanied by anxiety. This combination is common. Patients with both conditions often experience more severe symptoms and are less likely to respond to standard antidepressants or talk therapy. This resistance to treatment has led scientists to look for biological causes within the brain’s circuitry.

Neuroscientists have identified specific patterns of brain activity in people with anxious depression. Typically, the prefrontal cortex shows lower than average activity. This area sits just behind the forehead. It is responsible for planning, decision-making, and regulating emotions. At the same time, the amygdala often shows hyperactivity. The amygdala is a deep brain structure that acts as the body’s alarm system for danger. In a healthy brain, the prefrontal cortex helps quiet the amygdala when a threat is not real. In anxious depression, this regulatory system often fails.

Researchers have been exploring transcranial direct current stimulation as a way to correct this imbalance. This technique involves placing electrodes on the scalp to deliver a weak electrical current. The goal is to encourage neurons in the prefrontal cortex to fire more readily. Theoretically, boosting the “thinking” part of the brain should help it exert better control over the “feeling” alarm system.

A team led by Tate Poplin and senior author Maria Ironside at the Laureate Institute for Brain Research in Tulsa, Oklahoma, sought to test this theory in a large clinical sample. They recruited 101 adults who were currently experiencing a major depressive episode and high levels of anxiety. The researchers wanted to see if a single session of stimulation could alter the way these patients processed threats.

The study was designed as a double-blind, randomized trial. This is the gold standard for clinical research. The participants were divided into two groups. One group received thirty minutes of active stimulation to the dorsolateral prefrontal cortex. The other group received a sham, or placebo, stimulation. The sham version mimicked the physical sensations of the device but did not deliver the therapeutic current. This ensured that neither the patients nor the staff knew who was receiving the real treatment.

The researchers administered the stimulation while the participants lay inside a magnetic resonance imaging scanner. This allowed the team to observe changes in blood flow within the brain in real time. During the scan, the participants completed a cognitive task. They viewed pictures of faces with fearful or neutral expressions. Letters were superimposed over the faces. The participants had to identify the letters.

This task was designed to measure “attentional load.” Some rounds were easy and required little mental effort. Other rounds were difficult and demanded intense focus. This design allowed the researchers to see how the brain prioritized information. They wanted to know if the stimulation would help the brain ignore the fearful faces and focus on the letters.

After the brain scans, the participants underwent a physical test of their anxiety levels. This involved measuring the startle reflex. The researchers placed sensors on the participants’ faces to detect eye blinks. The participants then listened to bursts of white noise. Sometimes the noise signaled a predictable electric shock. Other times, the shock was unpredictable.

This distinction is important in psychology. Reacting to a known danger is considered fear. Reacting to an unknown or unpredictable threat is considered anxiety. By measuring how hard the participants blinked in anticipation of the shock, the researchers could physically quantify their threat sensitivity.
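
As a simple illustration of how such startle data can be summarized, the sketch below averages eyeblink magnitudes within each threat condition. The condition labels and magnitudes are invented, not the study's recordings.

```python
# Hedged sketch: summarizing startle eyeblink magnitudes by threat condition.
from collections import defaultdict
from statistics import mean

# Hypothetical eyeblink magnitudes (arbitrary units) by condition.
trials = [
    ("no_threat", 42.0), ("predictable", 61.5), ("unpredictable", 78.3),
    ("no_threat", 39.1), ("predictable", 64.2), ("unpredictable", 81.0),
]

by_condition = defaultdict(list)
for condition, magnitude in trials:
    by_condition[condition].append(magnitude)

for condition, values in by_condition.items():
    print(f"{condition}: mean startle magnitude = {mean(values):.1f}")
```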

The findings painted a complex picture of how the stimulation affected the brain. On one hand, the treatment appeared to improve cognitive performance. The group that received active stimulation was more accurate at identifying the letters than the placebo group. They also reacted faster.

The brain scans supported this behavioral improvement. When the task was difficult, the active group showed increased activity in the inferior frontal gyrus and the parietal cortex. These regions are heavily involved in attention and executive control. This suggests the stimulation successfully engaged the brain’s command centers.

However, the results regarding emotional regulation contradicted the team’s original predictions. The researchers hypothesized that the stimulation would reduce the amygdala’s reaction to the fearful faces. Instead, the opposite occurred during the easy version of the task. The amygdala showed greater activation in the active group compared to the placebo group.

The startle test revealed a similar pattern. The researchers found that active stimulation did not calm the participants’ physical reflexes. In fact, it made them jumpier. The active group showed a stronger startle response during the unpredictable threat condition. They also reported feeling higher levels of anxiety during these moments of uncertainty.

Ironside noted the dual nature of these results. “Compared to the sham stimulation, frontal tDCS increased the activation of the bilateral inferior frontal gyrus… when the task was more cognitively demanding and, unexpectedly, increased amygdala… response when the task was less cognitively demanding,” she said.

Ironside also highlighted the physical findings. “We also observed that tDCS increased eyeblink startle response under conditions of unpredictable threat.”

These results suggest that transcranial direct current stimulation does not act as a simple tranquilizer for the brain. Instead, it may function as a general amplifier of arousal and engagement. By boosting the excitability of the frontal cortex, the treatment might make the brain more alert to everything. This includes both the task at hand and potential threats in the environment.

The increase in startle response might reflect a state of heightened vigilance. When the brain is more engaged, it may process all incoming signals more intensely. This interpretation aligns with the improved reaction times on the cognitive task. The participants were “sharper,” but this sharpness came with a cost of increased sensitivity to anxiety-provoking stimuli.

There are several important caveats to consider regarding this study. First, the participants only received a single session of stimulation. Clinical treatments for depression typically involve daily sessions over several weeks. It is possible that the cumulative effect of repeated stimulation is different from the acute effect of a single dose. Long-term changes in brain plasticity might take time to develop.

Second, the environment may have influenced the results. Undergoing a brain scan can be stressful. The MRI machine is loud and confining. For people who already suffer from high anxiety, this environment might have heightened their baseline stress levels. Receiving electrical stimulation in such a high-stress context could have interacted with their anxiety in unique ways.

The researchers also noted that the demographics of the study leaned heavily toward women. While this reflects the higher prevalence of depression and anxiety in women, it means the results might not fully generalize to men.

Despite the unexpected increase in threat sensitivity, the authors believe the findings offer a path forward. The clear improvement in task engagement and frontal brain activity is a positive signal. It suggests that the stimulation is effectively reaching the target brain regions and altering their function.

The failure to reduce anxiety might be due to the passive nature of the treatment. In this study, participants received stimulation while resting or doing a simple task. The researchers suggest that future trials should explore “context-dependent” stimulation.

This approach would involve pairing the brain stimulation with active therapy. For example, if a patient is undergoing exposure therapy to face their fears, the stimulation might help them engage more fully with the therapeutic exercises. If the stimulation boosts the brain’s ability to focus and learn, it could act as a catalyst for psychological interventions.

The study, “Frontal Cortex Stimulation Modulates Attentional Circuits and Increases Anxiety-Potentiated Startle in Anxious Depression,” was authored by Tate Poplin, Rayus Kuplicki, Ebony A. Walker, Kyle Goldman, Cheldyn Ramsey, Nicholas Balderston, Robin L. Aupperle, Martin P. Paulus, and Maria Ironside.

Psychology shows why using AI for Valentine’s Day could be disastrous

As Valentine’s Day approaches, finding the perfect words to express your feelings for that special someone can seem like a daunting task – so much so that you may feel tempted to ask ChatGPT for an assist.

After all, within seconds it can dash off a well-written, romantic message. Even a short, personalized limerick or poem is no sweat.

But before you copy and paste that AI-generated love note, you might want to consider how it could make you feel about yourself.

We research the intersection of consumer behavior and technology, and we’ve been studying how people feel after using generative AI to write heartfelt messages. It turns out that there’s a psychological cost to using the technology as your personal ghostwriter.

The rise of the AI ghostwriter

Generative AI has transformed how many people communicate. From drafting work emails to composing social media posts, these tools have become everyday writing assistants. So it’s no wonder some people are turning to them for more personal matters, too.

Wedding vows, birthday wishes, thank you notes and even Valentine’s Day messages are increasingly being outsourced to algorithms.

The technology is certainly capable. Chatbots can craft emotionally resonant responses that sound genuinely heartfelt.

But there’s a catch: When you present these words as your own, something doesn’t sit right.

When convenience breeds guilt

We conducted five experiments with hundreds of participants, asking them to imagine using generative AI to write various emotional messages to loved ones. Across every scenario we tested – from appreciation emails to birthday cards to love letters – we found the same pattern: People felt guilty when they used generative AI to write these messages compared to when they wrote the messages themselves.

When you copy an AI-generated message and sign your name to it, you’re essentially taking credit for words you didn’t write.

This creates what we call a “source-credit discrepancy,” which is a gap between who actually created the message and who appears to have created it. You can see these discrepancies in other contexts, whether it’s celebrity social media posts written by public relations teams or political speeches composed by professional speechwriters.

When you use AI, even though you might tell yourself you’re just being efficient, you can probably recognize, deep down, that you’re misleading the recipient about the personal effort and thought that went into the message.

The transparency test

To better understand this guilt, we compared AI-generated messages to other scenarios. When people bought greeting cards with preprinted messages, they felt no guilt at all. This is because greeting cards are transparently not written by you. Greeting cards carry no deception: Everyone understands you selected the card and that you didn’t write it yourself.

We also tested another scenario: having a friend secretly write the message for you. This produced just as much guilt as using generative AI. Whether the ghostwriter is human or an artificial intelligence tool doesn’t matter. What matters most is the dishonesty.

There were some boundaries, however. We found that guilt decreased when messages were never delivered and when recipients were mere acquaintances rather than close friends.

These findings confirm that the guilt stems from violating expectations of honesty in relationships where emotional authenticity matters most.

Somewhat relatedly, research has found that people react more negatively when they learn a company used AI instead of a human to write a message to them.

But the backlash was strongest when audiences expected personal effort – a boss expressing sympathy after a tragedy, or a note sent to all staff members celebrating a colleague’s recovery from a health scare. It was far weaker for purely factual or instructional notes, such as announcing routine personnel changes or providing basic business updates.

What this means for your Valentine’s Day

So, what should you do about that looming Valentine’s Day message? Our research suggests that the human hand behind a meaningful message can help both the writer and the recipient feel better.

This doesn’t mean you have to avoid generative AI entirely. The key is to treat it as a brainstorming partner rather than a ghostwriter. Let it help you overcome writer’s block or suggest ideas, but make the final message truly yours. Edit, personalize and add details that only you would know. The goal is co-creation, not complete delegation.

Generative AI is a powerful tool, but it’s also created a raft of ethical dilemmas, whether it’s in the classroom or in romantic relationships. As these technologies become more integrated into everyday life, people will need to decide where to draw the line between helpful assistance and emotional outsourcing.

This Valentine’s Day, your heart and your conscience might thank you for keeping your message genuinely your own.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Why some brain cells resist the toxic proteins linked to Alzheimer’s disease

10 February 2026 at 01:00

A new study has identified specific cellular machinery that helps brain cells dispose of toxic proteins associated with Alzheimer’s disease. By screening thousands of genes in lab-grown human neurons, researchers discovered a protein complex that acts as a disposal system and a separate mechanism linking cellular stress to the formation of harmful protein fragments. These findings, published in the journal Cell, offer potential new targets for treating neurodegenerative conditions.

The protein tau normally functions to stabilize the internal skeleton of nerve cells. In diseases such as Alzheimer’s and frontotemporal dementia, this protein collapses into sticky clumps that injure and eventually kill the cell. This process does not affect all neurons equally, as some brain cells succumb quickly while their neighbors survive for years. Understanding the molecular differences between these vulnerable and resilient cells is a primary goal for neuroscientists.

Avi Samelson, an assistant professor of Neurology at UCLA Health, led this investigation while working at the University of California, San Francisco. He collaborated with a team including senior author Martin Kampmann. They sought to uncover the genetic instructions that determine whether a neuron clears tau away or allows it to accumulate.

“We wanted to understand why some neurons are vulnerable to tau accumulation while others are more resilient,” says Samelson. “By systematically screening nearly every gene in the human genome, we found both expected pathways and completely unexpected ones that control tau levels in neurons.”

The research team began by creating human neurons from induced pluripotent stem cells. These cells were engineered to carry a genetic mutation known to cause a hereditary form of dementia, ensuring the cells would naturally develop tau abnormalities. The team then employed a gene-editing technology called CRISPR interference. This tool allowed them to systematically switch off roughly 20,000 genes, one at a time, to observe which ones influenced tau levels.
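
For a rough sense of how hits emerge from a genome-wide screen like this, the sketch below scores hypothetical genes by how strongly their guide RNAs are enriched in cells sorted for high versus low tau. The sorting-based readout, gene names, read counts, and thresholds are all illustrative assumptions; the authors’ actual analysis pipeline may differ.

```python
# Simplified, hypothetical sketch of one common way such screens are scored:
# cells are sorted into low-tau and high-tau bins, guide RNAs in each bin are
# sequenced, and each gene gets a log2 enrichment score. Guides enriched in the
# high-tau bin point to genes that normally help keep tau levels down
# (knocking them down lets tau accumulate). All numbers are invented.
import math

guide_reads = {
    # gene: (reads in low-tau bin, reads in high-tau bin)
    "GENE_A": (1500, 290),
    "GENE_B": (480, 2100),
    "NON_TARGETING": (900, 910),
}

for gene, (low, high) in guide_reads.items():
    log2_enrichment = math.log2((high + 1) / (low + 1))   # +1 avoids log(0)
    direction = ("raises tau when knocked down" if log2_enrichment > 0.5
                 else "lowers tau when knocked down" if log2_enrichment < -0.5
                 else "no clear effect")
    print(f"{gene:14s} log2(high/low) = {log2_enrichment:+.2f} -> {direction}")
```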

This exhaustive screening process identified a protein complex involving the gene CUL5 as a primary regulator of tau. The researchers found that this complex functions as a tagging system. It attaches a molecular label called ubiquitin to specific sections of the tau protein.

The attachment of ubiquitin serves as a signal to the cell’s waste disposal machinery. This machinery, known as the proteasome, recognizes the tag and destroys the marked tau protein. The study revealed that CUL5 works in tandem with an adaptor protein called SOCS4 to physically grab the tau molecule.

To verify the relevance of this finding to human health, the team analyzed data from donated human brain tissue. They examined gene expression patterns in neurons from patients who had died with Alzheimer’s disease. A distinct pattern emerged regarding this disposal system.

Neurons from patients with Alzheimer’s disease that expressed higher levels of CUL5 were more likely to survive the disease process. This correlation suggests that a robust disposal system may protect specific brain cells from degeneration. Cells with lower levels of these components appeared to be more vulnerable to death.

The study also revealed a connection between the cell’s energy production and tau toxicity. The initial screen indicated that genes involved in mitochondrial function were necessary for keeping tau levels in check. Mitochondria are the power plants of the cell, and their failure often leads to cellular stress.

When the researchers inhibited the mitochondria in their lab-grown neurons, the cells experienced a rise in reactive oxygen species. These are unstable molecules that can damage cellular components. This state of oxidative stress impaired the function of the proteasome.

Instead of fully degrading tau, the malfunctioning disposal machinery only partially processed the protein. This resulted in the production of a specific tau fragment approximately 25 kilodaltons in size. This fragment was not randomly generated but appeared consistently when the cells were under stress.

“This tau fragment appears to be generated when cells experience oxidative stress, which is common in aging and neurodegeneration,” says Samelson. “We found that this stress reduces the efficiency of the proteasome, the cell’s protein recycling machine, causing it to improperly process tau.”

This specific fragment appears to be biologically active. It resembles a biomarker found in the spinal fluid of Alzheimer’s patients, suggesting it might be released from stressed neurons into the surrounding fluid. The researchers confirmed that the stressed neurons secreted this fragment into their culture media.

Experiments in test tubes showed that the presence of this fragment altered how other tau proteins bonded together. The fragment caused tau to form straighter, stiffer structures compared to the tangles typically seen in disease. This suggests that the fragment is not merely a byproduct but an active participant in the aggregation process.

The discovery highlights the duality of the proteasome’s role. Under normal conditions, aided by CUL5, it helps clear tau and maintain cell health. Under stress, however, it can malfunction and produce fragments that may worsen the disease.

The findings provide a potential explanation for why aging is the biggest risk factor for neurodegeneration. As we age, mitochondrial function often declines, and oxidative stress increases. This environment could promote the generation of these toxic fragments over time.

There are limitations to the current findings that must be considered. The neurons used in these experiments were grown in a dish and resemble fetal brain cells more than the mature cells found in an aging adult brain. They also lack the complex environment of a living brain, which includes support cells and blood vessels.

Additionally, the study relied on antibodies to detect tau clumps. These tools do not always provide a complete picture of the protein’s three-dimensional shape. The researchers focused on specific types of tau aggregates, and other forms may interact differently with the CUL5 system.

Future work will need to determine if these mechanisms operate similarly in animal models of dementia. Samelson and his colleagues aim to explore whether enhancing the CUL5 system could serve as a therapeutic strategy. They are also investigating ways to protect the proteasome from oxidative stress to prevent the formation of toxic fragments.

“What makes this study particularly valuable is that we used human neurons carrying an actual disease-causing mutation,” says Samelson. “These cells naturally have differences in tau processing, giving us confidence that the mechanisms we identified are relevant to human disease.”

This research underscores the complexity of protein quality control in the brain. It suggests that reinforcing the brain’s natural ability to tag and clear proteins could offer a new avenue for medicine. Simultaneously, protecting the energy systems of the cell might prevent the creation of seeds that start the aggregation process.

The study, “CRISPR screens in iPSC-derived neurons reveal principles of tau proteostasis,” was authored by Avi J. Samelson, Nabeela Ariqat, Justin McKetney, Gita Rohanitazangi, Celeste Parra Bravo, Rudra S. Bose, Kyle J. Travaglini, Victor L. Lam, Darrin Goodness, Thomas Ta, Gary Dixon, Emily Marzette, Julianne Jin, Ruilin Tian, Eric Tse, Romany Abskharon, Henry S. Pan, Emma C. Carroll, Rosalie E. Lawrence, Jason E. Gestwicki, Jessica E. Rexach, David S. Eisenberg, Nicholas M. Kanaan, Daniel R. Southworth, John D. Gross, Li Gan, Danielle L. Swaney, and Martin Kampmann.

Study finds associations between gut microbiota composition and autism

9 February 2026 at 23:00

A study conducted in Taiwan found that autistic individuals tend to show differences in gut microbiota composition compared to both non-autistic individuals and their siblings without autism. More specifically, the autistic group showed distinct differences in the beta diversity of their gut microbiota. Individuals with more Anaerostipes bacteria exhibited significantly less social impairment and internalizing problems. The paper was published in Translational Psychiatry.

The gut microbiota is the complex community of microorganisms, mainly bacteria, that live in the human gastrointestinal tract. These microorganisms play a crucial role in digestion, metabolism, immune function, and protection against pathogens.

They also communicate with the central nervous system through a bidirectional communication pathway called the microbiota-gut-brain axis. Signals between the brain and the gut microbiota travel via multiple physiological paths, including the vagus nerve, immune system biochemicals, and microbial metabolites such as short-chain fatty acids.

Research shows that, through the microbiota-gut-brain axis, gut microbiota can influence brain development, stress reactivity, and emotional regulation. Differences in gut microbiota composition have been associated with psychological characteristics such as anxiety, depression, stress sensitivity, and cognitive functioning.

Experimental studies suggest that altering the gut microbiota through diet, probiotics, or antibiotics can lead to changes in mood and behavior. Early-life microbiota development appears particularly important for later psychological outcomes.

Study author Jung-Chi Chang and his colleagues wanted to investigate the associations between gut microbiota composition and autism features. They also wanted to compare microbial profiles between autistic individuals, their non-autistic siblings, and unrelated non-autistic individuals. The study authors hypothesized that gut microbiota diversity would differ between these three groups and that the gut microbiota composition of autistic individuals and their siblings would show unique features compared to non-autistic individuals.

The study included 239 autistic individuals, 102 non-autistic biological siblings of these individuals, and 81 unrelated non-autistic children and young adults from Taiwan. The average age of the autistic participants and their siblings was approximately 12 years, while the average age of the unrelated non-autistic participants was approximately 14. Overall, participants’ ages ranged from 4 to 25 years.

Autistic participants were required to have a clinical diagnosis of autism. This diagnosis was confirmed through interviews conducted by the authors (using the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule). Non-autistic participants were required to have no diagnosis of psychiatric disorders or neurological or systemic medical conditions.

The study authors obtained information about participants’ autism-related behaviors and emotional and behavioral issues from their caregivers (using the Social Responsiveness Scale and the Child Behavior Checklist). Study participants provided fecal samples, allowing the authors to analyze the composition of their gut microbiota. Participants and their parents also reported any gastrointestinal symptoms the participants had experienced in the past 4 weeks.

Results showed that, compared to unrelated non-autistic participants, siblings of autistic individuals had higher alpha diversity, while autistic participants differed from them in the beta diversity of their gut microbiota, meaning their overall microbial community composition was distinct.

Alpha diversity of gut microbiota refers to the diversity of microbial species within a single individual or sample, reflecting the richness and evenness of the community. Beta diversity of gut microbiota reflects differences in microbial composition between individuals or samples, indicating how distinct their microbial communities are from one another.
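
To make the distinction concrete, the sketch below computes one common alpha-diversity index (the Shannon index) for each of two hypothetical abundance profiles, and one common beta-diversity measure (Bray-Curtis dissimilarity) between them. The profiles and the choice of these particular indices are illustrative assumptions, not the metrics reported in the paper.

```python
# Minimal sketch of alpha vs. beta diversity with two hypothetical samples.
import numpy as np
from scipy.spatial.distance import braycurtis

def shannon_alpha(counts):
    """Alpha diversity: Shannon index of one sample (richness and evenness)."""
    p = np.asarray(counts, dtype=float)
    p = p[p > 0]          # keep taxa that were observed
    p = p / p.sum()       # convert counts to relative abundances
    return -np.sum(p * np.log(p))

# Hypothetical counts per bacterial genus for two participants.
sample_a = [120, 30, 5, 0, 45]
sample_b = [10, 60, 40, 25, 15]

print("Alpha diversity, sample A:", round(shannon_alpha(sample_a), 2))
print("Alpha diversity, sample B:", round(shannon_alpha(sample_b), 2))
# Beta diversity: how different the two communities are from each other.
print("Beta diversity (Bray-Curtis, A vs. B):", round(braycurtis(sample_a, sample_b), 2))
```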

Unrelated non-autistic participants had a higher relative abundance of Blautia, Eubacterium hallii group, Anaerostipes, Erysipelotrichaceae UCG 003, Parasutterella, and Ruminococcaceae UCG 013 at the genus level compared to autistic participants and their siblings. Microbes of the family Prevotellaceae and the genus Agathobacter were more abundant in siblings of autistic participants than in either autistic participants or unrelated non-autistic participants. Individuals with more Anaerostipes bacteria tended to have significantly less social impairment and internalizing problems.

“Our study reveals unique microbial compositions in the ASD [autism spectrum disorder] and SIB [siblings] groups and a relationship between behavior patterns and microbial composition. These findings suggest the potential of microbial interventions for autistic individuals that warrant further exploration,” the study authors concluded.

The study contributes to the scientific understanding of the links between gut microbiota composition and psychological processes and characteristics. However, it should be noted that the cross-sectional design of this study does not allow for causal inferences to be derived from the results.

The paper, “Identifying gut microbiota composition disparities in autistic individuals and their unaffected siblings: correlations with clinical characteristics,” was authored by Jung-Chi Chang, Yu-Chieh Chen, Hai-Ti Lin, Yan-Lin Chen, and Susan Shur-Fen Gau.

Peri-orgasmic phenomena: Women report diverse symptoms ranging from laughter to foot pain

9 February 2026 at 22:00

A recent survey investigation indicates that many women experience unexpected physical and emotional reactions during sexual climax, ranging from uncontrollable laughter to foot pain. These occurrences, known as peri-orgasmic phenomena, appear to be diverse and often happen inconsistently rather than with every orgasmic experience. The findings were published in the Journal of Women’s Health.

Medical understanding of the female orgasm typically focuses on standard physiological release and emotional satisfaction. Physiologically, an orgasm is generally defined as a brief episode of physical release that responds to sexual stimulation. Emotionally, it is usually perceived as a subjective peak of reaction to that stimulation. However, anecdotal reports and isolated case studies have historically hinted at a broader range of experiences that fall outside this expected norm.

Existing medical literature on these unusual symptoms is limited and relies heavily on individual patient reports rather than broader data collection. The authors of this new paper sought to categorize these unique physical and emotional symptoms more systematically. They aimed to determine which specific symptoms women experience and how frequently these sensations occur. Additionally, the team wanted to identify the context in which these phenomena are most likely to manifest, such as during partnered sex or solo masturbation.

“My co-author had written a paper on this topic. Before conducting this survey, occurrences of peri-orgasmic symptoms during orgasm were only acknowledged in the medical literature as rare case reports,” said Lauren F. Streicher, a professor of obstetrics and gynecology at the Feinberg School of Medicine at Northwestern University.

To gather information, the authors created a short educational video explaining peri-orgasmic phenomena. They posted this content on various social media platforms to recruit individuals who identified with having these experiences. The video defined the phenomena as weird physical or emotional occurrences, such as ear pain or crying, that happen specifically during an orgasm. Viewers who recognized these symptoms in their own lives were invited to participate in an anonymous online survey.

The questionnaire consisted of six items designed to capture demographic data and specific details about orgasmic reactions. A total of 3,800 individuals viewed the recruitment video during the study period. From this audience, 86 women aged 18 and older completed the survey to report their personal experiences. The researchers collected data regarding the types of symptoms, their consistency, and the sexual scenarios in which they appeared.

The analysis revealed that emotional reactions were the most commonly reported type of peri-orgasmic phenomenon. Eighty-eight percent of the respondents indicated they experienced emotional symptoms during climax. Among these emotional responses, crying was the most prevalent, affecting 63 percent of the participants. This finding aligns with existing concepts of postcoital dysphoria, although the prevalence in this specific sample was notable.

Forty-three percent of the women reported feelings of sadness or an urge to cry even during a positive sexual experience. An equal number of women, 43 percent, reported laughing during orgasm. This high rate of laughter contrasts with the scarcity of such reports in previous medical journals. A small minority, comprising 4 percent of the group, reported experiencing hallucinations during the event.

Physical symptoms were also widely represented in the survey results. Sixty-one percent of respondents reported bodily sensations unrelated to standard sexual physiology. The most frequent physical complaint was headache, which was noted by 33 percent of the women. These headaches varied in description, but their association with the moment of climax was clear.

Muscle weakness occurred in 24 percent of the cases reported in the study. This sensation is clinically referred to as cataplexy when it occurs in patients with narcolepsy. However, in this sample, it appeared as an isolated symptom associated with sexual release. Foot pain or tingling was another notable physical symptom, affecting 19 percent of the participants.

Less common physical reactions included facial pain or tingling, which was reported by 6 percent of the group. Sneezing was observed in 4 percent of the respondents. Yawning occurred in 3 percent of the cases. Ear pain or other ear sensations and nosebleeds were each reported by 2 percent of the women.

The data showed that these symptoms often overlap within the same individual. Fifty-two percent of the women experienced more than one type of symptom. Twenty-one percent of the respondents reported having both physical and emotional reactions. Some women reported clusters of symptoms, such as crying and laughing together or headaches accompanied by crying.

Regarding consistency, the study found that these phenomena do not necessarily happen every time a person reaches climax. Sixty-nine percent of the participants stated that they experienced these symptoms only sometimes. In contrast, 17 percent reported that the symptoms occurred consistently with every orgasm. This variability suggests a multifaceted nature to these responses.

The researchers also examined whether the method of sexual stimulation influenced the likelihood of these events. The majority of respondents, 51 percent, experienced these symptoms exclusively during sexual activity with a partner. Only 9 percent reported symptoms specifically during masturbation. The use of a vibrator was associated with these symptoms in 14 percent of the cases.

“The findings from this survey indicate that, although the precise prevalence is still unknown, such phenomena are not as rare as previously believed,” Streicher told PsyPost. “The survey also broadens our understanding of symptom types and prevalence, highlighting both emotional and physical manifestations. Notably, this is the first survey to discover that individuals are more likely to experience these symptoms during partnered sexual activity compared to masturbation. This observation suggests a possible emotional component to the etiology, even though the underlying cause remains unknown.”

The researchers postulate that the presence of a partner may evoke more complex psychological and physiological responses. This might hint at the involvement of an emotional component in triggering these phenomena. A heightened emotional state during sexual activity with a partner may potentially activate different neurophysiological pathways. Solo sexual activity might not trigger these same pathways to the same extent.

The study discusses potential biological mechanisms for some of these physical symptoms. Regarding headaches, the authors note that the hypothalamus is intensely stimulated during orgasm. This brain region is also involved in certain types of cluster headaches. It is possible that the modulation of circuits around the hypothalamus during climax plays a role in generating or relieving head pain.

The reports of foot pain are analyzed through the lens of neuroanatomy. The researchers reference theories suggesting that the cortical areas generating somatosensory-evoked potentials for the foot and the female genitalia lie close together in the brain. It is hypothesized that this closeness could lead to “cross-wiring” or referred sensations. Previous case studies have documented women feeling orgasmic sensations in their feet, which supports this neurological theory.

The high prevalence of laughing reported in this sample stands out against the backdrop of existing medical literature. Previous scientific publications have rarely documented laughter as a direct response to orgasm. This survey provides evidence that laughing may be a more common peri-orgasmic phenomenon than clinical case reports have previously suggested. The authors note that the etiologies behind this laughter, as well as the feelings of sadness, remain medically unknown.

But as with all research, there are limitations. The sample size was relatively small, with only 86 women responding out of thousands of viewers. This low response rate makes it difficult to estimate the actual prevalence of these phenomena in the general population. The recruitment method via social media may have introduced selection bias.

The respondents were predominantly older, with a significant portion over the age of 45. This age skew reflects the specific demographic that follows the primary author on social media platforms. The results may not fully represent the experiences of younger women. Additionally, the data relies entirely on self-reporting, which depends on the participants’ memory and interpretation of their symptoms.

Future investigations would benefit from larger and more diverse sample groups to validate these preliminary numbers. Researchers suggest that understanding the underlying physiological mechanisms requires more rigorous clinical study. Detailed physiological monitoring during sexual activity could provide objective data to support these self-reports. Further research could also explore why these symptoms appear more frequently with partners than during solo acts.

The researchers emphasize that recognizing these symptoms is a step toward normalizing the experience for women. “If they experience one of these phenomena, it should not be interpreted as an indication of underlying psychological or physical pathology,” Streicher said.

The study, “Emotional and Physical Symptoms in Women with Peri-Orgasmic Phenomena,” was authored by Lauren F. Streicher and James A. Simon.

Evolutionary motives of fear and coercion shape political views on wealth redistribution

9 February 2026 at 21:00

Recent psychological research suggests that political views on wealth redistribution are driven by deep-seated evolutionary motives rather than just economic logic. New evidence indicates that the fear of conflict and a desire for equal outcomes are powerful predictors of support for government transfer payments. These findings imply that social policies are often supported as a way to appease potential aggressors or to enforce group conformity.

The Role of Egalitarianism and Coercion

Researchers Chien-An Lin and Timothy C. Bates of the University of Edinburgh sought to expand the understanding of why individuals support economic redistribution. Their work builds upon the “three-person two-situation” model. This evolutionary framework previously identified three primary motives: self-interest, compassion for the needy, and malicious envy toward the wealthy.

In a study published in the journal Personality and Individual Differences in 2024, they aimed to determine if a specific preference for equal outcomes could explain support for redistribution better than existing models. They also investigated whether the willingness to use force to achieve these outcomes played a role.

Lin and Bates conducted two separate investigations to test their hypotheses. In Study 1, they recruited 403 participants from the United Kingdom using the Prolific Academic platform. The sample was representative of the UK population regarding ethnicity and gender.

The researchers measured attitudes using several established psychological scales. They assessed support for economic redistribution and the three traditional motives of self-interest, compassion, and envy. They also introduced measures for “Egalitarian Fairness” and “Instrumental Harm.”

Egalitarian Fairness was defined as a motive to divide resources so that no individual wishes to switch their share with another. Instrumental Harm assessed the belief that the ends justify the means, even if it requires harming innocent people. Additionally, the researchers developed a new scale to measure “Support for Coercive Redistribution.”

This new scale included items assessing willingness to punish those who question redistribution. It also asked about using force to reveal hidden wealth. The results of Study 1 provided evidence that Egalitarian Fairness predicts support for redistribution independently of other motives.

The data indicated that this fairness motive accounts for unique variance in political views. It operates alongside self-interest, compassion, and envy.

The study also revealed a connection between Instrumental Harm and the willingness to use coercion. Individuals who scored high on Instrumental Harm were more likely to support forcible redistribution. Malicious envy also predicted this support for coercion. The researchers found that compassion did not reduce the support for coercive measures.

To validate these findings, Lin and Bates conducted Study 2 with a fresh sample of 402 UK participants. This replication aimed to confirm the initial results and test for discriminant validity against other forms of fairness. They measured “Procedural Fairness” and “Distributional Fairness” to see if they yielded different results.

The second study confirmed the findings of the first. Egalitarian Fairness reliably increased support for redistribution. The motive for coercion was again predicted by Instrumental Harm, envy, and self-interest.

The study showed that Procedural Fairness had no significant link to redistribution support. This suggests that the desire for redistribution is specifically about outcomes rather than the rules of the game. The final motivational model accounted for over 40% of the variance in support for redistribution.
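
For readers unfamiliar with this kind of claim, “accounted for over 40% of the variance” and “predicts independently of other motives” refer to comparing regression models with and without a given predictor. The sketch below illustrates the idea on simulated data; the variable names, coefficients, and sample are hypothetical and are not the authors’ data or code.

```python
# Minimal sketch, on simulated data, of how a predictor's unique contribution
# is assessed: compare model R^2 with and without it while the other motives
# stay in the model. Nothing here reproduces the authors' actual analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 400
# Columns: self-interest, compassion, envy, egalitarian fairness (all simulated).
motives = rng.normal(size=(n, 4))
support = motives @ np.array([0.3, 0.4, 0.2, 0.5]) + rng.normal(size=n)

full    = sm.OLS(support, sm.add_constant(motives)).fit()
reduced = sm.OLS(support, sm.add_constant(motives[:, :3])).fit()  # drop egalitarian fairness

print("R^2 with all four motives:        ", round(full.rsquared, 3))
print("R^2 without egalitarian fairness: ", round(reduced.rsquared, 3))
```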

Fear of Violent Dispossession

Following this line of inquiry, Bates and Daniel Sznycer of Oklahoma State University investigated a different evolutionary driver: fear. They proposed that support for redistribution might stem from a “Bismarckian” strategy of appeasement. This theory suggests people give up resources to avoid the greater cost of being attacked or robbed.

Otto von Bismarck was the 19th-century German chancellor credited with establishing the first modern welfare state. Bismarck was a conservative leader who implemented social protections such as health insurance and pensions, yet his primary motivation was not compassion. He intended these reforms to undermine the appeal of radical socialist movements.

Their paper, titled “Bismarckian welfare revisited,” was published in the journal Evolution and Human Behavior. The researchers argued that the human mind evolved to navigate asymmetric conflicts. In this view, appeasement is a biological adaptation to avoid injury when facing a desperate or formidable opponent.

They hypothesized that a “Fear of Violent Dispossession” would predict support for progressive taxation. This fear arises when individuals perceive that others value their resources more highly than they do. It leads to a strategy of ceding resources to preempt violence.

Sznycer and Bates conducted three studies to test this hypothesis. Study 1 involved 303 participants from the UK. They developed a “Fear of Violent Dispossession” scale with items such as “I worry that economic hardship could lead to violence directed at people like me.”

The results showed a strong positive association between this specific fear and support for redistribution. The effect remained significant even when controlling for compassion, envy, and self-interest. This suggests that fear acts as a distinct pathway to political support for welfare.

Study 2 sought to replicate these findings in a different cultural context. The researchers recruited a nationally representative sample of 804 participants from the United States. This study included controls for political orientation and party support.

The data from the US sample mirrored the UK findings. Fear of Violent Dispossession was a strong predictor of support for redistribution. This association held true regardless of whether the participant identified as liberal or conservative.

Study 3 was a pre-registered replication using another representative US sample of 804 participants. This study included a measure of “Coercive Egalitarianism” to see if the fear motive remained robust. The results confirmed the previous patterns.

The analysis indicated that fear of dispossession predicts redistribution support over and above coercive egalitarianism. It also outperformed the motive of proportionality. The researchers concluded that appeasement is a key psychological mechanism underlying modern welfare views.

Fear and Broader Progressive Policies

In a related single-author paper published in Personality and Individual Differences, Bates extended this framework. He investigated whether this fear of dispossession explains support for broader progressive policies beyond taxation. These policies included affirmative action, diversity quotas, and support for social justice movements.

Bates theorized that “progressive policy” acts as a broad mechanism for transferring power and control. He hypothesized that the same fear driving economic redistribution would drive support for these social regulations. He also looked at the motive of self-interest in relation to these policies.

Study 1 in this paper involved 502 US participants. The sample was representative regarding age, sex, and political party. Bates developed a “Support for Progressive Policy” scale covering issues like DEI training, decolonization, and boardroom diversity.

The results demonstrated that these diverse policy preferences form a single, coherent psychological construct. As predicted, Fear of Violent Dispossession predicted support for these progressive policies. Individuals who feared losing what they have were more likely to support regulations that transfer influence to others.

The study also found a strong link between self-interest and progressive policy support. Participants who expected their own economic situation to improve under these policies were much more likely to support them. This suggests a dual motivation of fear and personal gain.

Bates also tested a hypothesis regarding appeasement of powerful groups. He asked participants about their willingness to yield to strong adversaries, such as foreign powers or cartels. The data showed that Fear of Violent Dispossession predicted a general tendency to appease strong groups.

Study 2 was a pre-registered replication with 500 US participants. It aimed to confirm the findings while controlling for socioeconomic status. The results were consistent with the first study.

Fear of Violent Dispossession remained a robust predictor of support for progressive policy. The study found that this fear motivates individuals to cede resources to both the needy and the powerful. It challenges the idea that progressive views are solely driven by compassion or moral ideals.

Limitations and Future Directions

These three papers provide a new perspective on political psychology, but they have limitations. The data in all studies were correlational. This means researchers cannot definitively claim that fear causes the policy support, only that they are linked.

The measures relied on self-reports. Participants might answer in ways they believe are socially acceptable. Future research should use experimental designs to induce fear or compassion to see if policy views change in real-time.

Another limitation is the reliance on Western samples from the UK and US. It is unknown if these motives operate identically in non-Western cultures. Cultural norms regarding fear and sharing might influence these biological drives.

Future studies could investigate how these motives interact with dark personality traits. Research could look at whether individuals high in Machiavellianism exploit this fear in others to advance their own interests. Additionally, further work is needed to distinguish this specific fear of dispossession from general anxiety.

The findings suggest that political debates are shaped by ancient mechanisms of survival. Recognizing the roles of fear, envy, and coercion may help explain why political polarization is so persistent. It appears that economic and social policies are often viewed through the lens of potential conflict.

The study, “Support for redistribution is shaped by motives of egalitarian division and coercive redistribution,” was authored by Chien-An Lin and Timothy C. Bates.

The study, “Fear of violent dispossession motivates support for progressive policy,” was authored by Timothy C. Bates.

The study, “Bismarckian welfare revisited: Fear of being violently dispossessed motivates support for redistribution,” was authored by Daniel Sznycer and Timothy C. Bates.
