
Today — 4 February 2026 · PsyPost – Psychology News

One specific reason for having sex is associated with higher stress levels the next day

4 February 2026 at 15:00

Sexual activity is often touted in popular culture as a natural remedy for daily tension and anxiety. A recent study published in the Archives of Sexual Behavior provides evidence that while sex is associated with lower stress on the day it occurs, these benefits generally do not persist into the following day. The findings also suggest that the motivation behind the sexual encounter plays an important role in its emotional aftermath, as sex initiated to avoid relationship conflict was linked to increased stress levels 24 hours later.

The idea that physical intimacy can alleviate stress is not merely a product of television sitcoms or magazines. Psychological theories regarding affectionate touch suggest that physical contact reduces negative emotions through specific neurobiological pathways. Sexual activity triggers the release of hormones like oxytocin and endogenous opioids, which are known to modulate the body’s stress response.

Prior research has supported this connection to some degree, linking frequent sexual activity to higher life satisfaction and lower negative mood. However, few studies have examined the day-to-day fluctuations of this relationship or tested how long the stress-relieving effects actually last.

Previous daily-diary studies have produced mixed results and often relied on small samples of college students or older adults. The authors of the current study aimed to address these gaps by analyzing a large, pooled sample of newlywed couples to provide a more robust test of these associations.

“There’s this widespread, lay belief that sex is a natural stress reliever, but very little research has actually provided compelling empirical evidence to support this belief,” explained study author Sierra D. Peters, an assistant professor of psychology at Rhodes College. “Prior studies were small, inconsistent, and often focused on students or single individuals. Thus, our goal was to examine whether sexual activity actually does reduce stress in a high-powered study of real long-term couples. We were also interested in whether the context of sex—why couples are having it and how satisfying it is—impacts any stress-relieving properties of sex.”

To investigate the temporal relationship between sex and stress, the researchers combined data from three independent studies. The final sample included 645 individuals, comprising 319 couples. The participants were generally young adults in their mid-twenties to early thirties. All participants were in the early stages of marriage, having been wed for less than six months on average.

The research employed a daily-diary design, which allows researchers to capture real-life experiences as they happen rather than relying on retrospective memory. Couples first completed a baseline session where they provided demographic information and completed standard measures of personality and relationship satisfaction. Following this, each partner completed a survey every evening for 14 consecutive days.

On each of these 14 days, participants reported whether they had engaged in sexual activity with their partner. They also rated their daily experiences of stress and anxiety on a scale from one to seven. The researchers combined reports from both partners to ensure accuracy regarding whether sex occurred on a given day.

When sexual activity did take place, participants answered additional questions about the encounter. They rated how satisfied they were with the sex. They also indicated their motives for being intimate. Specifically, they reported the extent to which they engaged in sex to please their partner, known as an approach motive. They also reported if they had sex to avoid conflict in the relationship, known as an avoidance motive.

The researchers used advanced statistical modeling to analyze the daily fluctuations within each person and couple. They looked at the association between sex and stress on the same day. They also examined the “lagged” association to see if having sex on one day predicted stress levels on the subsequent day.
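To make this design concrete, here is a minimal sketch, in Python, of how same-day and lagged (next-day) associations can be estimated from diary data. Every column name is a placeholder, and the single-level model is a simplification; the published analysis used dyadic multilevel models pooled across the three studies, with additional controls such as daily negative mood.

```python
# Illustrative sketch only: column names (person_id, day, had_sex, stress)
# are hypothetical, and the actual models were dyadic and more elaborate.
import pandas as pd
import statsmodels.formula.api as smf

diary = pd.read_csv("daily_diary.csv")  # one row per person per diary day
diary = diary.sort_values(["person_id", "day"])

# Lag the predictors within each person: what happened the previous day?
diary["had_sex_lag"] = diary.groupby("person_id")["had_sex"].shift(1)
diary["stress_lag"] = diary.groupby("person_id")["stress"].shift(1)
lagged = diary.dropna(subset=["had_sex_lag", "stress_lag"])

# Same-day association: sex today predicting stress today.
same_day = smf.mixedlm("stress ~ had_sex", diary,
                       groups=diary["person_id"]).fit()

# Lagged association: sex yesterday predicting stress today,
# adjusting for yesterday's stress.
next_day = smf.mixedlm("stress ~ had_sex_lag + stress_lag", lagged,
                       groups=lagged["person_id"]).fit()
print(same_day.summary())
print(next_day.summary())
```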

The researchers found an association between sexual activity and reduced stress on the same day. On days when couples engaged in sex, they reported lower levels of stress compared to days when they did not. This association remained significant even when the researchers accounted for other factors like daily negative mood.

This immediate reduction in stress appeared to be universal within the sample. The researchers found no evidence that the effect differed between men and women. It also did not depend on the couple’s general level of marital satisfaction. Both husbands and wives in happy or less happy marriages experienced similar same-day benefits.

However, the stress-relieving properties of sexual activity appeared to be transient. The analysis revealed that engaging in sex on a given day was not associated with reduced stress the next day. The beneficial effects observed on the day of intimacy did not carry over across a 24-hour period. This suggests that the neurobiological or psychological boost provided by sex is relatively short-lived.

“I was a little surprised to see how transient the benefits of engaging in partnered sex were; we expected them to last at least 24 hours, but they did not,” Peters told PsyPost.

The researchers also found that the quality of the sexual experience mattered for immediate well-being. People who reported higher satisfaction with the sexual encounter experienced greater reductions in stress that same day. However, like the act of sex itself, this satisfaction did not predict lower stress levels the following day.

One of the most significant findings concerned the motivations behind sexual activity. The data indicated that why people have sex is just as important as whether they have sex. When individuals engaged in sexual activity to avoid negative outcomes, such as conflict or partner disappointment, the results were detrimental.

Specifically, engaging in sex with avoidance motives was associated with higher levels of stress the next day. This finding aligns with broader psychological theories regarding approach and avoidance motivation. Actions taken to evade negative experiences often result in increased anxiety and vigilance. In the context of a relationship, having sex to prevent a fight may paradoxically create the very tension the individual hopes to escape.

On the other hand, engaging in sex for approach motives, such as wanting to please a partner, showed a different pattern. There was some evidence that this motivation was linked to lower stress the next day. However, this particular finding was not as robust when the researchers controlled for other personality variables.

“One big takeaway from this research is that sex can reduce stress—but these beneficial effects appear to be fairly short-lived,” Peters explained. “We found that on days couples had sex, they felt less stressed that same day. However, those benefits didn’t carry over to the next day. Another important conclusion from this research is that why people are having sex matters. When couples had sex to avoid conflict or tension in their relationship, they actually felt more stressed, and that heightened stress carried into the next day.”

“These were small effects, but that’s typical for daily events that occur within relationships (e.g., mood, stress) which are influenced by many different things simultaneously. The changes we observed weren’t dramatic—but they were reliable across more than 8,000 days of data. In practical terms, sex isn’t the end all, be all cure for stress. It may provide a short-term buffer, but it’s probably not a substitute for addressing the underlying sources of stress.”

The study has several strengths, including its large sample size and the use of dyadic data from both spouses. Focusing on newlyweds also provided a sample where sexual frequency is typically higher than in long-term marriages. This allowed for sufficient variability in the data to detect these daily patterns.

Despite these strengths, the study is not without limitations. The data are correlational, which means researchers cannot definitively claim that sex causes the reduction in stress. It is equally plausible that days with lower stress levels simply make people more inclined to engage in sexual activity. The researchers attempted to control for prior-day stress to account for this, but the direction of causality remains a question.

Another limitation involves the demographic homogeneity of the sample. The participants were primarily heterosexual, Caucasian, and residing in the United States. They were also all newlyweds, a group that typically reports high relationship satisfaction.

“One caveat worth noting is that these data come from newlywed couples,” Peters noted. “Thus, the findings may not generalize to longer-term marriages, dating couples, or single individuals. It’s also important to remember that these data are correlational, so drawing causal conclusions is not appropriate.”

“Going forward, I’m interested in differentiating between different sources (e.g., internal versus external) and types (e.g., acute versus chronic) of stress and complementing self-report measures with physiological indicators of stress, such as cortisol or blood pressure. If the benefits of sex are primarily short-term and neurobiological, these kinds of measures may provide a clearer picture of the conditions under which sexual activity truly helps regulate different kinds of stress.”

The study, “Does Sex Today Relieve Stress Tomorrow? Examining Lagged Associations Between Partnered Sexual Activity and Stress Among Newlywed Couples,” was authored by Sierra D. Peters, Devon S. Glicken, and Andrea L. Meltzer.

Can shoes boost your brain power? What neuroscience says about the new claims

4 February 2026 at 03:00

Athletic footwear has entered a new era of ambition. No longer content to promise just comfort or performance, Nike claims its shoes can activate the brain, heighten sensory awareness and even improve concentration by stimulating the bottom of your feet.

“By studying perception, attention and sensory feedback, we’re tapping into the brain-body connection in new ways,” said Nike’s chief science officer, Matthew Nurse, in the company’s press release for the shoes. “It’s not just about running faster — it’s about feeling more present, focused and resilient.”

Other brands like Naboso sell “neuro-insoles,” socks and other sensory-based footwear to stimulate the nervous system.

It’s a compelling idea: The feet are rich in sensory receptors, so could stimulating them really sharpen the mind?

As a neurosurgeon who studies the brain, I’ve found that neuroscience suggests the reality is more complicated – and far less dramatic – than the marketing implies.

Close links between feet and brain

The soles of the feet contain thousands of mechanoreceptors that detect pressure, vibration, texture and movement.

Signals from these receptors travel through peripheral nerves to the spinal cord and up to an area of the brain called the somatosensory cortex, which maintains a map of the body. The feet occupy a meaningful portion of this map, reflecting their importance in balance, posture and movement.

Footwear also affects proprioception – the brain’s sense of where the body is in space – which relies on input from muscles, joints and tendons. Because posture and movement are tightly linked to attention and arousal, changes in sensory feedback from the feet can influence how stable, alert or grounded a person feels.

This is why neurologists and physical therapists pay close attention to footwear in patients with balance disorders, neuropathy or gait problems. Changing sensory input can alter how people move.

But influencing movement is not the same thing as enhancing cognition.

Minimalist shoes and sensory awareness

Minimalist shoes, with thinner soles and greater flexibility, allow more information about touch and body position to reach the brain compared with heavily cushioned footwear. In laboratory studies, reduced cushioning can increase a wearer’s awareness of where their foot is placed and when it’s touching the ground, sometimes improving their balance or the steadiness of their gait.

However, more sensation is not automatically better. The brain constantly filters sensory input, prioritizing what is useful and suppressing what is distracting. For people unaccustomed to minimalist shoes, the sudden increase in sensory feedback may increase cognitive load – drawing attention toward the feet rather than freeing mental resources for focus or performance.

Sensory stimulation can heighten awareness, but there is a threshold beyond which it becomes noise.

Can shoes improve concentration?

Whether sensory footwear can improve concentration is where neuroscience becomes especially skeptical.

Sensory input from the feet activates somatosensory regions of the brain. But brain activation alone does not equal cognitive enhancement. Focus, attention and executive function depend on distributed networks involving various other areas of the brain, such as the prefrontal cortex, the parietal lobe and the thalamus. They also rely on neurochemicals that modulate the nervous system, such as dopamine and norepinephrine.

There is little evidence that passive underfoot stimulation – textured soles, novel foam geometries or subtle mechanical features – meaningfully improves concentration in healthy adults. Some studies suggest that mild sensory input may increase alertness in specific populations – such as older adults training to improve their balance or people in rehabilitation for sensory loss – but these effects are modest and highly dependent on context.

Put simply, feeling more sensory input does not mean the brain’s attention systems are working better.

Belief, expectation and embodied experience

While shoes may not directly affect your cognition, that does not mean the mental effects people report are imaginary.

Belief and expectation still play a powerful role in medicine. Placebo effects and their influence on perception, motivation and performance are well documented in neuroscience. If someone believes a shoe improves focus or performance, that belief alone can change perception and behavior – sometimes enough to produce measurable effects.

There is also growing interest in embodied cognition, the idea that bodily states influence mental processes. Posture, movement and physical stability can shape mood, confidence and perceived mental clarity. Footwear that alters how someone stands or moves may indirectly influence how focused they feel, even if it does not directly enhance cognition.

In the end, believing a product gives you an advantage may be the most powerful effect it has.

Where science and marketing diverge

The problem is not whether footwear influences the nervous system – it does – but the imprecision of the claims. When companies claim their shoes are “mind-altering,” they often blur the distinction between sensory modulation and cognitive enhancement.

Neuroscience supports the idea that shoes can change sensory input, posture and movement. It does not support claims that footwear can reliably improve concentration or attention for the general population. If shoes truly produced strong cognitive changes, those effects would be robust, measurable and reproducible. So far, they are not.

Shoes can change how we feel in our bodies, how we move through space and how aware we are of our physical environment. Those changes may influence confidence, comfort and perception – all of which matter to experience.

But the most meaningful “mind-altering” effects a person can experience through physical fitness still come from sustained movement, training, sleep and attention – not from sensation alone. Footwear may shape how the journey feels, but it is unlikely to rewire the destination.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Shared viewing of erotic webcams is rare but may enhance relationship intimacy

4 February 2026 at 01:00

Couples seeking to reinvigorate their romantic lives often turn to novel experiences, ranging from travel to shared hobbies. A new study suggests that for some partners, this exploration has moved into the digital realm of erotic webcam sites. The research indicates that while using these platforms with a partner is relatively rare, those who do so often report positive outcomes for their relationship. These findings were published recently in the Journal of Social and Personal Relationships.

The integration of technology into human intimacy is a growing field of inquiry for social scientists. Erotic webcam modeling websites, or “camsites,” allow users to view and interact with live performers. Historically, researchers have viewed the consumption of online erotic content as a solitary activity. This new investigation shifts that focus to explore how romantic partners utilize these platforms together.

Jessica T. Campbell, a researcher at The Kinsey Institute at Indiana University, led the study. She collaborated with colleagues Ellen M. Kaufman, Margaret Bennett-Brown, and Amanda N. Gesselman. The team sought to apply the “self-expansion model” of relationships to digital intimacy. This psychological theory suggests that individuals in relationships are motivated to expand their sense of self. They often achieve this expansion by including their partner in new and challenging activities.

The researchers posited that shared participation in camsite viewing could serve as one of these expanding activities. Previous academic work has looked at couples who watch pre-recorded pornography together. Those studies have generally found links between shared viewing and increased sexual communication. Campbell and her team aimed to see if the interactive nature of camsites produced similar results.

To gather data, the research team recruited participants directly through an advertisement on LiveJasmin.com, a major webcam platform. The banner ad invited site visitors to complete a survey about their experiences. This method allowed the researchers to access an active community of users rather than relying on a general population sample. The initial pool included more than 5,000 participants.

From this large group, the investigators filtered for specific criteria to form their final dataset. They isolated a subsample of 312 participants who were in romantic relationships. These participants also indicated that their partners were aware of their camsite usage. The demographic profile of this group was specific. The majority of respondents were white, heterosexual, cisgender men who reported being in committed, exclusive relationships.

The study aimed to quantify how often these couples engaged in the activity together. The results showed that shared usage is not the norm for most camsite users. Only about 35 percent of the partnered subsample had ever viewed a cam show with their significant other. Across the total initial sample of more than 5,000 users, only 2.1 percent had engaged in this behavior. This suggests that for the vast majority of users, camming remains a private activity.

However, the data revealed a pattern of repeated behavior among the minority who did participate together. Among the couples who had used a camsite together, 56 percent reported doing so multiple times. Roughly one in four of these participants stated they had engaged in the activity more than 20 times. This frequency implies that for those couples who cross the initial threshold, the experience often becomes a recurring part of their sexual repertoire.

The researchers also investigated the motivations behind this shared digital consumption. The survey provided a range of options for why couples chose to log on together. The primary driver for these couples was a desire to introduce novelty into their dynamic. Approximately 36 percent of respondents indicated they wanted to spice up their relationship or try something new.

Fulfilling specific fantasies or desires was another leading motivation, selected by nearly 28 percent of the group. A similar percentage cited curiosity or entertainment as their main reason. Less frequently, participants mentioned using the sites to learn about sex or to engage with specific kinks. These responses align with the self-expansion model, as couples appeared to use the technology to broaden their sexual horizons.

The study then assessed how these experiences impacted the relationship itself. The findings defied the stereotype that online erotica necessarily creates distance between partners. A significant portion of the respondents reported neutral or positive effects. About 27 percent said the activity had no impact on their relationship at all.

Conversely, nearly a quarter of the participants felt the experience enhanced their relationship overall. When asked about specific benefits, 37 percent reported that it improved their communication regarding sex. Twenty-eight percent said it helped them understand their partner’s sexual interests better. Others noted that the shared activity helped reduce awkwardness or discomfort around sexual topics.

Negative outcomes were reported by a very small minority of the sample. Only about 5 percent of respondents indicated that using the camsite with their partner had a negative impact on their relationship. This low figure suggests that for the specific demographic surveyed, the activity was generally safe for the relationship. The high likelihood of repeat usage supports this conclusion. Sixty-four percent of the participants said they were likely or very likely to use a camsite with their partner again.

These findings build upon and add nuance to previous research regarding technology and intimacy. Earlier studies on shared pornography consumption have shown that it can foster intimacy when both partners are willing participants. This new study extends that logic to live, interactive platforms. It suggests that the interactive element of camsites may offer unique opportunities for couples to articulate their desires in real-time.

The results also complement recent work regarding the educational potential of adult platforms. A separate study published in Sexuality & Culture found that users of OnlyFans often reported learning new things about their own preferences and sexual health. Similarly, the participants in Campbell’s study indicated that camsites served as a venue for learning and exploration. This counters the narrative that such platforms are solely sources of passive entertainment.

However, the current study contrasts somewhat with research focusing on the solitary use of these platforms. A study published in Computers in Human Behavior highlighted that some solo viewers experience feelings of guilt or isolation. The dynamic appears to change when the activity becomes a shared pursuit. By bringing a partner into the digital space, the secrecy that often fuels feelings of shame is removed.

It is important to consider the demographics of the current study when interpreting the results. The sample consisted almost entirely of men. This means the data reflects the male partner’s perception of the shared experience. The researchers did not survey the female partners to verify if they shared the same positive outlook. It is possible that the non-responding partners might have felt differently about the activity.

The method of recruitment also introduces a degree of selection bias. By advertising on the camsite itself, the researchers naturally selected individuals who were already comfortable enough with the platform to be online. Couples who tried the activity once, had a terrible experience, and vowed never to return would likely not be present to take the survey. This may skew the results toward a more positive interpretation of the phenomenon.

Additionally, the study notes that some participants were partnered with cam models. For these specific individuals, “using” the site together might simply mean supporting their partner’s work. This is a fundamentally different dynamic than two laypeople watching a third party. The researchers acknowledge that the motivations for this subgroup would differ from the general trend.

Future research will need to address these gaps to provide a more complete picture. Obtaining data from both members of the couple would be a vital next step. This would allow scientists to see if the reported improvements in communication are mutual. It would also help to determine if one partner is merely complying with the other’s desires.

Researchers also suggest exploring how different demographics engage with this technology. The current study was heavily skewed toward white, heterosexual couples. It remains unclear if LGBTQ+ couples or couples from different cultural backgrounds experience similar outcomes. Different relationship structures, such as polyamory, might also interact with these platforms in unique ways.

Despite these limitations, the study offers a rare glimpse into a private behavior. It challenges the assumption that digital erotica is inherently isolating. Instead, it proposes that for some couples, the screen can serve as a bridge. By navigating the virtual sexual landscape together, these partners appear to find new ways to connect in the real world.

The study, “Connected, online and off: Romantic partnered experiences on erotic webcam sites,” was authored by Jessica T. Campbell, Ellen M. Kaufman, Margaret Bennett-Brown and Amanda N. Gesselman.

Wealthier men show higher metabolism in brain regions controlling reward and stress

3 February 2026 at 23:00

An analysis of positron emission tomography data in Korea found that higher family income was associated with increased neural activity (estimated through increased glucose metabolism) in the caudate, putamen, anterior cingulate, hippocampus, and amygdala regions of the brain of middle-aged men. These areas of the brain are involved in reward processing and stress regulation. The paper was published in the European Journal of Neuroscience.

Socioeconomic status refers to a person’s position in society based on income, education, and social standing. It is a powerful predictor of many life outcomes. Individuals with higher socioeconomic status tend to have better physical and mental health and to live longer. Lower socioeconomic status is associated with higher rates of cardiovascular disease, diabetes, depression, anxiety, and psychotic disorders.

Cognitive abilities, intelligence, and academic achievement also tend to be higher in individuals with higher socioeconomic status. These effects are thought to arise partly through neurobiological pathways shaped by long-term social and environmental exposure. Research in animals shows that social hierarchy can alter neurotransmitter systems, influencing motivation, stress sensitivity, and vulnerability to addiction.

In humans, differences in socioeconomic status have been shown to produce differences in language development, learning opportunities, and responses to reward as early as childhood. Later in life, higher socioeconomic status contributes to cognitive reserve, affecting how well individuals maintain cognitive function despite aging or brain pathology.

Study author Kyoungjune Pak and his colleagues wanted to explore the associations between neural activity in middle adulthood, education, and family income. They note that much research has focused on children, young people, and the elderly, and that studies of middle-aged adults are relatively scarce. Yet this period of life is particularly important, as accumulated experiences and exposures associated with socioeconomic status can have lasting effects on brain health.

They analyzed positron emission tomography data of 233 healthy males who underwent a health check-up program at Samsung Changwon Hospital Health Promotion Center (in Changwon, South Korea) in 2013. Their average age was 43 years. Participants’ mean family income was 61,319 USD per year. On average, they completed 13–14 years of education. The authors also compared this group with 232 men whose socioeconomic data were missing, as a check that the analyzed sample was representative.

In their analysis, study authors used positron emission tomography recordings of participants’ brains alongside data on family income and education level. They also used data on stress (collected using the Korean National Health and Nutrition Examination Survey), anxiety (the Beck Anxiety Inventory), and depression (the Centre for Epidemiologic Studies Depression Scale).
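As a rough illustration of this kind of region-level association test, the sketch below regresses glucose metabolism in each named region on family income and education. All variable names are placeholders; the actual analysis was conducted on preprocessed PET images with appropriate statistical corrections rather than a simple per-region regression.

```python
# Illustration only: column names are hypothetical, and the published
# analysis used preprocessed PET data with multiple-comparison control.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("pet_ses.csv")  # one row per participant
regions = ["caudate", "putamen", "anterior_cingulate",
           "hippocampus", "amygdala"]

for roi in regions:
    fit = smf.ols(f"{roi} ~ family_income + education_years + age", df).fit()
    print(f"{roi}: b = {fit.params['family_income']:.4f}, "
          f"p = {fit.pvalues['family_income']:.4f}")
```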

Results showed that individuals with higher family income tended to have a higher education level. Higher family income was also associated with increased glucose metabolism in the caudate, putamen, anterior cingulate, hippocampus, and amygdala regions of the brain.

This means that neural activity in these regions was higher in individuals with higher family income. These regions of the brain are involved in reward processing and stress regulation. Interestingly, education level was not associated with brain activity patterns.

“Family income and education level show differential associations with brain glucose metabolism in middle-aged males. Family income is associated with elevated brain glucose metabolism in regions involved in reward processing and stress regulation, suggesting a potential link between current socioeconomic resources and neural activity. However, these findings are cross-sectional and must be interpreted as associative rather than causal. Education level does not show a significant association with brain glucose metabolism,” the study authors concluded.

The study contributes to the scientific understanding of neural correlates of socioeconomic status. However, it is important to note that study participants were Korean middle-aged men, so it remains unknown how much these findings can be generalized to other demographic groups and other cultures.

The paper, “Family Income Is Associated With Regional Brain Glucose Metabolism in Middle-Aged Males,” was authored by Kyoungjune Pak, Seunghyeon Shin, Hyun-Yeol Nam, Keunyoung Kim, Jihyun Kim, Myung Jun Lee, and Ju Won Seok.

Yesterday — 3 February 2026 · PsyPost – Psychology News

What your fears about the future might reveal about your cellular age

3 February 2026 at 21:00

A new study published in Psychoneuroendocrinology indicates that women who experience high levels of anxiety regarding their declining health tend to age faster at a molecular level compared to those who do not.

The concept of aging is often viewed simply as the passage of time marked by birthdays. However, scientists increasingly view aging as a biological process of wear and tear that varies from person to person.

Two individuals of the same chronological age may possess vastly different biological ages based on their cellular health. To measure this, researchers look at the epigenome. The epigenome consists of chemical compounds and proteins that attach to DNA and direct activities such as turning genes on or off, thereby controlling which proteins particular cells produce.

One specific type of epigenetic modification is called DNA methylation. As people age, the patterns of methylation on their DNA change in predictable ways. Scientists have developed algorithms known as “epigenetic clocks” to analyze these patterns.

These clocks can estimate a person’s biological age and the pace at which they are aging. When a person’s biological clock runs faster than their chronological time, it is often a harbinger of poor health outcomes and earlier mortality.

Researchers have previously established that general psychological distress can accelerate these biological clocks. However, less is known about the specific impact of aging anxiety. This form of anxiety is a multifaceted stressor. It encompasses fears about losing one’s attractiveness, the inability to reproduce, and the deterioration of physical health. Women often face unique societal pressures regarding these aspects of life.

Mariana Rodrigues, a researcher at the NYU School of Global Public Health, led a team to investigate this issue. Rodrigues and her colleagues sought to understand if these specific anxieties became biologically embedded in women. They hypothesized that the stress of worrying about aging acts as a persistent signal to the body. They believed this signal might trigger physiological responses that degrade cells over time.

To explore this connection, the team utilized data from the Midlife in the United States (MIDUS) study. This is a large, national longitudinal study that focuses on the health and well-being of U.S. adults. The researchers analyzed data from 726 women who participated in the biomarker project of the study. These participants provided blood samples and completed detailed questionnaires about their psychological state.

The researchers assessed aging anxiety across three distinct domains. First, they asked women about their worry regarding declining attractiveness. Second, they assessed anxiety related to declining health and illness. Third, they asked about worries concerning reproductive aging, such as being too old to have children.

The study employed two advanced epigenetic clocks to measure biological aging from the blood samples. The first clock, known as GrimAge2, estimates cumulative biological damage. It is often used to predict mortality risk by looking at a history of exposure to stressors.

The second clock, DunedinPACE, functions differently. Instead of measuring total accumulated damage, DunedinPACE acts like a speedometer. It measures the current pace of biological aging at the time the blood sample was taken.

The researchers used statistical models to test the relationship between the different types of anxiety and the two epigenetic clocks. They accounted for various factors that could skew the results. These included sociodemographic factors like age, race, and income. They also controlled for marital status and whether the women had entered menopause.
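The sketch below shows the general shape of such a covariate-adjusted model, using the DunedinPACE outcome as an example. Every column name is a placeholder, and the published models included further covariates along with a parallel set of GrimAge2 analyses.

```python
# Placeholder column names throughout; this illustrates the modeling
# logic, not the authors' exact specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("midus_women.csv")

model = smf.ols(
    "dunedin_pace ~ health_anxiety + age + C(race) + income"
    " + C(marital_status) + C(postmenopausal)",
    data=df,
).fit()

# A positive, significant coefficient would indicate that women with
# more health-related aging anxiety show a faster current pace of aging.
print(model.params["health_anxiety"], model.pvalues["health_anxiety"])
```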

The analysis revealed distinct patterns in how different worries affect the body. The researchers found that anxiety about declining health was linked to a faster pace of aging as measured by DunedinPACE.

Women who reported higher levels of worry about illness and physical decline showed signs that their bodies were aging more rapidly than women with lower anxiety. This association persisted even when the researchers adjusted for the number of chronic health conditions the women already had.

This suggests that the worry itself, rather than just the presence of disease, plays a role in accelerating the aging process. However, the connection weakened when the researchers factored in health behaviors.

When they accounted for smoking, alcohol consumption, and body mass index, the statistical link between health anxiety and faster aging diminished. This reduction indicates that lifestyle behaviors likely mediate the relationship. Women who are anxious about their health might engage in coping behaviors that are detrimental to their physical well-being.

The study did not find the same results for the other domains of anxiety. Worries about declining attractiveness showed no statistical association with accelerated aging. Similarly, anxiety about reproductive aging was not linked to the epigenetic clocks. This lack of connection may be due to the fact that appearance and fertility concerns often fade as women grow older. Health concerns, by contrast, tend to persist or increase with age.

The researchers also combined the scores to look at cumulative aging anxiety. They found that the total burden of aging worries was associated with a faster pace of aging. Like the findings for health anxiety, this association was largely explained by health behaviors and existing chronic conditions.

It is worth noting that the findings were specific to the DunedinPACE clock. The researchers did not observe statistically significant associations between any form of aging anxiety and the GrimAge2 clock.

This discrepancy highlights the difference between the two measures. DunedinPACE captures the current speed of decline, which may be more sensitive to ongoing psychological stressors like anxiety. GrimAge2 reflects accumulated damage over a lifetime, which might not be as responsive to current subjective worries.

The authors propose that health-related anxiety operates as a chronic cycle. Fear of health decline leads to heightened body monitoring. This vigilance creates psychological distress. That distress triggers physiological stress responses, such as inflammation. Over time, these responses contribute to the wear and tear observed in the epigenetic data.

There are limitations to this study that affect how the results should be interpreted. The data was cross-sectional, meaning it captured a snapshot in time. Because of this design, the researchers cannot definitively prove that anxiety causes accelerated aging.

It is possible that the relationship works in the opposite direction. Perhaps women who are biologically aging faster feel physically worse, leading to increased anxiety.

Additionally, the measures for aging anxiety were based on single items in a questionnaire. This might not capture the full depth or nuance of a woman’s experience. The sample also consisted of English-speaking adults in the United States. Cultural differences in how aging is perceived and experienced could lead to different results in other populations.

Future research is needed to clarify the direction of these associations. Longitudinal studies that follow women over many years would help determine if anxiety precedes the acceleration of biological aging. Tracking changes in anxiety levels and epigenetic markers over time would provide stronger evidence of a causal link.

The study supports a biopsychosocial model of health. This model suggests that our subjective experiences and fears are not isolated in the mind. Instead, they interact with our biology to shape our long-term health. The findings suggest that addressing psychological distress about aging could be a potential avenue for improving physical health.

The study, “Aging anxiety and epigenetic aging in a national sample of adult women in the United States,” was authored by Mariana Rodrigues, Jemar R. Bather, and Adolfo G. Cuevas.

The hidden role of vulnerable dark personality traits in digital addiction

3 February 2026 at 19:00

Recent research indicates that specific personality traits marked by emotional fragility and impulsivity are strong predictors of addictive behaviors toward smartphones and social media. The findings suggest that for insecure individuals, social media applications frequently serve as a psychological gateway that leads to broader, compulsive phone habits. This investigation was published in the journal Personality and Individual Differences.

Psychologists have recognized for years that personality plays a role in how people interact with technology. Much of the previous work in this area focused on the “Big Five” personality traits, such as neuroticism or extraversion. Other studies looked at the “Dark Tetrad,” a cluster of traits including classic narcissism, Machiavellianism, psychopathy, and sadism.

These darker traits are typically associated with callousness, manipulation, and a lack of empathy. However, less attention has been paid to the “vulnerable” side of these darker personalities. This oversight leaves a gap in understanding how emotional instability drives digital compulsion.

Marco Giancola, a researcher at the University of L’Aquila in Italy, sought to address this gap. He and his colleagues designed a project to examine the “Vulnerable Dark Triad.” This specific personality taxonomy consists of three distinct components.

The first is Factor II Psychopathy, which is characterized by high impulsivity and reckless behavior rather than calculated manipulation. The second is Vulnerable Narcissism, which involves a fragile ego, hypersensitivity to criticism, and a constant need for reassurance. The third is Borderline Personality, marked by severe emotional instability and a fear of abandonment.

The researchers aimed to understand how these specific traits correlate with Problematic Smartphone Use (PSU) and Problematic Social Media Use (PSMU). They based their approach on the I-PACE model. This theoretical framework suggests that a person’s core characteristics interact with their emotional needs to shape how they use technology.

The team posited that people with vulnerable dark traits might not use technology to exploit others. Instead, these individuals might turn to digital devices to regulate their unstable moods or satisfy unmet needs for social validation.

The investigation consisted of two distinct phases. The first study involved 298 adult participants. The researchers administered a series of detailed questionnaires to assess personality structures. They also measured the participants’ levels of addiction to smartphones and social media platforms.

The team utilized statistical regression analysis to isolate the specific effects of the Vulnerable Dark Triad. They adjusted the data to account for sociodemographic factors like age and gender. They also controlled for standard personality traits and the antagonistic “Dark Tetrad” traits.

The results from this first study highlighted distinct patterns. Factor II Psychopathy emerged as the strongest and most consistent predictor of both smartphone and social media problems. This suggests that the impulsivity and lack of self-control inherent in this trait make it difficult for individuals to resist digital distractions. The inability to delay gratification appears to be a central mechanism here.

The analysis also revealed nuanced differences between the other traits. Vulnerable Narcissism was more strongly linked to generalized Problematic Smartphone Use. Individuals with this trait often harbor deep insecurities and a hidden sense of entitlement. They may use the smartphone as a safety blanket to avoid real-world social risks while seeking validation from a distance. The device allows them to construct a protected self-image that shields their fragile ego.

Conversely, Borderline Personality traits were more closely tied to Problematic Social Media Use. This makes sense given the interpersonal nature of the condition. People with these traits often struggle with intense fears of rejection. Social media platforms provide a space where they can constantly monitor relationships and seek signs of acceptance. The instantaneous feedback loop of likes and comments may temporarily soothe their anxiety about abandonment.

The researchers did not stop at identifying these associations. They conducted a second study with a larger sample of 586 participants to understand the sequence of these behaviors. The goal was to test a “bridge” hypothesis. The team suspected that these personality traits do not immediately cause a generalized phone addiction. They theorized that the addiction starts specifically with social media.

In this model, social media acts as the primary hook. The emotionally vulnerable individual turns to these apps to cope with negative feelings or to seek connection. Over time, this specific compulsion generalizes. The user begins to check the phone constantly, even when not using social media. The specific habit bleeds into a broader dysregulation of technology use.

The data from the second study supported this mediation model. The statistical analysis showed that Problematic Social Media Use effectively bridged the gap between the Vulnerable Dark Triad and general Problematic Smartphone Use. This was true for all three traits investigated. The path was indirect but clear. The vulnerability leads to social media compulsion, which in turn leads to a generalized dependency on the smartphone.
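For readers who want the statistical logic spelled out, here is a bare-bones version of a product-of-coefficients mediation test with a bootstrap confidence interval. Variable names are placeholders, and the authors' actual models were more complete, including covariates and all three traits.

```python
# Bare-bones mediation sketch (trait -> PSMU -> PSU); placeholder names.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2.csv")  # columns: trait, psmu, psu (hypothetical)

def indirect_effect(data):
    a = smf.ols("psmu ~ trait", data).fit().params["trait"]       # path a
    b = smf.ols("psu ~ psmu + trait", data).fit().params["psmu"]  # path b
    return a * b

rng = np.random.default_rng(0)
boot = [indirect_effect(df.sample(len(df), replace=True, random_state=rng))
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```

A bootstrap interval that excludes zero is the conventional evidence that the indirect path is statistically reliable.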

Factor II Psychopathy and Borderline Personality traits showed no direct link to general phone addiction in the second model. Their influence was entirely channeled through social media use. This indicates that for impulsive or emotionally unstable people, the social aspect of the technology is the primary driver. The device is merely the delivery mechanism for the social reinforcement they crave.

Vulnerable Narcissism showed a slightly different pattern. It had both a direct link to smartphone use and an indirect link through social media. This suggests a more complex relationship for this trait. These individuals likely use the phone for purposes beyond just social networking. They might engage in other validating activities like gaming or content consumption that prop up their self-esteem.

These findings offer a fresh perspective on digital addiction. They challenge the notion that “dark” personalities use the internet solely for trolling or cyberbullying. The research highlights a group of users who are internally suffering. Their online behavior is a coping mechanism for profound insecurity and emotional dysregulation.

The study aligns with the Problem Behavior Theory. This theory posits that maladaptive behaviors rarely occur in isolation. They tend to cluster together and reinforce one another. In this context, the smartphone provides an environment rich in rewards. It offers constant opportunities for mood modification. For someone with low impulse control or high emotional pain, the device becomes a necessary crutch.

There are important caveats to consider regarding this research. Both studies relied on self-reported data. Participants described their own behaviors and feelings. This method can introduce bias, as people may not assess their own addiction levels accurately.

Additionally, the research design was cross-sectional. The data captured a snapshot in time rather than tracking changes over a long period. While the statistical models suggest a direction of effect, they cannot definitively prove causation.

The sample collection method also presents a limitation. The researchers used a snowball sampling technique where participants recruited others. This approach can sometimes result in a pool of subjects that is not fully representative of the general population. The study was also conducted in Italy, which may limit how well the findings apply to other cultural contexts.

Future research should aim to address these shortcomings. Longitudinal studies are needed to track individuals over months or years. This would help confirm whether the personality traits definitively precede the addiction.

It would also be beneficial to use objective measures of screen time rather than relying solely on questionnaires. Seeing exactly which apps are used and for how long would provide a more granular picture of the behavior.

This research has practical implications for mental health and education. It suggests that treating technology addiction requires looking at the underlying personality structure. A one-size-fits-all approach to “digital detox” may not work.

Interventions might need to target the specific emotional deficits of the user. For instance, helping someone manage fear of abandonment or improve impulse control could be more effective than simply taking the phone away.

Understanding the “vulnerable” side of dark personality traits helps humanize those struggling with digital dependency. It shifts the narrative from one of bad habits to one of unmet psychological needs. As digital lives become increasingly intertwined with psychological well-being, this nuance is essential for developing better support systems.

The study, “The vulnerable side of technology addiction: Pathways linking the Vulnerable Dark Triad to problematic smartphone and social media use,” was authored by Marco Giancola, Laura Piccardi, Raffaella Nori, Simonetta D’Amico, and Massimiliano Palmiero.

Depression and anxiety linked to stronger inflammation in sexual minority adults compared to heterosexuals

3 February 2026 at 17:00

A new study published in the journal Brain, Behavior, and Immunity provides evidence that sexual minority adults may experience a distinct physiological reaction to mental health challenges compared to heterosexual adults. The findings indicate that while depression and anxiety are more common in sexual minority populations, these conditions are also accompanied by stronger inflammatory responses for this group.

Health disparities affecting lesbian, gay, bisexual, and other non-heterosexual individuals are well-documented in medical literature. Statistics indicate that these groups face a higher risk for chronic physical conditions like heart disease, asthma, and diabetes compared to heterosexual adults. They also report rates of anxiety and depression that are often significantly higher than those seen in the general population.

Scientists often utilize the minority stress theory to explain these gaps. This framework suggests that the unique social stressors faced by marginalized groups create a burden that wears down physical health over time.

A key biological mechanism that might explain how stress becomes physical illness is inflammation. While acute inflammation is a necessary immune response to heal injuries or fight infection, chronic low-grade inflammation is damaging to the body.

Elevated levels of inflammatory markers are linked to a range of age-related conditions, including cardiovascular disease and cognitive decline. This process is sometimes referred to as “inflammaging,” where chronic inflammation contributes to accelerated biological aging.

“Sexual minority adults face well-documented disparities in both mental and physical health, including higher rates of depression, anxiety, and chronic conditions like cardiovascular disease,” said study author Lisa M. Christian, a professor and member of the Institute of Brain, Behavior and Immunology at The Ohio State University.

“While minority stress theory provides a framework for understanding these disparities, there has been very little research on the biological mechanisms that link psychological distress to physical health in this population. Specifically, data on inflammation, a key pathway to chronic disease, are scarce. Our study aimed to address this gap by examining whether depressive symptoms and anxiety are associated with greater inflammatory responses among sexual minority adults compared to heterosexual adults.”

The research team analyzed data from the National Couples’ Health and Time Study (NCHAT). This project involves a population-representative sample of married and cohabiting adults across the United States.

“This study utilizes data from Wave 1 of the National Couples’ Health and Time (NCHAT) Stress Biology Study (NCHAT-BIO),” Christian noted. “NCHAT-BIO is the first US-based study focused on stress biology within a large, diverse sample of married/cohabiting sexual minority and heterosexual adults.”

“NCHAT-BIO capitalized on the unique opportunity of NCHAT, a population-representative US sample which intentionally oversampled sexual minority respondents. Wave 1 NCHAT-BIO data have been deposited at ICPSR for public release to all researchers. We encourage interested researchers to take advantage of this unique and impactful dataset.”

The researchers focused on a subset of participants who provided biological samples. The final analysis included 572 participants. There were 321 individuals who identified as heterosexual and 251 who identified as sexual minorities, a group that included lesbian, gay, bisexual, and other non-heterosexual identities.

Participants completed detailed surveys assessing their mental health. To measure anxiety, they used the Generalized Anxiety Disorder scale (GAD-7). This tool asks respondents how often they have been bothered by problems such as feeling nervous or being unable to stop worrying.

To evaluate depressive symptoms, the researchers used the Center for Epidemiologic Studies Depression scale (CES-D 10). This measure asks participants how often they felt specific ways, such as fearful or lonely, during the past week.

The study also assessed adverse childhood experiences (ACEs) to understand early life stress. Participants reported if they had experienced events before age 18 such as abuse, neglect, household dysfunction, or parental incarceration.

Additionally, the survey asked about experiences of everyday discrimination and aggression. This included questions about being treated with less respect, being harassed, or facing physical attacks.

To measure biological markers, participants provided dried blood spots. They collected these samples at home by pricking a finger and placing blood drops on a special collection card. The researchers analyzed these samples for two specific markers of systemic inflammation: Interleukin-6 (IL-6) and C-reactive protein (CRP).

IL-6 is a cytokine that signals the immune system to respond to trauma or infection, while CRP is a protein produced by the liver in response to inflammation. Higher levels of these markers generally indicate a state of higher systemic inflammation.

The results showed that sexual minority participants reported higher levels of both anxiety and depressive symptoms compared to heterosexual participants. This aligns with prior statistics regarding mental health in these communities.

A statistical analysis revealed that this difference was partially explained by a higher number of adverse childhood experiences among the sexual minority group. Sexual minority respondents reported an average ACE score that was significantly higher than that of heterosexual respondents.

The most distinct finding emerged when the researchers analyzed the relationship between these mental health symptoms and inflammation levels. The data revealed a physiological pattern for sexual minority adults that was absent in heterosexual adults.

Among sexual minority participants, higher scores on the depression scale were associated with higher levels of both IL-6 and CRP. Similarly, higher anxiety scores were linked to higher CRP levels in the sexual minority group.

“We expected sexual minority adults to have higher depression and anxiety, which is consistent with prior research,” Christian told PsyPost. “What surprised us was the pattern of inflammatory response: sexual minority adults showed greater elevations in CRP with rising anxiety and depression. This effect was not seen in heterosexual adults. This suggests a unique physiological sensitivity among sexual minority individuals that warrants further investigation.”

The researchers adjusted their statistical models to account for potential confounding factors. They controlled for age, race, sex assigned at birth, education level, and existing health conditions.

They also ran sensitivity analyses that included body mass index and tobacco use. Even with these behavioral and physical factors included, the connection between distress and inflammation remained significant for sexual minority adults.
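One common way to formalize the pattern described here, an association present in one group but not the other, is an interaction model. The sketch below illustrates that logic with placeholder variable names; the authors' exact specification may differ.

```python
# Interaction sketch with placeholder column names; the authors' exact
# model specification may differ.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("nchat_bio.csv")
df["log_crp"] = np.log(df["crp"])  # inflammatory markers are right-skewed

model = smf.ols(
    "log_crp ~ cesd * sexual_minority + age + C(race) + C(sex_at_birth)"
    " + C(education) + n_conditions + bmi + C(tobacco_use)",
    data=df,
).fit()

# A significant cesd:sexual_minority coefficient indicates a steeper
# distress-inflammation slope among sexual minority participants.
print(model.summary())
```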

The study authors propose that this heightened inflammatory response is not an inherent trait of sexual minority individuals. Instead, it is likely a consequence of living in a marginalized social context.

Chronic exposure to stressors, such as discrimination or the threat of judgment, can sensitize the immune system. This sensitization means that when an individual experiences depression or anxiety, their body mounts a stronger inflammatory defense than it otherwise would.

This sensitization contributes to a “double burden” for sexual minority adults. First, they experience a higher prevalence of anxiety and depression, largely due to adverse childhood experiences and minority stress.

Second, when they do experience these symptoms, their bodies react with greater inflammation. Over time, even modest elevations in markers like CRP and IL-6 can increase the risk for chronic illnesses, potentially explaining some of the physical health disparities seen in this population.

“The main takeaway is that sexual minority adults not only experience higher rates of depression and anxiety but also show stronger inflammatory responses when they do,” Christian explained. “Even modest elevations in inflammation can increase long-term risk for chronic illnesses. This means that mental health challenges in sexual minority populations may have ripple effects on physical health, underscoring the importance of integrated care and targeted prevention efforts.”

There are some limitations to consider. Survey data were collected at a single point in time, while blood samples were collected several months later. This timeline makes it difficult to determine causality.

It is possible that inflammation exacerbates mood symptoms, rather than the other way around. The gap between the survey and the blood collection introduces some statistical noise, though the findings remained robust despite this.

“It is notable that the current effects in sexual minority adults were observed despite the presence of this statistical ‘noise,’” Christian said. “However, future studies in which time of collection is both simultaneous and longitudinal would be ideal.”

“Indeed, it is plausible that the presence of associations between inflammation and mental health indicators among sexual minority respondents, but not heterosexual respondents, is a function of greater chronicity of symptoms among sexual minority respondents. This could not be tested in the current analyses.”

The sample consisted entirely of married or cohabiting adults. People who are partnered often have better health outcomes and more social support than single individuals. This means the results might not fully reflect the experiences of unpartnered sexual minority adults.

The researchers also caution against interpreting these results to mean that sexual minority adults are inherently less healthy. “There is nothing problematic or unhealthy about being a sexual minority,” Christian told PsyPost.

“The differences we observed reflect the physiological costs of living in a society where sexual minority individuals are exposed to higher levels of stress, discrimination, and adversity, not something intrinsic to their identity. In other words, the burden comes from external exposures, not from who people are.”

The researchers have received funding from the National Institute on Aging to extend this work into a longitudinal study. They intend to examine how inflammatory markers change as the participants age. They also plan to look at epigenetic aging, which uses DNA methylation to measure biological age. This will help determine if the observed inflammation is translating into accelerated aging at the cellular level.

“This manuscript is part of a larger longitudinal study,” Christian said. “As with NCHAT-BIO Wave 1 data, assay results from Wave 3 will be made publicly available to other researchers through ICPSR alongside the survey, time diary, and contextual data from NCHAT Waves 1 through 3, and biological data from NCHAT-BIO Wave 1. Together, these resources will provide an exceptional dataset for future researchers.”

The study, “Sexual minority adults exhibit greater inflammation than heterosexual adults in the context of depressive symptoms and Anxiety: Pathways to health disparities,” was authored by Lisa M. Christian, Rebecca R. Andridge, Juan Peng, Nithya P. Kasibhatla, Thomas W. McDade, Tessa Blevins, Steve W. Cole, Wendy D. Manning, and Claire M. Kamp Dush.

High-precision neurofeedback accelerates the mental health benefits of meditation

3 February 2026 at 15:00

A new study published in the journal Mindfulness has found that high-precision brain training can help novice meditators learn the practice more effectively. The findings indicate that neurofeedback can assist individuals in reducing self-critical or wandering thoughts. This training appears to lead to sustained improvements in mindful awareness and emotional well-being during subsequent daily life.

Meditation is often promoted for its ability to reduce stress and improve mental health. The practice frequently involves focusing attention on a specific anchor, such as the sensation of breathing.

The goal is to notice when the mind wanders and gently return focus to the breath. While the concept is simple, the execution is often difficult for beginners. Novices frequently struggle to recognize when their minds have drifted into daydreams or self-referential thinking. Because meditation is an internal mental process, it lacks the external feedback that accompanies learning physical skills.

“A key problem that motivated this project is ‘not being able to know whether what we are doing internally while meditating is what we were actually meant to be doing,’” said study author Saampras Ganesan, a postdoctoral research associate at the Laureate Institute for Brain Research and honorary research fellow at the University of Melbourne.

“You can look at a mirror to get live and detailed feedback while learning an intricate dance or exercise move. But this is not the case with something so abstract like meditation. This may be holding back the mental health benefits and wider impact that meditation could have in modern life.”

The researchers aimed to address this challenge by providing an external “mirror” for the mind. They sought to determine if real-time information about brain activity could act as a scaffold for learning.

The study focused on helping participants identify and reduce activity in the posterior cingulate cortex. This brain region is a key hub of the default mode network. This network typically becomes active when a person is not focused on the outside world, such as during daydreaming, worrying, or thinking about oneself.

To test this, the investigators recruited 40 healthy adults who had little to no prior experience with meditation. They screened these individuals to ensure they had no history of psychiatric or neurological conditions. The participants were randomly assigned to one of two groups. One group was the experimental condition, and the other served as a control.

The study employed a 7-Tesla fMRI scanner. This machine creates a magnetic field much stronger than the standard MRI scanners found in hospitals. The high magnetic field allows for extremely precise imaging of brain function. Participants lay inside the scanner and were instructed to practice focused attention meditation. They kept their eyes open and watched a visual display.

The display functioned like a thermometer. For the experimental group, the level on the thermometer changed based on the real-time activity of their own posterior cingulate cortex.

When they successfully focused on their breath and quieted this brain region, the thermometer reading went down. If their mind wandered and the region became active, the reading went up. This provided immediate confirmation of their internal mental state.
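
As a rough illustration of this feedback logic, the mapping from brain signal to thermometer reading can be sketched in a few lines of Python. The function name, scaling, and thresholds below are assumptions for illustration, not the authors' actual real-time pipeline.

```python
import numpy as np

def thermometer_level(pcc_signal, baseline_mean, baseline_sd, n_bars=10):
    """Map a real-time posterior cingulate cortex (PCC) signal to a
    discrete thermometer reading. Activity above the resting baseline
    (mind-wandering) raises the reading; activity below it (focused
    attention) lowers it. Names and scaling are illustrative only."""
    z = (pcc_signal - baseline_mean) / baseline_sd  # standardize vs. rest
    z = np.clip(z, -3.0, 3.0)  # bound the display at +/- 3 SD
    return int(round((z + 3.0) / 6.0 * n_bars))  # rescale to 0..n_bars

# A signal well below the resting baseline yields a low reading
print(thermometer_level(pcc_signal=95.0, baseline_mean=100.0, baseline_sd=2.0))  # 1
```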

The control group went through the exact same procedure with one critical difference. The feedback they saw was not from their own brains. Instead, they viewed a recording of brain activity from a participant in the experimental group.

This is known as “sham” feedback. It allowed the researchers to control for the effects of being in the scanner, seeing visual stimuli, and trying to meditate. The participants did not know which group they were in.

The training took place over two consecutive days. Following this laboratory phase, all participants were asked to continue meditating at home for one week. They used a mobile app to guide 5-minute meditation sessions. They also completed surveys to track their mood, stress levels, and mindful awareness.

The results revealed that the blinding procedure was successful. Participants in both groups believed they were receiving genuine feedback. They also reported similar levels of effort and perceived success. This suggests that any differences in outcomes were due to the specific brain training rather than placebo effects or expectations.

“Surprisingly, people could not easily tell whether the brain feedback came from their own brain (experimental group) or someone else’s (control group),” Ganesan told PsyPost. “Both groups rated the feedback as equally accurate – even though the group receiving their own brain feedback showed more meaningful positive changes in the brain circuit linked to meditation.”

“This suggests that people, especially beginners at meditation, may not be completely aware of all the factors driving effects in meditation, and that perceivable benefits may only become clearer with time and more consistent practice following targeted, reliable training.”

Despite these similar perceptions, the brain imaging data showed distinct differences. The experimental group exhibited a change in how their brain regions communicated.

Specifically, they developed a stronger negative connection between the posterior cingulate cortex and the dorsolateral prefrontal cortex. The dorsolateral prefrontal cortex is involved in executive functions, such as controlling attention and managing distractions.

This finding implies that the neurofeedback helped the experimental group recruit their brain’s control systems to down-regulate the mind-wandering network. This neural pattern was not observed in the control group.

The ability to suppress the default mode network is often associated with experienced meditators. The novices in the experimental group appeared to acquire this neural skill rapidly through the targeted feedback.

The benefits of the training extended beyond the laboratory. During the week of home practice, the experimental group maintained higher levels of mindful awareness. In contrast, the control group showed a decline in awareness over the week. This suggests that without the specific guidance provided by the neurofeedback, the control participants struggled to sustain the quality of their meditation practice.

The study also found improvements in emotional well-being. The experimental group reported a significant reduction in emotional distress. This measure combined ratings of depression, anxiety, and stress.

The researchers found a correlation between the brain changes and the mood improvements. Participants who developed the strongest negative coupling between the attention and default mode networks experienced the greatest reduction in distress.

“Teaching people to meditate with live feedback from their own brain can help them meditate more effectively on their own over time, with early benefits for self-awareness and mood,” Ganesan explained. “For these benefits to matter, the brain feedback needs to be well-targeted and specific to the meditation goal – more precise feedback leads to stronger results.”

One unexpected finding involved a breath-counting task. This is an objective test often used to measure mindfulness. Participants press a button for each breath and a different button for every ninth breath.
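
To make the task concrete, here is a minimal sketch of how such a run might be scored, assuming button 1 on breaths one through eight and button 2 on the ninth; the study's exact scoring criterion may differ.

```python
def score_breath_counting(presses):
    """Score one run of the breath-counting task. `presses` holds the
    button pressed on each breath: the expected pattern is button 1 on
    breaths 1-8 and button 2 on breath 9, repeating. Returns the share
    of nine-breath cycles completed without error (an assumed rule; the
    study's exact criterion may differ)."""
    total_cycles = len(presses) // 9
    correct = sum(
        1 for c in range(total_cycles)
        if presses[c * 9:c * 9 + 8] == [1] * 8 and presses[c * 9 + 8] == 2
    )
    return correct / total_cycles if total_cycles else 0.0

# Two clean cycles, then a miscount on the ninth breath of the third
run = [1] * 8 + [2] + [1] * 8 + [2] + [1] * 9
print(score_breath_counting(run))  # 0.666...
```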

The experimental group actually performed worse on this task after the training. The researchers suggest this might be because the task requires cognitive effort and counting. The neurofeedback training emphasized “letting go” of thoughts, which might have conflicted with the requirement to actively count.

As with all research, there are limitations. The sample size was relatively small. While 40 participants is common for complex neuroimaging studies, it is small for drawing broad behavioral conclusions. The equipment used is also rare and expensive. A 7-Tesla scanner is not a tool that can be easily deployed for general therapy or training.

“An important takeaway is that while the idea of using brain feedback to support meditation is promising, most current wearable and commercial devices are not yet reliable enough to deliver clear benefits,” Ganesan said. “Many studies testing such devices find little evidence beyond placebo, often because the brain signals used are not precise enough.”

“At present, there are no widely accessible, well-validated brain-feedback systems detailed enough to reliably guide meditation training and practice. Highly advanced brain-imaging approaches, like the one used in our study, show what may be possible in principle, but they are not practical for everyday use. As technology improves, reliable and scalable tools may emerge. But until then, the benefits of brain-feedback-assisted meditation will remain limited for most people.”

The follow-up period was also short. It remains unclear if the benefits would persist longer than one week without further reinforcement.

“While the study offers promising signs that detailed brain-feedback–supported meditation training can have real-world benefits, larger studies over longer periods are needed to confirm these results,” Ganesan told PsyPost. “A major strength of the current study is the use of a well-matched control group, which helped show that the benefits were greater than placebo or other unrelated effects.”

Future research will likely focus on whether these results can be replicated with larger groups. Scientists may also explore if similar results can be achieved using less expensive technology, such as EEG sensors. If scalable methods can be developed, this approach could offer a new way to support mental health treatments. It provides a proof of concept that technology can accelerate the learning curve for meditation.

“My long-term vision is to develop a scalable but personalized, science-backed brain-feedback tool that can reliably support meditation training and mental health at a population level,” Ganesan explained. “By developing such technology and making it accessible in schools, clinics, and homes, the goal is to promote everyday emotional well-being, strengthen mental resilience, and help reduce the burden of mental illness in the modern world.”

“While there are many types of meditation, the technique studied here – focused-attention or breathing-based meditation, often grouped under mindfulness – is widely regarded by researchers and meditation experts as a foundational practice,” the researcher added. “The skills developed through this form of meditation are considered essential for learning and practicing other techniques effectively. As a result, developing reliable and targeted brain-based tools to support training in this practice is especially valuable.”

The study, “Neurofeedback Training Facilitates Awareness and Enhances Emotional Well-being Associated with Real-World Meditation Practice: A 7-T MRI Study,” was authored by Saampras Ganesan, Nicholas T. Van Dam, Sunjeev K. Kamboj, Aki Tsuchiyagaito, Matthew D. Sacchet, Masaya Misaki, Bradford A. Moffat, Valentina Lorenzetti, and Andrew Zalesky.

Stress does not appear to release stored THC into the bloodstream

3 February 2026 at 05:00

A new study published in Psychopharmacology investigates the biological phenomenon known as reintoxication in cannabis users. The findings indicate that acute physical stress caused by cold water immersion does not release stored THC back into the bloodstream. This research suggests that moderate physical stressors encountered in daily life are unlikely to cause a person to test positive for cannabis or experience impairment long after their last use.

The primary psychoactive compound in cannabis is delta-9-tetrahydrocannabinol, commonly known as THC. This chemical is highly lipophilic, meaning it dissolves readily in fats rather than water.

When a person consumes cannabis, the body metabolizes much of the THC, but a significant portion is absorbed and stored in fat tissue throughout the body. These fat deposits can act as a long-term storage depot for the drug. Traces of THC have been detected in human fat biopsies weeks after consumption has stopped.

This biological storage mechanism has led scientists to propose the reintoxication hypothesis. The body naturally breaks down fat deposits for energy when it faces a deficit, such as during periods of starvation or intense physical stress. This process is called lipolysis. The hypothesis suggests that when the body breaks down fat cells during stress, the stored THC could be released back into the bloodstream along with the stored energy.

“It has been suggested that THC stored in body fat could be released back into circulation during periods of acute stress, potentially increasing blood THC concentrations,” said study author Danielle McCartney, an associate lecturer in pharmacology at the University of Sydney.

“This idea has been discussed in scientific and legal contexts, but there is very little direct human evidence to support it. We wanted to test this under controlled conditions to see whether acute stress actually increases blood THC concentrations in regular cannabis users.”

Previous research on animals has provided some evidence for this phenomenon. Studies involving rats showed that stress hormones and food deprivation could increase blood THC concentrations in animals that had been pre-treated with the drug.

Human studies, however, have been less conclusive. One study found that intense exercise significantly raised plasma THC levels in regular users. Another study involving food deprivation and running produced mixed results. The authors of the current study aimed to clarify these findings by using a different form of stress.

The researchers recruited fifteen volunteers for the experiment. The sample included nine females and six males. All participants were regular cannabis users who reported consuming the drug at least three days per week. On average, the group used cannabis five days a week. To ensure that any THC detected was not from immediate use, participants were required to abstain from cannabis for at least twelve hours before the test. They also fasted for more than eight hours to ensure their bodies were ready to metabolize fat.

The chosen stressor for this experiment was cold water immersion. This method is known to trigger a robust “fight or flight” response and stimulate the breakdown of fats. Participants sat in a bath filled with water cooled to approximately 10 degrees Celsius, or 50 degrees Fahrenheit. They remained submerged up to their clavicles for ten minutes. This duration and temperature were selected to induce significant physiological stress without posing a danger to the volunteers.

The research team collected detailed measurements at three specific time points. They took baseline measurements immediately before the cold water immersion. They collected a second set of data five minutes after the participants exited the bath. A final set of data was collected two hours after the intervention. At each point, the team drew blood samples and administered cognitive tests.

The blood samples were analyzed for several chemical markers. The researchers looked for plasma THC and its metabolites to see if concentrations rose after the stress. They also measured levels of glycerol and free fatty acids. These compounds are byproducts of fat breakdown. An increase in glycerol and free fatty acids serves as biological proof that lipolysis is occurring. Additionally, the team monitored heart rate, blood pressure, and body temperature to quantify the physiological stress response.

Subjective and cognitive effects were also assessed. Participants completed computerized tasks designed to measure attention, processing speed, and psychomotor function. Specifically, they performed the Digit Symbol Substitution Task, the Divided Attention Task, and the Paced Serial Addition Task. Participants also used visual scales to rate how “stoned” or “euphoric” they felt, as well as their levels of calmness and nervousness.

The results demonstrated that the cold water immersion successfully induced a stress response. Participants exhibited elevated heart rates and higher systolic blood pressure following the bath. Their body temperature dropped as expected. Subjective ratings confirmed that the participants felt less calm and more nervous after the exposure.

The blood analysis confirmed that the intervention triggered the breakdown of fat. Concentrations of glycerol and free fatty acids increased significantly from the baseline to the post-intervention measurements.

Despite the successful induction of stress and fat breakdown, the researchers found no corresponding increase in blood THC levels. The concentrations of THC remained stable across all three time points. The levels of 11-COOH-THC, a primary metabolite of the drug, also did not rise following the cold water stress. In fact, the concentration of this metabolite tended to decrease slightly over the two-hour monitoring period, likely due to natural clearance from the body.

Cognitive performance remained unaffected by the stressor. The participants showed no signs of impairment on any of the computerized tasks. Their reaction times and accuracy scores did not change significantly after the cold water immersion. This aligns with the lack of change in blood THC concentrations. Without a spike in the drug’s presence in the bloodstream, functional impairment would not be expected.

There was a minor change in subjective sensations. Participants reported a slight increase in feeling “stoned” immediately after the cold bath. However, the researchers note that this effect was negligible: the average rating on a 100-point scale increased by less than three points. The authors suggest this was likely a result of the general physiological shock of the cold water or a placebo effect, rather than true intoxication.

The researchers also examined oral fluid, which is commonly tested in roadside drug screenings. They found that the cold water stress did not produce a surge in positive oral fluid results. This provides evidence that stress-induced fat breakdown is unlikely to cause a false positive on saliva-based drug tests used by law enforcement.

“We found that brief physical stress, like cold water immersion, does not increase blood THC concentrations or cause intoxication in regular cannabis users,” McCartney told PsyPost. “This suggests that everyday stressors are unlikely to meaningfully impact blood THC concentrations. That said, our participants were moderate regular users rather than very heavy or dependent users, so the findings should be interpreted in that context.”

The researchers offered several explanations for why their results differed from the previous study that found exercise increased THC levels. The primary factor appears to be the intensity of the stress. The exercise study involved thirty-five minutes of cycling, which raised heart rates to roughly 130 beats per minute. The cold water immersion in this study only raised heart rates to about 80 beats per minute.

Consequently, the exercise study induced a much stronger metabolic response. The increase in free fatty acids observed in the exercise study was nearly six times greater than the increase observed in the cold water study.

It appears that while cold water immersion causes some fat breakdown, the response may not be intense enough to liberate a detectable amount of stored THC. A real-world stressor would likely need to be severe and prolonged to mimic the effects seen in the exercise study.

Another factor could be the usage habits of the participants. The volunteers in this study were moderate regular users. Individuals with heavier consumption habits might store larger quantities of THC in their fat tissue. It is possible that a similar stressor could trigger a release in very heavy users or those with a higher body mass index.

Studies involving heavier cannabis users or different types of psychological and physical stress would provide a more complete picture. For now, the evidence indicates that brief, moderate physical stress is not a risk factor for sudden cannabis intoxication.

The study, “Does acute stress induced via cold water immersion increase blood THC concentrations in regular cannabis users,” was authored by Danielle McCartney, Jordan Levoux, Rebecca Gordon, Laura Sharman, Katie Walker, Jonathon C. Arnold, and Iain S. McGregor.

Half of the racial mortality gap is explained by stress and inflammation

3 February 2026 at 03:00

Disparities in life expectancy between Black and White populations in the United States remain a persistent public health crisis. A new analysis suggests that a lifetime of accumulated stress and resulting bodily inflammation drives a large portion of this racial mortality gap. The findings appeared in a paper published in JAMA Network Open.

Researchers have sought to understand why Black Americans experience higher rates of chronic illness and earlier death. One prevailing theory involves the concept of “weathering.” This hypothesis posits that constant exposure to social and economic adversity physically erodes health over time. Black Americans often face systemic disadvantages and discrimination that generate chronic psychological pressure. This burden is thought to disrupt the immune system and accelerate aging.

Isaiah D. Spears, a graduate student at Washington University in St. Louis, led the new investigation. Spears worked alongside senior author Ryan Bogdan, who directs the BRAIN lab within the university’s Department of Psychological and Brain Sciences. They aimed to move beyond looking at single stressful events. Instead, they sought to measure the total weight of stress a person carries from childhood into old age.

Spears noted the motivation behind the work in a statement. He said he “saw the stark difference between the rate in which our Black participants in the sample have been dying relative to the white participants.” This observation prompted the team to investigate the biological mechanisms that might connect social experience to physical survival.

The team analyzed data from the St. Louis Personality and Aging Network (SPAN). This longitudinal project recruited late middle-aged adults from the St. Louis metropolitan area. The researchers followed these individuals for a period stretching up to seventeen years. The total sample included 1,554 participants. Approximately one-third of the group identified as Black, and two-thirds identified as White.

The researchers created a cumulative stress score for each person to capture the breadth of their life experiences. This score was not based on a single survey. It combined answers from multiple assessments regarding adverse life events. The team looked at exposure to maltreatment during childhood. They included traumatic events experienced during adulthood. They also accounted for specific stressful life episodes and reported experiences of discrimination.

Socioeconomic status served as another component of the stress score. The researchers factored in household income and education levels. They also looked at the education levels of the participants’ parents. This approach allowed the team to build a comprehensive model of the strain placed on an individual throughout their entire lifespan.
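
As an illustration of how such a composite can be assembled (a sketch under assumptions, not necessarily the authors' exact formula), each component can be standardized and averaged, with reverse-scored items like income flipped so that higher values always indicate more adversity.

```python
import numpy as np

def cumulative_stress_score(components, reverse=()):
    """Combine several adversity measures into one composite score.
    `components` maps a measure name to per-participant values; names
    in `reverse` (e.g. income) are flipped so that higher always means
    more adversity. An illustrative composite, not the study's formula."""
    z_parts = []
    for name, values in components.items():
        values = np.asarray(values, dtype=float)
        z = (values - values.mean()) / values.std()  # standardize
        z_parts.append(-z if name in reverse else z)
    return np.mean(z_parts, axis=0)  # one composite per participant

scores = cumulative_stress_score(
    {"childhood_maltreatment": [0, 2, 5],
     "adult_trauma": [1, 1, 4],
     "discrimination": [0, 3, 3],
     "household_income": [90, 60, 25]},  # reverse-scored
    reverse={"household_income"},
)
print(scores)  # the third, most adverse profile gets the highest score
```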

The study also required biological evidence of physical wear and tear. The researchers analyzed blood samples collected from the participants. They specifically looked for two biomarkers of inflammation. One is called C-reactive protein, or CRP. The other is Interleukin-6, or IL-6. These proteins are immune system messengers.

High levels of these markers indicate that the body is in a state of chronic inflammation. Short-term inflammation helps the body heal from injury or fight infection. Chronic inflammation, however, damages tissues and organs over time. It is a known risk factor for heart disease, cancer, and other age-related conditions.

The researchers then consulted the National Death Index to track mortality. They recorded which participants died during the study period and the cause of death. This allowed them to calculate survival times for Black and White participants.

The data revealed a clear pattern regarding survival. Black participants in the study died sooner than White participants. This aligned with national trends regarding excess death in minority populations. The Black participants also had higher scores for cumulative lifespan stress. Their blood tests showed higher levels of the inflammatory markers CRP and IL-6.

The researchers used statistical models to test whether these factors were connected. They found that the higher stress levels and subsequent inflammation were not merely coincidental. These factors statistically explained a large amount of the difference in survival rates.

The model suggested a specific pathway. Identifying as Black was associated with higher cumulative stress. This stress was associated with higher inflammation. Finally, that inflammation was associated with an increased risk of earlier death.

The combined effect of lifespan stress and inflammation accounted for 49.3 percent of the racial disparity in mortality. This means that roughly half of the excess mortality risk observed in Black participants could be attributed to these specific biological and environmental factors. The researchers found that stress alone and inflammation alone also played roles, but the combined pathway was the most explanatory.
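
In single-mediator terms, the share of a disparity explained is the indirect effect divided by the total effect. The sketch below illustrates that arithmetic; the path coefficients are invented and chosen only to land near the reported figure, not taken from the study.

```python
def proportion_mediated(a, b, c_prime):
    """Single-mediator decomposition: `a` is the path from group to the
    mediator (stress and inflammation), `b` the path from mediator to
    mortality risk, and `c_prime` the direct path bypassing the
    mediator. Returns the indirect effect's share of the total effect."""
    indirect = a * b
    return indirect / (indirect + c_prime)

# Coefficients invented so the mediated share lands near the reported
# 49.3 percent; they are not the study's estimates.
print(proportion_mediated(a=0.70, b=0.70, c_prime=0.50))  # ~0.495
```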

Ryan Bogdan explained the biological logic in a press statement. He noted that “If stress becomes chronic, that could be incorporated into one’s homeostasis; you may become less able to mount your biological systems to respond to acute stress challenges and you may be less able to return to a bodily state that promotes regeneration and restoration.”

The study supports the idea that social inequality becomes biological reality. The stress measured in the study likely stems from structural racism. This includes factors such as unequal access to resources, neighborhood segregation, and economic barriers. These systemic issues create a constant background hum of adversity for many Black Americans.

Spears emphasized the physical toll of this environment. He stated, “Over time continued chronic exposure to stress leads to dysregulation and an earlier breakdown of some of the biological systems in the human body.” This breakdown manifests as the chronic diseases that disproportionately kill Black adults.

The authors noted several limitations to their work. The study took place in the St. Louis region. The specific social dynamics and health disparities there might not perfectly represent every part of the United States. The results might differ in regions with different economic or social structures.

The researchers also pointed out that their study is observational. They used statistical methods to infer a pathway from race to stress to death. However, they cannot definitively prove causation. Other unmeasured variables could be influencing the results.

The findings leave approximately 50 percent of the mortality gap unexplained. The authors suggest that other factors must be involved. These could include exposure to environmental toxicants like air pollution or lead. Differences in access to quality healthcare or trust in medical institutions could also play a role. Genetic or epigenetic factors that are influenced by ancestral stress might also contribute.

The study has implications for public policy and healthcare. It suggests that medical interventions alone cannot solve racial health disparities. Treating the downstream effects, such as high blood pressure or heart disease, is necessary but insufficient. The root causes of stress must be addressed.

Bogdan suggested that the work points toward the need for broader societal changes. He said, “Addressing large-scale societal issues requires concerted efforts enacted over time. That needle can be extremely hard to move.”

Policies that reduce structural discrimination could lower the stress burden on Black communities. This might involve economic reforms, housing policies, or changes in the criminal justice system. Reducing the sources of stress could prevent the chronic inflammation that leads to early death.

The researchers also see a need for better medical treatments for stress. Interventions that help the body manage the physiological response to adversity could save lives. This would be valuable while long-term societal changes are being implemented. Bogdan noted, “Stress exposure will always be there – so we need to devote more efforts to understand the mechanisms through which stress contributes to adverse health outcomes so that factors could be targeted to minimize health risks among those exposed.”

The study, “Cumulative Lifespan Stress, Inflammation, and Racial Disparities in Mortality Between Black and White Adults,” was authored by Isaiah D. Spears, Aaron J. Gorelik, Sara A. Norton, Michael J. Boudreaux, Megan W. Wolk, Jayne Siudzinski, Sarah E. Paul, Mary A. Cox, Cynthia E. Rogers, Thomas F. Oltmanns, Patrick L. Hill, and Ryan Bogdan.

For romantic satisfaction, quantity of affection beats similarity

3 February 2026 at 01:00

A new study suggests that the total amount of warmth shared between partners matters more than whether they express it equally. While similarity often breeds compatibility in many areas of life, researchers found that maximizing affectionate communication yields better relationship quality than simply matching a partner’s lower output. These results were recently published in the journal Communication Studies.

Relationship science often relies on two competing ideas regarding how couples succeed. One concept, known as assortative mating, suggests that people gravitate toward partners with similar traits, backgrounds, and behaviors. This principle implies that a reserved partner might feel most comfortable with an equally quiet companion.

Under that theory, a mismatch in expressiveness could lead to friction or misunderstanding. The logic holds that if one person is highly demonstrative and the other is stoic, the gap could cause dissatisfaction.

Conversely, a framework called affection exchange theory posits that expressing fondness is a fundamental human need that directly fuels bonding. This theory argues that affection acts as a resource that promotes survival and procreation capabilities.

Kory Floyd, a researcher at Washington State University, led the investigation to resolve which mechanism plays a larger role in romantic satisfaction. Floyd and his colleagues sought to determine if mismatched couples suffer from imbalance or if the sheer volume of warmth compensates for disparity.

The research team recruited 141 heterosexual couples from across the United States to participate in the study. These pairs represented a diverse range of ages, ethnic backgrounds, and socioeconomic levels. The researchers looked at the couple as a unit, rather than just surveying isolated individuals.

Each participant completed detailed surveys designed to measure their typical behaviors and feelings. They reported their “trait” affectionate communication, which refers to their general tendency to express and receive warmth. This included verbal affirmation, nonverbal gestures like holding hands, and acts of support.

Participants also rated the quality of their relationship across several specific dimensions. These metrics included feelings of trust, intimacy, passion, and overall satisfaction. The researchers then utilized complex statistical models to analyze how these factors influenced one another.

They examined “actor effects,” which measure how a person’s own behavior influences their own happiness. The analysis revealed that for both men and women, being affectionate predicted higher personal satisfaction. When an individual expressed more warmth, they generally felt better about the relationship.

The team also looked for “partner effects,” determining how one person’s actions change their partner’s experience. The study produced evidence that an individual’s expressions of warmth positively impacted their partner’s view of the relationship in about half of the categories tested.

However, the primary focus was comparing the absolute level of affection against the relative similarity of affection. The researchers created a mathematical comparison to pit the “birds of a feather” hypothesis against the “more is better” hypothesis.

The data showed that the absolute level of affectionate communication was a far stronger predictor of relationship health than the relative difference between partners. In simpler terms, a couple where one person is highly demonstrative and the other is moderate scores higher on satisfaction than a couple where both are equally reserved.
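
The logic of that comparison can be illustrated with a toy regression on simulated data (not the study's dyadic analysis), predicting relationship quality from a couple's total affection and from the absolute gap between partners.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
partner_a = rng.normal(5, 1.5, n)    # each partner's trait affection
partner_b = rng.normal(5, 1.5, n)

total = partner_a + partner_b        # the "more is better" predictor
gap = np.abs(partner_a - partner_b)  # the "birds of a feather" predictor

# Simulate satisfaction driven mainly by total affection, per the findings
satisfaction = 0.8 * total - 0.1 * gap + rng.normal(0, 1, n)

X = np.column_stack([np.ones(n), total, gap])
coef, *_ = np.linalg.lstsq(X, satisfaction, rcond=None)
print(dict(zip(["intercept", "total", "gap"], coef.round(2))))
# The weight on `total` dwarfs the weight on `gap`.
```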

While similarity did not drag relationship scores down, it simply did not provide the same boost as high overall warmth. The results indicated that for most metrics of quality, the total volume of affection matters more than who fills the bucket.

This challenges the notion that finding a “mirror image” partner is the key to happiness. Colin Hesse, a co-author from Oregon State University, noted the distinction in the team’s press release.

Hesse stated, “The study does not discount the importance of similarity in many aspects of romantic relationships but instead highlights once again the specific importance of affectionate communication to the success and development of those relationships.”

The benefits appear to stem from the stress-relieving properties of positive touch and verbal affirmation. A high-affection environment creates a buffer against conflict and builds a reservoir of goodwill.

Hesse explained, “Generally speaking, affectionate communication is beneficial both for the partner who gives it and the partner receiving it.” This suggests that even if one partner does the heavy lifting, the union still thrives.

The findings offer reassurance to couples who worry about having different love languages or expressive styles. If one partner enjoys public displays of affection and the other prefers quiet support, the relationship is likely still healthy as long as the total affection remains high.

There were, however, specific exceptions in the data regarding feelings of love and commitment. For these two specific variables, the total amount of affection was not more influential than the similarity between partners. This nuance suggests that while satisfaction and passion are driven by volume, the core sense of commitment might operate differently.

While the study offers strong evidence for the power of affection, there are limitations to consider. The sample consisted entirely of heterosexual couples, meaning the dynamics might differ in LGBTQ+ relationships. The researchers relied on self-reported perceptions, which can sometimes be biased by a person’s current mood or memory.

Additionally, the study captures a snapshot in time rather than following couples over years. Future research could investigate how these dynamics shift over decades of marriage. It would be useful to see if the need for matched affection levels increases as a relationship matures.

Scientists might also look at specific types of affection to see if verbal or physical expressions carry different weights. For now, the message to couples is that increasing warmth is rarely a bad strategy.

Hesse concluded in the press release, “We would not prescribe specific affectionate behaviors but would in general counsel people to engage in affectionate communication.”

The study, “Affectionate Communication in Romantic Relationships: Are Relative Levels or Absolute Levels More Consequential?,” was authored by Kory Floyd, Lisa van Raalte, and Colin Hesse.

Before yesterday — PsyPost – Psychology News

The surprising reason why cancer patients may be less likely to get Alzheimer’s

2 February 2026 at 23:00

Cancer and Alzheimer’s disease are two of the most feared diagnoses in medicine, but they rarely strike the same person. For years, epidemiologists have noticed that people with cancer seem less likely to develop Alzheimer’s, and those with Alzheimer’s are less likely to get cancer, but nobody could explain why.

A new study in mice suggests a surprising possibility: certain cancers may actually send a protective signal to the brain that helps clear away the toxic protein clumps linked to Alzheimer’s disease.

Alzheimer’s is characterised by sticky deposits of a protein called amyloid beta that build up between nerve cells in the brain. These clumps, or plaques, interfere with communication between nerve cells and trigger inflammation and damage that slowly erodes memory and thinking.

In the new study, scientists implanted human lung, prostate and colon tumours under the skin of mice bred to develop Alzheimer‑like amyloid plaques. Left alone, these animals reliably develop dense clumps of amyloid beta in their brains as they age, mirroring a key feature of the human disease.

But when the mice carried tumours, their brains stopped accumulating the usual plaques. In some experiments, the animals’ memory also improved compared with Alzheimer‑model mice without tumours, suggesting that the change was not just visible under the microscope.

The team traced this effect to a protein called cystatin‑C that was being pumped out by the tumours into the bloodstream. The new study suggests that, at least in mice, cystatin‑C released by tumours can cross the blood–brain barrier – the usually tight border that shields the brain from many substances in the circulation.

Once inside the brain, cystatin‑C appears to latch on to small clusters of amyloid beta and mark them for destruction by the brain’s resident immune cells, called microglia. These cells act as the brain’s clean‑up crew, constantly patrolling for debris and misfolded proteins.

In Alzheimer’s, microglia seem to fall behind, allowing amyloid beta to accumulate and harden into plaques. In the tumour‑bearing mice, cystatin‑C activated a sensor on microglia known as Trem2, effectively switching them into a more aggressive, plaque‑clearing state.

Surprising trade-offs

At first glance, the idea that a cancer could “help” protect the brain from dementia sounds almost perverse. Yet biology often works through trade-offs, where a process that is harmful in one context can be beneficial in another.

In this case, the tumour’s secretion of cystatin‑C may be a side‑effect of its own biology that happens to have a useful consequence for the brain’s ability to handle misfolded proteins. It does not mean that having cancer is good, but it does reveal a pathway that scientists might be able to harness more safely.

The study slots into a growing body of research suggesting that the relationship between cancer and neurodegenerative diseases is more than a statistical quirk. Large population studies have reported that people with Alzheimer’s are significantly less likely to be diagnosed with cancer, and vice versa, even after accounting for age and other health factors.

This has led to the idea of a biological seesaw, where mechanisms that drive cells towards survival and growth, as in cancer, may push them away from the pathways that lead to brain degeneration. The cystatin‑C story adds a physical mechanism to that picture.

However, the research is in mice, not humans, and that distinction matters. Mouse models of Alzheimer’s capture some features of the disease, particularly amyloid plaques, but they do not fully reproduce the complexity of human dementia.

We also do not yet know whether human cancers in real patients produce enough cystatin‑C, or send it to the brain in the same way, to have meaningful effects on Alzheimer’s disease risk. Still, the discovery opens intriguing possibilities for future treatment strategies.

One idea is to develop drugs or therapies that mimic the beneficial actions of cystatin‑C without involving a tumour at all. That could mean engineered versions of the protein designed to bind amyloid beta more effectively, or molecules that activate the same pathway in microglia to boost their clean‑up capacity.

The research also highlights how interconnected diseases can be, even when they affect very different organs. A tumour growing in the lung or colon might seem far removed from the slow build-up of protein deposits in the brain, yet molecules released by that tumour can travel through the bloodstream, cross protective barriers and change the behaviour of brain cells.

For people living with cancer or caring for someone with Alzheimer’s today, this work will not change treatment immediately. But the study does offer a more hopeful message: by studying even grim diseases like cancer in depth, scientists can stumble on unexpected insights that point towards new ways to keep the brain healthy in later life.

Perhaps the most striking lesson is that the body’s defences and failures are rarely simple. A protein that contributes to disease in one organ may be used as a clean‑up tool in another, and by understanding these tricks, researchers may be able to use them safely to help protect the ageing human brain.


This article is republished from The Conversation under a Creative Commons license. Read the original article.


Early maternal touch may encourage sympathy and helping behaviors in adolescence

2 February 2026 at 21:00

A study in China found that junior high school students who recall more maternal touch in childhood tend to show more prosocial behavior. Attachment to mothers might mediate this relationship. The paper was published in the International Journal of Adolescence and Youth.

Maternal touch refers to physical contact initiated by a mother toward her child, such as holding, cuddling, skin-to-skin contact, or gentle stroking, especially during early development. Although it was long underemphasized in developmental research, recent studies show that maternal touch plays a crucial role not only in infants’ physical growth but also in cognitive, emotional, and social development.

Frequent maternal touch has been linked to better psychomotor development, reduced stress responses, improved emotional regulation, and stronger mother–child bonding. Health organizations now formally recognize its importance, as reflected in recommendations for immediate skin-to-skin contact after birth, particularly for preterm and low-birthweight infants.

Maternal touch is thought to support early attachment formation by providing comfort, safety, and a rewarding relationship experience. Secure attachment, in turn, is associated with greater empathy, emotional stability, and prosocial behavior later in life.

Theoretical models suggest that early tactile experiences may scaffold the development of human prosociality by shaping how children relate to others. Maternal touch also stimulates hormonal and neural processes that support caregiving, breastfeeding, and emotional connection.

Study authors Kuo Zhang and Jinlong Su wanted to explore the links between maternal touch experiences and prosocial behavior. They also wanted to see whether a person’s attachment pattern helps account for this link. They chose to focus on adolescents because prosociality becomes relatively stable at that age. Prosociality is a tendency to display voluntary behaviors intended to benefit others, such as helping, sharing, cooperating, and showing empathy.

Study participants were 572 students from a public junior high school in western China. They were between 12 and 16 years of age, with an average age of 13.56 years. Approximately 50% of them were boys, and 61% were from rural areas.

Students completed an assessment of maternal touch experiences constructed by the study authors based on existing measures. It consisted of three items: ‘my mother usually held me in her arms when I was a little child’, ‘my mother usually held my hand when I was a little child’, and ‘my mother usually patted me as I fell asleep when I was a little child’.

They also completed assessments of prosocial behavior (the Prosocial Tendencies Measure), empathic concern (which the study authors refer to as “sympathy,” measured using the Interpersonal Reactivity Index), and mother–child affective attachment (the Experiences in Close Relationships – Relationship Structures Questionnaire).

Results showed that participants who reported more maternal touch in childhood tended to score higher on the prosocial behavior assessment, specifically regarding compliant prosocial behavior. Their attachment with their mother tended to be more secure and they tended to report more empathic concern. Empathic concern is the tendency to experience feelings of compassion, warmth, and concern for others who are distressed or in need.

Study authors tested a statistical model proposing that maternal touch leads to more secure affective attachment (i.e., less attachment anxiety and avoidance), and that this type of attachment, in turn, leads to more compliant prosocial behavior and more empathic concern. Results showed that attachment fully mediated these relationships.

“Our findings provided an initial empirical support for the touch-scaffolded prosociality model and suggested the importance of tactile interactions between mothers and children in daily parenting practice,” the study authors concluded.

The study contributes to the scientific understanding of the importance of maternal touch in early childhood for children’s psychosocial development. However, it should be noted that information about maternal touch in early childhood came from self-reports based on recall in adolescence. This means that recall and reporting bias might have affected the results. Additionally, the design of the study does not allow any causal inferences to be derived from the results.

The paper, “Early maternal touch predicts prosocial behaviour in adolescents: the mediation role of attachment,” was authored by Kuo Zhang and Jinlong Su.

Brain scans reveal neural connectivity deficits in Long COVID and ME/CFS

2 February 2026 at 19:00

New research suggests that the brains of people with Long COVID and Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) struggle to communicate effectively during mentally tiring tasks. While healthy brains appear to tighten their neural connections when fatigued, these patients show disrupted or weakened signals between key brain areas. This study was published in the Journal of Translational Medicine.

ME/CFS and Long COVID are chronic conditions that severely impact the quality of life for millions of people. Patients often experience extreme exhaustion and “brain fog,” which refers to persistent difficulties with memory and concentration.

A defining feature of these illnesses is post-exertional malaise. This describes a crash in energy and a worsening of symptoms that follows even minor physical or mental effort. Doctors currently lack a definitive biological test to diagnose these conditions. This makes it difficult to distinguish them from one another or from other disorders with similar symptoms.

The research team sought to identify objective biological markers of these illnesses. Maira Inderyas, a PhD candidate at the National Centre for Neuroimmunology and Emerging Diseases at Griffith University in Australia, led the investigation. She worked alongside senior researchers including Professor Sonya Marshall-Gradisnik. They aimed to understand how the brain behaves when pushed to the limit of its cognitive endurance.

Professor Marshall-Gradisnik noted the shared experiences of these patient groups. “The symptoms include cognitive difficulties, such as memory problems, difficulties with attention and concentration, and slowed thinking,” she said. The team hypothesized that these subjective feelings of brain fog would correspond to visible changes in brain activity.

To test this, the researchers utilized a 7 Tesla MRI scanner. This device is much more powerful than the standard scanners found in most hospitals. The high magnetic field allows for extremely detailed imaging of deep brain structures. It can detect subtle changes in blood flow that weaker scanners might miss.

The study involved nearly eighty participants. These included thirty-two individuals with ME/CFS and nineteen with Long COVID. A group of twenty-seven healthy volunteers served as a control group for comparison.

While inside the scanner, participants performed a cognitive challenge known as the Stroop task. This is a classic psychological test that requires focus and impulse control. Participants must identify the color of a word’s ink while ignoring the word itself. For example, the word “RED” might appear on the screen written in blue ink. The participant must select “blue” despite their brain automatically reading the word “red.”

“The task, called a Stroop task, was displayed to the participants on a screen during the scan, and required participants to ignore conflicting information and focus on the correct response, which places high demands on the brain’s executive function and inhibitory control,” Ms. Inderyas said.
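
The trial logic of a Stroop task is simple enough to sketch in code. The snippet below is a toy illustration, not the stimulus software used in the study.

```python
import random

COLORS = ["red", "blue", "green", "yellow"]

def stroop_trial(rng=random):
    """Generate one trial: a color word drawn in a (possibly different)
    ink color. The correct response is always the ink, not the word."""
    word, ink = rng.choice(COLORS), rng.choice(COLORS)
    return {"word": word.upper(), "ink": ink, "congruent": word == ink}

def check_response(trial, response):
    return response == trial["ink"]

trial = {"word": "RED", "ink": "blue", "congruent": False}
print(check_response(trial, "blue"))  # True: named the ink color
print(check_response(trial, "red"))   # False: read the word instead
```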

The researchers structured the test to induce mental exhaustion. Participants performed the task in two separate sessions. The first session was designed to build up cognitive fatigue. The second session took place ninety seconds later, after fatigue had fully set in. This “Pre” and “Post” design allowed the scientists to see how the brain adapts to sustained mental effort.

The primary measurement used in this study was functional connectivity. This concept refers to how well different regions of the brain synchronize their activity. When two brain areas activate at the same time, it implies they are communicating or working together.
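
In computational terms, functional connectivity is often estimated as the correlation between two regions' activity time series. The sketch below illustrates the idea with synthetic signals standing in for fMRI time courses.

```python
import numpy as np

rng = np.random.default_rng(42)
shared = rng.normal(size=200)  # a common driving signal

# Two regions that partly track the same signal are "connected"
region_a = shared + 0.5 * rng.normal(size=200)
region_b = shared + 0.5 * rng.normal(size=200)
region_c = rng.normal(size=200)  # an unrelated region

# Pearson correlation as a simple functional-connectivity estimate
print(np.corrcoef(region_a, region_b)[0, 1])  # high: strong coupling
print(np.corrcoef(region_a, region_c)[0, 1])  # near zero: weak coupling
```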

The results revealed clear differences between the healthy volunteers and the patient groups. In healthy participants, the brain responded to the fatigue of the second session by increasing its connectivity. Connections between deep brain regions and the cerebellum became stronger. This suggests that a healthy brain actively recruits more resources to maintain performance when it gets tired. It becomes more efficient and integrated under pressure.

The pattern was markedly different for patients with Long COVID. They displayed reduced connectivity between the nucleus accumbens and the cerebellum. The nucleus accumbens is a central part of the brain’s reward and motivation system. A lack of connection here might explain the sense of apathy or lack of mental drive patients often report.

Long COVID patients also showed an unusual increase in connectivity between the hippocampus and the prefrontal cortex. The researchers interpret this as a potential compensatory mechanism. The brain may be trying to bypass damaged networks to keep functioning. It is attempting to use memory centers to help with executive decision-making.

Patients with ME/CFS showed their own distinct patterns of dysfunction. They exhibited increased connectivity between specific areas of the brainstem, such as the cuneiform nucleus and the medulla. These regions are responsible for controlling automatic body functions. This finding aligns with the autonomic nervous system issues frequently seen in ME/CFS patients.

The researchers also looked at how these brain patterns related to the patients’ medical history. In the ME/CFS group, the length of their illness correlated with specific connectivity changes. As the duration of the illness increased, communication between the hippocampus and cerebellum appeared to weaken. This suggests a progressive change in brain function over time.

Direct comparisons between the groups highlighted the extent of the impairment. When compared to the healthy controls, both patient groups showed signs of neural disorganization. The healthy brain creates a “tight” network to handle stress. The patient brains appeared unable to form these robust connections.

Instead of tightening up, the networks in sick patients became looser or dysregulated. This failure to adapt dynamically likely contributes to the cognitive dysfunction known as brain fog. The brain cannot summon the necessary energy or coordination to process information efficiently.

“The scans show changes in the brain regions which may contribute to cognitive difficulties such as memory problems, difficulty concentrating, and slower thinking,” Ms. Inderyas said. This provides biological validation for symptoms that are often dismissed as psychological.

The study does have some limitations that must be considered. The number of participants in each group was relatively small. This is common in studies using such advanced and expensive imaging technology. However, it means the results should be replicated in larger groups to ensure accuracy.

The researchers also noted that they lacked complete medical histories regarding prior COVID-19 infections for the ME/CFS group. It is possible that some ME/CFS patients had undiagnosed COVID-19 in the past. This could potentially blur the lines between the two conditions.

Future studies will need to follow patients over a longer period. Longitudinal research would help determine if these brain changes evolve or improve over time. It would also help clarify if these connectivity issues are a cause of the illness or a result of it.

Despite these caveats, the use of 7 Tesla fMRI offers a promising new direction for research. It has revealed abnormalities that standard imaging could not detect. These findings could eventually lead to new diagnostic tools. Identifying specific broken circuits may also help researchers target treatments more effectively.

The study, “Distinct functional connectivity patterns in myalgic encephalomyelitis and long COVID patients during cognitive fatigue: a 7 Tesla task-fMRI study,” was authored by Maira Inderyas, Kiran Thapaliya, Sonya Marshall-Gradisnik, and Leighton Barnden.

The neural path from genes to intelligence looks different depending on your age

2 February 2026 at 17:00

New research published in Scientific Reports provides evidence that the path from genetic predisposition to general intelligence travels through specific, frequency-dependent networks in the brain. The findings indicate that these neural pathways are not static but appear to shift significantly between early adulthood and older age.

Intelligence is a trait with a strong biological basis. Previous scientific inquiries have established that genetic factors account for approximately 50% of the differences in intelligence between individuals. Genome-wide association studies have identified hundreds of specific variations in the genetic code that correlate with cognitive ability.

These variations are often aggregated into a metric known as a polygenic score, which estimates an individual’s genetic propensity for a certain trait. Despite this knowledge, the specific biological mechanisms that translate a genetic sequence into the ability to reason, plan, and solve problems remain unclear.
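
In computational terms, a polygenic score is just a weighted sum: each variant’s allele count (0, 1, or 2 copies of the effect allele) is multiplied by the effect size estimated in genome-wide studies, and the products are summed. A minimal sketch, with entirely hypothetical variants and weights:

```python
import numpy as np

# Hypothetical GWAS effect sizes (betas) for three variants
effect_sizes = np.array([0.021, -0.013, 0.008])

# One person's allele counts: 0, 1, or 2 copies of each effect allele
allele_counts = np.array([2, 0, 1])

# The polygenic score is the weighted sum; real scores span thousands of variants
polygenic_score = float(np.dot(allele_counts, effect_sizes))
print(f"{polygenic_score:.3f}")  # 0.050
```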

Scientists have hypothesized that the brain’s functional connectivity acts as the intermediary between genes and behavior. Functional connectivity refers to how well different regions of the brain communicate with one another. While past studies using functional magnetic resonance imaging (fMRI) have attempted to map these connections, the results have been inconsistent.

fMRI is excellent at locating where brain activity occurs but is less precise at measuring when it occurs. The authors of the new study opted to use electroencephalography (EEG). This technology records the electrical activity of the brain with high temporal resolution, allowing researchers to observe the speed and rhythm of neural communication.

“We already know that intelligence is highly heritable, which is why we are especially interested in the role of the brain as a ‘neural pathway’ linking genetic variation to cognitive ability,” said study author Rebecca Engler of the Leibniz Research Centre for Working Environment and Human Factors (IfADo).

“The lack of integrative approaches combining genetics, brain network organization, and intelligence motivated us to take a closer look at resting-state EEG markers, with a particular focus on differences between young and older adults.”

“In a recent large-scale study (Metzen et al., 2024) using resting-state fMRI, we found no robust association between functional architecture of specific brain regions and intelligence. This motivated our shift toward resting-state EEG, which captures brain dynamics at much higher temporal resolution. EEG measures brain activity as oscillations across different frequencies, allowing us to study frequency-specific brain networks that may carry distinct information relevant to cognitive ability.”

For their study, the researchers recruited a representative sample of 434 healthy adults from the Dortmund Vital Study. The participants were categorized into two distinct age groups. The young adult group consisted of 199 individuals between the ages of 20 and 40. The older adult group included 235 individuals aged 40 to 70.

To measure intelligence, the research team administered a comprehensive battery of cognitive tests. These assessments covered a wide range of mental capabilities, including verbal memory, processing speed, attention span, working memory, and logical reasoning. The scores from these tests were combined to calculate a single factor of general intelligence, often denoted as g. This factor serves as a reliable summary of an individual’s overall cognitive performance.
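
One common way to derive such a general factor is to standardize the test scores and extract their first principal component. The sketch below illustrates the mechanics on made-up data; the study’s own factor model may well have differed:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical battery: rows are participants, columns are five test scores
rng = np.random.default_rng(0)
scores = rng.normal(size=(434, 5))

# Standardize each test, then take the first principal component as a stand-in for g
z = StandardScaler().fit_transform(scores)
g = PCA(n_components=1).fit_transform(z).ravel()
# Note: the sign and scale of a principal component are arbitrary
```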

Genetic data were obtained through blood samples. The researchers analyzed the DNA of each participant to compute a polygenic score for intelligence. This score was calculated based on summary statistics from previous large-scale genetic studies. It represents the cumulative effect of many small genetic variations that are statistically associated with higher cognitive function.

Brain activity was recorded while participants sat quietly with their eyes closed for two minutes. This “resting-state” EEG data allowed the researchers to analyze the intrinsic functional architecture of the brain.

The team employed a method known as graph theory to quantify the organization of the brain networks. In this framework, the brain is modeled as a collection of nodes (regions) and edges (connections).

The researchers calculated metrics such as “efficiency,” which measures how easily information travels across the network, and “clustering,” which measures how interconnected specific local neighborhoods of the brain are. These metrics were analyzed across different frequency bands, including delta, theta, alpha, and beta waves.
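
For readers curious about these graph metrics, both can be computed directly with the networkx library. The toy random graph below simply stands in for the kind of thresholded EEG connectivity network the researchers would have analyzed:

```python
import networkx as nx

# Toy random graph standing in for a thresholded EEG connectivity network
G = nx.erdos_renyi_graph(n=20, p=0.2, seed=1)

# Global efficiency: average inverse shortest-path length, i.e. ease of information flow
efficiency = nx.global_efficiency(G)

# Average clustering: how interconnected each node's local neighborhood is
clustering = nx.average_clustering(G)

print(f"efficiency={efficiency:.3f}, clustering={clustering:.3f}")
```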

The study employed complex statistical modeling to test for mediation effects. A mediation analysis determines whether a third variable—in this case, brain connectivity—explains the relationship between an independent variable (genetics) and a dependent variable (intelligence). The researchers looked for instances where the polygenic score predicted a specific brain network property, which in turn predicted the intelligence score.
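
In its simplest product-of-coefficients form, a mediation analysis fits two regressions: genetics predicting the brain measure (path a), and the brain measure predicting intelligence while controlling for genetics (path b). The indirect, or mediated, effect is a times b. Here is a minimal illustration on simulated data, not the study’s actual model:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 434
genes = rng.normal(size=n)                           # polygenic score (X)
brain = 0.4 * genes + rng.normal(size=n)             # network property (mediator M)
iq = 0.5 * brain + 0.1 * genes + rng.normal(size=n)  # intelligence (Y)

# Path a: genetics -> brain measure
a = sm.OLS(brain, sm.add_constant(genes)).fit().params[1]

# Path b: brain measure -> intelligence, controlling for genetics
b = sm.OLS(iq, sm.add_constant(np.column_stack([brain, genes]))).fit().params[1]

print(f"indirect (mediated) effect = a*b = {a * b:.3f}")
```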

The results showed that global measures of brain efficiency did not mediate the link between genetics and intelligence. This suggests that simply having a “more efficient” brain overall is not the primary mechanism by which genes influence cognition.

In other words, “there is no single brain region responsible for intelligence,” Engler told PsyPost. “Instead, cognitive ability relies on efficient and dynamic communication across a broad network of regions throughout the brain, and this network organization changes as we age.”

The specific neural pathways identified varied substantially by age. For young adults, the connection between genetics and intelligence was mediated by brain activity in the beta and theta frequency bands. These effects were predominantly located in the frontal and parietal regions of the brain.

The frontal and parietal lobes are areas traditionally associated with executive functions, such as decision-making, working memory, and attention. This aligns with prominent theories that attribute intelligence to the efficient integration of information between these higher-order brain regions.

But for older adults, the mediating effects were found primarily in the low alpha and theta frequency bands. Furthermore, the specific brain regions involved shifted away from the frontal cortex. The analysis identified the superior parietal lobule and the primary visual cortex as key mediators. These areas are largely responsible for sensory processing and integration.

This shift suggests that the neural architecture supporting intelligence evolves as people age. In younger adulthood, cognitive ability appears to rely heavily on the rapid, high-frequency communication of executive control networks in the front of the brain. As the brain ages, it may undergo a process of reorganization.

The reliance on posterior brain regions and slower frequency bands in older adults implies a strategy that prioritizes the integration of sensory information. This finding is consistent with the concept of neural dedifferentiation, where the aging brain recruits broader, less specialized networks to maintain performance.

The researchers also found that certain brain areas, such as the primary visual cortex, played a consistent role across both groups, though the direction of the effect varied. In both young and older adults, higher nodal efficiency in the visual cortex was associated with higher intelligence.

However, a higher genetic predisposition for intelligence was associated with lower efficiency in this region. This complex relationship highlights that the genetic influence on the brain is not always a straightforward enhancement of connectivity.

“When comparing the two age groups, we were surprised that the brain regions consistently mediating the link between genetic variation and intelligence are primarily involved in sensory processing and integration,” Engler explained. “One might expect such stable neural anchors to be associated with higher-order executive functions like reasoning or planning, typically located in frontal networks. Instead, our results suggest that sensory and associative regions play a more central role in maintaining cognitive ability than is typically emphasized in dominant models of intelligence.”

As with all research, there are some limitations to note. The study utilized a cross-sectional design, meaning it compared two different groups of people at a single point in time. It did not follow the same individuals as they aged.

Consequently, it is not possible to definitively prove that the observed differences are caused by the aging process itself rather than generational differences. Longitudinal studies that track participants over decades would be necessary to confirm the shift in neural strategies.

The study focused exclusively on resting-state EEG. While intrinsic brain activity provides a baseline of functional organization, it does not capture the brain’s dynamic response to active problem-solving.

It is possible that different network patterns would emerge if participants were recorded while performing the cognitive tests. Future research could investigate task-based connectivity to see if it offers a stronger explanatory link between genetics and performance.

“A crucial next step would be to replicate our findings in independent samples to ensure their robustness and generalizability,” Engler said. “Furthermore, it would be interesting to investigate age-related changes in functional network organization from a longitudinal rather than from a cross-sectional perspective. A further long-term goal is to investigate the triad of genetic variants, the brain’s functional connectivity, and intelligence by analyzing task-based EEG data rather than resting-state EEG data.”

The study, “Electrophysiological resting-state signatures link polygenic scores to general intelligence,” was authored by Rebecca Engler, Christina Stammen, Stefan Arnau, Javier Schneider Penate, Dorothea Metzen, Jan Digutsch, Patrick D. Gajewski, Stephan Getzmann, Christoph Fraenz, Jörg Reinders, Manuel C. Voelkle, Fabian Streit, Sebastian Ocklenburg, Daniel Schneider, Michael Burke, Jan G. Hengstler, Carsten Watzl, Michael A. Nitsche, Robert Kumsta, Edmund Wascher, and Erhan Genç.

Data from 560,000 students reveals a disturbing mental health shift after 2016

2 February 2026 at 15:00

A comprehensive analysis of data spanning fifteen years reveals that depression symptoms have increased among college students in the United States, with the most severe rises occurring after 2016. The findings indicate that while distress is growing across the board, the escalation is particularly steep for women, racial minorities, and students facing financial difficulties. These results were published in the Journal of Affective Disorders.

Mental health professionals recognize depression as a debilitating condition that can severely impact daily functioning. Rates of the disorder have been climbing for decades, with young adults showing some of the highest prevalence.

While the general increase in diagnosis rates is well-documented, less is understood about how specific symptoms manifest and change over time within different groups. Some theories suggest that exposure to stress varies by demographic, potentially altering how depression presents itself.

Past research has debated whether cultural background or social status influences the expression of distress. Some scholars propose that individuals from minority groups or lower socioeconomic backgrounds may express depression through physical symptoms, such as fatigue or sleep issues.

Others suggest that people in Western cultures are more likely to express distress through emotional or cognitive symptoms, like guilt or hopelessness. The authors of this new study aimed to clarify these distinctions by analyzing trends at the level of individual symptoms rather than just overall diagnosis scores.

“The motivation for this study was an interest in understanding if the increase in prevalence of youth depression over the past two decades, which is a known phenomenon for which we don’t have definite answers, was also reflected in college students,” explained study author Carol Vidal, an associate professor at Johns Hopkins Medicine and author of Status and Social Comparisons Among Adolescents.

“I am a pediatric psychiatrist and personally had an experience with one of my patients a few years ago who made me think about looking at item level changes. This young patient had presented to us for depression initially, but after treatment, her depression improved in our clinical assessment and by her report, but she continued to have elevated PHQ-9 scores.”

PHQ-9, or the Patient Health Questionnaire-9, is a screening tool that asks participants to rate how often they have been bothered by nine specific problems over the past two weeks. The items cover a range of experiences, including low interest in activities, feelings of failure, trouble concentrating, and thoughts of self-harm. Responses are rated on a scale from zero to three.
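
Scoring the questionnaire is simple arithmetic: the nine ratings are summed to a total between 0 and 27, with conventional cutoffs of 5, 10, 15, and 20 marking mild through severe depression. A small sketch, using hypothetical item values:

```python
def phq9_severity(items):
    """Sum nine 0-3 item ratings and map the total to a conventional severity band."""
    assert len(items) == 9 and all(0 <= i <= 3 for i in items)
    total = sum(items)
    for cutoff, label in [(20, "severe"), (15, "moderately severe"),
                          (10, "moderate"), (5, "mild")]:
        if total >= cutoff:
            return total, label
    return total, "minimal"

print(phq9_severity([1, 1, 2, 2, 1, 0, 1, 0, 0]))  # (8, 'mild')
```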

“When looking closely at her repeated PHQ-9s, her therapist and I saw that those scores were driven mostly by sleep, appetite, and concentration problems, but her mood was much better and she did not present anhedonia, which are core symptoms of depression,” Vidal said.

“We thought maybe the increase in depression at the population level was driven by certain items and decided to explore PHQ-9 changes over time by item. We also wanted to see if the global increases in youth depression were driven by increases in certain demographic groups.”

For their new study, the researchers utilized data from the Healthy Minds Study. This is a large, nationally representative survey of undergraduate and graduate students. The dataset included responses from approximately 560,000 students across 450 colleges and universities collected between 2007 and 2022. The primary measure used was the PHQ-9.

The researchers used statistical models to estimate the trends of depression symptoms over time. They examined how these trends interacted with sex, race, ethnicity, and self-reported financial stress. The analysis focused on identifying which specific symptoms were rising the fastest and which groups were most affected.

The analysis showed that average scores for every single item on the PHQ-9 increased from 2007 to 2022. The total depression scores remained relatively stable from 2007 to 2015 but began a meaningful ascent starting in 2016. By 2022, the average student score was approaching the threshold for moderate depression.

“The main findings are that depression increased for all PHQ-9 items between 2007 and 2022, but that the most meaningful increase was after 2016. I was surprised to find the steep increase after 2016. Other epidemiological studies find steeper increases starting in 2012,” Vidal told PsyPost.

The increases in specific symptoms were alarming. The most dramatic rise was in suicidal ideation, defined as thoughts of being better off dead or of hurting oneself. This symptom increased by 153.9 percent over the study period.

Other symptoms also showed sharp rises. Psychomotor changes, which involve moving or speaking noticeably slowly or, at the other extreme, being fidgety and restless, increased by 79.6 percent. Trouble concentrating on things like reading or watching television rose by 77.7 percent. Feelings of worthlessness increased by 66 percent.

When breaking the data down by sex, clear disparities emerged. Women and intersex students reported steeper annual increases in nearly every symptom compared to men. Intersex students showed the most rapid growth in fatigue, psychomotor changes, and suicidal ideation. While men also experienced increases, the rate of change was slower.

The study also revealed nuanced differences regarding race and ethnicity. For several physical symptoms, White students showed flat or declining trends, while other racial groups reported increases. For instance, sleep problems remained stable among White students but rose among all other groups. The steepest rise in sleep disturbances was observed among Hispanic students.

Similar patterns appeared for symptoms like fatigue and appetite changes. White students did not show aggregate increases in fatigue, yet every other racial group did. This suggests that the burden of physical symptoms of depression is growing disproportionately among racial and ethnic minority students.

However, cognitive symptoms showed more uniformity. Feelings of worthlessness and depressed mood increased at similar rates across all racial and ethnic groups. Most notably, suicidal ideation increased across all groups without significant differences in the rate of growth by race. This indicates that the most severe indicator of distress is rising universally, regardless of racial background.

Financial stress also proved to be a powerful predictor of worsening mental health. The researchers categorized students based on whether they found their financial situation to be stressful. Students who reported that their finances were “always stressful” had higher levels of all depression symptoms.

Furthermore, financially stressed students experienced faster yearly increases in symptoms compared to those who reported their finances were never stressful. This was particularly true for symptoms like poor appetite, feelings of worthlessness, and suicidal ideation. The gap between financially secure and financially stressed students appears to be widening over time.

“Students experiencing financial stress had higher levels of all symptoms of depression and faster yearly increases compared to those without financial stress, which is important to consider in an environment of economic uncertainty,” Vidal said.

The sharp increase in suicidal ideation is a major concern highlighted by the data. Although the absolute mean scores for this item remain lower than for common symptoms like fatigue, the rate of change is much faster. This suggests a need for targeted suicide prevention strategies on college campuses that go beyond general mental health support.

The findings challenge the idea that depression trends are monolithic. The variation in symptom trajectories suggests that different groups are experiencing the rising tide of mental health issues in distinct ways. The consistent pattern of higher increases among women and racial minority groups points to widening disparities in mental health burdens.

“I’d like to point out that popular explanations about depression causes tend to be simplistic (i.e., social media) and that we don’t really know well if other factors like economic or political changes, or even things like a decrease in stigma, are also contributors,” Vidal told PsyPost.

One limitation of the study is that it relied on cross-sectional data. This means different students were surveyed each year, rather than tracking the same individuals over time. The results reflect population-level changes but cannot confirm individual trajectories of illness.

Additionally, the data is self-reported. This introduces the possibility that changes in how people perceive or report mental health issues could influence the results. For example, a decrease in stigma might lead to more students being willing to admit to symptoms they previously would have hidden.

The study focused exclusively on college students. The experiences of young adults who do not attend college may differ. However, given the large proportion of young adults who attend higher education institutions, the findings have broad relevance for this age group.

Future research aims to investigate the macro-level and environmental causes of these trends. Understanding the role of economic instability, political climate, and other societal factors is a priority for the researchers. They hope to move beyond simplistic explanations to identify the structural drivers of youth distress.

“The changes we are seeing put many youth in the clinically meaningful threshold for depression,” Vidal noted. “Prevention and promotion of mental health involving peers and other individuals who are more accessible to youth can help with having less people get to that level of severity, and at the same time, interventions with professionals for those students in need for higher level services need to be made available on campus.”

The study, “Fifteen-year trends in depression symptoms by sex, race, and financial stress among U.S. College Students,” was authored by Carol Vidal, Jenny Owens, Phillip Sullivan, and Flavius Lilly.

A process thought to destroy brain cells might actually help them store data

2 February 2026 at 05:00

Recent research provides evidence that the nervous system actively promotes the formation of amyloid structures to stabilize long-term memories. While amyloids are often associated with neurodegenerative conditions, this study identifies a specific protein chaperone that drives the creation of beneficial amyloids in response to sensory experiences. These findings, which offer a new perspective on how the brain encodes information, were published in the Proceedings of the National Academy of Sciences.

Scientists have studied the biological basis of memory for decades. A prevailing model posits that long-term memory requires the physical alteration of synapses, the connections between neurons. This process involves changes in the proteins located at these synapses.

One specific protein, known as Orb2 in fruit flies, plays a central role in this process. Orb2 creates a stable memory trace by self-assembling into an amyloid, a tight stack of proteins that is durable and self-perpetuating.

Most research on amyloids focuses on their toxic role in diseases such as Alzheimer’s. In those contexts, proteins misfold and aggregate in ways that damage cells. However, the brain appears to use a similar aggregation mechanism for beneficial purposes. The question remained regarding how the brain ensures that Orb2 forms amyloids only when a memory needs to be stored and not at random times.

A research team led by Kyle Patton investigated the regulatory systems that might control this precise timing. They hypothesized that molecular chaperones, which are proteins that assist others in folding or assembling, might be responsible for this regulation.

To identify the specific molecules involved, the researchers focused on the J-domain protein (JDP) family. This is a diverse group of chaperones known to regulate protein states. The team utilized Drosophila melanogaster, the common fruit fly, as their model organism. They examined 46 different JDPs found in the fly genome. The team narrowed their search to chaperones expressed in the mushroom body, a brain structure in insects that is essential for learning and memory.

The researchers conducted a genetic screen to determine which of these chaperones influenced memory retention. They used a classical conditioning experiment known as an associative appetitive memory paradigm. In this procedure, the researchers starved flies for a short period to motivate them. They then exposed the flies to two different odors. One odor was paired with a sugar reward, while the other was not. After training, the flies were given a choice between the two odors.

Most wild-type flies remember which odor predicts food for a certain period. The researchers genetically modified groups of flies to overexpress specific JDPs in their mushroom body neurons. They found that increasing the levels of one specific chaperone, named CG10375, significantly enhanced the flies’ ability to form long-term memories. The researchers named this protein “Funes,” after Jorge Luis Borges’s fictional character who is unable to forget.

The study showed that flies with elevated levels of Funes remembered the association between the odor and the sugar for much longer than control flies. This effect was specific to long-term memory. Short-term memory, which operates through different molecular mechanisms, appeared unaffected. This suggests that Funes plays a distinct role in the consolidation phase of memory storage.

To verify that Funes is necessary for memory—and not just a booster when artificially added—the team performed the reverse experiment. They used genetic tools to reduce the natural levels of Funes in the fly brain or to create mutations in the Funes gene.

Flies with reduced Funes activity were capable of learning the task initially. However, they failed to retain the memory 24 hours later. This indicates that Funes is an essential component of the natural machinery required for memory stabilization.

The researchers next investigated how Funes interacts with sensory information. Memory formation usually depends on the intensity of the experience. For example, a strong sugary reward creates a stronger memory than a weak one. The team tested Funes-overexpressing flies with lower concentrations of sugar and weaker odors.

Remarkably, flies with extra Funes formed robust memories even when the sensory cues were suboptimal. They learned effectively with much less sugar than typical flies required. This finding suggests that Funes helps signal the nutritional value or “salience” of the experience. It acts as a sensitizing agent, allowing the brain to encode memories of events that might otherwise be too faint to trigger long-term storage.

Following the behavioral tests, the researchers explored the molecular mechanism at play. They suspected that Funes acted by influencing Orb2, the memory protein known to form amyloids. They performed biochemical experiments to see if the two proteins interacted physically.

The results showed that Funes binds directly to Orb2. Specifically, it binds to Orb2 when it is in an oligomeric state, which is an intermediate stage between a single molecule and a full amyloid fiber.

The team then reconstituted the reaction in a test tube to observe it directly. They purified Funes and Orb2 proteins and mixed them in a controlled environment. When mixed, Funes accelerated the transition of Orb2 from these intermediate clusters into long, stable amyloid filaments. The researchers confirmed the presence of these structures using an amyloid-binding dye called Thioflavin T, which fluoresces when it attaches to amyloid fibers.

To ensure these laboratory-created fibers were the same as those found in living brains, the team utilized cryogenic electron microscopy (cryo-EM). This advanced imaging technique allows scientists to see the atomic structure of proteins.

The images revealed that the Orb2 amyloids created with the help of Funes were structurally identical to endogenous Orb2 amyloids extracted from fly heads. They possessed the same “cross-beta” architecture that characterizes functional amyloids.

The study further demonstrated that the “J-domain” of the Funes protein is essential for this activity. This domain is a specific section of the protein sequence that defines the JDP family.

The researchers generated a mutant version of Funes with a slight alteration in the J-domain. This mutant was able to bind to Orb2 but could not push it to form the final amyloid structure. When this mutant version was expressed in flies, it failed to enhance memory, confirming that the physical formation of the amyloid is the key to the memory-boosting effect.

Beyond structural formation, the researchers verified that these Funes-induced amyloids were functionally active. In the brain, Orb2 amyloids work by binding to specific messenger RNAs (mRNAs) and regulating their translation into new proteins.

The researchers used a reporter assay to measure this activity. They found that the amyloids facilitated by Funes successfully promoted the translation of target mRNAs, mimicking the natural biological process seen in memory consolidation.

One potential limitation of this study is its focus on Drosophila. While the fundamental molecular machinery of memory is highly conserved across species, it remains to be seen if a direct homolog of Funes performs the exact same function in mammals.

The human genome contains many J-domain proteins, and identifying which one corresponds functionally to Funes will be a necessary next step. The study suggests a link to human health, noting that some related chaperones have been genetically associated with schizophrenia, a condition that involves cognitive deficits.

Future research will likely investigate how Funes receives the signal to act. The current study shows that Funes responds to nutritional cues, but the precise signaling pathway that activates it remains to be mapped. Additionally, scientists will need to determine whether Funes regulates other proteins besides Orb2. It is possible that this chaperone manages a suite of proteins required for synaptic plasticity.

This work challenges the traditional view that amyloid formation is merely a pathological accident. It provides evidence that the brain has evolved sophisticated machinery to harness these stable structures for information storage. By identifying Funes, the researchers have pinpointed a control switch for this process, offering a potential target for understanding how memories persist over a lifetime.

The study, “A J-domain protein enhances memory by promoting physiological amyloid formation in Drosophila,” was authored by Kyle Patton, Yangyang Yi, Raj Burt, Kevin Kan-Shing Ng, Mayur Mukhi, Peerzada Shariq Shaheen Khaki, Ruben Hervas, and Kausik Si.

Speaking multiple languages appears to keep the brain younger for longer

People are living longer than ever around the world. Longer lives bring new opportunities, but they also introduce challenges, especially the risk of age-related decline.

Alongside physical changes such as reduced strength or slower movement, many older adults struggle with memory, attention and everyday tasks. Researchers have spent years trying to understand why some people stay mentally sharp while others deteriorate more quickly. One idea attracting growing interest is multilingualism, the ability to speak more than one language.

When someone knows two or more languages, all those languages remain active in the brain. Each time a multilingual person wants to speak, the brain must select the right language while keeping others from interfering. This constant mental exercise acts a bit like daily “brain training”.

Choosing one language, suppressing the others and switching between them strengthens brain networks involved in attention and cognitive control. Over a lifetime, researchers believe this steady mental workout may help protect the brain as it ages.

Studies comparing bilinguals and monolinguals have suggested that people who use more than one language might maintain better cognitive skills in later life. However, results across studies have been inconsistent. Some reported clear advantages for bilinguals, while others found little or no difference.

A new, large-scale study now offers stronger evidence and an important insight: speaking one extra language appears helpful, but speaking several seems even better.

This study analysed data from more than 86,000 healthy adults aged 51 to 90 across 27 European countries. Researchers used a machine-learning approach, meaning they trained a computer model to detect patterns across thousands of datapoints. The model estimated how old someone appeared based on daily functioning, memory, education level, movement and health conditions such as heart disease or hearing loss.

Comparing this “predicted age” with a person’s actual age created what the researchers called a “biobehavioural age gap”. This is the difference between how old someone is and how old they seem based on their physical and cognitive profile. A negative gap meant someone appeared younger than their biological age. A positive gap meant they appeared older.
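
Conceptually, the computation reduces to training a regression model to predict chronological age from the health and functioning variables, then taking the difference between predicted and actual age. A hedged sketch with invented features (the study’s feature set and modeling pipeline were far richer):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 1000
features = rng.normal(size=(n, 6))  # stand-ins for memory, mobility, health measures
age = 70 + features @ rng.normal(size=6) * 3 + rng.normal(size=n)

# In practice the model would be evaluated on held-out data to avoid overfitting
model = GradientBoostingRegressor().fit(features, age)
predicted_age = model.predict(features)

# Negative gap: appears "younger" than chronological age; positive: "older"
age_gap = predicted_age - age
```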

The team then looked at how multilingual each country was by examining the percentage of people who spoke no additional languages, one, two, three or more. Countries with high multilingual exposure included places such as Luxembourg, the Netherlands, Finland and Malta, where speaking multiple languages is common. Countries with low multilingualism included the UK, Hungary and Romania.

People living in countries where multilingualism is common had a lower chance of showing signs of accelerated ageing. Monolingual speakers, by contrast, were more likely to appear biologically older than their actual age. Just one additional language made a meaningful difference. Several languages created an even stronger effect, suggesting a dose-dependent relationship in which each extra language provided an additional layer of protection.

These patterns were strongest among people in their late 70s and 80s. Knowing two or more languages did not simply help; it offered a noticeably stronger shield against age-related decline. Older multilingual adults seemed to carry a kind of built-in resilience that their monolingual peers lacked.

Could this simply reflect differences in wealth, education or political stability between countries? The researchers tested this by adjusting for dozens of national factors including air quality, migration rates, gender inequality and political climate. Even after these adjustments, the protective effect of multilingualism remained steady, suggesting that language experience itself contributes something unique.

Although the study did not directly examine brain mechanisms, many scientists argue that the mental effort required to manage more than one language helps explain the findings. Research shows that juggling languages engages the brain’s executive control system, the set of processes responsible for attention, inhibition and switching tasks.

Switching between languages, preventing the wrong word from coming out, remembering different vocabularies and choosing the right expression all place steady demands on these systems. Work in our lab has shown that people who use two languages throughout their lives tend to have larger hippocampal volume.

This means the hippocampus, a key brain region for forming memories, is physically bigger. A larger or more structurally robust hippocampus is generally linked to better memory and greater resistance to age-related shrinkage or neurodegenerative diseases such as Alzheimer’s.

This new research stands out for its scale, its long-term perspective and its broad approach to defining ageing. By combining biological, behavioural and environmental information, it reveals a consistent pattern: multilingualism is closely linked to healthier ageing. While it is not a magic shield, it may be one of the everyday experiences that help the brain stay adaptable, resilient and younger for longer.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New findings challenge assumptions about men’s reading habits

2 February 2026 at 01:00

A longstanding belief in the publishing world suggests that men avoid reading fiction that centers on the lives of women. However, new research indicates that a protagonist’s gender has almost no impact on whether a man wants to continue reading a story. These findings appear in the Anthology of Computers and the Humanities.

The literary marketplace has historically skewed heavily toward men. For roughly two centuries, men wrote the majority of published novels. These books focused their narrative attention primarily on male characters.

That dynamic has shifted in recent years. Women now constitute the majority of published authors. In addition, women are now more likely to purchase and read books than men are.

This demographic change has sparked concern among some cultural commentators. There is an anxiety that literary fiction is becoming a pursuit exclusive to women. This worry often centers on the idea that boys and men are losing interest in reading as the representation of women increases.

Data from the industry shows a strong division between authors and readers based on gender. Men tend to read books written by men. Conversely, women tend to read books written by women.

Industry stakeholders often attribute this separation to a specific reader preference. They assume men are simply less willing to read books featuring women protagonists. This assumption suggests that publishers should release more stories centering on men to maximize their potential audience.

Federica Bologna, a doctoral student in information science at Cornell University, led a team to investigate this assumption. Co-authors included Ian Lundberg from the University of California, Los Angeles, and Matthew Wilkens from Cornell University. They noted that previous research on this topic was scarce.

Earlier studies on reader preferences often relied on small groups or interviews rather than large-scale data. Some of these smaller studies suggested that men prefer male protagonists. Others suggested that women were indifferent to character gender.

Bologna and her colleagues sought to determine if the gender of a character actually causes a reader to stop reading. They designed an experiment to isolate gender as a single variable. The team recruited approximately 3,000 participants living in the United States.

The participant pool was evenly split between men and women to ensure balanced data. The researchers excluded participants who identified as non-binary due to data limitations. The resulting sample size provided high statistical power for the analysis.

Participants read two short stories written specifically for the study. The researchers created original fiction to ensure no participant had seen the text before. One story focused on a character named Sam who goes hiking in the desert.

The second story depicted a character named Alex sketching in a coffee shop. The authors chose the names Sam and Alex because they are gender-neutral. This allowed the researchers to swap the genders of the characters without changing their names.

Crucially, the team randomized the pronouns used in each version of the stories. Half the participants read a version where Sam the hiker was a woman using “she/her” pronouns. In this version, Alex the artist was a man using “he/him” pronouns.

The other half of the participants read a version where the genders were swapped. For them, Sam was a man and Alex was a woman. This design ensured that the plot, setting, and dialogue remained identical for all readers.

Only the perceived gender of the main character changed between the groups. This approach is known as a vignette experiment. It allows researchers to attribute any difference in reader response directly to the specific variable they manipulated.

After reading the passages, participants had to answer comprehension questions. This step verified that they had actually read and understood the text. They were then asked to choose which of the two stories they would prefer to continue reading.

The researchers compared the probability of a reader selecting a story based on the protagonist’s gender. If the industry assumption were correct, men would be much less likely to choose the story when the protagonist was a woman. The results contradicted this prevailing wisdom.

When the protagonist was a woman, men chose the hiking story 76 percent of the time. When the protagonist was a man, men selected the hiking story 75 percent of the time. The statistical difference between these two numbers was effectively zero.

The presence of a female protagonist did not reduce the men’s desire to read the story. Being randomly assigned a female character increased the probability of a man choosing that story by only 0.8 percentage points. This result was not statistically distinguishable from having no effect at all.
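
To see why a one-point difference in percentages reads as no effect, one can run a standard two-proportion z-test on figures like these. The sketch below assumes roughly 750 men per condition, a number chosen purely for illustration since the article does not report exact cell sizes:

```python
import numpy as np
from statsmodels.stats.proportion import proportions_ztest

# Men choosing the hiking story: woman-protagonist vs. man-protagonist condition
counts = np.array([570, 562])  # about 76% and 75% of ~750 readers each
nobs = np.array([750, 750])

stat, pvalue = proportions_ztest(counts, nobs)
print(f"z = {stat:.2f}, p = {pvalue:.2f}")  # p far above 0.05: no detectable effect
```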

Matthew Wilkens, an associate professor of information science, noted the clarity of the result. “This supposed preference among men for reading about men as characters just isn’t true. That doesn’t exist,” said Wilkens.

He emphasized that these findings challenge the anecdotes often cited in the publishing world. “That is contrary to the limited existing literature and contrary to widespread industry assumptions,” Wilkens added.

Women participants showed a different pattern than the men. They displayed a modest preference for stories featuring women. Women selected the hiking story 77 percent of the time when it featured a woman.

This probability dropped to 70 percent when the character was a man. The data suggests that while women leaned toward characters of their own gender, men remained indifferent. The gender of the character did not appear to be a deciding factor for male readers.

The authors acknowledged certain limitations in their experimental design. The study relied on just two specific short stories. It is possible that the genre of the story influences reader preferences in ways this experiment did not capture.

For instance, men might read more mysteries or thrillers. Those genres often feature male protagonists. If the study had used a different genre, the results might have differed.

Future research would need to randomize genre to see if that changes the outcome. Additionally, the use of unpublished fiction limits how well the study mimics real-world bookstores. In a bookstore, fame and marketing play a large role in what people choose.

However, using unpublished text provided strong internal validity. It prevented participants from recognizing the story or guessing the study’s intent. This ensures the responses were genuine reactions to the text itself.

Another limitation involved the demographics of the participants. The researchers excluded respondents with gender identities other than man or woman. This was necessary because they could not gather enough data on those groups to reach a statistical conclusion.

Bologna and her colleagues hope to include nonbinary readers in future work. Understanding how gender-nonconforming readers interact with character gender is a gap in the current science.

The study leaves open the question of why men predominantly read books by men. Since character gender is not the cause, other factors must be at play. The authors suggest that socialization or gendered expectations may influence reading habits.

Society may condition boys to view reading as a feminine activity. This could discourage them from reading at rates equal to girls. Alternatively, men may simply prefer the specific topics or writing styles found in books authored by men.

Despite these open questions, the study offers a clear message to publishers. The fear that writing about women will alienate male readers appears unfounded. Fiction editors need not reserve female protagonists for books marketed solely to women.

“Readers are pretty flexible,” Wilkens said. “Give them interesting stories, and they will want to read them.”

Bologna hopes this work will encourage the publishing industry to promote more books with a variety of girl and women characters. The team suggests that the industry creates a self-fulfilling prophecy by assuming men will not read about women. By breaking this cycle, publishers could offer a more diverse range of stories to all readers.

In future work, the researchers hope to explore whether these findings apply to other media. They question whether similar assumptions drive creators to avoid female protagonists in video games. If the same pattern holds, it would suggest that content creators across media are underestimating their male audience.

The study, “Causal Effect of Character Gender on Readers’ Preferences,” was authored by Federica Bologna, Ian Lundberg, and Matthew Wilkens.

Morning sunlight shifts sleep cycles earlier and boosts quality

1 February 2026 at 23:00

Spending more time in the sun early in the morning may help people fall into healthier sleep patterns, according to a new study published in BMC Public Health. Researchers found that morning light exposure shifts sleep timing earlier and improves sleep quality.

Scientists have long known that sunlight plays a crucial role in regulating the body’s internal clock, which helps determine when people feel alert and when they feel sleepy. This internal clock relies heavily on light signals from the environment, particularly natural daylight. In recent decades, however, many people have spent less time outdoors due to office work, screen use, and urban living. These trends intensified during pandemic lockdowns, when outdoor movement was limited for months at a time.

Led by Luiz Antônio Alves de Menezes-Júnior from the Federal University of Ouro Preto in Brazil, the researchers behind the new study wanted to better understand whether the timing of sunlight exposure matters, not just the total amount of sunlight people receive. Previous research suggested morning light might be especially important, but few large population studies had tested this idea in real-world settings.

To explore this question, the scientists surveyed 1,762 adults living in two Brazilian cities between October and December 2020. Participants reported how often and how long they were exposed to sunlight at different times of day—before 10 a.m., between 10 a.m. and 3 p.m., and after 3 p.m. They also answered detailed questions about their sleep habits, including how long they slept, how quickly they fell asleep, and when they went to bed and woke up.

One key measure examined in the study was the “midpoint of sleep,” which represents the halfway point between falling asleep and waking up. This measure is important because it reflects how well a person’s sleep schedule aligns with their internal body clock.
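
Computing the midpoint is a small exercise in clock arithmetic, since a night’s sleep usually spans midnight. A minimal sketch:

```python
from datetime import datetime, timedelta

def sleep_midpoint(bedtime: str, waketime: str) -> str:
    """Halfway point between falling asleep and waking, handling the midnight wrap."""
    fmt = "%H:%M"
    start = datetime.strptime(bedtime, fmt)
    end = datetime.strptime(waketime, fmt)
    if end <= start:  # woke "earlier" on the clock, so add a day
        end += timedelta(days=1)
    return (start + (end - start) / 2).strftime(fmt)

print(sleep_midpoint("23:30", "07:30"))  # 03:30
```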

The findings showed that morning sunlight had the strongest influence on sleep timing. For every additional 30 minutes of sunlight exposure before 10 a.m., the midpoint of sleep shifted earlier by about 23 minutes. In practical terms, this means individuals who spent more time in morning sunlight tended to fall asleep and wake up earlier, aligning their sleep more closely with natural day-night cycles.

Sunlight exposure after 3 p.m. also shifted sleep timing earlier, but the effect was smaller. Midday sunlight showed no clear link to sleep timing. Importantly, the study also found that more morning sunlight was associated with better overall sleep quality, while total sleep time and time spent falling asleep were largely unaffected.

The researchers believe morning sunlight helps reset the body’s internal clock, sending a strong signal that it is time to be awake and alert. This signal then helps the body prepare for sleep later that evening. Without enough early-day light, the body clock can drift later, leading to delayed sleep and difficulty waking up.

“Morning sunlight, in particular, helps regulate the secretion of melatonin, a hormone crucial for sleep regulation, thereby improving sleep onset and sleep quality. Increased sunlight exposure also correlates with lower levels of daytime sleepiness and improved alertness during the day,” the authors explained.

The study does have limitations. For instance, it did not control for other exposure to artificial light, such as screens, which also impacts the body clock. Additionally, it relied on self-reported data; thus, the results may be affected by memory errors or personal bias.

The study, “The role of sunlight in sleep regulation: analysis of morning, evening and late exposure,” was authored by Luiz Antônio Alves de Menezes-Júnior, Thais da Silva Sabião, Júlia Cristina Cardoso Carraro, George Luiz Lins Machado-Coelho, and Adriana Lúcia Meireles.

What brain scans reveal about people who move more

1 February 2026 at 21:00

New research indicates that physical movement may help preserve the ability to recall numbers over short periods by maintaining the structural integrity of the brain. These findings highlight potential biological pathways connecting an active lifestyle to cognitive health in later life. The analysis was published in the European Journal of Neuroscience.

As the global population ages, the prevalence of cognitive impairment and dementia has emerged as a primary public health concern. Memory decline compromises daily independence and social engagement. Medical experts have identified physical inactivity as a modifiable risk factor for this deterioration.

Prior investigations have consistently linked exercise to better cognitive performance. Researchers have found that older adults who maintain active lifestyles often exhibit preserved memory and executive function. However, the biological mechanisms driving this protective effect remain only partially understood.

The brain undergoes physical changes as it ages. These changes often include a reduction in volume and the accumulation of damage. Neuroscientists categorize brain tissue into gray matter and white matter.

Gray matter consists largely of neuronal cell bodies and is essential for processing information. White matter comprises the nerve fibers that transmit signals between different brain regions. The integrity of these tissues is essential for optimal cognitive function.

Another marker of brain health is the presence of white matter hyperintensities. These are small lesions that appear as bright spots on magnetic resonance imaging scans. They frequently indicate disease in the small blood vessels of the brain and are associated with cognitive decline.

Previous studies attempting to link activity with brain structure often relied on self-reported data. Surveys asking participants to recall their exercise habits are prone to inaccuracies and bias. People may not remember their activity levels correctly or may overestimate their exertion.

To address these limitations, a team of researchers conducted a large-scale analysis using objective data. The study was led by Xiaomin Wu and Wenzhe Yang from the Department of Epidemiology and Biostatistics at Tianjin Medical University in China. They utilized data from the UK Biobank, a massive biomedical database containing genetic and health information.

The researchers aimed to determine if objectively measured physical activity was associated with specific memory functions. They also sought to understand if structural markers in the brain could explain this relationship statistically. They focused on a sample of middle-aged and older adults.

The final analysis included 19,721 participants. The subjects ranged in age from 45 to 82 years. The study population was predominantly white and had a relatively high level of education.

Physical activity was measured using wrist-worn accelerometers. Participants wore these devices continuously for seven days. This method captured all movement intensity, frequency, and duration without relying on human memory.

The researchers assessed memory function using three distinct computerized tests. The first was a numeric memory test. Participants had to memorize a string of digits and enter them after they disappeared from the screen.

The second assessment was a visual memory test involving pairs of cards. Participants viewed the cards briefly and then had to match pairs from memory. The third was a prospective memory test, which required participants to remember to perform a specific action later in the assessment.

A subset of 14,718 participants also underwent magnetic resonance imaging scans. These scans allowed the researchers to measure total brain volume and the volumes of specific tissues. They specifically examined gray matter, white matter, and the hippocampus.

The hippocampus is a seahorse-shaped structure deep in the brain known to be vital for learning and memory. The researchers also quantified the volume of white matter hyperintensities. They then used statistical models to look for associations between activity, brain structure, and memory.

The study found a clear positive association between physical activity and performance on the numeric memory test. Individuals who moved more tended to recall longer strings of digits. This association held true even after adjusting for factors like age, education, and smoking status.

The results for the other memory tests were less consistent. Physical activity was not strongly linked to prospective memory. The link to visual memory was weak and disappeared in some sensitivity analyses.

When examining brain structure, the researchers observed that higher levels of physical activity correlated with larger brain volumes. Active participants had greater total brain volume. They also possessed higher volumes of both gray and white matter.

The scans also revealed that increased physical activity was associated with a larger hippocampus. This was observed in both the left and right sides of this brain region. Perhaps most notably, higher activity levels were linked to a lower volume of white matter hyperintensities.

The researchers then performed a pathway analysis to understand the mechanism. This statistical method estimates how much of the link between two variables is explained by a third variable. They tested whether the brain structures mediated the relationship between activity and numeric memory.

The analysis showed that brain structural markers explained a substantial portion of the memory benefits. Total brain volume, white matter volume, and gray matter volume all acted as mediators. White matter hyperintensities played a particularly strong role.

Specifically, the reduction in white matter hyperintensities accounted for nearly 30 percent of the total effect of activity on memory. This suggests that physical activity may protect memory partly by maintaining blood vessel health in the brain. Preventing small vessel damage appears to be a key pathway.
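
The “nearly 30 percent” figure reflects the standard decomposition of a total effect into direct and indirect components, where the proportion mediated is the indirect effect divided by the total. With purely illustrative numbers:

```python
# Illustrative decomposition (numbers hypothetical, chosen to mirror ~30%)
indirect_effect = 0.015  # activity -> fewer hyperintensities -> better memory
direct_effect = 0.036    # activity -> memory, net of this pathway

proportion_mediated = indirect_effect / (indirect_effect + direct_effect)
print(f"{proportion_mediated:.0%}")  # 29%
```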

The findings indicate that physical activity helps maintain the overall “hardware” of the brain. By preserving the volume of processing tissue and connection fibers, movement supports the neural networks required for short-term memory. The preservation of white matter integrity seems particularly relevant.

The researchers encountered an unexpected result regarding the hippocampus. Although physical activity was linked to a larger hippocampus, this volume increase did not explain the improvement in numeric memory. The pathway analysis did not find a significant mediating effect for this specific structure.

The authors suggest this may be due to the nature of the specific memory task. Recalling a string of numbers is a short-term working memory task. This type of cognitive effort relies heavily on frontoparietal networks rather than the hippocampus.

The hippocampus is more closely associated with episodic memory, or the recollection of specific events and experiences. The numeric test used in the UK Biobank may simply tap into different neural circuits. Consequently, the structural benefits to the hippocampus might benefit other types of memory not fully captured by this specific test.

The study provides evidence that the benefits of exercise are detectable in the physical structure of the brain. It supports the idea that lifestyle choices can buffer against age-related degeneration. The protective effects were observed in a non-demented population, suggesting benefits for generally healthy adults.

There are several important caveats to consider regarding this research. The study was cross-sectional in design. This means data on activity, brain structure, and memory were collected at roughly the same time.

Because of this design, the researchers cannot definitively prove causality. It is possible that people with healthier brains find it easier to be physically active. Longitudinal studies tracking changes over time are necessary to confirm the direction of the effect.

Another limitation is the composition of the study group. The UK Biobank participants tend to be healthier and wealthier than the general population. This “healthy volunteer” bias might limit how well the findings apply to broader, more diverse groups.

The measurement of physical activity, while objective, was limited to a single week. This snapshot might not perfectly reflect a person’s long-term lifestyle habits. However, it is generally considered more reliable than retrospective questionnaires.

Future research should explore these relationships in more diverse populations. Studies including participants with varying levels of cardiovascular health would be informative. Additionally, using a wider array of memory tests could help map specific brain changes to specific cognitive domains.

Despite these limitations, the study reinforces the importance of moving for brain health. It suggests that physical activity does not just improve mood or heart health. It appears to physically preserve the brain tissue required for cognitive function.

The preservation of white matter and the reduction of vascular damage markers stand out as key findings. These structural elements provide the connectivity and health necessary for the brain to operate efficiently. Simple daily movement may serve as a defense against the structural atrophy that often accompanies aging.

The study, “Association Between Physical Activity and Memory Function: The Role of Brain Structural Markers in a Cross-Sectional Study,” was authored by Xiaomin Wu, Wenzhe Yang, Yu Li, Luhan Zhang, Chenyu Li, Weili Xu, and Fei Ma.

This wearable device uses a surprising audio trick to keep you grounded

1 February 2026 at 19:00

A new study suggests that a wearable device capable of amplifying the sounds of hand movements can help individuals maintain focus on the present moment. This research indicates that heightening the acoustic feedback from manual interactions fosters a state of mindfulness and encourages curiosity during everyday tasks. The findings were published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies.

Mindfulness is generally defined as a mental state involving deliberate attention to the present moment combined with an attitude of openness. While formal practices such as meditation or yoga are well-known methods for cultivating this state, they often require dedicated time and a quiet environment. Many people find it difficult to sustain these formal routines amidst a busy schedule.

An alternative approach is known as informal or everyday mindfulness. This involves integrating awareness into routine daily activities, such as washing dishes, folding laundry, or writing.

Despite the potential of this approach, there are few technological tools designed to support it. Most existing mindfulness applications rely on verbal instructions or visual guides, which can demand significant cognitive effort.

Researchers at the Stanford SHAPE Lab and Virtual Human Interaction Lab aimed to develop a system that supports mindfulness through sensory cues rather than explicit commands. They theorized that a “bottom-up” sensory approach could reduce the mental load required to stay focused. By making the physical consequences of an action more noticeable, the technology attempts to naturally draw attention to the immediate experience.

The team specifically focused on the sounds produced by manual interactions. Hands are the primary tools used to interact with the world, and these interactions generate constant but often subtle acoustic signals.

The researchers hypothesized that amplifying these sounds would create a “sensory surprise.” This deviation from what the brain expects to hear could spark curiosity and prompt the user to pay closer attention to their actions.

“Mindfulness practices promote calmness and focus, yet existing technologies focus primarily on formal exercises, such as sitting meditation. In this work, we aim to explore how technology can support the informal practice of mindfulness—also called everyday mindfulness—when attention and curiosity are interwoven with daily activities, as simple as washing our hands or cooking a meal,” said study author Yujie Tao, a PhD student in Computer Science at Stanford University.

The hardware consisted of high-fidelity microphones attached to the user’s wrists and a pair of open-ear headphones. The microphones captured audio generated near the hands, such as the friction of skin against an object or the tap of a finger on a surface.

The system processed this audio in real time, increasing the volume by 15 decibels before playing it back to the user. The open-ear design allowed participants to hear the amplified sounds layered over the natural ambient noise.
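
For context, a decibel increase maps onto a multiplicative amplitude factor of 10 raised to the power of dB/20, so a 15 dB boost multiplies the signal by roughly 5.6. The Python snippet below sketches only that arithmetic; the function, buffer format, and clipping are assumptions, not details of the authors' system.

```python
# Illustration of a fixed dB gain on an audio buffer; not the authors' pipeline.
import numpy as np

def amplify(samples: np.ndarray, gain_db: float = 15.0) -> np.ndarray:
    """Apply a dB gain to float samples in [-1, 1], clipping to avoid overflow."""
    gain = 10 ** (gain_db / 20)        # 15 dB is roughly a 5.6x amplitude boost
    return np.clip(samples * gain, -1.0, 1.0)

mic_frame = np.array([0.01, -0.02, 0.015])   # toy microphone samples
print(amplify(mic_frame))                     # ~[0.056, -0.112, 0.084]
```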

The study involved 60 participants with an average age of approximately 25 years. The researchers randomly assigned these individuals to either a device group or a control group. Participants in the device group heard the amplified sounds of their hand movements throughout the experiment. Those in the control group wore the same equipment, but the audio augmentation features were deactivated.

The primary activity in the study was an object exploration task. Researchers presented participants with two distinct sets of items to manipulate. One set contained familiar household objects, including a pair of scissors, a storage bag, a paper cup, and a marker set. The second set included unfamiliar or novelty items, such as a tape dispenser with a clamp mechanism and a broom shaped like a human face.

Participants were instructed to explore these objects naturally and without a specific time limit. Following the exploration of each set, the individuals completed standardized questionnaires. These surveys were designed to measure “state mindfulness,” which refers to a temporary mindset of awareness and attention.

In addition to self-reports, the study employed objective measures to assess attention and curiosity. The researchers analyzed written descriptions provided by the participants to see what details they noticed about the objects.

They also video-recorded the sessions to code behavioral patterns. Specifically, they looked for “trial-and-error” behaviors, which are repetitive actions performed with slight variations to learn about an object’s properties.

The results provided evidence that audio augmentation influences how people engage with their physical environment. Participants in the device group reported higher levels of state mindfulness compared to the control group. This suggests that the enhanced auditory feedback helped users maintain a connection to their present activity.

“Digital technologies, from social media to virtual reality, often draw users away from everyday, real-world experiences and into synthetic ones,” Tao told PsyPost. “We want to challenge this trajectory by rethinking how technology can reconnect users to what is happening here and now. While our system is still in its initial validation, we see promising findings on how the system can guide attention back into ongoing activities rather than away from them.”

Analysis of the written descriptions revealed that the device successfully directed attention toward sensory details. Participants who heard the amplified sounds were much more likely to use sound-related terms in their responses.

The device group referenced auditory properties nearly nine times as often as the control group. This indicates that the technology made typically overlooked cues salient enough to capture conscious attention.

Behavioral data supported the idea that audio augmentation stimulates curiosity. Participants in the device group spent more time interacting with the objects than those in the control group. They also exhibited a higher frequency of trial-and-error behaviors. For example, a user might repeatedly open and close a pair of scissors or tap a cup on different parts of a table.

The researchers also investigated whether the device affected the users’ sense of agency. It is possible that altering sensory feedback could make people feel a loss of control over their actions. However, the study found no significant difference in reported agency between the two groups. This suggests that the amplified sounds were perceived as a natural extension of the users’ own movements.

The study also examined whether the familiarity of the objects influenced the results. Participants generally spent less time exploring familiar objects compared to unfamiliar ones.

However, the audio augmentation appeared to boost mindfulness and exploration regardless of whether the object was a common tool or a novelty item. This implies that the device can make even mundane, well-known objects seem novel and worthy of attention.

“We propose a wearable device that amplifies sounds produced by the hands and plays them back to the user in real time, encouraging attention to ongoing actions,” Tao explained. “With the device, you can hear more clearly these subtle yet often overlooked sounds, such as hands rubbing together and fingers sliding across different surfaces. Our initial study with 60 participants in-lab showed that the audio augmentation delivered by our device can enhance state mindfulness, direct user attention to auditory properties of objects, and spark exploratory behavior.”

Despite the positive effects on mindfulness and behavior, the study did not find significant changes in other emotional states. Reports of awe and feelings of connectedness to the objects were similar across both groups. The researchers suggest that the indoor laboratory setting and the nature of the tasks might not have been conducive to eliciting strong emotions like awe.

As with all research, there are limitations. The experiment was conducted in a controlled lab environment with minimal background noise. It remains unclear how the device would perform in a noisy, real-world setting where extraneous sounds might be amplified. The task of exploring objects is also different from typical daily chores, which often have specific goals and time constraints.

“As a next step, we aim to investigate the device’s long-term effectiveness and benefits,” Tao said. “We are preparing a field study in which participants will take the device home, allowing us to understand its use in natural, real-world settings beyond the lab. We are also excited to explore the potential for integrating the device into existing mindfulness training programs, which are commonly used in therapeutic interventions for a range of mental health conditions.”

The study, “Audio Augmentation of Manual Interactions to Support Mindfulness,” was authored by Yujie Tao, Jingjin Li, Libby Ye, Andrew Zhang, Jeremy N. Bailenson, and Sean Follmer.

Alcohol shifts the brain into a fragmented and local state

1 February 2026 at 17:00

A standard glass of wine or beer does more than just relax the body; it fundamentally alters the landscape of communication within the brain. New research suggests that acute alcohol consumption shifts neural activity from a flexible, globally integrated network to a more segmented, local structure. These changes in brain architecture appear to track with how intoxicated a person feels. The findings were published in the journal Drug and Alcohol Dependence.

For decades, neuroscientists have worked to map how alcohol affects human behavior. Traditional studies often look at specific brain regions in isolation. Researchers might observe dampened activity in the prefrontal cortex, which helps explain lowered inhibition. Alternatively, they might see changes in the cerebellum, which would account for the loss of physical coordination.

However, the brain does not operate as a collection of independent islands. It functions as a massive, interconnected web. Information must travel constantly between different areas to process sights, sounds, and thoughts. Understanding how alcohol impacts the traffic patterns of this web requires a different mathematical approach known as graph theory.

Graph theory allows scientists to treat the brain like a vast map of cities and highways. The “cities” are distinct brain regions, referred to as nodes. The “highways” are the functional connections between them, known as edges. By analyzing the flow of traffic across these highways, researchers can determine how efficiently the brain is sharing information.

Leah A. Biessenberger and her colleagues at the University of Minnesota and the University of Florida sought to apply this network-level analysis to social drinkers. Biessenberger, the study’s lead author, worked alongside senior author Jeff Boissoneault and a wider team. They aimed to fill a gap in the scientific literature regarding acute alcohol use.

While previous research has examined how chronic, heavy drinking reshapes the brain over years, less is known about the immediate network effects of a single drinking session. The researchers wanted to observe the brain in a “resting state.” This is the baseline activity that occurs when a person is awake but not performing a specific task.

To investigate this, the team recruited 107 healthy adults between the ages of 21 and 45. The participants were social drinkers without a history of alcohol use disorder. The study utilized a double-blind, placebo-controlled design. This method is the gold standard for removing bias from clinical experiments.

Each participant visited the laboratory for two separate sessions. During one visit, they consumed a beverage containing alcohol mixed with a sugar-free mixer. The dose was calculated to bring their breath alcohol concentration to 0.08 grams per deciliter, which is the legal driving limit in the United States.

During the other visit, they received a placebo drink. This beverage contained only the mixer but was misted with a small amount of alcohol on the surface and rim to mimic the smell and taste of a real cocktail. Neither the participants nor the research staff knew which drink was administered on a given day.

Approximately 30 minutes after drinking, the participants entered an MRI scanner. They were instructed to keep their eyes open and let their minds wander. The scanner recorded the blood oxygen levels in their brains, which serves as a proxy for neural activity.

The researchers then used computational tools to analyze the functional connectivity between 106 different brain regions. They looked for specific patterns in the data described by graph theory metrics. These metrics included “global efficiency” and “local efficiency.”

Global efficiency measures how easily information travels across the entire network. A network with high global efficiency has many long-distance shortcuts, allowing distant regions to communicate quickly. Local efficiency measures how well neighbors talk to neighbors. It reflects the tendency of brain regions to form tight-knit clusters that process information among themselves.
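
Both metrics have standard implementations in graph libraries. The Python sketch below computes them with networkx on a small toy graph; the region labels are illustrative, and the study's actual 106-node network was derived from fMRI connectivity rather than hand-drawn edges.

```python
# Toy computation of the graph-theory metrics described above, via networkx.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("frontal", "parietal"), ("parietal", "occipital"),
    ("frontal", "temporal"), ("temporal", "insula"),
    ("insula", "frontal"), ("occipital", "insula"),
])

print(nx.global_efficiency(G))   # average inverse shortest-path length, all pairs
print(nx.local_efficiency(G))    # how well each node's neighbors interconnect
print(nx.average_clustering(G))  # mean clustering coefficient, discussed below
```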

The analysis revealed distinct shifts in the brain’s topology following alcohol consumption. When participants drank alcohol, their brains moved toward a more “grid-like” state. The network became less random and more clustered.

Specifically, the study found that global efficiency decreased in several areas. This was particularly evident in the occipital lobe, the part of the brain responsible for processing vision. The reduction suggests that alcohol makes it harder for visual information to integrate with the rest of the brain’s operations.

Simultaneously, local efficiency increased. Regions in the frontal and temporal cortices began to communicate more intensely with their immediate neighbors. The brain appeared to fracture into smaller, self-contained communities. This structure requires less energy to maintain but hinders the rapid integration of complex information.

The researchers also examined a metric called “clustering coefficient.” This value reflects the likelihood that a node’s neighbors are also connected to each other. Alcohol increased the clustering coefficient across the network. This further supports the idea that the intoxicated brain relies more on local processing than global integration.

The team also looked at the insula, a region deeply involved in sensing the body’s internal state. Under the influence of alcohol, the insula showed increased connections with its local neighbors. It also communicated more with the broader network than it did in the placebo condition.

These architectural changes were not merely abstract mathematical observations. The researchers found a statistical link between the network shifts and the participants’ subjective experiences. Before the scan, participants rated how intoxicated they felt on a scale of 0 to 100.

The results showed that the degree of network reorganization predicted the intensity of the subjective “buzz.” Participants whose brains showed the largest drop in global efficiency and the largest rise in local clustering tended to report feeling the most intoxicated. The structural breakdown of long-range communication tracked with the feeling of impairment.

This correlation offers new insight into why individuals react differently to the same amount of alcohol. Even at the same blood alcohol concentration, people experience varying levels of intoxication. The study suggests that individual differences in how the brain network fragments may underlie these varying subjective responses.

The findings also highlighted disruptions in the visual system. The decrease in efficiency within the occipital regions was marked. This aligns with well-known effects of drunkenness, such as blurred vision or difficulty tracking moving objects. The network analysis provides a neural basis for these sensory deficits.

While the study offers robust evidence, the authors note certain limitations. The MRI scans did not capture the cerebellum consistently for all participants. The cerebellum is vital for balance and motor control. Because it was not included in the analysis, the picture of alcohol’s effect on the whole brain remains incomplete.

Additionally, the study focused on young, healthy adults. The brain changes observed here might differ in older adults or individuals with a history of substance abuse. Aging brains already show some reductions in global efficiency. Alcohol could compound these effects in older populations.

The researchers also point out that the participants were in a resting state. The brain rearranges its network when actively solving problems or processing emotions. Future research will need to determine if these topological shifts persist or worsen when an intoxicated person tries to perform a complex task, like driving.

This investigation provides a nuanced view of acute intoxication. It moves beyond the idea that alcohol simply “dampens” brain activity. Instead, it reveals that alcohol forces the brain into a segregated state. Information gets trapped in local cul-de-sacs rather than traveling the superhighways of the mind.

By connecting these mathematical patterns to the subjective feeling of being drunk, the study helps bridge the gap between biology and behavior. It illustrates that the sensation of intoxication is, in part, the feeling of a brain losing its global coherence.

The study, “Acute alcohol intake disrupts resting state network topology in healthy social drinkers,” was authored by Leah A. Biessenberger, Adriana K. Cushnie, Bethany Stennett-Blackmon, Landrew S. Sevel, Michael E. Robinson, Sara Jo Nixon, and Jeff Boissoneault.

Social anxiety has a “dark side” that looks nothing like shyness

1 February 2026 at 15:00

Social anxiety is commonly associated with shyness, silence, and a tendency to withdraw from social interactions. However, new research suggests that for some adolescents, this condition manifests through aggression and impulsivity rather than avoidance. This “atypical” presentation appears linked to specific narcissistic traits. The study was published in the journal Personality and Individual Differences.

“There is a prevailing assumption in the popular and professional literature that social anxiety is characterized solely by avoidance tendencies and behavioral inhibition (i.e., shyness). This is likely a consequence of its formal classification as social phobia, which inadvertently shaped the way we study and understand the clinical phenomenon,” explained study author Mollie J. Eriksson, a PhD Candidate in Louis Schmidt’s Child Emotion Lab at McMaster University.

“Nonetheless, this prototypical inhibited presentation does not reflect the lived experience of many individuals with social anxiety symptoms (for a comprehensive review see Kashdan & McKnight). And so, in the current study we aimed to examine the externalizing correlates of social anxiety that are less studied and correspondingly less understood, particularly in a population (i.e., adolescents) in which these dynamics might be especially conspicuous.”

The research team recruited 298 adolescents for the study. The participants ranged in age from 12 to 17 years old. The sample was nearly evenly split between boys and girls. Data was collected through a series of online self-report questionnaires.

Participants answered detailed questions regarding their feelings of social anxiety and their levels of narcissism. The narcissism measure distinguished between vulnerable and grandiose traits. Additional surveys assessed impulsivity and general aggression.

The researchers used a statistical method known as Latent Profile Analysis to group the participants. This technique identifies distinct categories of people based on patterns in their responses. “This is a very robust statistical technique because it uncovers patterns in the data that reflect individual variation in people and not simply associations between data points,” Eriksson said.
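
Latent Profile Analysis is usually fit with dedicated statistical software, but its core logic, modeling responses as a mixture of latent groups and choosing the number of groups by fit indices such as the BIC, can be approximated in Python. The sketch below uses simulated scores and scikit-learn's GaussianMixture as a rough analogue, not the authors' actual pipeline.

```python
# Rough Gaussian-mixture analogue of latent profile analysis on simulated data.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
scores = rng.normal(size=(298, 4))  # toy stand-ins for the questionnaire scales

models = {k: GaussianMixture(n_components=k, random_state=0).fit(scores)
          for k in range(1, 6)}
best_k = min(models, key=lambda k: models[k].bic(scores))  # lower BIC is better
profiles = models[best_k].predict(scores)                  # profile membership
print(best_k, np.bincount(profiles))
```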

The analysis revealed three distinct profiles among the adolescents. The largest group comprised about 46 percent of the sample. These individuals displayed low levels of social anxiety, narcissism, and aggression. This profile appears to represent a well-adjusted or normative group with few social difficulties.

The second group accounted for approximately 30 percent of the participants. Adolescents in this profile reported the highest levels of social anxiety. They also scored high on vulnerable narcissism but low on grandiose narcissism and aggression. This group fits the prototypical description of social anxiety. These teens appear to manage their fear of rejection through inhibition and withdrawal.

The third group made up roughly 25 percent of the sample. This profile was characterized by moderate levels of social anxiety but high levels of impulsivity and aggression. Notably, these adolescents scored the highest on both vulnerable and grandiose narcissism. This combination of traits represents the “atypical” presentation of social anxiety.

“Social anxiety is a broad and heterogeneous mental health problem that is characterized by several features, beyond shyness,” Eriksson told PsyPost. “Recognizing its heterogeneity is the first step in identifying individuals, particularly adolescents, who may be struggling with social anxiety. By providing the tools (i.e., additional features that characterize social anxiety) we can intervene sooner, ideally before symptoms become entrenched, which will ultimately set the adolescent up for greater intra-personal and inter-personal success later in life.”

The researchers also found sex differences in profile membership. Boys were more likely than girls to belong to the third, aggressive profile. This suggests that boys may be more prone to expressing social fears through externalizing behaviors. This aligns with broader socialization norms where boys may be discouraged from showing vulnerability.

“It was exciting that these results replicated previous adult findings, which really underscores the robustness of these findings,” Eriksson said. “Even though this was in line with our a priori hypothesis, it was also interesting that boys were more likely to be in the ‘moderate social anxiety/high externalizing profile.’ It makes me think about how sex/gender influence the expression of social anxiety.”

But the study, like all research, has some limitations. The data was collected at a single point in time. This prevents researchers from establishing a causal relationship between narcissism and the development of aggressive social anxiety. It is unclear if the personality traits precede the anxiety or if they develop concurrently.

“A common misinterpretation we would like to preempt is the assumption that these profiles represent fixed or diagnostic categories,” Eriksson explained. “Rather, they reflect patterns of co-occurring traits and symptoms within a specific developmental window. Additionally, because the data are cross-sectional, we cannot infer developmental pathways or causal mechanisms. Replication (particularly in longitudinal designs) is therefore essential for understanding how these profiles emerge and change over time.”

Tracking these traits from childhood into adolescence could reveal early warning signs. Identifying these patterns early could lead to more effective interventions. Standard treatments for social anxiety may not work for teens who react with aggression rather than fear.

“I hope to examine early childhood antecedents of atypical social anxiety symptomology both behaviorally and biologically,” Eriksson said. “This will really inform treatment and prevention efforts. I also hope to examine in more detail the novel hypothesis we articulated: social anxiety is driven by two divergent self-regulatory pathways. This hypothesis requires a longitudinal study design, which is something we plan to do in the very near future.”

The study, “Characterizing the dark side of social anxiety in adolescence: A replication and extension study,” was authored by Mollie J. Eriksson and Louis A. Schmidt.

Memories of childhood trauma may shift depending on current relationships

1 February 2026 at 05:00

Most people assume their memories of growing up are fixed, much like a file stored in a cabinet, but new research suggests the way we remember our childhoods might actually shift depending on how we feel about our relationships today. A study published in Child Abuse & Neglect reveals that young adults report fewer adverse childhood experiences during weeks when they feel more supported by their parents. This suggests that standard measures of early trauma may reflect a person’s current state of mind as much as their historical reality.

Adverse childhood experiences, or ACEs, refer to traumatic events such as abuse, neglect, and household dysfunction that occur before the age of 18. Medical professionals and psychologists frequently use questionnaires to tally these events because a high number of ACEs is associated with poor mental and physical health outcomes later in life. These screenings rely on the assumption that an adult’s memory of the past is stable and reliable over time.

However, human memory is not a static playback device. It is a reconstructive process that can be influenced by current moods, identity development, and social contexts. This is particularly true for emerging adults, who are navigating the transition from dependence on parents to establishing their own independent identities. This developmental period often requires young people to re-evaluate their family dynamics.

Annika Jaros, a researcher at Michigan State University, led an investigation into this phenomenon alongside co-author William Chopik. They sought to determine if fluctuations in current social relationships or stress levels corresponded with changes in how young adults remembered early adversity. They hypothesized that recollections of the past might wax and wane alongside the quality of a person’s present-day interactions.

The team recruited 938 emerging adults, largely undergraduate students, to complete three identical surveys. These surveys were spaced four weeks apart over a two-month period. At each interval, participants completed the Childhood Trauma Questionnaire, a standard tool used to identify histories of emotional, physical, and sexual abuse, as well as physical and emotional neglect.

In addition to recalling the past, participants rated the current quality of their close relationships. They reported on levels of support and strain with their parents, friends, and romantic partners. They also rated their current levels of academic stress to see if general life pressure affected their memories.

The researchers used statistical models to separate the data into two distinct categories of variance. They looked at differences between people, such as whether a person with a generally happy childhood reports better adult relationships. They also looked at variations within the same person over the course of the eight weeks.
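
One common way to implement this split is person-mean centering followed by a multilevel model, in which each predictor is divided into a stable person average and a wave-to-wave deviation. The Python sketch below uses simulated data and hypothetical column names; the published models were more elaborate, so treat this as an illustration of the between/within decomposition only.

```python
# Between- vs. within-person decomposition on simulated diary-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_people, n_waves = 100, 3
pid = np.repeat(np.arange(n_people), n_waves)
stable = rng.normal(4, 1, n_people)                   # person-level support
support = stable[pid] + rng.normal(0, 0.5, pid.size)  # wave-level reports
aces = 5 - 0.8 * stable[pid] - 0.5 * (support - stable[pid]) \
         + rng.normal(0, 0.5, pid.size)
df = pd.DataFrame({"pid": pid, "support": support, "aces": aces})

# Split support into a person average (between) and a deviation (within).
df["support_between"] = df.groupby("pid")["support"].transform("mean")
df["support_within"] = df["support"] - df["support_between"]

fit = smf.mixedlm("aces ~ support_within + support_between",
                  df, groups=df["pid"]).fit()
print(fit.params)  # separate within- and between-person coefficients
```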

The results showed that reports of childhood adversity were largely consistent over the two months. However, there was measurable variability in the answers provided by the same individuals from month to month. The analysis revealed that this variability was not random but tracked with changes in parental relationships.

When participants reported receiving higher-than-usual support from their parents, they reported fewer instances of childhood adversity. Conversely, during weeks when parental strain was higher than their personal average, recollections of emotional abuse, sexual abuse, and emotional neglect increased. This pattern suggests that a positive shift in a current relationship can soften the recollection of past transgressions.

The influence of friends and romantic partners was less pronounced than that of parents. While supportive friendships were generally associated with fewer reported ACEs on average, changes in friendship quality did not strongly predict fluctuations in memory from week to week. Romantic partners showed a similar pattern, where high support correlated with fewer retrospective reports of sexual abuse, but the effect was limited.

Academic stress also played a minor role in how participants viewed their pasts. While higher stress was linked to slight increases in reports of emotional abuse and physical neglect, the impact was small compared to the influence of family dynamics. The primary driver of change in these memories appeared to be the quality of the bond with caregivers.

The authors noted several limitations to the study that contextualize the results. The sample consisted primarily of university students, meaning the results may not apply to older adults or those with different socioeconomic backgrounds. The study covered only an eight-week period, leaving it unclear if these fluctuations persist or change over years.

There was also a pattern of attrition that affected the data. Participants with more severe histories of trauma were more likely to stop responding to the surveys over time. This may have reduced the study’s ability to capture the full range of variability in how trauma is recalled by those with the most difficult histories.

Despite these caveats, the findings have practical implications for therapists and researchers. A single screening for childhood adversity may capture a snapshot influenced by the patient’s current state of mind rather than a definitive history. Assessing these experiences multiple times could provide a more accurate picture of a patient’s background and current psychological state.

The study challenges the idea that retrospective reports are purely factual records. Instead, they appear to be dynamic interpretations that serve a function in the present. As young adults work to integrate their pasts into their life stories, their memories seem to breathe in time with their current emotional needs.

“People are generally consistent in how they recall their past, but the small shifts in reporting are meaningful,” said Chopik. “It doesn’t mean people are unreliable; it means that memory is doing what it does — integrating past experiences with present meaning.”

The study, “Record of the past or reflection of the present? Fluctuations in recollections of childhood adversity and fluctuations in adult relationship circumstances,” was authored by Annika Jaros and William J. Chopik.

Aristotle was right: virtue appears to be vital for personal happiness

1 February 2026 at 03:00

Virtues such as compassion, patience and self-control may be beneficial not only for others but also for oneself, according to new research my team and I published in the Journal of Personality in December 2025.

Philosophers from Aristotle to al-Fārābī, a 10th-century scholar in what is now Iraq, have argued that virtue is vital for well-being. Yet others, such as Thomas Hobbes and Friedrich Nietzsche, have argued the opposite: Virtue offers no benefit to oneself and is good only for others. This second theory has inspired lots of research in contemporary psychology, which often sees morality and self-interest as fundamentally opposed.

Many studies have found that generosity is associated with happiness, and that encouraging people to practice kindness increases their well-being. But other virtues seem less enjoyable.

For example, a compassionate person wants to alleviate suffering or misfortune, but that requires there be suffering or misfortune. Patience is possible only when something irritating or difficult is happening. And self-control involves forgoing one’s desires or persisting with something difficult.

Could these kinds of virtues really be good for you?

My colleagues and I investigated this question in two studies, using two different methods to zoom in on specific moments in people’s daily lives. Our goal was to assess the degree to which, in those moments, they were compassionate, patient and self-controlled. We also assessed their level of well-being: how pleasant or unpleasant they felt, and whether they found their activities meaningful.

One study, with adolescents, used the experience sampling method, in which people answer questions at random intervals throughout the day. The other, studying adults, used the day reconstruction method, in which people answer questions about the previous day. All told, we examined 43,164 moments from 1,218 people.

During situations that offer opportunities to act with compassion, patience and self-control – encountering someone in need, for example, or dealing with a difficult person – people tend to experience more unpleasant feelings, and fewer pleasant ones, than in other situations. However, we found that exercising these three virtues seems to help people cope. People who are habitually more compassionate, patient and self-controlled tend to experience better well-being. And when people display more compassion, patience and self-control than usual, they tend to feel better than they usually do.

In short, our results contradicted the theory that virtue is good for others and bad for the self. They were consistent with the theory that virtue promotes well-being.

Why it matters

These studies tested the predictions of two venerable, highly influential theories about the relationship between morality and well-being. In doing so, they offered new insights into one of the most fundamental questions debated in philosophy, psychology and everyday life.

Moreover, in the scientific study of morality, lots of research has examined how people form moral judgments and how outside forces shape a person’s moral behavior. Yet some researchers have argued that this should be complemented by research on moral traits and how these are integrated into the whole person. By focusing on traits such as patience, compassion and self-control, and their roles in people’s daily lives, our studies contribute to the emerging science of virtue.

What still isn’t known

One open question for future research is whether virtues such as compassion, patience and self-control are associated with better well-being only under certain conditions. For example, perhaps things look different depending on one’s stage of life or in different parts of the world.

Our studies were not randomized experiments. It is possible that the associations we observed are explained by another factor – something that increases well-being while simultaneously increasing compassion, patience and self-control. Or maybe well-being affects virtue, instead of the other way around. Future research could help clarify the causal relationships.

One particularly interesting possibility is that there might be a “virtuous cycle”: Perhaps virtue tends to promote well-being – and well-being, in turn, tends to promote virtue. If so, it would be extremely valuable to learn how to help people kick-start that cycle.


This article is republished from The Conversation under a Creative Commons license. Read the original article.

ADHD diagnoses are significantly elevated among autistic adults on Medicaid

1 February 2026 at 01:00

An analysis of U.S. Medicaid data found that 26.7% of autistic adults without intellectual disability had an ADHD diagnosis. This was the case for 40.2% of autistic adults with intellectual disability. The paper was published in JAMA Network Open.

Autism, or autism spectrum disorder, is a neurodevelopmental condition characterized by distinctive patterns of social communication, sensory processing, and behavior or interests. It is called a spectrum because its manifestations vary widely, from individuals who need substantial daily support to those who live independently.

Autism itself is not a disease and does not inherently imply poor health. However, autistic people have higher rates of certain co-occurring physical and mental health conditions compared with the general population. These commonly include anxiety, depression, ADHD, epilepsy, sleep disorders, and gastrointestinal problems.

Barriers to healthcare access, such as communication difficulties and lack of healthcare provider training for working with autistic individuals, can worsen overall health outcomes. Chronic stress from social exclusion, stigma, or masking autistic traits can negatively impact long-term physical and mental health. At the same time, when healthcare is accessible and appropriately adapted, autistic individuals can achieve health outcomes comparable to non-autistic peers.

Study author Benjamin E. Yerys and his colleagues note that attention-deficit/hyperactivity disorder (ADHD) is one of the most commonly co-occurring mental health conditions for autistic youths. At the same time, ADHD is associated with poorer health outcomes for both autistic and non-autistic children. These authors wanted to assess how often ADHD and autism occur together.

They analyzed Medicaid claims data from 2008 to 2019. Medicaid is a joint U.S. federal and state government health insurance program that provides free or low-cost medical coverage to people with low income, disabilities, or those meeting certain other eligibility criteria.

From these data, the study authors constructed four groups: autistic individuals without intellectual disability, adults with intellectual disability but not autism, autistic adults with intellectual disability, and a random sample of adult Medicaid enrollees with neither autism nor intellectual disability. Only data from adults (18 years of age and above) were analyzed.

The analysis included data from a total of 3,506,661 individuals. Fifty-three percent were female, 20% were Black, 17% were Hispanic, and 60% were White.

The group without autism or intellectual disability had the lowest share of ADHD diagnoses at 2.7%. This percentage was 19% in the group with intellectual disabilities but without autism. In the group of autistic adults without intellectual disability, 26.7% had ADHD, and this was the case for 40.2% of autistic adults with intellectual disability.

Among people with an ADHD diagnosis, 36% of those without autism or intellectual disability were prescribed ADHD medication. The corresponding figures were 17.4% for individuals with intellectual disability (but no autism), 26.8% for individuals with both autism and intellectual disability, and 46.7% for autistic individuals without intellectual disability.

“In this cohort study of Medicaid-enrolled adults, autistic adults experienced high rates of co-occurring ADHD and were more likely to receive ADHD medication prescriptions than adults in the general population. Negative health outcome rates are higher among autistic people with co-occurring ADHD, although ADHD medication prescriptions are associated with lower rates of negative health outcomes,” study authors concluded.

The study contributes to scientific knowledge about the co-occurrence of autism and ADHD. However, it should be noted that these data come solely from individuals enrolled in Medicaid. Results in the broader, non-Medicaid enrolled population may differ.

The paper, “Attention-Deficit/Hyperactivity Disorder in Medicaid-Enrolled Autistic Adults,” was authored by Benjamin E. Yerys, Sha Tao, Lindsay Shea, and Gregory L. Wallace.

Long-term antidepressant effects of psilocybin linked to functional brain changes

31 January 2026 at 23:00

A new study suggests that the long-term antidepressant effects of psychedelics may be driven by persistent changes in how neurons fire rather than by the permanent growth of new brain cell connections. Researchers found that a single dose of psilocybin altered the electrical properties of brain cells in rats for months, even after physical changes to the neurons had disappeared. These findings were published in the journal Neuropsychopharmacology.

Depression is a debilitating condition that is often treated with daily medications. These standard treatments can take weeks to work and do not help every patient. Psilocybin, a compound found in certain mushrooms, has emerged as a potential alternative therapy. Clinical trials indicate that one or two doses of psilocybin can alleviate symptoms of depression for months or even years. However, scientists do not fully understand the biological mechanisms that allow a single treatment to produce such enduring results.

Researchers have previously focused on the concept of neuroplasticity to explain these effects. This term generally refers to the brain’s ability to reorganize itself. One specific type is structural plasticity, which involves the physical growth of new connection points between neurons, known as dendritic spines. Short-term studies conducted days or weeks after drug administration often show an increase in these spines. The question remained whether these physical structures persist long enough to account for relief lasting several months.

To investigate this, a team of researchers led by Hannah M. Kramer, Meghan Hibicke, and Charles D. Nichols at LSU Health Sciences Center designed an experiment using rats. They chose Wistar Kyoto rats for the study. This specific breed is often used in research because the animals naturally exhibit behaviors analogous to stress and depression in humans.

The investigators sought to compare the effects of psilocybin against another compound called 25CN-NBOH. Psilocybin interacts with various serotonin receptors in the brain. In contrast, 25CN-NBOH is a synthetic drug designed to target only one specific receptor known as the 5-HT2A receptor. This is the receptor believed to be primarily responsible for the psychedelic experience. By using both drugs, the team hoped to isolate the role of this specific receptor in creating long-term behavioral changes.

The study began with the administration of a single dose of either psilocybin, 25CN-NBOH, or a saline placebo to the male rats. The researchers then waited for a substantial period before testing the animals. They assessed the rats’ behavior at five weeks and again at twelve weeks after the injection. This timeline allowed the team to evaluate effects that persist well beyond the immediate aftermath of the drug experience.

The primary method for assessing behavior was the forced swim test. In this standard procedure, rats are placed in a tank of water from which they cannot escape. Researchers measure how much time the animals spend swimming versus floating motionless. In this context, high levels of immobility are interpreted as a passive coping strategy, which is considered a marker for depressive-like behavior. Antidepressant drugs typically cause rats to spend more time swimming and struggling.

The behavioral results indicated a lasting change. Rats treated with either psilocybin or 25CN-NBOH showed reduced immobility compared to the control group. This antidepressant-like effect was evident at the five-week mark. It remained equally strong at the twelve-week mark. The persistence of the effect suggests that the single dose induced a stable, long-term shift in behavior.

After the twelve-week behavioral tests, the researchers examined the brains of the animals. They focused on the medial prefrontal cortex. This brain region is involved in mood regulation and decision-making. The team utilized high-resolution microscopy to count the density of dendritic spines on the neurons. They specifically looked for the physical evidence of new connections that previous short-term studies had identified.

The microscopic analysis revealed that the number of dendritic spines in the treated rats was no different from that of the control group. The structural growth seen in other studies shortly after treatment appeared to be transient. The physical architecture of the neurons had returned to its baseline state after three months. The researchers also analyzed the expression of genes related to synaptic structure. They found no difference in gene activity between the groups.

Since structural changes could not explain the lasting behavioral shift, the team investigated functional plasticity. This refers to changes in how neurons process and transmit electrical signals. They prepared thin slices of the rats’ brain tissue. Using a technique called electrophysiology, they inserted microscopic glass pipettes into individual neurons to record their electrical activity.

The researchers classified the neurons into two types based on their firing patterns: adapting neurons and bursting neurons. Adapting neurons typically slow down their firing rate after an initial spike. Bursting neurons fire in rapid clusters of signals. The recordings showed that the drugs had altered the intrinsic electrical properties of these cells.

In the group treated with psilocybin, adapting neurons sat at a resting voltage that was closer to the threshold for firing. This state is known as depolarization. It means the cells are primed to activate more easily. The bursting neurons in psilocybin-treated rats also showed increased excitability. They required less input to trigger a signal and fired at faster rates than neurons in untreated rats.

The rats treated with 25CN-NBOH also exhibited functional changes, though the specific electrical alterations differed slightly from the psilocybin group. For instance, the bursting neurons in this group were not as easily triggered as those in the psilocybin group. However, the overall pattern confirmed that the drug had induced a lasting shift in neuronal function.

These electrophysiological findings provide a potential explanation for the behavioral results. While the physical branches of the neurons may have pruned back to normal levels, the cells “remembered” the treatment through altered electrical tuning. This functional shift allows the neural circuits to operate differently long after the drug has left the body.

The study implies that the 5-HT2A receptor is sufficient to trigger these long-term changes. The synthetic drug 25CN-NBOH produced lasting behavioral effects similar to psilocybin. This suggests that activating this single receptor type can initiate the cascade of events leading to persistent antidepressant-like effects.

There are limitations to this study that provide context for the results. The researchers used only male rats. Female rats may exhibit different biological responses to psychedelics or stress. Future research would need to include both sexes to ensure the findings are universally applicable.

Additionally, the forced swim test is a proxy for human depression but does not capture the full complexity of the human disorder. While it is a standard tool for screening antidepressant drugs, it measures a specific type of coping behavior. The translation of these specific neural changes to human psychology remains a subject for further investigation.

The researchers also noted that while spine density returned to baseline, this does not mean structural plasticity plays no role. It is possible that a rapid, temporary growth of connections acts as a trigger. This early phase might set the stage for the permanent electrical changes that follow. The exact molecular switch that locks in these functional changes remains to be identified.

Future studies will likely focus on the period between the initial dose and the three-month mark. Scientists need to map the transition from structural growth to functional endurance. Understanding this timeline could help optimize how these therapies are delivered.

The study, “Psychedelics produce enduring behavioral effects and functional plasticity through mechanisms independent of structural plasticity,” was authored by Hannah M. Kramer, Meghan Hibicke, Jason Middleton, Alaina M. Jaster, Jesper L. Kristensen and Charles D. Nichols.

Scientists identify key brain structure linked to bipolar pathology

31 January 2026 at 19:00

Recent analysis of human brain tissue suggests that a small and often overlooked region deep within the brain may play a central role in bipolar disorder. Researchers found that neurons in the paraventricular thalamic nucleus are depleted and genetically altered in people with the condition. These results point toward potential new targets for diagnosis and treatment. The findings were published in the journal Nature Communications.

Bipolar disorder is a mental health condition characterized by extreme shifts in mood and energy levels. It affects approximately one percent of the global population and can severely disrupt daily life. While medications such as lithium and antipsychotics exist, they do not work for every patient. These drugs also frequently carry difficult side effects that cause patients to stop taking them. To develop better therapies, medical experts need a precise map of what goes wrong in the brain.

Past research has largely focused on the outer layer of the brain known as the cortex. This area is responsible for higher-level thinking and processing. However, brain scans using magnetic resonance imaging have hinted that deeper structures also shrink in size during the course of the illness. One such structure is the thalamus. This central hub acts as a relay station for sensory information and emotional regulation.

Within the thalamus lies a specific cluster of cells called the paraventricular thalamic nucleus. This area is rich in chemical messengers and has connections to parts of the brain involved in emotion. Despite these clues, the molecular details of this region remained largely unmapped in humans. A team led by Masaki Nishioka and Tadafumi Kato from Juntendo University Graduate School of Medicine in Tokyo launched an investigation to bridge this gap. They collaborated with researchers including Mie Sakashita-Kubota to analyze postmortem brain tissue.

The researchers aimed to determine if the genetic activity in this deep brain region differed from healthy brains. They examined brain samples from 21 individuals who had been diagnosed with bipolar disorder and 20 individuals without psychiatric conditions. They looked at two specific areas: the frontal cortex and the paraventricular thalamic nucleus. To do this, they used a technique called single-nucleus RNA sequencing.

This technology allows researchers to catalog the genetic instructions being used by individual cells. By analyzing thousands of nuclei, the team could identify different cell types and see which genes were active or inactive. This provided a high-resolution view of the cellular landscape. They compared the data from the thalamus against the data from the cortex to see which region was more affected.
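
A generic version of this kind of analysis, quality filtering, normalization, dimensionality reduction, clustering nuclei into putative cell types, and ranking marker genes, can be sketched with the scanpy library. The input file name below is hypothetical, and the authors' exact steps and parameters may differ.

```python
# Condensed, generic single-nucleus RNA-seq workflow with scanpy (illustrative).
import scanpy as sc

adata = sc.read_h5ad("counts.h5ad")            # hypothetical nuclei x genes matrix
sc.pp.filter_cells(adata, min_genes=200)       # drop low-quality nuclei
sc.pp.normalize_total(adata, target_sum=1e4)   # depth normalization
sc.pp.log1p(adata)
sc.pp.highly_variable_genes(adata, n_top_genes=2000)
sc.pp.pca(adata)
sc.pp.neighbors(adata)                         # k-nearest-neighbor graph
sc.tl.leiden(adata)                            # cluster nuclei into cell types
sc.tl.rank_genes_groups(adata, "leiden")       # marker genes per cluster
```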

The analysis revealed that the thalamus had undergone substantial changes. Specifically, the paraventricular thalamic nucleus contained far fewer excitatory neurons in the samples from people with bipolar disorder. The researchers estimated a reduction of roughly 50 percent in these cells compared to the control group. This loss was specific to the neurons that send stimulating signals to other parts of the brain.

In contrast, the changes observed in the frontal cortex were much more subtle. While there were some alterations in the cortical cells, they were not as extensive as those seen in the deep brain. This suggests that the thalamus might be a primary site of pathology in the disorder. The team validated these findings by staining proteins in the tissue to visually confirm the lower cell density.

Inside the remaining thalamic neurons, the genetic machinery was also behaving differently. The study identified a reduced activity of genes responsible for maintaining connections between neurons. These genes are essential for the flow of chemical and electrical signals. Among the affected genes were CACNA1C and SHISA9. These specific segments of DNA have been flagged in previous genetic studies as potential risk factors for the illness.

Another gene called KCNQ3, which helps regulate electrical channels in cells, was also less active. These channels act like gates that let electrically charged potassium or calcium atoms flow in and out of the cell. This flow is what allows a neuron to fire a signal. When the genes controlling these gates are turned down, the neuron may become unstable or fail to communicate.

The specific combination of affected genes suggests a vulnerability in how these cells handle calcium and electrical activity. High-frequency firing of neurons requires tight regulation of calcium levels. If the proteins that manage this process are missing, the cells might become damaged over time. This could explain why so many of these neurons were missing in the patient samples.

The team also looked at non-neuronal cells called microglia. These are the immune cells of the brain that help maintain healthy synapses. Synapses are the junction points where neurons pass signals to one another. The data showed that the communication between the thalamic neurons and these immune cells was disrupted.

A specific pattern of gene expression that usually coordinates the interaction between excitatory neurons and microglia was weaker in the bipolar disorder samples. This breakdown could contribute to the loss of synapses or the death of neurons. It represents a failure in the support system that keeps brain circuits healthy. The simultaneous decline in both neuron and microglia function suggests a coordinated failure in the region.

The researchers note that the paraventricular thalamic nucleus is distinct from other brain regions. It contains a high density of receptors for dopamine, a neurotransmitter involved in reward and motivation. This makes it a likely target for antipsychotic medications that act on the dopamine system. The specific genetic profile of these neurons aligns with biological processes previously linked to the disorder.

There are limitations to consider regarding these results. The study relied on postmortem tissue, so it represents a snapshot of the brain at the end of life. It is difficult to know for certain if the cell loss caused the disorder or if the disorder caused the cell loss. The sample size was relatively small, with only 41 donors in total.

Additionally, the patients had been taking various medications throughout their lives. These drugs can influence gene expression. The researchers checked for medication effects and found little overlap between drug signatures and their findings. However, they could not rule out medication influence entirely.

Looking ahead, the authors suggest that the paraventricular thalamic nucleus could be a target for new drugs. Therapies that aim to protect these neurons or restore their function might offer relief where current treatments fail. Advanced imaging could also focus on this region to help diagnose the condition earlier.

Associate Professor Nishioka emphasized the importance of looking beyond the usual suspects in brain research. “This study highlights the need to extend research to the subcortical regions of the brain, which may harbor critical yet underexplored components of BD pathophysiology,” Nishioka stated. The team hopes that integrating these molecular findings with neuroimaging will lead to better patient outcomes.

Professor Kato added that the findings could reshape how scientists view the origins of the illness. “We finally identified that PVT is the brain region causative for BD,” Kato said. “This discovery will lead to the paradigm shift of BD research.”

The study, “Disturbances of paraventricular thalamic nucleus neurons in bipolar disorder revealed by single-nucleus analysis,” was authored by Masaki Nishioka, Mie Sakashita-Kubota, Kouichirou Iijima, Yukako Hasegawa, Mizuho Ishiwata, Kaito Takase, Ryuya Ichikawa, Naguib Mechawar, Gustavo Turecki, and Tadafumi Kato.

Psychology study reveals how gratitude can backfire on your social standing

31 January 2026 at 17:00

Public expressions of gratitude are generally viewed as positive social glue that strengthens relationships and signals warmth. However, new research suggests that offering effusive thanks may come with a hidden cost to one’s perceived social standing.

A series of studies indicates that when individuals express intense gratitude, observers often view them as having lower status and power relative to the person they are thanking. This research was published in Social Psychological and Personality Science.

Social scientists have historically emphasized the benefits of gratitude. It creates social bonds and signals that a person is friendly and responsive. Many organizations even institutionalize this practice through “gratitude walls” or dedicated communication channels to foster a positive culture. The authors of the current study wanted to investigate a potential downside regarding how competence and influence are perceived.

They noted that while gratitude signals warmth, it might also signal a lack of agency. Agency refers to traits like competence, assertiveness, and control. In social hierarchies, individuals with higher rank typically possess more agency and control over resources.

Because high-ranking individuals are often the ones dispensing favors and resources, they are frequently on the receiving end of gratitude. The researchers hypothesized that observers might intuitively associate intense displays of gratitude with a lower position in the social hierarchy.

“The overwhelming majority of research on gratitude highlights its positive effects. But—inspired in part by work showing that hierarchical relationships can become further entrenched when higher power groups help lower power groups—we had an intuition that sometimes when you express thanks, you might be subordinating yourself to another person,” said study author Kristin Laurin, a professor of psychology at the University of British Columbia.

To test this hypothesis, the researchers conducted two initial studies involving approximately 800 participants recruited from Amazon Mechanical Turk. The team designed vignettes describing a workplace scenario.

In these scenarios, one colleague performed a favor for another, such as facilitating a meeting with a manager. The researchers included photographs to vary the gender and race of the characters to ensure the results were not driven by demographics.

Participants first rated the status of both characters based solely on the knowledge that a favor occurred. Following this, they viewed the thanker’s response. The researchers manipulated this response to be either mild or intense. A mild response was a simple phrase like “Great, thanks.” An intense response included phrases like “I’m incredibly grateful” and “I really owe you.”

The researchers found that the intensity of the gratitude significantly shaped perceptions of status. When the thanker was highly effusive, observers upgraded their perception of the helper’s status. The person receiving the thanks was seen as having more respect and influence than the person giving the thanks. This effect occurred even though the favor itself was identical in all conditions.

The researchers sought to replicate these findings across a broader range of contexts in two subsequent studies. These studies recruited roughly 740 participants from Prolific. The scenarios extended beyond the workplace to include academic settings, social media interactions, and casual encounters like a café visit. For instance, one scenario involved a student getting help with study notes.

A potential issue in the first studies was that mild gratitude might look like rudeness, which is a violation of social norms. To address this, the researchers asked participants to categorize various expressions of thanks as “appropriate,” “not enough,” or “too much.” Participants then viewed a gratitude expression that fell within the “appropriate” range but was either on the high or low end of intensity.

Participants rated both the status and power of the characters. Status was defined as respect and admiration. Power was defined as control over resources. The results reinforced the earlier findings. When thankers expressed mild gratitude, observers tended to view the helper as having less relative rank. When thankers expressed intense gratitude, the helper maintained a higher perceived rank.

The researchers also attempted to understand why this shift in perception occurs. They measured whether observers thought the thanker valued the help more or wanted to build a stronger relationship.

While intense gratitude did signal a desire for affiliation, these factors did not explain the shift in perceived status. The link between gratitude and low rank appeared to be a direct inference made by the observers.

The final set of studies moved away from hypothetical scenarios to real-world data. The researchers collected actual work-related messages exchanged by working adults. They presented these messages to over 650 participants across three separate studies. The participants viewed screenshots of emails and instant messages containing expressions of thanks.

Trained coders analyzed the messages for different types of intensity. They looked for “relative intensity,” which meant the message was primarily dedicated to expressing thanks rather than discussing other business. They also coded for “verbal amplification,” such as using extra adjectives, and “nonverbal amplification,” such as using exclamation points or emojis.

The participants rated the sender’s status, power, warmth, and competence. The findings revealed a nuanced pattern. When a message was primarily focused on gratitude, the sender was perceived as having lower status and power compared to the recipient. These senders were also viewed as less competent and assertive.

Nonverbal cues like emojis and exclamation points did not reliably lower perceptions of rank, however, and simply adding more words to say thanks did not consistently lower perceived status either.

In some cases, verbose thankers were actually seen as having higher agency. The researchers speculated that managers might often use longer, praise-filled messages to encourage employees, which complicates the interpretation of verbal length.

“When we tested our predictions in a particular real-world context—emails sent in the workplace—we were surprised that using emojis and punctuation (like exclamation marks), or using extra words to express more effusive gratitude, actually did not result in thankers appearing lower status,” Laurin told PsyPost. “Instead, what made them appear lower status was sending an email that was solely or primarily about gratitude (as opposed to expressing thanks while also delivering other content).”

“This study was correlational, so we can’t rule out confounds: Maybe the more effusive thankers tended to be in management positions, or maybe lower status employees instinctively avoid emojis because they worry about how they’ll come across. But for now the key takeaway from these real-world studies appears to be that if you want to express gratitude without losing status, it might be safest to do so when you also have something else to say.”

The results suggest that while gratitude makes a person seem nicer, it can inadvertently signal lower professional standing. People often face a trade-off between appearing warm and appearing powerful.

The core finding is that publicly expressing thanks can lead observers to believe you have lower status than the person you are thanking. That may often be a price people are willing to pay, especially given gratitude’s other benefits, but it is a cost to bear in mind. The researchers note that this does not mean people should stop saying thank you.

“The effects are not huge, so the takeaway message is definitely not that you should never express gratitude if you care about your status!” Laurin clarified. “It may simply be worth asking yourself if you have a compulsion to overdo the gratitude, for example expressing it multiple times for the same favor. If so, it may be worth being aware that this may lead others to make assumptions about your status and power.”

As with all research, there are some caveats. The samples were entirely American. Cultural norms regarding hierarchy and gratitude vary significantly around the world. In some cultures, effusive gratitude might not carry the same connotations of submission.

The researchers are interested in how these dynamics play out in intergroup contexts. It remains to be seen how gratitude affects power dynamics between members of minority and majority groups.

“One of our inspirations for this project came from thinking about intergroup dynamics and pre-existing status relations: We wondered if gratitude hits differently when it’s expressed by a member of a minority group to a member of a dominant group, compared to the reverse,” Laurin said. “Our initial forays into exploring this have not turned up reliable differences, but the broader question remains unresolved.”

The study, “Does Saying ‘Thanks a Lot’ Make You Look Less Than? The Magnitude of Gratitude Shapes Perceptions of Relational Hierarchy,” was authored by Kristin Laurin, Kate W. Guan, and Ayana Younge.

Surprising link found between hyperthyroidism and dark personality traits

31 January 2026 at 15:00

New research published in Current Psychology provides evidence linking an overactive thyroid gland to specific personality characteristics known as the “Dark Tetrad.” The study indicates that individuals with hyperthyroidism report higher levels of Machiavellianism, psychopathy, and sadism compared to those with normal thyroid function. These findings suggest that physiological imbalances in the endocrine system may influence socially aversive personality traits.

The thyroid gland plays a central regulatory role within the human body. It releases hormones that control metabolism, energy levels, and heart rate. Beyond these physical functions, the thyroid significantly impacts the brain and nervous system.

Medical professionals have recognized that thyroid dysfunction often accompanies changes in mood and behavior. An overactive thyroid, or hyperthyroidism, frequently presents with symptoms such as restlessness, irritability, and anxiety. In contrast, an underactive thyroid, or hypothyroidism, is often associated with fatigue, mental fog, and emotional flatness.

The authors of the new study noted that the behavioral symptoms of hyperthyroidism overlap with descriptions of antagonistic personality traits. For instance, reduced empathy and impulsive aggression are common in both hyperthyroid patients and individuals with high psychopathy scores. Despite these parallels, there has been little scientific investigation into whether thyroid dysfunction correlates with the Dark Tetrad traits.

The Dark Tetrad refers to a constellation of four socially offensive personality traits. These include Machiavellianism, narcissism, psychopathy, and sadism. While distinct, they share a common core involving emotional coldness and a tendency to prioritize one’s own interests at the expense of others.

“This study was initially motivated by one author’s long-standing diagnosis of Graves’ disease and chronic hyperthyroidism, which provided ongoing exposure to the psychological and interpersonal dimensions of thyroid dysfunction,” explained study authors Or Maimon and Tal Ben Yaacov of Ashkelon Academic College.

“This perspective highlighted a gap in the literature, which has focused primarily on affective symptoms while largely overlooking maladaptive personality traits in the context of thyroid disorders. We sought to address this gap by examining whether chronic thyroid hormonal imbalance is associated with dark personality traits, thereby broadening the psychological framework used to understand thyroid-related conditions.”

To explore this potential connection, the researchers designed a study to compare personality profiles across different thyroid conditions. They aimed to determine if the physiological state of the thyroid could predict variations in these specific personality traits.

The researchers recruited 154 adult participants through online communities. The recruitment process targeted specific health-related support groups to find individuals with diagnosed thyroid issues. The final sample consisted of 140 women and 14 men, with an age range of 18 to 64 years.

Participants were categorized into three distinct groups based on their self-reported medical status. The first group included 49 individuals with hyperthyroidism. The second group consisted of 52 individuals with hypothyroidism. The third group served as a comparison and included 53 individuals who reported no history of thyroid disorders.

Participants with thyroid conditions indicated that they had recently undergone blood tests to verify their diagnosis. This requirement helped ensure that the study captured individuals with active or monitored conditions. The researchers then administered the Short Dark Tetrad (SD4) questionnaire to all participants.

The SD4 is a 28-item assessment tool designed to measure the four dark traits. Participants rated their agreement with statements such as “It is not wise to let people know your secrets” or “Some people deserve to suffer.” Responses were recorded on a five-point scale.

Machiavellianism was measured by items assessing strategic manipulation and cynicism. Narcissism was evaluated through statements reflecting a sense of entitlement and superiority. Psychopathy was assessed via items focusing on impulsivity and callousness. Sadism was measured by the enjoyment of hurting or dominating others.

The results revealed differences in personality scores between the groups. Individuals in the hyperthyroidism group reported higher scores on all four dark traits compared to those in the hypothyroidism group. This finding points to a distinct divergence in personality profiles based on the type of thyroid dysfunction.

When compared to the healthy control group, the hyperthyroidism group continued to show elevated scores. They reported higher levels of Machiavellianism, psychopathy, and sadism. Narcissism scores were higher than those of the hypothyroidism group but did not statistically differ from the comparison group.

“One notable finding was that associations were observed across multiple dark personality traits rather than being limited to a single dimension,” the researchers told PsyPost. “Drawing on personal experience of coping with chronic hormonal dysregulation, some degree of association was anticipated; however, the breadth of this pattern was unexpected, pointing to a broader involvement of interpersonal and self-related tendencies rather than a trait-specific effect.”

The researchers found no significant differences between the hypothyroidism group and the healthy comparison group. Individuals with underactive thyroids produced personality scores that were statistically similar to those with normal thyroid function. This suggests that the elevation in dark traits is specific to the hyperthyroid state.

The study also examined the relationships between the traits within each group. In the hyperthyroidism group, the four traits were strongly correlated with one another. This indicates a more cohesive or integrated “dark” personality profile among these individuals.

The researchers accounted for demographic variables such as age and sex. Previous research has shown that men generally score higher on dark traits, and these traits tend to decrease with age. The current study confirmed these patterns.

However, even after controlling for age and sex, thyroid status remained a significant predictor of the personality outcomes. This reinforces the idea that the hormonal condition itself contributes uniquely to the expression of these traits.

The authors propose that the physiological mechanisms of hyperthyroidism may drive these psychological outcomes. Elevated thyroid hormones increase metabolic rate and heighten activity in the central nervous system. This state of physiological hyperarousal can lead to emotional instability and reduced impulse control.

Such symptoms align closely with the behavioral manifestations of psychopathy and sadism. The chronic irritability and anxiety associated with hyperthyroidism may exacerbate interpersonal antagonism. Over time, these physiological drivers could shape stable patterns of behavior that are measured as personality traits.

On the other hand, the physiological slowing associated with hypothyroidism might inhibit these traits. The fatigue and emotional blunting common in hypothyroidism do not align with the active hostility or manipulation required for dark traits. This may explain why the hypothyroidism group scored lower than the hyperthyroidism group.

These findings have implications for how clinicians and the public understand behavioral changes in medical patients. The results suggest that hormonal imbalances can affect how individuals think, feel, and interact with others. Recognizing this biological influence may foster greater compassion and understanding in interpersonal relationships.

“The main takeaway is to raise awareness that thyroid hormonal imbalance may affect not only emotional well-being, but also the way people think and process information, as well as how individuals think, feel, and interact with others,” the researchers said. “These patterns are often subtle and may go unrecognized, but they can shape everyday relationships and self-perception. Greater awareness may help individuals and clinicians interpret such changes with more understanding and compassion.”

It is important to note that these effects were observed at the group level. A diagnosis of hyperthyroidism does not imply that a specific individual has a dark personality.

“The effects observed were modest, but statistically significant,” the researchers noted. “Such effects can be practically meaningful when they point to consistent patterns that may accumulate over time or influence everyday functioning. The findings should be interpreted as informative rather than diagnostic.”

“Importantly, the findings should not be interpreted as implying that individuals with thyroid disorders possess problematic or ‘dark’ personalities. The results reflect associations at the group level, not individual character judgments.”

As with all research, there are also limitations to keep in mind. The design was cross-sectional, which means it cannot prove that thyroid dysfunction causes personality changes. It is possible that shared underlying biological factors contribute to both the thyroid condition and the personality traits.

The reliance on self-reported diagnoses is another limitation. Although participants were recruited from patient support groups, the researchers did not verify hormone levels through independent laboratory testing. This introduces a potential for misclassification or inaccuracy regarding the severity of the condition.

The recruitment method may have introduced self-selection bias. Individuals who choose to participate in online studies about health and personality may differ from the general patient population. They might be more health-conscious or more introspective about their psychological state.

Future research aims to address these limitations by using larger, more diverse samples. The authors plan to incorporate objective medical data to confirm diagnoses and hormone levels. Longitudinal studies could also help track how personality traits change over time as thyroid function fluctuates or responds to treatment.

“We are currently extending this work through additional studies that examine thyroid dysfunction in relation to mental health outcomes and personality characteristics,” the researchers said. “To the best of our knowledge, this study represents the first to directly compare individuals with hyperthyroidism, hypothyroidism, and healthy comparison groups, highlighting the importance of examining thyroid-related conditions within a comparative framework rather than focusing on a single clinical group.”

“Developing this concept further, these projects are designed to broaden understanding of how different thyroid-related conditions intersect with psychological functioning in everyday life. Through this ongoing work, we aim to contribute knowledge that can inform research, clinical awareness, and public understanding of this population.”

The study, “Dark personality traits and thyroid dysfunction: a study based on self-reported thyroid hormonal imbalance,” was authored by Or Maimon and Tal Ben Yaacov.

New research links psychopathy to a proclivity for upskirting

31 January 2026 at 03:00

The unauthorized taking of intimate images, a practice often referred to as “upskirting,” has emerged as a distinct form of sexual abuse in the digital age. New research indicates that the likelihood of someone committing this offense, as well as how society judges the victims, is heavily influenced by demographic factors such as age and gender.

The study found that older individuals and men are generally more inclined to blame the victim and less likely to perceive the act as a serious criminal offense. These findings on the psychology behind image-based sexual abuse were published in the journal Sexual Abuse.

As smartphones with high-quality cameras have become ubiquitous, the barriers to committing digital sex crimes have lowered. One such offense is upskirting, which involves positioning a camera underneath a person’s clothing to photograph or film their genitals or buttocks without their consent.

This behavior is often done to obtain sexual gratification or to cause humiliation. While England and Wales formally criminalized this specific act under the Voyeurism (Offences) Act in 2019, legal frameworks around the world remain inconsistent. Some jurisdictions treat it as a breach of privacy rather than a sexual crime, while others lack specific legislation altogether.

To better address this issue, it is necessary to understand the psychological motivations of the perpetrators and the societal attitudes that might minimize the harm caused to victims. Dean Fido, a psychologist at the University of Derby, led a research team to investigate these factors.

Fido and his colleagues, Craig A. Harper, Simon Duff, and Thomas E. Page, aimed to identify which personality traits predict a willingness to commit upskirting. They also sought to determine if the physical characteristics of the victim affect how the public judges the severity of the crime.

The researchers recruited 490 participants from the United Kingdom to complete an online study. To assess social judgments, the team presented participants with a written vignette describing a fictional scenario at a spa.

In the story, a character named Taylor is relaxing on a poolside lounger. Taylor notices another character, named Ashley, lying on a lounger opposite. Taylor observes that Ashley’s robe has parted, revealing their genitals. Without Ashley noticing, Taylor uses a mobile phone to take a photograph of Ashley’s private area before leaving the premises.

The researchers manipulated the details of this story to create four different versions. In some versions, the victim, Ashley, was described as a woman, while in others, Ashley was a man.

Additionally, the researchers included a photograph of “Ashley” to manipulate perceived attractiveness. These photos depicted either an attractive or unattractive individual, based on ratings from previous psychological datasets. After reading the assigned scenario, participants answered questions about how much blame Ashley deserved, whether police intervention was necessary, and how much harm the incident caused.

The results revealed a distinct double standard regarding the gender of the victim. When the victim in the scenario was a woman, participants assigned significantly less blame compared to when the victim was a man.

Participants were also more likely to believe that the police should be involved and that the victim would suffer harm if the target was female. This aligns with broader patterns in society where sexual violence is often viewed primarily as a crime against women. The victimization of men in this context was viewed with less severity.

Physical appearance also influenced these judgments, particularly for male victims. The study found that when the male victim was depicted as attractive, participants perceived the lowest levels of victim harm. This suggests a specific bias where attractive men are less likely to be viewed as vulnerable or traumatized by non-consensual sexual attention.

For female victims, attractiveness did not play a statistically significant role in how much blame was assigned, contradicting some historical research suggesting attractive women are often blamed more for sexual victimization.

One of the strongest predictors of social attitudes was the age of the participant. The data showed that older participants consistently held more negative views toward the victim than younger participants did. Regardless of the victim’s gender or attractiveness, older respondents assigned more blame to the person who was photographed. They also perceived the act as less criminal and believed it caused less harm than their younger counterparts.

The researchers suggest this generational divide may stem from differences in technological familiarity. Younger generations, who have grown up with the internet and smartphones, may be more acutely aware of the permanence and reach of digital images. They may perceive the violation of digital privacy as a more profound threat. Conversely, older individuals might view the scenario through a different lens, potentially minimizing the severity of an act that does not involve physical contact.

Beyond judging the scenario, participants were asked about their own potential behavior. The survey included a question measuring proclivity, or willingness, to commit the crime. Participants were asked how likely they would be to take intimate pictures of an attractive person if they were guaranteed not to get caught. To understand who might answer “yes” to this question, the researchers administered standard psychological questionnaires measuring the “Dark Tetrad” of personality traits.

The Dark Tetrad consists of four distinct but related personality traits: narcissism, Machiavellianism, psychopathy, and sadism. Narcissism involves a sense of entitlement and grandiosity. Machiavellianism is characterized by manipulation and a focus on self-interest. Psychopathy involves a lack of empathy and high impulsivity. Sadism is the enjoyment of inflicting cruelty or suffering on others.

The study found that a willingness to engage in upskirting was not randomly distributed. Men were more likely to express a proclivity for the behavior than women.

Additionally, participants who admitted to past voyeuristic behaviors—such as secretly watching people undress—were more likely to say they would commit upskirting. Among the personality traits, higher levels of psychopathy emerged as a primary predictor. Individuals scoring high in psychopathy were more likely to endorse taking the non-consensual photos.

This connection to psychopathy makes theoretical sense. Upskirting requires a person to violate social norms and the rights of another person for immediate gratification, often without concern for the distress it causes the victim.

This aligns with the callousness and lack of empathy central to psychopathy. The researchers also noted that older age predicted a higher self-reported likelihood of committing the act, which mirrors the finding that older participants viewed the act as less criminal.

The study also measured “belief in a just world,” which is the psychological tendency to believe that people get what they deserve. In many studies on sexual violence, a strong belief in a just world correlates with victim blaming.

In this study, however, those with a stronger belief in a just world were less likely to express a willingness to commit upskirting. This suggests that for this specific crime, a belief in moral fairness might act as a deterrent against perpetration, even if it does not always prevent victim blaming.

There are limitations to this research that provide important context. The sample was drawn exclusively from the United Kingdom, meaning the results reflect British cultural and legal norms. Attitudes might differ in countries with different laws regarding privacy and sexual offenses. Additionally, the study relied on a single specific scenario in a spa. Upskirting frequently occurs in public spaces like public transit or escalators, and public perceptions might shift depending on the setting.

The measurement of proclivity relied on self-reports. Participants had to admit they might commit a crime, which can lead to underreporting due to social desirability bias. However, the anonymity of the online survey format was designed to encourage honest responses. The researchers also point out that while they found statistical links, they cannot definitively say one factor causes another, only that they are related.

Despite these caveats, the findings have implications for the legal and justice systems. The observation that older individuals are more likely to minimize the harm of upskirting and blame the victim is relevant for jury selection and judicial training. If older jurors or judges hold implicit biases that view this form of abuse as trivial, it could affect the outcomes of trials and the sentences handed down to offenders.

For mental health practitioners, the strong link between voyeurism and upskirting provides a pathway for intervention. Therapists working with individuals who have committed these offenses might focus on addressing underlying voyeuristic compulsions and deficits in empathy associated with psychopathic traits. Treating upskirting not just as a privacy violation but as a manifestation of voyeuristic disorder could lead to more effective rehabilitation strategies.

The study, “Understanding Social Judgments of and Proclivities to Commit Upskirting,” was authored by Dean Fido, Craig A. Harper, Simon Duff, and Thomas E. Page.

Alcohol triggers unique activity in amygdala neurons

31 January 2026 at 01:00

A study on mice identified a group of neurons in the central amygdala region of the brain that display a unique pattern of increased activity during voluntary alcohol consumption. While these neurons also responded to other fluids, their activity was significantly higher when mice drank alcohol compared to when they drank sucrose or water. This unique response did not diminish over time. The paper was published in Progress in Neuro-Psychopharmacology and Biological Psychiatry.

Alcohol use disorder is a chronic condition characterized by a problematic pattern of alcohol consumption that leads to significant distress or impairment in daily functioning. Despite treatment, relapses are frequent. Estimates suggest that around 30 million people in the U.S. alone are affected by it, which is around 9% of the population.

People with alcohol use disorder tend to have difficulty controlling how much they drink or how often they drink. They tend to continue drinking despite negative consequences. Common symptoms of this disorder include tolerance, withdrawal symptoms, and spending a great deal of time obtaining, using, or recovering from alcohol.

Excessive alcohol drinking, characteristic of alcohol use disorder, increases the risk of liver disease, cardiovascular problems, and certain cancers. It also has substantial psychological and social consequences, including depression, anxiety, family conflict, and work-related difficulties.

Study author Christina L. Lebonville and her colleagues note that studies of rodents have revealed that the central amygdala is a key region of the brain for alcohol drinking behaviors, particularly in alcohol dependence. This region contains three groups of neurons (sub-nuclei) that differ in the type of neuropeptide they express.

Neuropeptides are small protein-like molecules that neurons use to communicate with each other and to regulate various functions of the body. Unlike neurotransmitters, neuropeptides are released more slowly and they act over a longer time span.

One of these groups of neurons produces dynorphin, a neuropeptide involved in stress, pain, and negative emotional states. These cells are called dynorphin-expressing, or CeADyn, neurons.

Previous studies implicated their activity in excessive alcohol drinking both during acute and chronic alcohol exposure. They also showed that CeADyn neurons regulate both binge alcohol drinking and drinking enhanced by stress in individuals with alcohol dependence. The disruption of their activity reduced alcohol drinking.

This study was conducted on 35 prodynorphin-Cre mice, genetically engineered animals that allow researchers to selectively label, monitor, and manipulate CeADyn neurons. The mice were 8–17 weeks of age at the start of the experiment. They had free access to food throughout the experiment and free access to water outside experimental drinking sessions.

The study authors performed surgery on these mice, injecting a virus into the central amygdala. The virus delivered genetic instructions so that a fluorescent calcium sensor was expressed in their CeADyn neurons, enabling the authors to measure the neurons’ activity. At the same time, they implanted a small optical fiber above this region, allowing them to record neural activity through light signals (fiber photometry).

After recovery from surgery, mice were given access to different solutions for 2 hours per day, 5–6 days per week. In the first experiment, mice had access to 20% alcohol for three weeks, water for two weeks, and 0.5% sucrose for three weeks.

In the second experiment, mice first had access to solutions with different quinine concentrations, followed by water, water after 24 hours of water deprivation, a combination of 0.5% sucrose and low quinine concentrations, and 0.5% sucrose with high quinine concentrations. The study authors recorded the brain activity of the mice during these periods.

Results showed strong increases in CeADyn neuron activity after bouts of alcohol drinking compared to sucrose or water drinking. Behaviors specific to alcohol drinking, such as longer bout durations, did not fully explain why these neurons were more active when mice were drinking alcohol than when they were drinking something else.

“No other conditions or solutions tested reproduced the pronounced change in CeADyn activity associated with alcohol drinking. These findings support the presence of a unique functional signature for alcohol in a cell population known to control excessive alcohol drinking and further advance fiber photometric normalization and analytical methods,” the study authors concluded.

The study contributes to the scientific understanding of the neural underpinnings of alcohol drinking behaviors. However, it should be noted that this study was done on mice, not on humans. While humans and mice share many physiological characteristics, they are still very different species. Findings in humans may differ.

The paper, “Alcohol drinking is associated with greater calcium activity in mouse central amygdala dynorphin-expressing neurons,” was authored by Christina L. Lebonville, Jennifer A. Rinker, Krysten O’Hara, Christopher S. McMahan, Michaela Hoffman, Howard C. Becker, and Patrick J. Mulholland.

Cannabidiol prevents Alzheimer’s-like cognitive decline in new rat study

30 January 2026 at 23:00

A compound found in cannabis may help protect the brain from early memory and social problems linked to Alzheimer’s disease. A new animal study published in Neuropsychopharmacology found that cannabidiol prevented cognitive decline in rats by reducing brain inflammation and activating key brain receptors.

Alzheimer’s disease is a progressive brain disorder best known for causing memory loss, but it also affects thinking, decision-making, and social engagement. Scientists increasingly recognize that inflammation in the brain plays a major role in driving these symptoms, especially in the early stages of the disease.

Cannabidiol is a chemical compound extracted from the cannabis plant that does not cause a “high.” In recent years, it has gained attention for its potential anti-inflammatory and neuroprotective properties. While cannabidiol is already used in some medical treatments, its possible role in preventing or slowing Alzheimer’s disease remains under investigation.

Roni Shira Toledano and Irit Akirav from the University of Haifa, Israel, wanted to explore whether cannabidiol could stop Alzheimer’s-like symptoms from developing in the first place, rather than trying to reverse damage after it occurs. They were particularly interested in the role of type 1 cannabinoid receptors, which are found throughout the brain and are involved in memory, learning, and inflammation control.

To test this, the scientists conducted experiments using male rats. The rats were injected with a substance known as streptozotocin, which triggers brain changes similar to those seen in Alzheimer’s disease, including amyloid β-protein accumulation and tau phosphorylation. Some of the rats then received regular doses of cannabidiol, while others did not.

The researchers monitored the animals’ behavior using standard tests of memory, learning, and social interaction. They also examined brain tissue to measure levels of inflammation and to determine whether type 1 cannabinoid receptors were involved in cannabidiol’s effects.

The results revealed that the rats that did not receive cannabidiol showed clear memory problems and reduced interest in social interaction—behaviors commonly seen in Alzheimer’s disease. In contrast, rats treated with cannabidiol performed normally on memory tasks and continued to interact socially with other rats.

Brain analysis revealed that cannabidiol-treated rats had lower levels of inflammation compared to untreated rats. When researchers blocked type 1 cannabinoid receptors using a different substance, many of cannabidiol’s protective effects disappeared, suggesting that these receptors play an important role in how cannabidiol protects the brain.

The findings suggest that cannabidiol may help prevent cognitive and social decline by calming inflammation in the brain and supporting normal brain signaling. The researchers emphasize that cannabidiol did not simply mask symptoms, but appeared to prevent damage from developing in the first place.

“As current Alzheimer’s disease treatments are limited, our study highlights cannabidiol as a promising candidate, demonstrating for the first time that a low dose can prevent behavioral and molecular deficits in a rodent model of [the disease],” the authors concluded.

However, the study has important limitations. It was conducted only in male rats, and animal models do not perfectly replicate human Alzheimer’s disease. Additionally, the study focused on early-stage changes rather than long-term disease progression.

The study, “Cannabidiol prevents cognitive and social deficits in a male rat model of Alzheimer’s disease through CB1 activation and inflammation modulation,” was authored by Roni Shira Toledano and Irit Akirav.

Genetic risk for depression maps to specific structural brain changes

30 January 2026 at 21:00

A new comprehensive analysis has revealed that major depressive disorder alters both the physical architecture and the electrical activity of the brain in the same specific regions. By mapping these overlapping changes, researchers identified a distinct set of genes that likely drives these abnormalities during early brain development. The detailed results of this investigation were published in the Journal of Affective Disorders.

Major depressive disorder is a pervasive mental health condition that affects millions of people globally. It is characterized by persistent low mood and a loss of interest in daily activities. Patients often experience difficulties with cognitive function and emotional regulation.

While the symptoms are psychological, the condition is rooted in biological changes within the brain. Researchers have sought to understand the physical mechanisms behind the disorder for decades. The goal is to move beyond symptom management toward treatments that address the root biological causes.

Most previous research has looked at brain changes in isolation. Some studies use structural magnetic resonance imaging to measure the volume of gray matter. This tissue contains the cell bodies of neurons. A reduction in gray matter volume typically suggests a loss of neurons or a shrinkage of connections between them.

Other studies use functional magnetic resonance imaging. This technique measures blood flow to track brain activity. It looks at how well different brain regions synchronize their firing patterns or the intensity of their activity while the person is resting.

Results from these single-method studies have often been inconsistent. One study might find a problem in the frontal lobe, while another points to the temporal lobe. It has been difficult to know if structural damage causes functional problems or if they occur independently. Additionally, scientists know that genetics play a large role in depression risk. However, it remains unclear how specific genetic variations translate into the physical brain changes seen in patients.

To bridge this gap, a team of researchers led by Ying Zhai, Jinglei Xu, and Zhihui Zhang from Tianjin Medical University General Hospital conducted a large-scale study. They aimed to integrate data on brain structure, brain function, and genetics. Their primary objective was to find regions where structural and functional abnormalities overlap. They also sought to identify which genes might be responsible for these simultaneous changes.

The research team began by conducting a meta-analysis. This is a statistical method that combines data from many previous studies to find patterns that are too subtle for a single study to detect. They gathered data from 89 independent studies.

These included over 3,000 patients with major depressive disorder and a similar number of healthy control subjects for the structural analysis. The functional analysis included over 2,000 patients and controls. The researchers used a technique called voxel-wise analysis. This divides the brain into thousands of tiny three-dimensional cubes to pinpoint exactly where changes occur.

The team looked for three specific markers. First, they examined gray matter volume to assess physical structure. Second, they looked at regional homogeneity. This measures how synchronized a brain region is with its immediate neighbors. Third, they analyzed the amplitude of low-frequency fluctuations. This indicates the intensity of spontaneous brain activity. By combining these metrics, the researchers created a detailed map of the “depressed brain.”

The analysis revealed widespread disruptions. The researchers found that patients with depression consistently showed reduced gray matter volume in several key areas. These included the median cingulate cortex, the insula, and the superior temporal gyrus. These regions are essential for processing emotions and sensing the body’s internal state.

The functional data showed a more complicated picture. In some areas, brain activity was lower than normal. In others, it was higher. The researchers then overlaid the structural and functional maps to find the convergence points. This multimodal analysis uncovered two distinct patterns of overlap.

The first pattern involved regions that showed both physical shrinkage and reduced functional activity. This “double hit” was observed primarily in the median cingulate cortex and the insula. The insula helps the brain interpret bodily sensations, such as heartbeat or hunger, and links them to emotions. A failure in this region could explain why depressed patients often feel physically lethargic or disconnected from their bodies. The reduced activity and volume suggest a breakdown in the neural circuits responsible for emotional and sensory integration.

The second pattern was unexpected. Some regions showed reduced gray matter volume but increased functional activity. This occurred in the anterior cingulate cortex and parts of the frontal lobe. These areas are involved in self-reflection and identifying errors. The researchers suggest this hyperactivity might be a form of compensation.

The brain may be working harder to maintain normal function despite physical deterioration. Alternatively, this high activity could represent neural noise or inefficient processing. This might contribute to the persistent rumination and negative self-focus that many patients experience.

After mapping these brain regions, the researchers investigated the genetic underpinnings. They used a large database of genetic information from over 170,000 depression cases. They applied a method called H-MAGMA to prioritize genes associated with the disorder. They identified 1,604 genes linked to depression risk. The team then used the Allen Human Brain Atlas to see where these genes are expressed in the human brain. This atlas maps gene activity across different brain tissues.

The team looked for a spatial correlation. They wanted to know if the depression-linked genes were most active in the same brain regions that showed structural and functional damage. The analysis was successful. They identified 279 genes that were spatially linked to the overlapping brain abnormalities. These genes were not randomly distributed. They were highly expressed in the specific areas where the researchers had found the “double hit” of shrinkage and altered activity.

The researchers then performed an enrichment analysis to understand what these 279 genes do. The results pointed toward biological processes that happen very early in life. The genes were heavily involved in the development of the nervous system. They play roles in neuron projection guidance, which is how neurons extend their fibers to connect with targets. They are also involved in synaptic signaling, the process by which neurons communicate.

The study also looked at when these genes are most active. The data showed that these genes are highly expressed during fetal development. They are particularly active in the cortex and hippocampus during the middle to late fetal stages. This suggests that the vulnerability to depression may be established long before birth. Disruptions in these genes during critical developmental windows could lead to the structural weak points identified in the MRI scans.

The researchers also examined which types of cells use these genes. They found that the genes were predominantly expressed in specific types of neurons in the cortex and striatum. This includes neurons that use dopamine, a chemical messenger vital for motivation and pleasure. This connects the genetic findings to the known symptoms of depression, such as anhedonia, or the inability to feel pleasure.

There are limitations to this study that should be noted. The meta-analysis relied on coordinates reported in previous papers rather than raw brain scans. This can slightly reduce the precision of the location data. Additionally, the gene expression data came from the Allen Human Brain Atlas, which is based on healthy adult brains. It does not reflect how gene expression might change in a depressed brain.

The study was also cross-sectional. This means it looked at a snapshot of patients at one point in time. It cannot prove that the brain shrinkage caused the depression or vice versa. The researchers also noted that demographic factors like age and sex influence brain structure. While they controlled for these variables statistically, future research should look at how these patterns differ between men and women or across different age groups.

Future research will need to verify these findings using longitudinal data. Scientists need to track individuals over time to see how gene expression interacts with environmental stressors to reshape the brain. The team suggests that future studies should also incorporate environmental data. Factors such as inflammation or stress exposure could modify how these risk genes affect brain structure.

This study represents a step forward in integrating different types of biological data. It moves beyond viewing depression as just a chemical imbalance or a structural deficit. Instead, it presents a cohesive model where genetic risks during development lead to specific structural and functional vulnerabilities. These physical changes then manifest as the emotional and cognitive symptoms of depression.

The study, “Neuroimaging-genetic integration reveals shared structural and functional brain alterations in major depressive disorder,” was authored by Ying Zhai, Jinglei Xu, Zhihui Zhang, Yue Wu, Qian Wu, Minghuan Lei, Haolin Wang, Qi An, Wenjie Cai, Shen Li, Quan Zhang, and Feng Liu.

Novel essential oil blend may enhance memory and alertness

30 January 2026 at 19:00

A recent study provides evidence that inhaling a specific blend of essential oils may improve cognitive performance in healthy adults. The research indicates that while this aromatic blend increases brain metabolism during mental tasks, these physiological changes do not directly explain the observed boost in memory and attention. These findings were published in the scientific journal Human Psychopharmacology: Clinical and Experimental.

The use of essential oils for psychological well-being is a practice with a long history, yet scientific validation for these effects varies across different substances. Previous investigations have identified that the aromas of single oils, such as rosemary and sage, appear to support memory retention and alertness. However, the practice of aromatherapy frequently relies on the blending of multiple oils to create potential synergistic effects.

Despite the popularity of these blends, the efficacy of combining oils has received limited empirical attention compared to single extracts. The creators of the “Genius” blend formulated it based on the purported cognitive benefits of ingredients like frankincense, cardamom, and patchouli. The researchers aimed to determine if this complex mixture could outperform a single oil known for its positive effects.

“I have been interested in natural interventions to deliver cognitive enhancement for 30 years. For around 20 years, I have been looking at the effects of the aromas of essential oils on aspects of human behaviour including cognition, mood and stress,” said study author Mark Moss, a professor and member of the Brain Performance and Nutrition Research Centre at Northumbria University.

“Essential oils and aromas have been used in society since before the beginning of written records but the scientific investigation of their effects is lacking. I have an interest in conducting high quality research that can deliver reliable and valid findings in this area.”

The scientific team also sought to move beyond subjective reports and behavioral scores. A primary goal was to explore the biological mechanisms that might underpin these effects. Specifically, they investigated whether the inhalation of these aromas influences brain metabolism by measuring blood oxygenation levels during the performance of demanding mental tasks.

The study involved ninety healthy adult participants who were pseudo-randomly assigned to one of three experimental conditions. To ensure a balanced sample, the groups were matched for gender and age. One group was exposed to the aroma of the Genius essential oil blend, which includes patchouli, neroli, grapefruit, cardamom, frankincense, spikenard, rosemary, and lemongrass.

A second group was exposed to the aroma of sage essential oil to serve as a positive control, given its established reputation for cognitive enhancement. A third group sat in an environment with no added aroma to function as a standard control comparison. The study utilized a double-blind design where neither the researchers administering the tests nor the participants knew which aroma condition was active.

Participants completed a battery of computerized cognitive assessments designed to measure memory, attention, and computational skills. These tasks included word recall, where participants had to remember a list of words, and serial subtraction, which required them to repeatedly subtract specific numbers from a starting figure. Other tasks involved sequence memory challenges known as Corsi blocks.

While performing these mental exercises, participants wore a headband equipped with near-infrared spectroscopy technology. This non-invasive device projected light through the skull to measure changes in oxygenated and deoxygenated hemoglobin in the prefrontal cortex. This provided the researchers with real-time data on brain metabolism and oxygen utilization.

Following the completion of the cognitive battery, the participants rated their current mood states. They specifically evaluated their levels of alertness and mental fatigue on visual analogue scales. This allowed the researchers to correlate subjective feelings of well-being with objective performance metrics.

The data analysis revealed significant improvements in performance for the group exposed to the Genius blend compared to the no-aroma control. These improvements were particularly notable in tasks requiring memory and executive function. For instance, participants in the blend condition performed better on word recall and numeric working memory tasks.

The blend also demonstrated superior effects compared to the sage essential oil condition in several performance metrics. This provides some evidence supporting the theory of synergy, where the combined effect of multiple oils may exceed the impact of a single component. The magnitude of the improvement was considered statistically significant.

Regarding subjective experience, participants in the Genius condition reported feeling significantly more alert by the end of the testing session. Perhaps most notably, they reported feeling significantly less fatigued than those in the control group. This buffering against mental exhaustion suggests that the aroma may help maintain stamina during cognitive exertion.

The physiological data gathered via the spectroscopy headbands showed that both aroma conditions led to increased oxygen extraction in the brain during tasks. The level of deoxygenated hemoglobin was significantly higher in the Genius aroma condition compared to the control. This indicates that the brain was extracting and utilizing more oxygen from the blood while the participants were inhaling the blend.

Despite these clear physiological changes, the researchers found no statistical correlation between the increased brain metabolism and the improved cognitive scores. The participants who showed the greatest increase in oxygen utilization were not necessarily the ones who performed best on the tests. This disconnect suggests that while the aroma increases brain energy usage, this mechanism does not directly account for the better test results.

The lack of correlation implies that other mechanisms may be driving the cognitive improvements. One possibility is a pharmacological effect, where chemical compounds from the oils are absorbed into the bloodstream through the lungs and cross into the brain. Another potential pathway is direct stimulation of the olfactory bulb, which has neural connections to brain areas involved in memory and emotion.

“The overall message is that aromas of essential oils can provide cheap, safe and accessible options for personal benefit,” Moss told PsyPost. “Inhalation of the ambient aroma of the essential oils we employed here (pure sage and a blend of oils) can positively affect cognition and mood, although only to a relatively small degree.

“Interestingly the reasons why these effects occur are not well understood at this time, and this study looked at one particular possibility. The brain uses a lot of energy when we apply it to completing tests of memory and similar. It is possible that breathing aromas could help the brain in delivering more energy to the tasks in hand. Although we found that increased energy production appeared to take place this was not related to levels of performance on the tasks. Other possible explanations are still to be tested in depth.”

The study, like all research, includes some caveats. The method of delivering the aroma involved a diffuser in a testing cubicle, which means the exact dose inhaled by each participant could vary based on their breathing patterns. This lack of standardization makes it difficult to establish precise dose-response relationships.

Additionally, the study focused on acute effects observed during a single session. It remains unknown whether these benefits would persist with long-term use or if users might develop a tolerance to the aromas.

“Next steps include finding good ways to standardise aroma delivery,” Moss explained. “Currently, it is all rather vague as people breathe at different rates and to different depths. It is hard to know exactly how much aroma is being delivered and this would be very useful to enable dose-response relationships to be identified. I am generally interested in continuing to apply scientific method to investigate effects that often exist as received wisdom.”

The researchers add that while essential oils offer a safe and accessible option for personal benefit, they function best as a complementary aid rather than a standalone medical treatment.

“The effects of aromas are generally relatively small, but beneficial. Don’t over interpret the findings of aroma research,” Moss said. “Aromas are not a panacea. They can be beneficial, generally within a framework of general healthy living. They can be beneficial in healthcare as part of an integrated healthcare system.”

The study, “Aroma of Genius Essential Oil Blend Significantly Enhances Cognitive Performance and Brain Metabolism in Healthy Adults,” was authored by Mark Moss, Jake Howarth, and Holly Moss.

A dream-like psychedelic might help traumatized veterans reset their brains

30 January 2026 at 17:00

A new study suggests that the intensity of spiritual or “mystical” moments felt during psychedelic treatment may predict how well veterans recover from trauma symptoms. Researchers found that soldiers who reported profound feelings of unity and sacredness while taking ibogaine experienced lasting relief from post-traumatic stress disorder. These findings were published in the Journal of Affective Disorders.

For decades, medical professionals have sought better ways to assist military personnel returning from combat. Many veterans suffer from post-traumatic stress disorder, or PTSD, as well as traumatic brain injuries caused by repeated exposure to blasts. These conditions often occur together and can be resistant to standard pharmaceutical treatments. The lack of effective options has led some researchers to investigate alternative therapies derived from natural sources.

One such substance is ibogaine. This psychoactive compound comes from the root bark of the Tabernanthe iboga shrub, which is native to Central Africa. Cultures in that region have used the plant for centuries in healing and spiritual ceremonies. In recent years, it has gained attention in the West for its potential to treat addiction and psychiatric distress. Unlike some other psychedelics, ibogaine often induces a dream-like state where users review their memories.

Despite anecdotal reports of success, the scientific community still has a limited understanding of how ibogaine works in the human brain. Most prior research has focused on substances such as psilocybin or MDMA. The specific psychological mechanisms that might allow ibogaine to alleviate trauma symptoms remain largely unexplored.

Randi E. Brown, a researcher at the Stanford University School of Medicine and the VA Palo Alto Health Care System, led a team to investigate this question. They worked in collaboration with Nolan R. Williams and other specialists in psychiatry and behavioral sciences. The team sought to determine if the subjective quality of the drug experience mattered for recovery. They hypothesized that a “mystical experience” might be a key driver of therapeutic change.

The concept of a mystical experience in psychology is specific and measurable. It refers to a sensation of unity with the universe, a transcendence of time and space, and deeply felt peace or joy. It also includes a quality known as ineffability, meaning the experience is too profound to be described in words. The researchers wanted to know if veterans who felt these sensations more strongly would see better clinical results.

The study analyzed data from thirty male Special Operations Veterans. All participants had a history of traumatic brain injury and combat exposure. Because ibogaine is not approved for medical use in the United States, the veterans traveled to a clinic in Mexico for the treatment. This setup allowed the researchers to observe the effects of the drug in a clinical setting outside the U.S.

The treatment protocol involved a single administration of the drug. The medical staff combined ibogaine with magnesium sulfate. This addition is intended to protect the heart, as ibogaine can sometimes disrupt cardiac rhythms. The veterans received the medication orally after a period of fasting. They spent the session lying down with eyeshades, generally experiencing the effects internally rather than interacting with others.

To measure the psychological impact of the session, the researchers administered the Mystical Experiences Questionnaire. This survey asks participants to rate the intensity of various feelings, such as awe or a sense of sacredness. The researchers collected these scores immediately after the treatment concluded.

The team also assessed the veterans’ PTSD severity using a standardized clinical interview. They took these measurements before the treatment, immediately after, and again one month later. This allowed them to track changes in symptom severity over time. Additionally, the researchers used electroencephalography, or EEG, to record electrical activity in the brain.

The analysis revealed a clear statistical association between the survey responses and the clinical outcomes. Veterans who reported more intense mystical experiences showed larger reductions in PTSD severity. This pattern held true immediately after the treatment. It also persisted when the researchers checked on the participants one month later.

The researchers observed similar trends for other mental health measures. Higher scores on the mystical experience survey correlated with greater improvements in depression and anxiety. These findings align with previous research on other psychedelics, such as psilocybin, which has linked spiritual breakthroughs to improved mental health.

The study also identified changes in brain physiology. The researchers focused on a specific brain wave measurement called peak alpha frequency. This measurement reflects the speed of the brain’s electrical cycles when a person is resting but awake. High arousal states, often seen in PTSD, can be linked to faster alpha frequencies.
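The study’s EEG pipeline is not described in detail here, but peak alpha frequency is commonly estimated by locating the largest spectral peak within the alpha band of a resting recording. A minimal Python sketch under that common approach, with an assumed sampling rate and the conventional band limits:

```python
# Minimal sketch: estimating peak alpha frequency (PAF) from one EEG
# channel with Welch's method. Sampling rate and band limits are common
# defaults, not necessarily those used in the study.
import numpy as np
from scipy.signal import welch

FS = 250.0                # sampling rate in Hz (assumed)
ALPHA_BAND = (8.0, 13.0)  # conventional alpha range

def peak_alpha_frequency(eeg_channel):
    """Return the frequency (Hz) of the largest spectral peak in the alpha band."""
    freqs, psd = welch(eeg_channel, fs=FS, nperseg=int(4 * FS))
    in_band = (freqs >= ALPHA_BAND[0]) & (freqs <= ALPHA_BAND[1])
    return freqs[in_band][np.argmax(psd[in_band])]

# Synthetic demo: a 10 Hz rhythm buried in noise should yield roughly 10 Hz
rng = np.random.default_rng(0)
t = np.arange(0, 60, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + rng.normal(size=t.size)
print(f"Estimated PAF: {peak_alpha_frequency(signal):.2f} Hz")
```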

The data showed that more intense mystical experiences were associated with a slowing of this alpha frequency one month after treatment. This reduction suggests a shift away from the hyper-aroused state that characterizes trauma. The brain appeared to move toward a more relaxed mode of functioning.

This physiological change supports the idea that the treatment effects are biological and not just psychological. The slowing of brain rhythms may represent a lasting neural adaptation. It implies that the intense subjective experience of the drug might trigger neuroplastic changes that help the brain reset.

Brown and her colleagues suggest that the “ego death” often reported during mystical experiences may play a role. This phenomenon involves a temporary loss of the sense of self. It may allow individuals to detach from rigid, negative beliefs about themselves formed during trauma. When the sense of self returns, it may do so without the heavy burden of past guilt or fear.

The authors noted several limitations to their work. The study used an open-label design, meaning there was no placebo group for comparison. All participants knew they were receiving ibogaine. It is possible that their expectation of healing contributed to the positive results.

The sample size was also relatively small, consisting of only thirty individuals. Furthermore, the group was entirely male and composed of Special Operations Veterans. This specific demographic means the results may not apply to women or the general public. The unique training and resilience of these veterans might influence how they respond to such treatments.

The researchers also pointed out that the study relies on correlation. While the link between mystical experiences and recovery is strong, it does not prove causation. It is possible that a third, unmeasured factor causes both the mystical experience and the symptom improvement.

Despite these caveats, the research provides a foundation for future investigation. The authors recommend that subsequent studies use randomized, controlled designs to verify these effects. They also suggest exploring whether these psychological and physiological changes endure beyond the one-month mark.

Future research could also investigate the role of psychotherapy combined with the drug. In this study, the veterans received coaching but not intensive therapy during the dosing session. Combining the biological reset of ibogaine with structured psychological support might enhance the benefits.

This study adds to a growing body of evidence supporting the potential of psychedelic therapies. It highlights the importance of the subjective experience in the healing process. For veterans struggling with the aftermath of war, these findings offer a preliminary hope that treatments addressing both the brain and the spirit may offer relief.

The study, “Mystical experiences during magnesium-Ibogaine are associated with improvements in PTSD symptoms in veterans,” was authored by Randi E. Brown, Jennifer I. Lissemore, Kenneth F. Shinozuka, John P. Coetzee, Afik Faerman, Clayton A. Olash, Andrew D. Geoly, Derrick M. Buchanan, Kirsten N. Cherian, Anna Chaiken, Ahmed Shamma, Malvika Sridhar, Saron A. Hunegnaw, Noriah D. Johnson, Camarin E. Rolle, Maheen M. Adamson, and Nolan R. Williams.

Fathers’ boredom proneness associated with their children’s ADHD tendencies

30 January 2026 at 15:00

New research suggests that the psychological traits of mothers and fathers may influence their children’s attention-deficit/hyperactivity disorder tendencies and boredom levels in distinct ways. The findings indicate that while genetic predispositions play a significant role, specific parenting styles, such as maternal control, could help manage boredom in young children. This study was published in Scientific Reports.

Psychological research has long established a connection between high levels of boredom and various negative behavioral outcomes. Frequent boredom is often linked to issues such as pathological gambling, substance abuse, and problematic internet use. Despite these known risks, science has not fully explained the developmental mechanisms behind boredom or how it might be regulated during childhood.

Attention-deficit/hyperactivity disorder, or ADHD, frequently co-occurs with a high susceptibility to boredom. Both conditions share characteristics such as impulsivity and difficulty maintaining attention.

While ADHD is generally viewed as a neurodevelopmental trait with a strong genetic component, boredom proneness may be more responsive to environmental factors. This distinction suggests that the family environment could play a significant role in shaping how children experience and manage boredom.

“Although high boredom proneness is associated with various maladaptive behaviors, little is known about its developmental mechanisms or how such behaviors can be regulated,” said study author Izumi Uehara of Ochanomizu University.

“Given evidence that children with ADHD experience heightened boredom, and that boredom proneness may be more environmentally malleable than ADHD symptoms, this study examined how parental ADHD tendencies, boredom proneness, and parenting styles relate to children’s ADHD tendencies and boredom proneness. This work represents an initial step toward understanding how early environmental factors may shape children’s capacity to regulate boredom-related behaviors.”

Most prior research on these topics in Japan has also focused almost exclusively on mothers. This focus has left a gap in understanding how fathers contribute to these developmental patterns.

The researchers aimed to address this by examining how the traits of both parents associate with their children’s behaviors. They sought to understand if the biological traits of parents or their parenting styles were stronger predictors of a child’s tendencies.

The research team recruited participants through an internet survey company. They specifically targeted families with children in the first through third grades of elementary school. This age range is considered a critical period for the emergence of academic and social habits. The final analysis included data from 301 pairs of parents, consisting of both a mother and a father, along with information about one child per couple.

The participants were predominantly from the middle class. Most parents were in their 30s or 40s. The researchers sent questionnaire packets via postal mail to families where both parents agreed to participate. This ensured that the data reflected the perspectives of both parental figures regarding the same child.

Parents completed several standardized psychological questionnaires. They rated their own tendencies toward ADHD using the Adult ADHD Self-Report Scale. They also assessed their own susceptibility to boredom using the Boredom Proneness Scale.

In addition, parents reported on their parenting styles using the Japanese Parenting Style Scale. This scale divides parenting into two main dimensions: responsiveness and control. Responsiveness refers to emotional warmth and support. Control refers to discipline and the regulation of behavior.

For the children, parents provided ratings using a standard ADHD rating scale designed for young children. Because there is no widely accepted scale for boredom proneness in this age group, parents rated their child’s daily boredom levels on a single-item scale. Parents also provided an assessment of their child’s academic performance.

The researchers found that ADHD tendencies and boredom proneness were closely linked within families. Parents who reported higher levels of ADHD traits also tended to report higher susceptibility to boredom. This pattern of overlapping traits was mirrored in their children. However, the study found distinct differences in how mothers and fathers appeared to influence their offspring.

Unexpectedly, a child’s ADHD traits were best predicted by a combination of the father’s ADHD tendencies and the father’s proneness to boredom. This suggests that a father’s susceptibility to boredom may have a unique association with the development of attention difficulties in his children.

“One surprising finding was that, despite mothers spending more time on childcare on average in Japan, children’s traits were specifically associated with fathers’ characteristics—most notably, a significant association between fathers’ boredom proneness and children’s ADHD tendencies,” Uehara told PsyPost.

The researchers also examined how parenting behaviors interacted with these biological traits. They utilized statistical regression models to determine which factors remained significant when all variables were considered. For the majority of children, parental traits were the primary predictors. However, a different pattern emerged for a subgroup of children who exhibited the highest levels of ADHD tendencies.
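As a rough illustration of this kind of analysis, the sketch below fits an ordinary least squares regression predicting a child’s ADHD score from several parental traits at once. The variable names and synthetic data are hypothetical and do not reproduce the study’s measures or results.

```python
# Illustrative OLS regression in the spirit of the analysis described
# above. Variable names and synthetic data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 301  # number of families in the study
df = pd.DataFrame({
    "father_adhd": rng.normal(size=n),
    "father_boredom": rng.normal(size=n),
    "mother_adhd": rng.normal(size=n),
    "mother_boredom": rng.normal(size=n),
})
# Wire the synthetic outcome to the fathers' traits purely for illustration
df["child_adhd"] = (0.4 * df["father_adhd"] + 0.3 * df["father_boredom"]
                    + rng.normal(scale=0.8, size=n))

model = smf.ols("child_adhd ~ father_adhd + father_boredom"
                " + mother_adhd + mother_boredom", data=df).fit()
print(model.summary())  # shows which predictors remain significant together
```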

For this specific subgroup of high-scoring children, maternal responsiveness was identified as a strong explanatory factor. High levels of maternal responsiveness were associated with higher ADHD tendencies in these children.

“Higher levels of ADHD-related behaviors and boredom susceptibility in children with greater genetic risk for ADHD were associated with parental overreactivity,” Uehara explained. “Taken together with the finding that maternal control was linked to reduced child boredom proneness, these results highlight the importance of balanced parental engagement.”

But this does not necessarily mean that maternal warmth causes ADHD. It is possible that mothers become more responsive and attentive in an effort to support a child who is already exhibiting challenging behaviors. “Child-driven effects cannot be ruled out,” Uehara said.

Regarding childhood boredom, the researchers found that a child’s own ADHD tendencies were the strongest predictor. Children with higher attention deficits were more likely to be bored. Maternal boredom proneness was also a direct predictor of the child’s boredom.

The study highlighted a potential protective role for maternal control. Children whose mothers exercised more structural control and discipline tended to exhibit lower levels of boredom. This implies that parental guidance and the setting of boundaries may help children regulate their need for stimulation.

A different interaction was observed regarding fathers. When fathers exhibited high levels of responsiveness, children with high ADHD tendencies showed increased levels of boredom. This suggests that while warmth is generally positive, excessive responsiveness from fathers might not effectively help these specific children manage their boredom.

The study also looked at the long-term implications of these traits by examining the parents’ socioeconomic status. Adults with lower boredom proneness reported significantly higher levels of education and income.

This association with socioeconomic status was found for boredom proneness but not for ADHD tendencies. This finding suggests that the ability to manage boredom may be a distinct factor in achieving long-term educational and financial success.

“Our findings suggest that maternal and paternal characteristics may influence children’s boredom and ADHD tendencies in different ways,” Uehara said. “Notably, the link between maternal control and lower levels of children’s boredom suggests that boredom is not fixed and can be modifiable through everyday parenting. Because boredom has been linked to problematic internet use, these results highlight how parents’ own habits and involvement may help reduce children’s risk of internet addiction.”

As with all research, there are limitations to consider. The study relied entirely on parent reports for both their own traits and their children’s behavior. This reliance could introduce bias, as parents might perceive their children through the lens of their own tendencies. Additionally, the study was cross-sectional, meaning it captured data at a single point in time.

Because of this design, the research cannot prove causality. It is unclear whether parenting styles shape children’s traits or if children’s traits elicit specific parenting responses. For example, the link between maternal responsiveness and child ADHD could represent a mother reacting to her child’s needs rather than causing the symptoms.

“These effects should be interpreted as modest and indicative rather than definitive,” Uehara noted. “However, given evidence among parents that lower boredom proneness is associated with higher educational attainment and income, the present findings suggest meaningful practical implications. They point to the potential for children to develop strategies to regulate boredom, which may help reduce the risk of maladaptive behaviors, including problematic internet use.”

Future studies should aim to include a more diverse range of participants and employ longitudinal designs. Following families over time would help clarify the direction of the relationships between parenting styles and child outcomes. The researchers also suggest that future work should focus on identifying how to help children regulate boredom.

“Our next steps focus on identifying concrete strategies that help children regulate boredom and examining how these early regulation processes relate to boredom management across the lifespan,” Uehara told PsyPost. “Specifically, we aim to investigate contexts in which children are most prone to boredom, typical behavioral responses, and activities that effectively alleviate boredom in both childhood and adulthood. Ultimately, this line of work may offer insights into lifelong mental health and adaptive self-regulation.

“While high levels of boredom proneness are associated with maladaptive behaviors across societies, it is important to recognize cultural differences in how boredom is perceived and experienced.”

“In Western intellectual traditions, boredom has often been discussed in relation to existential emptiness or loss of meaning, whereas in Japan, feelings of emptiness or impermanence have historically been more readily accepted and not necessarily experienced as aversive,” Uehara explained. “Although coping skills for extreme boredom are likely important across cultures, examining how people manage the mild, everyday boredom common in daily life—within different cultural frameworks—may represent a promising direction for future research.”

The study, “Differential associations of parents’ ADHD tendencies, boredom proneness, and parenting styles with children’s ADHD tendencies and boredom proneness,” was authored by Tianyi Zhang, Yuji Ikegaya, and Izumi Uehara.

Cannabis beverages may help people drink less alcohol

30 January 2026 at 03:00

Recent survey data suggests that cannabis-infused beverages may serve as an effective tool for individuals looking to curb their alcohol consumption. People who incorporated these drinks into their routines reported reducing their weekly alcohol intake and engaging in fewer episodes of binge drinking. The findings were published in the Journal of Psychoactive Drugs.

Alcohol consumption is a well-documented public health concern. It is linked to nearly 200 different health conditions. These include liver disease, cardiovascular issues, and various forms of cancer.

While total abstinence is the most effective way to eliminate these risks, many adults choose not to stop drinking entirely. This reality has led public health experts to explore harm reduction strategies. The goal of harm reduction is to minimize the negative consequences of substance use without necessarily demanding complete sobriety.

Cannabis is increasingly viewed through this harm reduction lens. It generally presents fewer physiological risks to the user compared to alcohol. The legalization of cannabis in many U.S. states has diversified the market beyond traditional smokable products. Consumers can now purchase cannabis-infused seltzers, sodas, and tonics. These products are often packaged in cans that resemble beer or hard seltzer containers.

This similarity in packaging and consumption method is notable. It allows users to participate in the social ritual of holding and sipping a drink without consuming ethanol. Jessica S. Kruger, a clinical associate professor of community health and health behavior at the University at Buffalo, led an investigation into this phenomenon. She collaborated with researchers Nicholas Felicione and Daniel J. Kruger. The team sought to understand if these new products are merely a novelty or if they serve a functional role in alcohol substitution.

The researchers designed a study to capture the behaviors of current cannabis users. They distributed an anonymous survey between August and December of 2022. Recruitment took place through various channels to reach a broad audience.

The team placed recruitment cards with QR codes in licensed dispensaries. They also utilized email lists from these businesses. Additionally, they posted links to the survey on nearly 40 cannabis-related communities on the social media platform Reddit.

The final analytic sample consisted of 438 adults. All participants had used cannabis within the past year. The survey incorporated questions from the Behavioral Risk Factor Surveillance System. This is a standard tool used by the Centers for Disease Control and Prevention to track health-related behaviors. The researchers used these questions to assess alcohol consumption frequency and intensity.

The study aimed to compare the behaviors of those who drank cannabis beverages against those who used other forms of cannabis. It also sought to compare alcohol habits before and after individuals began consuming cannabis drinks. Roughly one-third of the respondents reported using cannabis beverages. These users typically consumed one infused drink per session.

The researchers found differences in substitution behaviors between groups. Participants who consumed cannabis beverages were more likely to report substituting cannabis for alcohol than those who did not drink them. The data showed that 58.6 percent of beverage users reported this substitution. In contrast, 47.2 percent of non-beverage users reported doing so.

The study provided specific data regarding changes in alcohol intake levels. The researchers asked beverage users to recall their alcohol consumption habits prior to adopting cannabis drinks. Before trying these products, the group reported consuming an average of roughly seven alcoholic drinks per week. After they started using cannabis beverages, that average dropped to approximately 3.35 drinks per week.

Binge drinking rates also saw a decline. The researchers defined a binge drinking episode based on standard gender-specific thresholds. Before initiating cannabis beverage use, about 47 percent of the group reported binge drinking less than once a month or never. After incorporating cannabis drinks, the proportion of people reporting this low frequency of binge drinking rose to nearly 81 percent.

Most participants did not replace alcohol entirely. The survey results indicated that 61.5 percent of beverage users reduced their alcohol intake. Only about 1 percent reported stopping alcohol consumption completely.

A small minority, roughly 3 percent, reported increasing their alcohol use. This suggests that for most users, cannabis beverages act as a moderator for alcohol rather than a complete replacement.

The study also examined the potency of the beverages being consumed. Most respondents chose products with lower doses of tetrahydrocannabinol (THC). Two-thirds of the users drank beverages containing 10 milligrams of THC or less. This dosage allows for a milder experience compared to high-potency edibles. It may facilitate a more controlled social experience similar to drinking a glass of wine or a beer.

Daniel J. Kruger, a co-author of the study, noted the potential reasons for these findings. He suggests that the similarity in the method of administration plays a role. People at parties or bars are accustomed to having a drink in their hand. A cannabis beverage allows them to maintain that behavior. It fits into the social context more seamlessly than smoking a joint or taking a gummy.

There are limitations to this research that require consideration. The study relied on retrospective self-reports. Participants had to recall their past alcohol consumption. This relies on memory and can be subject to bias. The sample was also a convenience sample rather than a nationally representative one. Many respondents were recruited from New York State dispensaries or specific online communities.

The researchers also point out potential risks associated with these products. Cannabis beverages and edibles have a slower onset of effects compared to inhalation. It takes time for the digestive system to process the cannabinoids. This delay can lead inexperienced users to consume more than intended. Accidental overconsumption can result in negative physical and mental health outcomes.

Furthermore, there is the issue of dual use. Most participants continued to drink alcohol, albeit in smaller quantities. Combining alcohol and cannabis can intensify impairment. The authors note that this interaction needs further study to ensure public safety.

Future research is necessary to validate these preliminary findings. The authors suggest that longitudinal studies would be beneficial. Such studies would track individuals over time rather than relying on past recall. This would provide a clearer picture of whether the reduction in alcohol use is sustained in the long term.

Public education will be key as this market expands. Consumers need to understand the differences between alcohol and cannabis impairment. They also need accurate information regarding dosing and onset times. Policies that ensure clear labeling and child-proof packaging remain essential for harm reduction.

Despite the caveats, the study offers a new perspective on alcohol harm reduction. It highlights a potential avenue for individuals seeking to lower their alcohol intake. As the market for these beverages grows, understanding their role in consumer behavior becomes increasingly important for public health officials.

The study, “The Exploration of Cannabis Beverage Substitution for Alcohol: A Novel Harm Reduction Strategy,” was authored by Jessica S. Kruger, Nicholas Felicione, and Daniel J. Kruger.

New maps of brain activity challenge century-old anatomical boundaries

30 January 2026 at 01:00

New research challenges the century-old practice of mapping the brain based on how tissue looks under a microscope. By analyzing electrical signals from thousands of neurons in mice, scientists discovered that the brain’s command center organizes itself by information flow rather than physical structure. These findings appear in the journal Nature Neuroscience.

The prefrontal cortex acts as the brain’s executive hub. It manages complex processes such as planning, decision-making, and reasoning. Historically, neuroscientists defined the boundaries of this region by studying cytoarchitecture. This method involves staining brain tissue and observing the arrangement of cells. The assumption has been that physical differences in cell layout correspond to distinct functional jobs.

However, the connection between these static maps and the dynamic electrical firing of neurons remains unproven. A research team led by Marie Carlén at the Karolinska Institutet in Sweden sought to test this long-standing assumption. Pierre Le Merre and Katharina Heining served as the lead authors on the paper. They aimed to create a functional map based on what neurons actually do rather than just where they sit.

To achieve this, the team performed an extensive analysis of single-neuron activity. They focused on the mouse brain, which serves as a model for mammalian neural structure. The researchers implanted high-density probes known as Neuropixels into the brains of awake mice. These advanced sensors allowed them to record the electrical output of more than 24,000 individual neurons.

The study included recordings from the prefrontal cortex as well as sensory and motor areas. The investigators first analyzed spontaneous activity. This refers to the electrical firing that occurs when the animal is resting and not performing a specific task. Spontaneous activity offers a window into the intrinsic properties of a neuron and its local network.

The team needed precise ways to describe this activity. Counting the number of electrical spikes per second was not sufficient on its own, so they characterized each neuron’s firing with three mathematical metrics. The first was the firing rate, or how often a neuron sends a signal.

The second metric was “burstiness.” This describes the irregularity of the intervals between spikes. A neuron with high burstiness fires in rapid clusters followed by silence. A neuron with low burstiness fires with a steady, metronomic rhythm.

The third metric was “memory.” This measures the sequential structure of the firing. It asks whether the length of one interval between spikes predicts the length of the next one. Taken together, these three variables provided a unique “fingerprint” for every recorded neuron.
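The article does not give the paper’s exact formulas, but such fingerprints are commonly computed from the inter-spike intervals, for example using the Goh-Barabási burstiness coefficient and the lag-one correlation between consecutive intervals. A minimal Python sketch under those assumptions:

```python
# Sketch of a per-neuron "fingerprint" from spike times. The burstiness
# and memory formulas follow common definitions (Goh & Barabasi, 2008);
# the paper's exact formulations may differ.
import numpy as np

def firing_fingerprint(spike_times_s, duration_s):
    """Return (rate, burstiness, memory) for one neuron's spike train."""
    isi = np.diff(spike_times_s)            # inter-spike intervals
    rate = spike_times_s.size / duration_s  # spikes per second

    # Burstiness: -1 for perfectly regular (metronomic) firing,
    # 0 for Poisson-like, approaching +1 for highly bursty trains.
    mu, sigma = isi.mean(), isi.std()
    burstiness = (sigma - mu) / (sigma + mu)

    # Memory: correlation between consecutive intervals. Positive values
    # mean a long interval tends to be followed by another long one.
    memory = np.corrcoef(isi[:-1], isi[1:])[0, 1]
    return rate, burstiness, memory

# Demo: a slightly jittered 5 Hz train is regular (burstiness near -1)
rng = np.random.default_rng(0)
train = np.sort(np.arange(0, 100, 0.2) + 0.005 * rng.normal(size=500))
print(firing_fingerprint(train, duration_s=100.0))
```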

The researchers used a machine learning technique called a Self-Organizing Map to sort these fingerprints. This algorithm grouped neurons with similar firing properties together. It allowed the scientists to visualize the landscape of neuronal activity without imposing human biases.
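As a sketch of what such a clustering step might look like in code, the example below trains a small Self-Organizing Map on synthetic fingerprints using the third-party MiniSom package; the map size, training settings, and data are illustrative choices, not taken from the study.

```python
# Sketch: clustering per-neuron fingerprints with a Self-Organizing Map
# via the third-party MiniSom package. All settings are illustrative.
import numpy as np
from minisom import MiniSom

rng = np.random.default_rng(0)
fingerprints = rng.normal(size=(1000, 3))  # stand-in (rate, burstiness, memory) rows
z = (fingerprints - fingerprints.mean(axis=0)) / fingerprints.std(axis=0)

som = MiniSom(8, 8, input_len=3, sigma=1.5, learning_rate=0.5, random_seed=0)
som.train_random(z, num_iteration=5000)

# Each neuron lands on the map node whose weights best match its
# fingerprint; neighboring nodes hold neurons with similar firing styles.
assignments = [som.winner(row) for row in z]
print(assignments[:5])
```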

The analysis revealed a distinct signature for the prefrontal cortex. Neurons in this area predominantly displayed low firing rates and highly regular rhythms. They did not fire in erratic bursts. This created a “low-rate, regular-firing” profile that distinguished the prefrontal cortex from other brain regions.

The team then projected these activity profiles back onto the physical map of the brain. They compared the boundaries of their activity-based clusters with the traditional cytoarchitectural borders. The two maps did not align.

Regions that looked different under a microscope often contained neurons with identical firing patterns. Conversely, regions that looked the same structurally often hosted different types of activity. The distinct functional modules of the prefrontal cortex ignored the classical boundaries drawn by anatomists.

Instead of anatomy, the activity patterns aligned with hierarchy. In neuroscience, hierarchy refers to the order of information processing. Sensory areas that receive raw data from the eyes or ears are at the bottom of the hierarchy. The prefrontal cortex, which integrates this data to make decisions, sits at the top.

The researchers correlated their activity maps with existing maps of brain connectivity. They found that regions higher up in the hierarchy consistently displayed the low-rate, regular-firing signature. This suggests that the way neurons fire is determined by their place in the network, not by the local architecture of the cells.

This finding aligns with theories about how the brain processes information. Sensory areas need to respond quickly to changing environments, requiring fast or bursty firing. High-level areas need to integrate information over time to maintain stable plans. A slow, regular rhythm is ideal for holding information in working memory without being easily distracted by noise.

The study then moved beyond resting activity to examine goal-directed behavior. The mice performed a task where they heard a tone or saw a visual stimulus. They had to turn a wheel to receive a water reward. This allowed the researchers to see how the functional map changed during active decision-making.

The team identified neurons that were “tuned” to specific aspects of the task. Some neurons responded only to the sound. Others fired specifically when the mouse made a choice to turn the wheel.

When they mapped these task-related neurons, they again found no relation to the traditional anatomical borders. The functional activity formed its own unique territories. One specific finding presented a paradox.

The researchers had established that the hallmark of the prefrontal cortex was slow, regular firing. However, the specific neurons that coded for “choice”—the act of making a decision—tended to have high firing rates. These “decider” neurons were chemically and spatially mixed in with the “integrator” neurons but behaved differently.

This implies a separation of duties within the same brain space. The general population of neurons maintains a slow, steady rhythm to provide a stable platform for cognition. Embedded within this stable network are specific, highly excitable neurons that trigger actions.

The overlap of these two populations suggests that connectivity shapes the landscape. The high-hierarchy network supports the regular firing. Within that network, specific inputs drive the high-rate choice neurons.

These results suggest that intrinsic connectivity is the primary organizing principle of the prefrontal cortex. The physical appearance of the tissue is a poor predictor of function. “Our findings challenge the traditional way of defining brain regions and have major implications for understanding brain organisation overall,” says Marie Carlén.

The study does have limitations. It relied on data from mice. While mouse and human brains share many features, the human prefrontal cortex is far more complex. Additionally, the recordings focused primarily on the deep layers of the cortex. These layers are responsible for sending output signals to other parts of the brain.

The activity in the surface layers, which receive input, might show different patterns. The study also looked at a limited set of behaviors. Future research will need to explore whether these maps hold true across different types of cognitive tasks.

Scientists must also validate these metrics in other species. If the pattern holds, it could provide a new roadmap for understanding brain disorders. Many psychiatric conditions involve dysfunction in the prefrontal cortex. Understanding the “normal” activity signature—slow and regular—could help identify what goes wrong in disease.

This data-driven approach offers a scalable framework. It moves neuroscience away from subjective visual descriptions toward objective mathematical categorization. It suggests that to understand the brain, we must look at the invisible traffic of electricity rather than just the visible roads of tissue.

The study, “A prefrontal cortex map based on single-neuron activity,” was authored by Pierre Le Merre, Katharina Heining, Marina Slashcheva, Felix Jung, Eleni Moysiadou, Nicolas Guyon, Ram Yahya, Hyunsoo Park, Fredrik Wernstal, and Marie Carlén.

Diet quality of children improved after five months of gardening and nutrition sessions

29 January 2026 at 23:00

A study conducted in Jordan found that primary school children’s dietary quality improved after five months of weekly gardening sessions and nutrition education. Their fiber intake increased, saturated fat intake decreased, and their overall knowledge of nutrition improved. The paper was published in Nutrients.

Childhood obesity has increased markedly over the past few decades, becoming a major public health concern worldwide. Rates have risen in both high-income and low- and middle-income countries, indicating that the trend is global rather than region-specific.

One of the strongest contributors to this increase is a shift in children’s diets toward energy-dense, nutrient-poor foods. Diets high in ultra-processed foods, added sugars, and saturated fats are associated with excess calorie intake and weight gain. Sugary drinks play a particularly important role, as they add substantial calories without promoting satiety. At the same time, consumption of fruits, vegetables, whole grains, and fiber-rich foods has declined in many populations. Larger portion sizes and more frequent snacking have also normalized higher energy intake among children.

Study author Nour Amin Elsahoryi and her colleagues wanted to explore the effects of a five-month school-based vegetable gardening and education intervention on the body composition, dietary intake, and knowledge, attitudes, and practices regarding vegetable consumption of primary school students in the 4th through 6th grades. They hypothesized that the intervention would improve children’s dietary intake, body composition, and knowledge and attitudes about vegetable consumption.

Study participants were 216 fourth- through sixth-grade students from two primary schools in Amman, Jordan. Their average age was 10 years, and 88 of them were boys. One school contributed 121 participants; the other contributed 95.

Students from one school were assigned to the intervention group, while those from the other participating school served as the control group. The intervention group participated in weekly 1-hour gardening exercises in a 1,000-square-meter garden built on land owned by the school where the intervention was taking place.

The garden contained self-irrigating raised beds with indigenous herbs and vegetables, and a separate storage shed to store tools and teaching materials. To facilitate the work, the school received the necessary gardening equipment, such as rakes, watering hoses, benches, gardening gloves, and composting bins, as well as educational material, tables, whiteboards, portable handwashing stations, and basic cooking instruments. Immediately after each gardening session, students participated in one-hour culturally adapted nutrition education sessions. These sessions were conducted by professionals trained in child-oriented nutrition education and behavioral modification.

Before and after the intervention, study authors measured participating students’ height and weight, asked them to report their dietary intake from the previous 24 hours, and assessed their knowledge, attitudes, and practices related to vegetable intake.

Results showed that the intervention group lost 1.88 kg of weight, on average, while the control group showed minimal weight increases. The dietary quality of the intervention group improved. More specifically, the intervention group increased fiber intake (by 2.36 grams per day) and reduced saturated fat consumption (by 9.24 grams per day). The intervention group also showed better nutrition knowledge compared to the control group.

“This intervention effectively improved body composition, dietary quality, and nutrition knowledge among Jordanian primary school children. These findings provide evidence for implementing culturally adapted school gardening programs as childhood obesity prevention interventions in Middle Eastern settings, though future programs should incorporate family engagement strategies to enhance behavioral sustainability,” study authors concluded.

The study contributes to the scientific understanding of the potential effects of gardening interventions. However, it should be noted that dietary changes were self-reported, leaving room for recall bias to affect the results.

The paper, “A School-Based Five-Month Gardening Intervention Improves Vegetable Intake, BMI, and Nutrition Knowledge in Primary School Children: A Controlled Quasi-Experimental Trial,” was authored by Nour Amin Elsahoryi, Omar A. Alhaj, Ruba Musharbash, Fadia Milhem, Tareq Al-Farah, and Ayoub Al Jawaldeh.

Researchers identify the psychological mechanisms behind the therapeutic effects of exercise

29 January 2026 at 21:00

New research suggests that a structured exercise program improves mental health by altering how individuals process stress and intrusive thoughts. Published in Psychological Medicine, the study indicates that physical activity reduces overall psychiatric symptoms by lowering perceived stress and interrupting repetitive negative thinking patterns. These findings provide evidence that the psychological benefits of exercise are driven by specific changes in cognitive and emotional processing.

Scientific literature has established that physical activity can help manage symptoms of specific mental health conditions, such as depression and anxiety. But the specific psychological pathways that lead to symptom improvement remain unclear.

The authors of the new study aimed to identify the mechanisms that explain why exercise is effective by conducting a secondary analysis of the data collected during the ImPuls trial, a randomized controlled trial involving 399 adults.

“The idea for the primary study emerged from a growing body of research, including numerous empirical studies and review articles, demonstrating that exercise is an effective therapeutic approach for a range of mental disorders,” said study author Anna Katharina Frei, a PhD candidate at the University of Tübingen.

“However, at least within outpatient care in Germany, this potential has not yet been sufficiently utilized — despite, for example, long waiting times for psychotherapy and/or the side effects associated with psychopharmacological treatments.”

“With the ImPuls study, our aim was therefore not only to demonstrate that a transdiagnostic exercise intervention is effective in reducing overall symptom burden, but also to show that its implementation in an outpatient setting is feasible. This formed the basis of the primary study.”

“Although the beneficial effects of exercise on mental health have been demonstrated repeatedly, the underlying mechanisms are often not well understood,” Frei said. “The aim of the secondary analysis was to contribute to the existing literature by examining three processes that are common to various mental disorders and may mediate treatment effects: perceived stress, repetitive negative thinking, and sleep quality.”

Participants were originally recruited from ten different outpatient treatment centers across Germany. To be eligible for the study, individuals had to be physically inactive and diagnosed with at least one of several conditions. These conditions included depressive disorders, agoraphobia, panic disorder, post-traumatic stress disorder, or nonorganic primary insomnia.

In the original trial, participants were randomly assigned to one of two groups. The first group served as the control and received “treatment as usual,” which included standard outpatient therapies such as medication or psychotherapy.

The second group received treatment as usual combined with a specialized exercise intervention called ImPuls. The ImPuls program was a six-month intervention designed to foster a long-term physical activity habit.

The exercise intervention began with a four-week supervised phase. During this time, participants attended group sessions two to three times per week, engaging in moderate-to-vigorous aerobic exercise, specifically outdoor running. These sessions also included behavioral coaching strategies, such as goal setting and barrier management, to help participants stay motivated.

Following this initial phase, participants continued to exercise independently for five months. They received support through regular telephone calls to monitor their activity levels and address any challenges.

The researchers collected data at three specific time points: at the beginning of the study, after six months, and after twelve months. They used validated questionnaires to measure several psychological factors.

The primary outcome was global symptom severity, assessed using the Global Severity Index. This measure evaluates overall psychological distress across dimensions of somatization, depression, and anxiety.

In this secondary analysis, the team specifically examined data regarding the three proposed mechanisms of change. Perceived stress was assessed using a scale that asks individuals how unpredictable or overwhelming they find their lives.

Repetitive negative thinking was measured by asking participants about their tendency to have intrusive, unproductive thoughts that are difficult to stop. Finally, sleep quality was evaluated using a comprehensive index that accounts for sleep duration, disturbances, and daytime dysfunction.

The results confirmed that the exercise intervention was effective in reducing global symptom severity, replicating the primary trial’s conclusion. Participants in the ImPuls group experienced greater improvements in their mental health compared to those who received only standard treatment. This positive effect was observed at the six-month mark and persisted at the twelve-month follow-up.

The researchers then used statistical modeling to determine which factors were responsible for this improvement. Their analysis revealed that the reduction in global symptoms was fully mediated by changes in perceived stress and repetitive negative thinking.

This means that the beneficial effect of the exercise program on mental health was entirely explained by the fact that it lowered participants’ stress levels and reduced their engagement in negative thought loops.
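For readers curious about the mechanics, a mediation analysis of this kind boils down to two regressions: one from treatment to the mediator (path a) and one from the mediator to the outcome, controlling for treatment (path b). The Python sketch below bootstraps the indirect effect a × b on synthetic data; it is a simplified stand-in for the longitudinal models the authors actually used, with all numbers invented for illustration.

```python
# Simplified single-mediator sketch on synthetic data: does the effect
# of treatment on symptoms run through perceived stress?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 399  # matches the trial's sample size
treatment = rng.integers(0, 2, n).astype(float)
stress = -0.6 * treatment + rng.normal(size=n)                  # path a
symptoms = 0.7 * stress + 0.0 * treatment + rng.normal(size=n)  # direct effect set to zero

def indirect_effect(t, m, y):
    """Product-of-coefficients estimate of the mediated effect a * b."""
    a = sm.OLS(m, sm.add_constant(t)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([t, m]))).fit().params[2]
    return a * b

# Bootstrap a 95% confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(treatment[idx], stress[idx], symptoms[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")
```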

Contrary to some expectations, changes in sleep quality did not mediate the treatment effects. Although sleep is often a target in mental health treatment, the statistical models indicated that improved sleep was not the driver of the symptom reduction in this specific study context. The benefits were driven by cognitive and emotional changes rather than changes in sleep patterns.

The findings align with the “cross-stressor adaptation hypothesis.” This theory suggests that because exercise places a physiological load on the body, regular physical activity helps the biological stress response system adapt.

Over time, this adaptation may make individuals less reactive to other forms of emotional or psychological stress. By regularly engaging in the physical stress of running, participants may have built a resilience that translated into a lower perception of life stress.

The results also support the “distraction hypothesis” regarding repetitive negative thinking. Individuals with mental health disorders often suffer from rumination, where they dwell on negative emotions and problems.

Exercise requires focus and energy, which may force a break in this cycle of negative thoughts. This temporary distraction can provide relief and allow individuals to regain a more balanced perspective.

“Exercise can be an effective way to reduce overall psychological symptom severity by decreasing repetitive negative thinking and perceived stress,” Frei told PsyPost. “In other words, engaging in regular physical activity may help people cope better with everyday stressors and interrupt repetitive negative thinking patterns, which are common across many mental health conditions. These findings highlight exercise as a valuable and accessible complement to existing mental health treatments.”

As with all research, there are some limitations. Because the control group in the original trial received treatment as usual rather than an active control intervention, it is difficult to rule out the possibility that the benefits were due to nonspecific factors.

These factors could include the social support from the group or the attention received from study staff. It is possible that simply meeting with a group and having a shared goal contributed to the improvements.

The study sample was composed largely of individuals with depressive disorders, who made up about 72% of the participants. While the study was transdiagnostic, the dominance of depression diagnoses means the findings may be most applicable to that condition.

The mechanisms might differ for a population primarily composed of individuals with anxiety disorders or PTSD. Future research should investigate whether these findings hold true in samples with different diagnostic compositions.

Another limitation involves the measurement of the data. The mediators and the outcomes were assessed at the same time points. This simultaneous measurement restricts the ability to make definitive claims about causality. While the statistical models support the idea that reduced stress caused the symptom improvement, it is theoretically possible that feeling better led to reduced stress.

Future research should explore the day-to-day dynamics of these effects. Using methods that track participants in real-time could reveal how a specific session of exercise impacts mood and thinking patterns in the hours that follow. Understanding the immediate temporal relationship between physical activity and thought processes would provide stronger evidence for the causal mechanisms.

The study, “Changes in repetitive negative thinking and stress perception mediate treatment effects of a transdiagnostic exercise intervention,” was authored by Anna Katharina Frei, Thomas Studnitz, Britta Seiffer, Jana Welkerling, Johanna-Marie Zeibig, Eva Herzog, Mia Maria Günak, Thomas Ehring, Keisuke Takano, Tristan Nakagawa, Leonie Sundmacher, Sebastian Himmler, Stefan Peters, Anna Lena Flagmeier, Lena Zwanzleitner, Ander Ramos-Murguialday, Gorden Sudeck, and Sebastian Wolf.

Alzheimer’s patients show reduced neural integration during brain stimulation

29 January 2026 at 19:00

New research suggests that the electrical complexity of the brain diminishes in early Alzheimer’s disease, potentially signaling a breakdown in the neural networks that support conscious awareness. By stimulating the brain with magnetic pulses and recording the response, scientists found distinct differences between healthy aging adults and those with mild dementia. These findings appear online in the journal Neuroscience of Consciousness.

The human brain operates on multiple levels of awareness. Alzheimer’s disease is widely recognized for eroding memory, but the specific type of memory loss offers clues about the nature of the condition. Patients typically lose the ability to consciously recall events, facts, and conversations. This is known as explicit memory.

Yet, these same individuals often retain unconscious capabilities, such as the ability to walk, eat, or play a musical instrument. This preservation of procedural or implicit memory suggests that the disease targets the specific neural architecture required for conscious processing while leaving other automatic systems relatively intact.

Andrew E. Budson, a professor of neurology at Boston University Chobanian & Avedisian School of Medicine, has proposed that these “cortical dementias” should be viewed as disorders of consciousness. According to this theory, consciousness developed as part of the explicit memory system. As the disease damages the cerebral cortex, the physical machinery capable of sustaining complex conscious thought deteriorates. This deterioration eventually leads to a state where the individual is awake but possesses a diminishing capacity for complex awareness.

To investigate this theory, a research team led by Brenna Hagan, a doctoral candidate in behavioral neuroscience at the same institution, sought a biological marker that could quantify this decline. They turned to a metric originally developed to assess patients with severe brain injuries, such as those in comas or vegetative states. This metric is called the perturbation complexity index, specifically an analysis of state transitions.

The measurement acts somewhat like a sonar system for the brain. In a healthy, conscious brain, a stimulus should trigger a complex, long-lasting chain reaction of electrical activity that ripples across various neural networks. In a brain where consciousness is compromised, the response is expected to be simpler, local, and short-lived. The researchers hypothesized that even in the early stages of Alzheimer’s, this capacity for complex electrical integration would be reduced compared to healthy aging.

The study included 55 participants in total. The breakdown consisted of 28 individuals diagnosed with early-stage Alzheimer’s disease or mild cognitive impairment and 27 healthy older adults who served as controls. The research team employed a technique known as transcranial magnetic stimulation, or TMS, paired with electroencephalography, or EEG.

During the experiment, participants sat comfortably while wearing a cap fitted with 64 electrodes designed to detect electrical signals on the scalp. The researchers placed a magnetic coil against the participant’s head. This coil delivered a brief, focused pulse of magnetic energy through the skull and into the brain tissue. This pulse is the “perturbation” in the index’s name. It effectively rings the brain like a bell.

The researchers targeted two specific areas of the brain. The first was the left motor cortex, which controls voluntary movement on the right side of the body. The second was the left inferior parietal lobule, a region involved in integrating sensory information and language. By stimulating these distinct sites, the team hoped to determine if the loss of complexity was specific to certain areas or if it represented a global failure of the brain’s networks.

As the magnetic pulse struck the cortex, the EEG electrodes recorded the brain’s immediate reaction. This recording captured the “echo” of the stimulation as it propagated through the neural circuits. The researchers then used a complex mathematical algorithm to analyze these echoes. They looked for the number of “state transitions,” which are shifts in the spatial pattern of the electrical activity. A higher number of state transitions indicates a more complex, integrated response, implying a healthier and more connected brain.
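The published perturbational-complexity algorithm involves careful preprocessing and statistical thresholding, but its core idea can be caricatured in a few lines: binarize each channel’s post-pulse response against its own baseline noise, then count how often the spatial on/off pattern changes over time. The toy Python sketch below illustrates that idea only; it is not the study’s method.

```python
# Toy illustration of counting "state transitions" in a TMS-evoked EEG
# response. The real perturbational-complexity algorithm is far more
# involved; thresholds and data here are invented for illustration.
import numpy as np

def count_state_transitions(pre, post):
    """pre, post: arrays of shape (channels, time) around the TMS pulse."""
    # A channel is "active" when it exceeds 3 standard deviations of
    # its own pre-pulse baseline activity.
    threshold = 3 * pre.std(axis=1, keepdims=True)
    active = np.abs(post) > threshold        # (channels, time) booleans
    # A transition is any time step where at least one channel flips.
    flips = active[:, 1:] != active[:, :-1]
    return int(flips.any(axis=0).sum())

rng = np.random.default_rng(7)
pre = rng.normal(size=(64, 500))   # 64 channels of resting baseline
post = rng.normal(size=(64, 500))
post[:8, 50:200] += 5.0            # a simple evoked "echo" in 8 channels
print(count_state_transitions(pre, post))
```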

The analysis revealed a clear distinction between the two groups. The participants with Alzheimer’s disease displayed a reduced level of brain complexity compared to the healthy controls. The average complexity score for the Alzheimer’s group was 20.1. In contrast, the healthy group averaged 28.2. This downward shift suggests that the neural infrastructure required for high-level conscious thought is compromised in the disease.

The reduction in complexity was consistent regardless of which brain area was stimulated. The scores obtained from the motor cortex were nearly identical to those from the parietal lobe. This suggests that the loss of neural complexity in Alzheimer’s is a widespread, global phenomenon rather than a problem isolated to specific regions. The disease appears to affect the brain’s overall ability to sustain complex patterns of communication.

The researchers also examined whether these complexity scores correlated with standard clinical measures. They compared the EEG data to scores from the Montreal Cognitive Assessment, a paper-and-pencil test commonly used to screen for dementia.

Within the groups, there was no strong statistical relationship between a person’s cognitive test score and their brain complexity score. This lack of correlation implies that the magnetic stimulation technique measures a fundamental physiological state of the brain that is distinct from behavioral performance on a test.

“Despite their impaired conscious memory, individuals with Alzheimer’s disease may be able to use intact implicit, unconscious forms of memory, such as procedural memory (often termed ‘muscle memory’) to continue their daily routines at home,” Budson explains. He adds that when patients leave familiar settings, “their home routines are not helpful and their dysfunctional conscious memory can lead to disorientation and distress.”

There are caveats to these findings that warrant attention. While the difference between the groups was clear, the absolute scores raised questions. A surprising number of participants in both groups scored below the threshold typically used to define consciousness in coma studies. Specifically, 70 percent of the Alzheimer’s patients and 29 percent of the healthy volunteers fell into a range usually associated with unconsciousness or minimally conscious states.

This does not mean these individuals are unconscious. Instead, it indicates that the mathematical cutoffs established for traumatic brain injury may not directly apply to neurodegenerative diseases or aging populations. The metric likely exists on a spectrum. The physiological changes in an aging brain might lower the baseline for complexity without extinguishing consciousness entirely.

The study opens new paths for future research. Scientists can now explore how this loss of complexity relates to the progression of the disease. It may be possible to use this metric to track the transition from mild impairment to severe dementia. The lack of correlation with behavioral tests suggests that this method could provide an objective, biological way to assess brain function that does not rely on a patient’s ability to speak or follow instructions.

This perspective also informs potential therapeutic strategies. If the disease is viewed as a progressive loss of conscious processing, treatments could focus on maximizing the use of preserved unconscious systems. Therapies might emphasize habit formation and procedural learning to help patients maintain independence.

“This research opens the avenue for future studies in individuals with cortical dementia to examine the relationship between conscious processes, global measures of consciousness, and their underlying neuroanatomical correlates,” Budson says. The team hopes that future work will clarify the biological mechanisms driving this loss of complexity and lead to better diagnostic tools.

The study, “Evaluating Alzheimer’s disease with the TMS-EEG perturbation complexity index,” was authored by Brenna Hagan, Stephanie S. Buss, Peter J. Fried, Mouhsin M. Shafi, Katherine W. Turk, Kathy Y. Xie, Brandon Frank, Brice Passera, Recep Ali Ozdemir, and Andrew E. Budson.

Women’s libido drops during a specific phase of the menstrual cycle

29 January 2026 at 17:00

New research suggests that women experience a distinct decrease in sexual motivation during a specific phase of the menstrual cycle known as the implantation window. This reduction in desire may serve an evolutionary function by lowering the risk of infection during a time when the body’s immune system is naturally suppressed. The study was published in the journal Evolution and Human Behavior.

Scientists initiated this investigation to explore potential functional reasons for fluctuations in sexual desire across the menstrual cycle. Biology dictates that for a pregnancy to be established, a fertilized egg must successfully attach to the lining of the uterus.

This process requires the mother’s immune system to lower its defenses locally within the reproductive tract. This immunosuppression prevents the body from attacking the embryo as if it were a foreign invader.

This necessary biological adjustment creates a period of increased vulnerability. The suppression of immune cells makes the reproductive tract more susceptible to sexually transmitted infections.

Pathogens can enter the uterus more easily during this time. The physiological mechanisms that help sperm reach the egg, such as uterine contractions, can inadvertently transport bacteria or viruses into the upper reproductive tract.

The authors hypothesized that evolution might have shaped human psychology to mitigate this risk. If sexual activity poses a greater cost to health during this specific window, natural selection may have favored mechanisms that reduce the drive for sex.

A temporary dip in libido would theoretically limit exposure to pathogens when the body is least equipped to fight them. This theory builds on the concept of motivational priorities. It suggests that the brain balances the reproductive benefits of sex against potential survival costs.

“The conjunction of two patterns motivated the hypotheses tested in the paper. First, evidence that immune responses may vary across the menstrual cycle was intriguing and led me to read more about the specific effects that have been documented,” explained study author James R. Roney, a professor and acting chair of the Department of Psychological and Brain Sciences at the University of California, Santa Barbara.

“Suppression of immune responses in the endometrium during the implantation window could increase susceptibility to sexually transmitted infections at that time, and other evidence did in fact support such increased susceptibility.”

“Second, I had previously noticed visual patterns in my own data and those depicted in the figures of other studies in which measures of women’s sexual motivation appeared to be especially low during the mid-luteal cycle phase region that encompasses the human implantation window.”

“Putting the two patterns together suggested that reduced sexual motivation might be a response that evolved to mitigate infection risk at that time. That led us to formally statistically test whether measures of sexual motivation were lower during the implantation window than at other times in the cycle using data from three large, daily diary studies that had been completed in my lab.”

The combined dataset included over 2,500 daily observations from undergraduate women. The researchers restricted their analysis to participants who were not using hormonal contraceptives. They also excluded cycles where pregnancy occurred or where cycle regularity was compromised.

Participants in all three studies completed online surveys every morning. They reported on their experiences and behaviors from the previous day. The primary measure of interest was a self-reported rating of sexual desire. Participants rated how much they desired sexual contact on a scale ranging from one to seven. A second key measure asked participants simply whether they had masturbated that day.

The researchers needed to map these behavioral reports onto the participants’ menstrual cycles with high precision. In two of the studies, participants used daily urine tests to detect surges in luteinizing hormone. In the other study, participants provided saliva samples to measure hormone levels. These biological markers allowed the team to pinpoint the day of ovulation for each cycle.

The implantation window was defined as the period from five to nine days after ovulation. This timeframe corresponds to the mid-luteal phase when progesterone levels are typically at their peak. It is the specific window when the uterine lining is receptive to an embryo.

The researchers used multi-level regression models to analyze the relationship between this window and sexual motivation. This statistical method accounts for the fact that each participant provided multiple days of data.
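As a rough illustration of that kind of model, the sketch below simulates diary-style data and fits a random-intercept regression with statsmodels. The variable names, effect sizes, and data are invented for demonstration; the study's actual models were more elaborate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_participants, n_days = 50, 30
participant = np.repeat(np.arange(n_participants), n_days)
cycle_day = np.tile(np.arange(n_days), n_participants)
# Flag roughly days 5-9 after an assumed day-14 ovulation as the window.
in_window = ((cycle_day >= 19) & (cycle_day <= 23)).astype(int)

baseline = rng.normal(4.0, 0.8, n_participants)[participant]  # per-person baseline desire
desire = baseline - 0.4 * in_window + rng.normal(0, 1.0, participant.size)

df = pd.DataFrame({"participant": participant, "desire": desire,
                   "in_window": in_window})

# A random intercept per participant accounts for repeated daily reports
# coming from the same woman.
model = smf.mixedlm("desire ~ in_window", df, groups=df["participant"])
print(model.fit().summary())
```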

The analysis revealed consistent patterns across the three independent samples. Women reported significantly lower levels of sexual desire during the implantation window compared to other phases of their cycle.

This decline was statistically significant even when the researchers compared the implantation window to other non-fertile days. This suggests the drop is a distinct phenomenon rather than just a return to baseline following ovulation.

The researchers also examined masturbation frequency. The results showed that the odds of a woman masturbating were approximately one-third lower during the implantation window than during the rest of the cycle. This indicates that the reduction in sexual motivation manifests in behavior as well as in subjective feelings.
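"Odds one-third lower" is a statement about odds rather than probabilities, so the practical size of the effect depends on the baseline rate, which the arithmetic below illustrates with an entirely hypothetical baseline.

```python
# Hypothetical baseline: a 15% chance of masturbating on a given day.
baseline_prob = 0.15
baseline_odds = baseline_prob / (1 - baseline_prob)   # ~0.176
window_odds = baseline_odds * (2 / 3)                 # odds reduced by one third
window_prob = window_odds / (1 + window_odds)         # back to a probability
print(f"{baseline_prob:.3f} -> {window_prob:.3f}")    # 0.150 -> ~0.105
```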

“Because many variables (aside from hormonal influences) may influence sexual desire, it is difficult to say how much the effects that we detected would be noticed as practically significant in daily life,” Roney told PsyPost. “We do know that, on average, women did consciously report less desire at this time, and so our arguments provide a possible explanation for why women may notice lower desire specifically in the second half of the menstrual cycle during the implantation window.”

Further analysis compared the implantation window specifically to the fertile window. As seen in previous research, sexual desire peaked near ovulation when conception is possible. The drop in desire during the implantation window was distinct from this peak. The data indicates a specific suppression of motivation during the mid-luteal phase.

The researchers also investigated desire directed toward romantic partners. Among the subset of women in relationships, desire for their specific partner tended to decline during the implantation window. Interest in new or extra-pair partners also showed a decrease. These findings align with the theory that the body downregulates sexual interest generally to avoid pathogen exposure.

The researchers also addressed whether the drop in desire was simply due to menstruation. Sexual activity often decreases during menstrual bleeding. However, the analysis showed that the drop in desire during the implantation window was significant even when compared only to days without menstrual bleeding. The effect was specific to the timeframe of endometrial receptivity.

These findings support the idea that the menstrual cycle involves a trade-off between reproductive opportunity and immune protection.

“Fairly strong evidence had supported the idea that, on average, women’s sexual desire may be relatively higher near ovulation on days when it is possible to conceive,” Roney explained. “Our findings suggest that, conversely, there may be a region of the menstrual cycle in which women’s desire tends to be especially suppressed.

“This region corresponds to the time when an embryo would attach to the uterine lining if conception had occurred. Immune responses are reduced during that implantation window to avoid attacking the embryo, but that immunosuppression may increase risk of contracting sexually transmitted infections at that time. Thus, the reduced sexual desire at this time may have evolved to reduce the risk of contracting pathogenic infections through sex.”

The study, like all research, does have some limitations. The sample consisted entirely of university students. These participants were young and mostly not cohabiting with long-term partners. Sexual patterns might differ in older populations or among couples trying to conceive. “It would be ideal to test replication of these patterns in other samples of women,” Roney said.

Future research could attempt to link these behavioral shifts to physiological signals. Measuring specific immune proteins or hormones associated with the implantation process could strengthen the evidence.

“We would like to rigorously investigate the physiological signals that may cause the reduced sexual motivation that we observed during the implantation window,” Roney said.

The study, “Decreased sexual motivation during the human implantation window,” was authored by James R. Roney, Zachary L. Simmons, Mei Mei, Rachel L. Grillot, and Melissa Emery Thompson.

Narcissism shows surprisingly consistent patterns across 53 countries, study finds

29 January 2026 at 15:00

New research conducted across more than 50 nations indicates that the demographic factors associated with narcissism are remarkably consistent around the globe. The findings suggest that younger adults, men, and individuals who perceive themselves as having high social status tend to display higher levels of narcissistic traits, regardless of their cultural background. The research was published in the journal Self and Identity.

Psychology has historically faced a significant limitation regarding the diversity of its study participants. The vast majority of existing knowledge about personality traits comes from research conducted in Western, Educated, Industrialized, Rich, and Democratic societies.

This geographic bias makes it difficult to determine whether psychological patterns are universal features of human nature or specific cultural byproducts. Scientists have debated whether the tendency for certain demographic groups to display higher narcissism is a global phenomenon or one unique to specific societies.

“Most of what we know about narcissism comes from studies conducted in the United States or a small handful of Western countries,” said study author William J. Chopik, an associate professor of psychology at Michigan State University.

“That makes it hard to know whether well-known patterns—like younger people, men, or higher-status individuals scoring higher in narcissism—are culturally specific or more universal. We wanted to address that gap by examining narcissism across 53 countries and asking not only whether levels differ across cultures, but whether the same demographic patterns hold up around the world.”

The researchers utilized a multidimensional framework for understanding narcissism rather than treating it as a single trait. They employed the Narcissistic Admiration and Rivalry Concept. This model distinguishes between two specific strategies individuals use to maintain a grandiose self-view.

The first strategy is narcissistic admiration. This aspect involves agentic self-promotion, striving for uniqueness, and seeking social praise. It is often associated with social potency and initial popularity. The second strategy is narcissistic rivalry. This aspect is more antagonistic and involves self-defense, devaluation of others, and striving for supremacy.

The researchers analyzed data from a massive international sample collected as part of the International Collaboration on Social and Moral Psychology. The final dataset included 45,800 participants from 53 different countries. The sample size per country ranged from 148 in Ecuador to 2,133 in Australia.

Participants completed the Narcissistic Admiration and Rivalry Questionnaire. This measure asked respondents to rate their agreement with statements designed to assess both the agentic and antagonistic aspects of narcissism. Examples include statements about enjoying being the center of attention or wanting rivals to fail.

To measure perceived social status, the study utilized the MacArthur Scale of Subjective Social Status. Participants were presented with an image of a ladder representing the social hierarchy of their society. They were asked to place themselves on the rung that best represented their standing in terms of money, education, and employment.

The researchers also incorporated country-level data to assess cultural context. They used Gross Domestic Product per capita to measure national economic prosperity. To measure cultural values, they utilized the Global Collectivism Index. This index assesses the degree to which a society prioritizes group cohesion and interdependence over individual autonomy.

The analysis revealed that demographic differences in narcissism were largely consistent across the 53 countries. Younger adults reported higher levels of both narcissistic admiration and rivalry compared to older adults. This finding aligns with developmental theories suggesting that narcissistic traits may help young adults establish autonomy and acquire resources.

As individuals age, they typically shift their focus toward prosocial goals and emotional stability. This maturation process appears to coincide with a reduction in narcissistic tendencies globally. The study provides evidence that this age-related decline is not specific to any single culture.

Gender differences also followed a consistent pattern worldwide. Men reported higher levels of narcissism than women across the majority of the nations surveyed. This gender gap was observed for both the admiration and rivalry dimensions of the trait.

Social role theories suggest that these differences may stem from societal expectations. Men are often socialized to be assertive and dominant, traits that overlap with narcissism. Women are frequently encouraged to be communal and nurturing, behaviors that conflict with self-absorption.

The researchers also found a robust link between perceived social status and narcissism. Individuals who placed themselves higher on the social ladder tended to report higher levels of narcissism. This association was observed consistently across the different cultural contexts.

People with high levels of narcissism often feel entitled to special privileges and view themselves as superior. This self-view likely drives them to seek out high-status positions. Conversely, achieving a high perceived status may reinforce narcissistic tendencies by validating their feelings of superiority.

While the demographic patterns were consistent, the average levels of narcissism did vary by country. The data indicated that people living in nations with a higher Gross Domestic Product reported higher levels of narcissism. This was particularly true for the dimension of narcissistic admiration.

This finding supports the notion that economic prosperity may create an environment that encourages self-focus. In wealthier societies, there may be more opportunities and cultural permission to engage in self-promotion. However, the relationship between culture and narcissism proved to be more complex than simply linking it to wealth.

“Most of the effects we observed are modest in size, which is typical for large, cross-cultural studies of personality,” Chopik told PsyPost. “That said, even small differences can matter when they show up consistently across tens of thousands of people and dozens of countries.”

“And there are also a lot of within country differences, such that even when looking at one country, people might dramatically differ from one another (and sometimes two people within a country vary more than two people from different countries). The real contribution here isn’t about pinpointing ‘the most narcissistic country,’ but about understanding how stable patterns of personality relate to culture, age, gender, and social standing.”

A notable finding from the study challenges the traditional view that narcissism is strictly a product of individualistic cultures. The researchers found that participants from more collectivistic countries reported higher levels of narcissism.

“One of the more surprising findings was that people from more collectivistic countries sometimes reported higher, not lower, levels of narcissism—particularly on the more agentic, admiration side,” Chopik said. “This challenges the common assumption that narcissism is mainly a product of highly individualistic cultures. It suggests that narcissistic traits may serve different functions in different cultural contexts, such as navigating social hierarchies rather than standing out as unique.”

“There’s an emerging literature about how the individualism/collectivism distinction is not as clean as people think—that collectivistic countries are these Pollyanna-ish utopias where everyone gets along. Rather, there are some examples in which collectivistic cultures are more competitive and could be more attuned to themselves and the hierarchies they find themselves in.”

The researchers examined whether cultural factors changed the strength of the demographic associations. For instance, they tested if the gender gap in narcissism was smaller or larger in collectivistic countries. The analysis showed that culture did not significantly moderate these demographic differences.

This lack of moderation implies that the mechanisms driving demographic differences in narcissism are relatively universal. The developmental processes of aging and the societal shaping of gender roles appear to exert a similar influence on personality regardless of the specific cultural backdrop.
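In modeling terms, a moderation test of this kind usually comes down to a cross-level interaction between an individual-level predictor and a country-level one. The sketch below simulates data in which the gender gap is the same everywhere, so the interaction coefficient comes out near zero; all names and numbers are illustrative, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_countries, per_country = 53, 200
country = np.repeat(np.arange(n_countries), per_country)
collectivism = rng.normal(0, 1, n_countries)[country]  # country-level score
male = rng.integers(0, 2, country.size)

# Simulated so the gender gap (0.3) does not vary with collectivism.
narcissism = 3.0 + 0.3 * male + 0.1 * collectivism + rng.normal(0, 1, country.size)
df = pd.DataFrame({"country": country, "male": male,
                   "collectivism": collectivism, "narcissism": narcissism})

# The 'male:collectivism' term is the moderation test; a coefficient near
# zero mirrors a finding that culture does not moderate the gender gap.
model = smf.mixedlm("narcissism ~ male * collectivism", df, groups=df["country"])
print(model.fit().summary())
```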

“One key takeaway is that narcissism isn’t just a ‘Western’ phenomenon, nor does it look wildly different across cultures,” Chopik told PsyPost. “Younger people, men, and those who see themselves as higher in social status tend to report higher narcissism almost everywhere we looked. At the same time, average levels of narcissism do vary by country, and those differences appear to be linked to broader cultural and economic contexts. So, culture certainly mattered, but not for everything—some patterns are relatively similar in different cultures.”

As with all research, there are some limitations. The data was cross-sectional, meaning it captured a single point in time. This makes it impossible to determine if the age differences are due to developmental changes or generational differences between cohorts.

Future research utilizing longitudinal designs is necessary to track how narcissism changes within individuals over time. This would help clarify whether people truly become less narcissistic as they age or if older generations were simply less narcissistic to begin with.

The authors also note that this study focused on broad cultural dimensions like collectivism and wealth. Other cultural factors, such as political systems, family structures, or religious beliefs, may also play a role in shaping narcissism. Future investigations could explore these additional variables to build a more complete picture.

Potential misinterpretations of these findings should be avoided. The results do not imply that entire nations can be categorized as “narcissistic.”

“A common misinterpretation is to treat these findings as rankings or judgments about entire countries or cultures,” Chopik noted. “That’s not what the data are meant to do. These are average differences with substantial overlap between countries, and individuals within any culture vary far more than cultures do from one another. So I understand the desire to describe the most and least narcissistic countries, but I actually think that’s a little less interesting, especially given that cultural differences aren’t that big.”

The study provides a comprehensive look at how personality traits interact with culture. It moves beyond the simple East-West dichotomy often used in psychology. By including a vast array of nations, the research offers a more nuanced understanding of the human experience.

“A natural next step is to move beyond mean differences and examine how narcissism operates in daily life across cultures—how it relates to relationships, work, and well-being in different contexts,” Chopik explained. “That might include how narcissism changes over time differently depending on the context. We’re also interested in understanding how cultural change, such as economic development or shifts toward individualism, might shape narcissism over time. Longitudinal and mixed-method approaches will be especially important for that.”

“One thing worth emphasizing is that narcissism isn’t inherently ‘good’ or ‘bad,'” the researcher added. “Some aspects, like admiration, can be linked to confidence and motivation, while others, like rivalry, are more clearly associated with interpersonal conflict. Studying narcissism across cultures helps us better understand when and where these traits might be adaptive—and when they might come at a cost.”

The study, “Cultural moderation of demographic differences in narcissism,” was authored by Macy M. Miscikowski, Rebekka Weidmann, Sara H. Konrath, and William J. Chopik.

How AI’s distorted body ideals could contribute to body dysmorphia

What does it look like to have an “athletic body?” What does artificial intelligence think it looks like to have one?

A recent study we conducted at the University of Toronto analyzed appearance-related traits of AI-generated images of male and female athletes and non-athletes. We found that we’re being fed exaggerated — and likely impossible — body standards.

Even before AI, athletes have been pressured to look a certain way: thin, muscular and attractive. Coaches, opponents, spectators and the media shape how athletes think about their bodies.

But these pressures and body ideals have little to do with performance; they’re associated with the objectification of the body. And this phenomenon, unfortunately, is related to a negative body image, poor mental health and reduced sport-related performance.

Given the growing use of AI on social media, understanding just how AI depicts athlete and non-athlete bodies has become critical. What it shows, or fails to show, as "normal" is widely viewed and may soon be normalized.

Lean, young, muscular — and mostly male

As researchers with expertise in body image, sport psychology and social media, we grounded our study in objectification and social media theories. We generated 300 images using different AI platforms to explore how male and female athlete and non-athlete bodies are depicted.

We documented demographics and levels of body fat and muscularity. We assessed clothing fit and type, facial attractiveness (such as neat and shiny hair, symmetrical features or clear skin), and body exposure in each image. Indicators of visible disabilities, like mobility devices, were also noted. We compared the characteristics of male versus female images, as well as those of athlete and non-athlete images.

The AI-generated male images were frequently young (93.3 per cent), lean (68.4 per cent) and muscular (54.2 per cent). The images of females depicted youth (100 per cent), thinness (87.5 per cent) and revealing clothing (87.5 per cent).

The AI-generated images of athletes were lean (98.4 per cent), muscular (93.4 per cent) and dressed in tight (92.5 per cent) and revealing (100 per cent) exercise gear.

Non-athletes were shown wearing looser clothing and displaying more diversity of body sizes. Even when we asked for an image of just “an athlete,” 90 per cent of the generated images were male. No images showed visible disabilities, larger bodies, wrinkles or baldness.

These results reveal that generative AI perpetuates stereotypes of athletes, depicting them as fitting a narrow set of traits: non-disabled, attractive, thin, muscular and exposed.

The findings of this research illustrate the ways in which three commonly used generative AI platforms — DALL-E, MidJourney and Stable Diffusion — reinforce problematic appearance ideals for all genders, athletes and non-athletes alike.

The real costs of distorted body ideals

Why is this a problem?

More than 4.6 billion people use social media and 71 per cent of social media images are generated by AI. That’s a lot of people repeatedly viewing images that foster self-objectification and the internalization of unrealistic body ideals.

They may then feel compelled to diet and over-exercise because they feel bad about themselves — their body does not look like AI-fabricated images. Alternatively, they may also do less physical activity or drop out of sports altogether.

Negative body image affects not only academic performance for young people but also sport-related performance. While staying active can promote a better body image, negative body image does the exact opposite. It exacerbates dropout and avoidance.

Given that approximately 27 per cent of Canadians over the age of 15 have at least one disability, the fact that none of the generated images included someone with a visible disability is also striking. In addition to not showing disabilities when it generates images, AI has also been reported to erase disabilities on images of real people.

People with body fat, wrinkles or baldness were also largely absent.

Addressing bias in the next generation of AI

These patterns reveal that AI isn’t realistic or creative in its representations. Instead, it pulls from the massive database of media available online, where the same harmful appearance ideals dominate. It’s recycling our prejudices and forms of discrimination and offering them back to us.

AI learns body ideals from the same biased society that has long fuelled body image pressure. This leads to a lack of diversity and a vortex of unreachable standards. AI-generated images present exaggerated, idealized bodies that ultimately narrow how human diversity is represented, and the lowered body image satisfaction that ensues is related to greater loneliness.

And so, as original creators of the visual content that trains AI systems, society has a responsibility to ensure these technologies do not perpetuate ableism, racism, fatphobia and ageism. Users of generative AI must be intentional in how image prompts are written, and critical in how they are interpreted.

We need to limit the sort of body standards we internalize through AI. As AI-generated images continue to populate our media landscape, we must be conscious of our exposure to them. Because at the end of the day, if we want AI to reflect reality rather than distort it, we have to insist on seeing, and valuing, every kind of body.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Study links burnout and perfectionism to imposter phenomenon in psychiatrists

29 January 2026 at 03:00

A study of psychiatrists in Turkey found strong correlations between the imposter phenomenon and burnout, maladaptive perfectionism, and compassion fatigue. In other words, psychiatrists who experienced burnout, compassion fatigue, and maladaptive perfectionism were more likely to doubt their abilities and fear being exposed as frauds despite objective evidence of competence. The research was published in BMC Psychiatry.

The imposter phenomenon refers to a persistent feeling of intellectual or professional fraudulence despite clear evidence of competence and achievement. Individuals experiencing it tend to attribute their own success to luck, effort, or external factors rather than ability. It was first described by psychologists Pauline Clance and Suzanne Imes in the late 1970s.

The phenomenon is common among high-achieving individuals, particularly in competitive academic or professional environments. People with imposter feelings fear being exposed as incompetent by others. These feelings can coexist with objectively strong performance and external recognition. The imposter phenomenon is associated with anxiety, stress, and reduced job or academic satisfaction. It is not a mental disorder but a psychological pattern of self-evaluation. Social comparison, perfectionism, and minority or outsider status can intensify imposter experiences.

Study author Nur Nihal Türkel and her colleagues wanted to explore the relationship between the imposter phenomenon, burnout, and maladaptive perfectionism among mental health professionals. They note that because maladaptive perfectionism and the imposter phenomenon both stem from elevated expectations and feelings of inadequacy, they are likely to be related. Maladaptive perfectionism is a pattern of striving for unrealistically high standards accompanied by excessive self-criticism, fear of failure, and distress when those standards are not met.

Study participants were 160 psychiatrists from Turkey between 24 and 70 years of age. The study authors recruited them by emailing psychiatrists registered with the Psychiatric Association of Turkey. The participants' average age was approximately 34 years, and 69% were women. Nearly half (46%) worked in university hospitals, and 37% worked in public hospitals.

Study participants completed an online survey that included assessments of burnout, compassion satisfaction, and compassion fatigue (the Professional Quality of Life Scale), perfectionism (the Almost Perfect Scale-Revised), and the imposter phenomenon (the Clance Imposter Scale).

Results showed that individuals with a more pronounced imposter phenomenon tended to have more pronounced maladaptive perfectionism, compassion fatigue, and burnout. They also tended to experience lower compassion satisfaction and to be younger on average.

“This study found that burnout and maladaptive perfectionism impact the imposter phenomenon in psychiatrists. To mitigate the effects of the imposter phenomenon on mental health professionals, societal norms that contribute to burnout and perfectionism must be reassessed,” the study authors concluded.

The study contributes to the scientific understanding of the psychological underpinnings of the imposter phenomenon. However, it should be noted that all study data was collected using self-reports, leaving room for reporting bias to have affected the results. Additionally, the cross-sectional design of the study does not allow any causal inferences to be derived from the results.

The paper, “The imposter phenomenon in psychiatrists: relationships among compassion fatigue, burnout, and maladaptive perfectionism,” was authored by Nur Nihal Türkel, Ahmet Selim Başaran, Hande Gazey, and İrem Ekmekçi Ertek.

Menopause is linked to reduced gray matter and increased anxiety

29 January 2026 at 01:00

New research suggests that menopause is accompanied by distinct changes in the brain's structure and a notable increase in mental health challenges. While hormone replacement therapy appears to aid in maintaining reaction speeds, it does not seem to prevent the loss of brain tissue or alleviate symptoms of depression, according to this dataset. These observations were published online in the journal Psychological Medicine.

Menopause represents a major biological transition marked by the cessation of menstruation and a steep decline in reproductive hormones. Women frequently report a variety of symptoms during this time, ranging from hot flashes to difficulties with sleep and mood regulation.

Many individuals turn to hormone replacement therapy to manage these physical and psychological obstacles. Despite the common use of these treatments, the medical community still has questions about how these hormonal shifts affect the brain itself. Previous research has yielded mixed results regarding whether hormone treatments protect the brain or potentially pose risks.

To clarify these effects, a team of researchers from the University of Cambridge undertook a large-scale analysis. Katharina Zuhlsdorff, a researcher in the Department of Psychology at the University of Cambridge, served as the lead author on the project.

She worked alongside senior author Barbara J. Sahakian and colleagues from the Departments of Psychiatry and Psychology. Their objective was to provide a clearer picture of how the end of fertility influences mental well-being, thinking skills, and the physical architecture of the brain.

The team utilized data from the UK Biobank, a massive biomedical database containing genetic and health information from half a million participants. For this specific investigation, they selected a sample of nearly 125,000 women.

The researchers divided these participants into three distinct groups to allow for comparison. These groups included women who had not yet gone through menopause, post-menopausal women who had never used hormone therapy, and post-menopausal women who were users of such therapies.

The investigation first assessed psychological well-being across the different groups. The data showed that women who had passed menopause reported higher levels of anxiety and depression compared to those who had not.

Sleep quality also appeared to decline after this biological transition. The researchers observed that women taking hormone replacement therapy actually reported more mental health challenges than those who did not take it. This group also reported higher levels of tiredness.

This result initially seemed counterintuitive, as hormone therapy is often prescribed to help with mood. To understand this, the authors looked backward at the medical history of the participants. They found that women prescribed these treatments were more likely to have had depression or anxiety before they ever started the medication. This suggests that doctors may be prescribing the hormones specifically to women who are already struggling with severe symptoms.

The study also tested how quickly the participants could think and process information. The researchers found that reaction times typically slow down as part of the aging process.

However, menopause seemed to speed up this decline in processing speed. In this specific domain, hormone therapy appeared to offer a benefit. Post-menopausal women taking hormones had reaction times that were faster than those not taking them, effectively matching the speeds of pre-menopausal women.

Dr. Katharina Zühlsdorff noted the nuance in these cognitive findings. She stated, “Menopause seems to accelerate this process, but HRT appears to put the brakes on, slowing the ageing process slightly.”

While reaction times varied, the study did not find similar differences in memory performance. The researchers administered tasks designed to test prospective memory, which is the ability to remember to perform an action later. They also used a digit-span task to measure working memory capacity. Across all three groups, performance on these memory challenges remained relatively comparable.

A smaller subset of about 11,000 women underwent magnetic resonance imaging scans to measure brain volume. The researchers focused on gray matter, the tissue containing the cell bodies of nerve cells. They specifically looked at regions involved in memory and emotional regulation. These included the hippocampus, the entorhinal cortex, and the anterior cingulate cortex.

The hippocampus is a seahorse-shaped structure deep in the brain that is essential for learning and memory. The entorhinal cortex functions as a gateway, channeling information between the hippocampus and the rest of the brain. The anterior cingulate cortex plays a primary role in managing emotions, impulse control, and decision-making.

The scans revealed that post-menopausal women had reduced gray matter volume in these key areas compared to pre-menopausal women. This reduction helps explain the higher rates of mood issues in this demographic. Unexpectedly, the group taking hormone therapy showed the lowest brain volumes of all. The treatment did not appear to prevent the loss of brain tissue associated with the end of reproductive years.

The specific regions identified in the study are often implicated in neurodegenerative conditions. Professor Barbara Sahakian highlighted the potential long-term importance of this observation. She explained, “The brain regions where we saw these differences are ones that tend to be affected by Alzheimer’s disease. Menopause could make these women vulnerable further down the line.”

While the sample size was large, the study design was observational rather than experimental. This means the researchers could identify associations but cannot definitively prove that menopause or hormone therapy caused the changes.

The UK Biobank population also tends to be wealthier and healthier than the general public, which may skew the results. Additionally, the study relied on self-reported data for some measures, which can introduce inaccuracies.

The finding regarding hormone therapy and lower brain volume is difficult to interpret without further research. It remains unclear if the medication contributes to the reduction or if the women taking it had different brain structures to begin with.

The researchers emphasize that more work is needed to disentangle these factors. Future studies could look at genetic factors or other health conditions that might influence how hormones affect the brain.

Despite these limitations, the research highlights the biological reality of menopause. It confirms that the transition involves more than just reproductive changes.

Study co-author Christelle Langley emphasized the need for broader support systems. She remarked, "We all need to be more sensitive to not only the physical, but also the mental health of women during menopause, however, and recognise when they are struggling."

The study, “Emotional and cognitive effects of menopause and hormone replacement therapy,” was authored by Katharina Zuhlsdorff, Christelle Langley, Richard Bethlehem, Varun Warrier, Rafael Romero Garcia, and Barbara J Sahakian.

Having a close friend with a gambling addiction increases personal risk, study finds

28 January 2026 at 23:00

Having a close relationship with someone who suffers from a gambling problem increases the likelihood that an individual will develop similar issues over time. A new longitudinal analysis published in the Journal of Gambling Studies has found that while strong family bonds can shield adults from this risk, close friendships do not appear to offer the same protection. These findings suggest that the social transmission of gambling behaviors operates differently depending on the nature of the relationship.

For decades, researchers have recognized that addiction often ripples through social networks. This phenomenon is well-documented in the study of alcohol and substance use. Scientists refer to this as the transmission of problem behavior. The impact of a person’s addiction extends beyond themselves, affecting family members, partners, and friends. In Finland, where this research took place, estimates suggest that approximately 20 percent of adults identify as “affected others” of someone else’s gambling. These individuals often bear significant emotional, financial, and health-related burdens.

Past inquiries into gambling transmission have predominantly focused on intergenerational lines. Studies have frequently examined how parents influence their children or how peer pressure impacts adolescents. Far less is known about how these dynamics function among adults. It has remained unclear whether adult gambling is primarily an individual trait or a behavior continuously shaped by social interactions. The protective potential of different types of social connections has also been an open question.

Emmi Kauppila, a doctoral researcher at the Faculty of Social Sciences at Tampere University in Finland, led the new investigation. She collaborated with a team of scholars from the University of Helsinki, the University of Turku, and the University of Bath in the United Kingdom. The researchers sought to determine if exposure to problem gambling in adulthood predicts an increase in one’s own gambling severity. They also aimed to test whether having strong, supportive relationships could act as a buffer against this potential harm.

The team employed a longitudinal survey design to answer these questions. They recruited 1,530 adults residing in mainland Finland to participate in the study. The data collection spanned from April 2021 to September 2024. Participants completed surveys across eight separate waves, with each wave occurring at six-month intervals. This repeated-measures design allowed the scientists to track changes within specific individuals over time, rather than relying on a single snapshot of the population.

The researchers assessed gambling severity using the Problem Gambling Severity Index. This is a standard screening tool where respondents rate their gambling behaviors and consequences on a scale from zero to 27. Higher scores indicate a greater risk of problem gambling. Participants also reported whether they had a family member or a close friend who had experienced gambling problems. To measure the quality of these relationships, the study used the Social and Emotional Loneliness Scale for Adults. This metric evaluated how connected and supported the participants felt by their families and friends.

To analyze the data, the team used a statistical technique known as hybrid multilevel regression modeling. This method is particularly useful for longitudinal data. It allows researchers to distinguish between differences among people and changes that happen to a specific person. The model could determine if a person’s gambling habits changed during the specific six-month periods when they reported exposure to a problem gambler.
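A stripped-down version of that within/between decomposition can be written with person-mean centering: each predictor is split into the person's average across waves and the wave-specific deviation from that average. The sketch below uses simulated data and invented variable names, not the study's actual model specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n_people, n_waves = 300, 8
person = np.repeat(np.arange(n_people), n_waves)
exposed = rng.integers(0, 2, person.size)  # reports a problem gambler this wave?

pgsi = (1.0 + 0.5 * exposed
        + rng.normal(0, 1.0, n_people)[person]   # stable person-level differences
        + rng.normal(0, 0.8, person.size))
df = pd.DataFrame({"person": person, "exposed": exposed, "pgsi": pgsi})

# Hybrid decomposition: a person's average exposure across waves (between)
# versus this wave's deviation from their own average (within).
df["exposed_between"] = df.groupby("person")["exposed"].transform("mean")
df["exposed_within"] = df["exposed"] - df["exposed_between"]

# The within-person coefficient asks whether scores rise in the waves when
# the same person reports exposure, relative to their own baseline.
model = smf.mixedlm("pgsi ~ exposed_within + exposed_between", df,
                    groups=df["person"])
print(model.fit().summary())
```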

The analysis revealed that exposure to problem gambling within a social circle predicted a rise in an individual’s own gambling issues. When a participant reported that a family member had a gambling problem, their own score on the severity index increased by a measurable margin. This “within-person” effect suggests that the change in the social environment directly influenced the individual’s behavior. A similar pattern was observed regarding friends. Individuals who had friends with gambling problems tended to have higher severity scores themselves.

However, a distinct difference emerged when the researchers examined the protective role of relationship quality. The data showed that positive family relationships moderated the risk. Participants who reported strong, supportive connections with their family members were less likely to see their gambling increase, even when a family member had a gambling problem. The emotional support and connectedness provided by the family unit appeared to act as a buffer. This suggests that a supportive family environment can mitigate the transmission of harmful behaviors.

The same protective effect was not found for friendships. Strong emotional bonds with friends did not reduce the risk of acquiring gambling problems from a peer. The analysis indicated that close friendships did not buffer the impact of exposure. In some cases, high-quality friendships with problem gamblers were associated with higher risks for the individual. The researchers propose several explanations for this discrepancy.

One possibility is that peer groups often normalize risky behaviors. If gambling is a shared activity among friends, it may be viewed as a standard form of social interaction. In such contexts, a close friendship might reinforce the behavior rather than discourage it. This mirrors findings in alcohol research, where “drinking buddies” may encourage consumption. The authors also suggest that individuals might select friends who share similar attitudes toward risk. Consequently, the social environment maintains the habit rather than disrupting it.

Another interpretation involves social withdrawal. People who are affected by a loved one’s gambling often experience shame or stigma. This can lead them to isolate themselves from broader social support networks. They might feel that friends would not understand their situation. This isolation can prevent friends from acting as a protective resource. In contrast, family members are often already embedded in the dynamic and may be better positioned to offer support or monitoring.

Richard Velleman, an emeritus professor at the University of Bath and co-author of the paper, highlighted the broader implications of these results. He stated, “It has long been known that alcohol-related problems run in families – this study demonstrates that this is also the case with gambling.” He noted the importance of recognizing the severity of the issue. Velleman added, “This is an important discovery, as many people don’t see gambling problems as equivalent to alcohol or drug problems, as gamblers don’t ‘ingest’ anything, yet gambling can equally lead to serious problems which cause serious harm to individuals and families.”

The findings support the idea that gambling harm is not solely an individual pathology. It is a systemic issue that clusters in social networks. Emmi Kauppila noted, “In this paper, we demonstrate that gambling-related problems cluster within families and close relationships in ways similar to alcohol- and other substance-related harms.” She emphasized that the mechanism of transmission involves “shared environments, stressors and social dynamics.”

This perspective suggests that prevention and treatment strategies need to evolve. Interventions that focus exclusively on the individual gambler may miss a vital component of the recovery process. The study advocates for family-oriented approaches. Therapies that include family members could help strengthen the protective bonds that buffer against transmission. By addressing the needs of “affected others,” clinicians may be able to break the cycle of harm.

There are limitations to the study that contextually frame the results. The research was conducted in Finland, a nation with a specific cultural relationship to gambling. Gambling is widely accepted in Finland and is integrated into the funding of the welfare state. This cultural normalization might influence how gambling behaviors are shared and perceived. The results might differ in countries with more restrictive gambling laws or different cultural attitudes.

Additionally, the study relied on participants to report the gambling problems of their family and friends. These reports reflect the participants’ perceptions and were not clinically verified diagnoses. It is possible that some participants overestimated or underestimated the severity of their loved ones’ problems. The data also did not specify which family member was the source of the exposure. The influence of a partner might differ from that of a parent or sibling. The sample size for specific family roles was too small to analyze separately.

Future research could benefit from a more granular approach. Identifying specific family roles would clarify the transmission dynamics. Verifying the gambling status of the social network members would also strengthen the evidence. Comparative studies in other countries would help determine if these patterns are universal or culture-specific.

Despite these caveats, the study provides robust evidence that adult gambling behavior is deeply intertwined with social relationships. It challenges the view of the solitary gambler. The people surrounding an individual play a role in either amplifying risk or providing protection. Recognizing the power of these social bonds may be key to developing more effective harm reduction strategies.

The study, “Problem Gambling Transmission. An Eight-wave Longitudinal Study on Problem Gambling Among Affected Others,” was authored by Emmi Kauppila, Sari Hautamäki, Iina Savolainen, Sari Castrén, Richard Velleman and Atte Oksanen.

Vulnerable narcissism is strongly associated with insecure attachment, study finds

28 January 2026 at 21:00

A new meta-analysis provides evidence that the quality of emotional bonds formed in adulthood is connected to specific types of narcissism. The findings indicate that insecure attachment styles are strong risk factors for vulnerable narcissism, whereas grandiose narcissism appears largely unrelated to these attachment patterns. This research was published in the journal Personality and Individual Differences.

Psychologists classify narcissism into two primary subtypes that share antagonistic traits but differ in their expression. Grandiose narcissism is characterized by extraversion, aggression, and a dominant interpersonal style. Individuals with these traits tend to have an inflated sense of self-importance and often seek to control others.

Vulnerable narcissism presents a different profile marked by introversion and high neuroticism. People with high levels of vulnerable narcissism possess a fragile sense of self and are hypersensitive to the opinions of others. They often display a defensive form of grandiosity that masks deep-seated feelings of inadequacy.

Narcissistic traits are associated with various negative outcomes in life, particularly within interpersonal relationships. Romantic partnerships involving narcissistic individuals often suffer from a lack of commitment and higher rates of infidelity. These relationships can be characterized by manipulation and aggression during conflicts.

To understand the origins of these maladaptive patterns, researchers often look to attachment theory. This theory posits that early experiences with caregivers shape “internal working models” of the self and others. These models persist into adulthood and influence how individuals navigate romantic intimacy and emotional dependency.

Previous research on the link between attachment and narcissism has produced inconsistent results. Some studies have suggested links between narcissism and anxious attachment, while others have pointed toward avoidant styles. The authors of the current study aimed to resolve these inconsistencies by systematically reviewing and synthesizing data from existing literature.

“Our interest came from wanting to better understand developmental risk factors that might help explain how narcissistic traits emerge. The existing literature was inconsistent and often treated narcissism as a single construct, so we conducted a meta-analysis to clarify how different attachment styles relate to different forms of narcissism. This allowed us to bring together a large body of evidence and resolve some of that inconsistency,” said study author Megan Willis, an associate professor at Australian Catholic University.

The researchers searched five major academic databases for studies published up to May 2024. To be included in the review, studies had to be written in English and utilize validated measures of both adult attachment and trait narcissism.

The review focused exclusively on non-clinical adult samples to understand these traits in the general population. The researchers utilized a tool called AXIS to assess the quality and potential bias of the selected studies. This process resulted in a final selection of 33 studies.

The combined sample across these studies included 10,675 participants. The researchers used statistical software to calculate the overall strength of the relationships between narcissism subtypes and four distinct attachment styles. These styles are secure, preoccupied, dismissive, and fearful.
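The standard arithmetic for combining correlations across studies runs through Fisher's r-to-z transformation, weighting each study by its sample size. The sketch below shows a simplified fixed-effect version with made-up numbers; the published analysis used dedicated meta-analysis software and more complete models.

```python
import numpy as np

def pool_correlations(rs, ns):
    """Pool study-level correlations via Fisher's z (simplified sketch)."""
    rs, ns = np.asarray(rs, float), np.asarray(ns, float)
    zs = np.arctanh(rs)          # Fisher r-to-z transform
    weights = ns - 3             # inverse of the variance of z, which is 1/(n - 3)
    z_mean = np.sum(weights * zs) / np.sum(weights)
    return np.tanh(z_mean)       # back-transform to a correlation

# Hypothetical correlations between preoccupied attachment and vulnerable
# narcissism from three studies, with their sample sizes.
print(round(pool_correlations([0.45, 0.38, 0.52], [210, 340, 125]), 3))  # ~0.43
```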

Secure attachment is defined by a positive view of both oneself and others. People with this style are generally comfortable with intimacy and independence. Preoccupied attachment involves a negative view of the self but a positive view of others, leading to anxiety and a need for reassurance.

Dismissive attachment is characterized by a positive view of the self but a negative view of others. Individuals with this style tend to avoid intimacy and prioritize self-reliance. Fearful attachment involves negative views of both the self and others, resulting in a desire for contact paired with a fear of rejection.

The meta-analysis revealed that the relationship between attachment and narcissism depends heavily on the specific subtype of narcissism involved. Vulnerable narcissism showed a significant positive relationship with all three forms of insecure attachment. The strongest association was found between vulnerable narcissism and preoccupied attachment.

This finding suggests that vulnerable narcissism is closely linked to anxiety regarding abandonment and a dependence on external validation. Individuals with these traits may use narcissistic behaviors as a compensatory strategy. They may seek excessive reassurance to regulate a fragile self-esteem that relies on others’ approval.

A moderate positive relationship was also observed between vulnerable narcissism and fearful attachment. This attachment style is often rooted in inconsistent or rejecting caregiving. The link implies that vulnerable narcissism may involve defensive withdrawal and hypervigilance in relationships.

“In many ways the findings were consistent with what we expected, particularly the link between insecure attachment and vulnerable narcissism,” Willis told PsyPost. “What did surprise us was the strength of those relationships, especially for preoccupied and fearful attachment. The effects were stronger than I would have predicted going into the study.”

The researchers also found a weak but significant relationship between vulnerable narcissism and dismissive attachment. This indicates that while these individuals may crave validation, they also employ strategies to maintain emotional distance. Consistent with these findings, vulnerable narcissism was negatively associated with secure attachment.

The results for grandiose narcissism presented a sharp contrast. The analysis showed no significant relationship between grandiose narcissism and any of the insecure attachment styles. There was a negligible positive relationship with secure attachment, but it was not strong enough to be considered practically meaningful.

These findings challenge the idea that all forms of narcissism stem from deep-seated insecurity or attachment wounds. Grandiose narcissism appears to be distinct from the anxiety and avoidance that characterize vulnerable narcissism. Some theories suggest grandiose traits may stem from parental overvaluation rather than lack of warmth.

“The key takeaway is that attachment styles — particularly fearful and preoccupied attachment — are important risk factors for vulnerable narcissism,” Willis explained. “This suggests that fostering secure attachment in childhood and helping people work through attachment wounds later in life may reduce the risk of these patterns developing or persisting.”

As with all research, there are some limitations. The data analyzed were cross-sectional, capturing a snapshot in time. This prevents researchers from determining whether insecure attachment causes narcissism or whether narcissistic traits lead to insecure attachment.

“These findings are correlational, so we cannot say attachment causes narcissism,” Willis noted. “They also do not mean that everyone with insecure attachment will develop vulnerable narcissism. What our results do suggest, however, is that for people who are high in vulnerable narcissism, insecure attachment may be an important risk factor.”

The reliance on self-report measures is another constraint. Individuals with narcissistic traits may lack the self-awareness or willingness to report their behaviors accurately. This is especially true for grandiose narcissists who may exaggerate their sense of security.

Future research should focus on longitudinal studies that track individuals from childhood through adulthood. This would help clarify the causal pathways between early caregiving experiences and the development of narcissistic traits. Researchers also recommend investigating how these dynamics might differ across various cultures and genders.

“A key long-term goal is to increase understanding and education about the importance of attachment in childhood and how early relationships can have lifelong effects,” Willis said. “I’m particularly interested in how parenting and early caregiving shape emotion regulation and interpersonal functioning. In my current work, I’m examining whether difficulties with emotion regulation help explain the link between vulnerable narcissism and intimate partner violence. This may help inform more targeted prevention and intervention strategies.”

The study, “The relationship between attachment styles and narcissism: a systematic and meta-analytic review,” was authored by Jamie Mohay, Kadie Cheng, Xochitl de la Piedad Garcia, and Megan L. Willis.

The psychology behind why we pay to avoid uncertainty

28 January 2026 at 19:00

Most people are familiar with the feeling of anxiety while waiting for the result of a medical test or a job interview. A new study suggests that this feeling of dread is far more powerful than the excitement of looking forward to a positive outcome.

The research indicates that the intensity of this dread drives people to avoid risks and demand immediate results. This tendency helps explain why impatience and risk-avoidance often appear together in the same individuals. The findings were published in the journal Cognitive Science.

Economists have traditionally viewed risk-taking and patience as separate character traits. A person could theoretically be a daring risk-taker while also being very patient. However, researchers have frequently observed that these two traits tend to correlate. People who are unwilling to take risks are often the same people who are unwilling to wait for a reward.

Chris Dawson of the University of Bath and Samuel G. B. Johnson of the University of Waterloo sought to explain this connection. They proposed that the link lies in the emotions people feel while waiting for an outcome. They distinguished between the feelings experienced after an event occurs and the feelings experienced beforehand.

When an event happens, we feel “reactive” emotions. We feel pleasure when we win money or displeasure when we lose it. But before the event occurs, we experience “anticipatory” emotions. We might savor the thought of a win or dread the possibility of a loss.

The researchers hypothesized that these anticipatory emotions are not symmetrical. They suspected that the dread of a future loss is much stronger than the savoring of a future gain. If this is true, it would create a psychological cost to waiting.

To test this theory, Dawson and Johnson analyzed a massive dataset from the United Kingdom. They used the British Household Panel Survey and the Understanding Society study. Together, these surveys followed approximately 14,000 individuals from 1991 to 2024.

The team needed a way to measure dread and savoring without asking participants directly. They developed a novel method using data on financial expectations and general well-being. The survey asked participants if they expected their financial situation to get better or worse over the next year.

The researchers then looked at how these expectations affected the participants’ current happiness. If a person expected to be worse off and their happiness dropped, that drop represented dread. If they expected to be better off and their happiness rose, that rise represented savoring.
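As a rough illustration of this logic (not the authors' actual code; the file and column names here are hypothetical), the effect could be estimated with a fixed-effects panel regression:

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical panel: one row per person-year, with current life
    # satisfaction and dummies for expecting to be financially worse off
    # or better off over the next year.
    df = pd.read_csv("panel.csv")

    # Person and year fixed effects mean the coefficients reflect
    # within-person changes: a drop in satisfaction when expecting a loss
    # is read as dread; a rise when expecting a gain is read as savoring.
    # (With thousands of people, dedicated fixed-effects estimators are
    # used in practice rather than explicit dummy variables.)
    model = smf.ols(
        "satisfaction ~ expect_worse + expect_better"
        " + C(person_id) + C(year)",
        data=df,
    ).fit()

    print(model.params[["expect_worse", "expect_better"]])

An asymmetry of the kind the paper reports would show up as the dread coefficient being several times larger in magnitude than the savoring coefficient.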

The analysis revealed a dramatic imbalance between these two emotional states. The negative impact of anticipating a loss was more than six times stronger than the positive impact of anticipating a gain. This suggests that the human brain weighs future pain much more heavily than future pleasure.

The researchers also measured “reactive” emotions using the same method. They looked at how participants felt after they actually experienced a financial loss or gain. As expected, losses hurt more than gains felt good.

However, the imbalance in reactive emotions was much smaller than the imbalance in anticipatory emotions. Realized losses were about twice as impactful as realized gains, a two-to-one asymmetry compared with the roughly six-to-one asymmetry found in anticipation. In other words, anticipatory dread was about three times more lopsided than the reactive experience.

This finding implies that the waiting period itself is a major source of distress. The researchers describe this phenomenon as “dread aversion.” It is distinct from the more famous concept of loss aversion, which describes how realized losses loom larger than realized gains; dread aversion concerns the emotional cost of the wait itself.

The study then connected these emotional patterns to economic preferences. The survey included questions about the participants’ willingness to take risks in general. It also measured their patience through a delayed gratification scale.

The results showed a strong correlation between high levels of dread and risk-avoidance. People who experienced intense dread were much less likely to take risks. This makes sense within the researchers’ framework.

Taking a gamble creates a situation where a negative outcome is possible. This possibility triggers dread. By avoiding the risk entirely, the individual removes the source of the dread.

The results also showed a strong connection between dread and impatience. People who felt high levels of dread were less willing to wait for rewards. This also aligns with the researchers’ model.

Waiting for an uncertain outcome prolongs the experience of dread. A person who hates waiting may simply be trying to shorten the time they spend feeling anxious. They choose immediate rewards to stop the emotional wheel from spinning.

The study found that savoring plays a much smaller role in decision-making. The pleasure of imagining a good outcome is generally weak. This may be because positive anticipation is often mixed with the fear that the good event might not happen.

The authors checked to see if these results were simply due to personality traits. For example, a person with high neuroticism might naturally be both anxious and risk-avoidant. The researchers controlled for the “Big Five” personality traits in their analysis.

Even after accounting for neuroticism and other traits, the effect of dread remained. This suggests that the asymmetry of anticipatory emotions is a distinct psychological mechanism. It is not just a symptom of being a generally anxious person.

This research offers a unified explanation for economic behavior. It suggests that risk preferences and time preferences are not independent. They are both shaped by the desire to manage anticipatory emotions.

The authors use the analogy of a roulette wheel to explain their findings. When a person bets on roulette, they are not just weighing the odds of winning or losing. They are also deciding if they can endure the feeling of watching the wheel spin.

If the dread of losing is overwhelming, the person will not bet at all. If they do bet, they will want the wheel to stop as quickly as possible. The act of betting creates a stream of emotional discomfort that lasts until the result is known.

There are some limitations to this study. It relies on observational data rather than a controlled experiment. The researchers inferred emotions from survey responses rather than measuring them physiologically.

Additionally, the study assumes that changes in well-being are caused by financial expectations. It is possible that other unmeasured factors influenced both happiness and expectations. However, the use of longitudinal data helps to account for stable individual differences.

The findings have implications for various sectors. In healthcare, patients might avoid screening tests because the dread of a bad result outweighs the benefit of knowing. Reducing the waiting time for results could encourage more people to get tested.

In finance, investors might choose low-return savings accounts over stocks to avoid the anxiety of market fluctuations. This “dread premium” could explain why safe assets are often overvalued. Investors pay a price for emotional tranquility.

Future research could investigate how to modify these anticipatory emotions. If people can learn to reduce their dread, they might make better long-term decisions. Techniques from cognitive behavioral therapy could potentially help investors and patients manage their anxiety.

The study provides a new lens through which to view human irrationality. We often make choices that look bad on paper because we are optimizing for our current emotional state. We are willing to pay a high price to avoid the shadow of the future.

The study, “Asymmetric Anticipatory Emotions and Economic Preferences: Dread, Savoring, Risk, and Time,” was authored by Chris Dawson and Samuel G. B. Johnson.
