
Today — 4 February 2026

Shared viewing of erotic webcams is rare but may enhance relationship intimacy

4 February 2026 at 01:00

Couples seeking to reinvigorate their romantic lives often turn to novel experiences, ranging from travel to shared hobbies. A new study suggests that for some partners, this exploration has moved into the digital realm of erotic webcam sites. The research indicates that while using these platforms with a partner is relatively rare, those who do so often report positive outcomes for their relationship. These findings were published recently in the Journal of Social and Personal Relationships.

The integration of technology into human intimacy is a growing field of inquiry for social scientists. Erotic webcam modeling websites, or “camsites,” allow users to view and interact with live performers. Historically, researchers have viewed the consumption of online erotic content as a solitary activity. This new investigation shifts that focus to explore how romantic partners utilize these platforms together.

Jessica T. Campbell, a researcher at The Kinsey Institute at Indiana University, led the study. She collaborated with colleagues Ellen M. Kaufman, Margaret Bennett-Brown, and Amanda N. Gesselman. The team sought to apply the “self-expansion model” of relationships to digital intimacy. This psychological theory suggests that individuals in relationships are motivated to expand their sense of self. They often achieve this expansion by including their partner in new and challenging activities.

The researchers posit that shared participation in camsite viewing could serve as one of these expanding activities. Previous academic work has looked at couples who watch pre-recorded pornography together. Those studies have generally found links between shared viewing and increased sexual communication. Campbell and her team aimed to see if the interactive nature of camsites produced similar results.

To gather data, the research team recruited participants directly through an advertisement on LiveJasmin.com, a major webcam platform. The banner ad invited site visitors to complete a survey about their experiences. This method allowed the researchers to access an active community of users rather than relying on a general population sample. The initial pool included more than 5,000 participants.

From this large group, the investigators filtered for specific criteria to form their final dataset. They isolated a subsample of 312 participants who were in romantic relationships. These participants also indicated that their partners were aware of their camsite usage. The demographic profile of this group was specific. The majority of respondents were white, heterosexual, cisgender men who reported being in committed, exclusive relationships.

The study aimed to quantify how often these couples engaged in the activity together. The results showed that shared usage is not the norm for most camsite users. Only about 35 percent of the partnered subsample had ever viewed a cam show with their significant other. When looking at the total initial sample of over 5,000 users, only 2.1 percent engaged in this behavior. This suggests that for the vast majority of users, camsite viewing remains a solitary activity.
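As a back-of-envelope check, the two reported figures are mutually consistent: 35 percent of the 312 partnered respondents implies roughly 109 shared viewers, and treating that count as 2.1 percent of the full pool implies an initial sample of about 5,200, in line with "more than 5,000." The counts below are inferred for illustration, not figures reported in the study.

```python
# Inferred arithmetic: the article reports percentages, not raw counts.
partnered = 312
shared = round(0.35 * partnered)   # ~109 respondents who viewed with a partner
implied_total = shared / 0.021     # back out the initial sample size from 2.1%
print(shared, round(implied_total))
```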

However, the data revealed a pattern of repeated behavior among the minority who did participate together. Among the couples who had used a camsite together, 56 percent reported doing so multiple times. Roughly one in four of these participants stated they had engaged in the activity more than 20 times. This frequency implies that for those couples who cross the initial threshold, the experience often becomes a recurring part of their sexual repertoire.

The researchers also investigated the motivations behind this shared digital consumption. The survey provided a range of options for why couples chose to log on together. The primary driver for these couples was a desire to introduce novelty into their dynamic. Approximately 36 percent of respondents indicated they wanted to spice up their relationship or try something new.

Fulfilling specific fantasies or desires was another leading motivation, selected by nearly 28 percent of the group. A similar percentage cited curiosity or entertainment as their main reason. Less frequently, participants mentioned using the sites to learn about sex or to engage with specific kinks. These responses align with the self-expansion model, as couples appeared to use the technology to broaden their sexual horizons.

The study then assessed how these experiences impacted the relationship itself. The findings defied the stereotype that online erotica necessarily creates distance between partners. A significant portion of the respondents reported neutral or positive effects. About 27 percent said the activity had no impact on their relationship at all.

Conversely, nearly a quarter of the participants felt the experience enhanced their relationship overall. When asked about specific benefits, 37 percent reported that it improved their communication regarding sex. Twenty-eight percent said it helped them understand their partner’s sexual interests better. Others noted that the shared activity helped reduce awkwardness or discomfort around sexual topics.

Negative outcomes were reported by a very small minority of the sample. Only about 5 percent of respondents indicated that using the camsite with their partner had a negative impact on their relationship. This low figure suggests that for the specific demographic surveyed, the activity generally posed little risk to the relationship. The high likelihood of repeat usage supports this conclusion. Sixty-four percent of the participants said they were likely or very likely to use a camsite with their partner again.

These findings build upon and add nuance to previous research regarding technology and intimacy. Earlier studies on shared pornography consumption have shown that it can foster intimacy when both partners are willing participants. This new study extends that logic to live, interactive platforms. It suggests that the interactive element of camsites may offer unique opportunities for couples to articulate their desires in real-time.

The results also complement recent work regarding the educational potential of adult platforms. A separate study published in Sexuality & Culture found that users of OnlyFans often reported learning new things about their own preferences and sexual health. Similarly, the participants in Campbell’s study indicated that camsites served as a venue for learning and exploration. This counters the narrative that such platforms are solely sources of passive entertainment.

However, the current study contrasts somewhat with research focusing on the solitary use of these platforms. A study published in Computers in Human Behavior highlighted that some solo viewers experience feelings of guilt or isolation. The dynamic appears to change when the activity becomes a shared pursuit. By bringing a partner into the digital space, the secrecy that often fuels feelings of shame is removed.

It is important to consider the demographics of the current study when interpreting the results. The sample consisted almost entirely of men. This means the data reflects the male partner’s perception of the shared experience. The researchers did not survey the female partners to verify if they shared the same positive outlook. It is possible that the non-responding partners might have felt differently about the activity.

The method of recruitment also introduces a degree of selection bias. By advertising on the camsite itself, the researchers naturally selected individuals who were already comfortable enough with the platform to be online. Couples who tried the activity once, had a terrible experience, and vowed never to return would likely not be present to take the survey. This may skew the results toward a more positive interpretation of the phenomenon.

Additionally, the study notes that some participants were partnered with cam models. For these specific individuals, “using” the site together might simply mean supporting their partner’s work. This is a fundamentally different dynamic than two laypeople watching a third party. The researchers acknowledge that the motivations for this subgroup would differ from the general trend.

Future research will need to address these gaps to provide a more complete picture. Obtaining data from both members of the couple would be a vital next step. This would allow scientists to see if the reported improvements in communication are mutual. It would also help to determine if one partner is merely complying with the other’s desires.

Researchers also suggest exploring how different demographics engage with this technology. The current study was heavily skewed toward white, heterosexual couples. It remains unclear if LGBTQ+ couples or couples from different cultural backgrounds experience similar outcomes. Different relationship structures, such as polyamory, might also interact with these platforms in unique ways.

Despite these limitations, the study offers a rare glimpse into a private behavior. It challenges the assumption that digital erotica is inherently isolating. Instead, it proposes that for some couples, the screen can serve as a bridge. By navigating the virtual sexual landscape together, these partners appear to find new ways to connect in the real world.

The study, “Connected, online and off: Romantic partnered experiences on erotic webcam sites,” was authored by Jessica T. Campbell, Ellen M. Kaufman, Margaret Bennett-Brown, and Amanda N. Gesselman.

Yesterday — 3 February 2026

What your fears about the future might reveal about your cellular age

3 February 2026 at 21:00

A new study published in Psychoneuroendocrinology indicates that women who experience high levels of anxiety about their declining health tend to age faster at the molecular level than women who do not.

The concept of aging is often viewed simply as the passage of time marked by birthdays. However, scientists increasingly view aging as a biological process of wear and tear that varies from person to person.

Two individuals of the same chronological age may possess vastly different biological ages based on their cellular health. To measure this, researchers look at the epigenome. The epigenome consists of chemical compounds and proteins that can attach to DNA and direct such actions as turning genes on or off, controlling the production of proteins in particular cells.

One specific type of epigenetic modification is called DNA methylation. As people age, the patterns of methylation on their DNA change in predictable ways. Scientists have developed algorithms known as “epigenetic clocks” to analyze these patterns.

These clocks can estimate a person’s biological age and the pace at which they are aging. When a person’s biological clock runs faster than their chronological time, it is often a harbinger of poor health outcomes and earlier mortality.
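The "weighted pattern" idea behind these clocks can be sketched in a few lines. The sketch below is purely illustrative: the CpG sites, weights, and intercept are invented for the example, whereas real clocks such as GrimAge2 combine hundreds of methylation sites with weights fit by penalized regression on large training cohorts.

```python
# Toy epigenetic clock: a biological-age estimate as an intercept plus a
# weighted sum of DNA methylation levels (beta values in [0, 1]) at a small
# set of CpG sites. Sites, weights, and intercept are hypothetical.

CLOCK_WEIGHTS = {        # hypothetical CpG site -> weight (years per unit beta)
    "cg0000001": 12.4,
    "cg0000002": -8.1,
    "cg0000003": 20.7,
}
INTERCEPT = 34.0         # hypothetical baseline, in years

def estimate_biological_age(methylation: dict[str, float]) -> float:
    """Linear combination of methylation beta values at the clock's CpG sites."""
    return INTERCEPT + sum(
        weight * methylation[site] for site, weight in CLOCK_WEIGHTS.items()
    )

sample = {"cg0000001": 0.62, "cg0000002": 0.30, "cg0000003": 0.55}
print(round(estimate_biological_age(sample), 1))
```

A pace-of-aging measure like DunedinPACE follows the same linear template, but its weights are trained to predict the rate of change in physiological markers rather than age itself.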

Researchers have previously established that general psychological distress can accelerate these biological clocks. However, less is known about the specific impact of aging anxiety. This form of anxiety is a multifaceted stressor. It encompasses fears about losing one’s attractiveness, the inability to reproduce, and the deterioration of physical health. Women often face unique societal pressures regarding these aspects of life.

Mariana Rodrigues, a researcher at the NYU School of Global Public Health, led a team to investigate this issue. Rodrigues and her colleagues sought to understand if these specific anxieties became biologically embedded in women. They hypothesized that the stress of worrying about aging acts as a persistent signal to the body. They believed this signal might trigger physiological responses that degrade cells over time.

To explore this connection, the team utilized data from the Midlife in the United States (MIDUS) study. This is a large, national longitudinal study that focuses on the health and well-being of U.S. adults. The researchers analyzed data from 726 women who participated in the biomarker project of the study. These participants provided blood samples and completed detailed questionnaires about their psychological state.

The researchers assessed aging anxiety across three distinct domains. First, they asked women about their worry regarding declining attractiveness. Second, they assessed anxiety related to declining health and illness. Third, they asked about worries concerning reproductive aging, such as being too old to have children.

The study employed two advanced epigenetic clocks to measure biological aging from the blood samples. The first clock, known as GrimAge2, estimates cumulative biological damage. It is often used to predict mortality risk by looking at a history of exposure to stressors.

The second clock, DunedinPACE, functions differently. Instead of measuring total accumulated damage, DunedinPACE acts like a speedometer. It measures the current pace of biological aging at the time the blood sample was taken.

The researchers used statistical models to test the relationship between the different types of anxiety and the two epigenetic clocks. They accounted for various factors that could skew the results. These included sociodemographic factors like age, race, and income. They also controlled for marital status and whether the women had entered menopause.

The analysis revealed distinct patterns in how different worries affect the body. The researchers found that anxiety about declining health was linked to a faster pace of aging as measured by DunedinPACE.

Women who reported higher levels of worry about illness and physical decline showed signs that their bodies were aging more rapidly than women with lower anxiety. This association persisted even when the researchers adjusted for the number of chronic health conditions the women already had.

This suggests that the worry itself, rather than just the presence of disease, plays a role in accelerating the aging process. However, the connection weakened when the researchers factored in health behaviors.

When they accounted for smoking, alcohol consumption, and body mass index, the statistical link between health anxiety and faster aging diminished. This reduction indicates that lifestyle behaviors likely mediate the relationship. Women who are anxious about their health might engage in coping behaviors that are detrimental to their physical well-being.

The study did not find the same results for the other domains of anxiety. Worries about declining attractiveness showed no statistical association with accelerated aging. Similarly, anxiety about reproductive aging was not linked to the epigenetic clocks. This lack of connection may be due to the fact that appearance and fertility concerns often fade as women grow older. Health concerns, by contrast, tend to persist or increase with age.

The researchers also combined the scores to look at cumulative aging anxiety. They found that the total burden of aging worries was associated with a faster pace of aging. Like the findings for health anxiety, this association was largely explained by health behaviors and existing chronic conditions.

It is worth noting that the findings were specific to the DunedinPACE clock. The researchers did not observe statistically significant associations between any form of aging anxiety and the GrimAge2 clock.

This discrepancy highlights the difference between the two measures. DunedinPACE captures the current speed of decline, which may be more sensitive to ongoing psychological stressors like anxiety. GrimAge2 reflects accumulated damage over a lifetime, which might not be as responsive to current subjective worries.

The authors propose that health-related anxiety operates as a chronic cycle. Fear of health decline leads to heightened body monitoring. This vigilance creates psychological distress. That distress triggers physiological stress responses, such as inflammation. Over time, these responses contribute to the wear and tear observed in the epigenetic data.

There are limitations to this study that affect how the results should be interpreted. The data was cross-sectional, meaning it captured a snapshot in time. Because of this design, the researchers cannot definitively prove that anxiety causes accelerated aging.

It is possible that the relationship works in the opposite direction. Perhaps women who are biologically aging faster feel physically worse, leading to increased anxiety.

Additionally, the measures for aging anxiety were based on single items in a questionnaire. This might not capture the full depth or nuance of a woman’s experience. The sample also consisted of English-speaking adults in the United States. Cultural differences in how aging is perceived and experienced could lead to different results in other populations.

Future research is needed to clarify the direction of these associations. Longitudinal studies that follow women over many years would help determine if anxiety precedes the acceleration of biological aging. Tracking changes in anxiety levels and epigenetic markers over time would provide stronger evidence of a causal link.

The study supports a biopsychosocial model of health. This model suggests that our subjective experiences and fears are not isolated in the mind. Instead, they interact with our biology to shape our long-term health. The findings suggest that addressing psychological distress about aging could be a potential avenue for improving physical health.

The study, “Aging anxiety and epigenetic aging in a national sample of adult women in the United States,” was authored by Mariana Rodrigues, Jemar R. Bather, and Adolfo G. Cuevas.

The hidden role of vulnerable dark personality traits in digital addiction

3 February 2026 at 19:00

Recent research indicates that specific personality traits marked by emotional fragility and impulsivity are strong predictors of addictive behaviors toward smartphones and social media. The findings suggest that for insecure individuals, social media applications frequently serve as a psychological gateway that leads to broader, compulsive phone habits. This investigation was published in the journal Personality and Individual Differences.

Psychologists have recognized for years that personality plays a role in how people interact with technology. Much of the previous work in this area focused on the “Big Five” personality traits, such as neuroticism or extraversion. Other studies looked at the “Dark Tetrad,” a cluster of traits including classic narcissism, Machiavellianism, psychopathy, and sadism.

These darker traits are typically associated with callousness, manipulation, and a lack of empathy. However, less attention has been paid to the “vulnerable” side of these darker personalities. This oversight leaves a gap in understanding how emotional instability drives digital compulsion.

Marco Giancola, a researcher at the University of L’Aquila in Italy, sought to address this gap. He and his colleagues designed a project to examine the “Vulnerable Dark Triad.” This specific personality taxonomy consists of three distinct components.

The first is Factor II Psychopathy, which is characterized by high impulsivity and reckless behavior rather than calculated manipulation. The second is Vulnerable Narcissism, which involves a fragile ego, hypersensitivity to criticism, and a constant need for reassurance. The third is Borderline Personality, marked by severe emotional instability and a fear of abandonment.

The researchers aimed to understand how these specific traits correlate with Problematic Smartphone Use (PSU) and Problematic Social Media Use (PSMU). They based their approach on the I-PACE model. This theoretical framework suggests that a person’s core characteristics interact with their emotional needs to shape how they use technology.

The team posited that people with vulnerable dark traits might not use technology to exploit others. Instead, these individuals might turn to digital devices to regulate their unstable moods or satisfy unmet needs for social validation.

The investigation consisted of two distinct phases. The first study involved 298 adult participants. The researchers administered a series of detailed questionnaires to assess personality structures. They also measured the participants’ levels of addiction to smartphones and social media platforms.

The team utilized statistical regression analysis to isolate the specific effects of the Vulnerable Dark Triad. They adjusted the data to account for sociodemographic factors like age and gender. They also controlled for standard personality traits and the antagonistic “Dark Tetrad” traits.

The results from this first study highlighted distinct patterns. Factor II Psychopathy emerged as the strongest and most consistent predictor of both smartphone and social media problems. This suggests that the impulsivity and lack of self-control inherent in this trait make it difficult for individuals to resist digital distractions. The inability to delay gratification appears to be a central mechanism here.

The analysis also revealed nuanced differences between the other traits. Vulnerable Narcissism was more strongly linked to generalized Problematic Smartphone Use. Individuals with this trait often harbor deep insecurities and a hidden sense of entitlement. They may use the smartphone as a safety blanket to avoid real-world social risks while seeking validation from a distance. The device allows them to construct a protected self-image that shields their fragile ego.

Conversely, Borderline Personality traits were more closely tied to Problematic Social Media Use. This makes sense given the interpersonal nature of the condition. People with these traits often struggle with intense fears of rejection. Social media platforms provide a space where they can constantly monitor relationships and seek signs of acceptance. The instantaneous feedback loop of likes and comments may temporarily soothe their anxiety about abandonment.

The researchers did not stop at identifying these associations. They conducted a second study with a larger sample of 586 participants to understand the sequence of these behaviors. The goal was to test a “bridge” hypothesis. The team suspected that these personality traits do not immediately cause a generalized phone addiction. They theorized that the addiction starts specifically with social media.

In this model, social media acts as the primary hook. The emotionally vulnerable individual turns to these apps to cope with negative feelings or to seek connection. Over time, this specific compulsion generalizes. The user begins to check the phone constantly, even when not using social media. The specific habit bleeds into a broader dysregulation of technology use.

The data from the second study supported this mediation model. The statistical analysis showed that Problematic Social Media Use effectively bridged the gap between the Vulnerable Dark Triad and general Problematic Smartphone Use. This was true for all three traits investigated. The path was indirect but clear. The vulnerability leads to social media compulsion, which in turn leads to a generalized dependency on the smartphone.

Factor II Psychopathy and Borderline Personality traits showed no direct link to general phone addiction in the second model. Their influence was entirely channeled through social media use. This indicates that for impulsive or emotionally unstable people, the social aspect of the technology is the primary driver. The device is merely the delivery mechanism for the social reinforcement they crave.

Vulnerable Narcissism showed a slightly different pattern. It had both a direct link to smartphone use and an indirect link through social media. This suggests a more complex relationship for this trait. These individuals likely use the phone for purposes beyond just social networking. They might engage in other validating activities like gaming or content consumption that prop up their self-esteem.

These findings offer a fresh perspective on digital addiction. They challenge the notion that “dark” personalities use the internet solely for trolling or cyberbullying. The research highlights a group of users who are internally suffering. Their online behavior is a coping mechanism for profound insecurity and emotional dysregulation.

The study aligns with the Problem Behavior Theory. This theory posits that maladaptive behaviors rarely occur in isolation. They tend to cluster together and reinforce one another. In this context, the smartphone provides an environment rich in rewards. It offers constant opportunities for mood modification. For someone with low impulse control or high emotional pain, the device becomes a necessary crutch.

There are important caveats to consider regarding this research. Both studies relied on self-reported data. Participants described their own behaviors and feelings. This method can introduce bias, as people may not assess their own addiction levels accurately.

Additionally, the research design was cross-sectional. The data captured a snapshot in time rather than tracking changes over a long period. While the statistical models suggest a direction of effect, they cannot definitively prove causation.

The sample collection method also presents a limitation. The researchers used a snowball sampling technique where participants recruited others. This approach can sometimes result in a pool of subjects that is not fully representative of the general population. The study was also conducted in Italy, which may limit how well the findings apply to other cultural contexts.

Future research should aim to address these shortcomings. Longitudinal studies are needed to track individuals over months or years. This would help confirm whether the personality traits definitively precede the addiction.

It would also be beneficial to use objective measures of screen time rather than relying solely on questionnaires. Seeing exactly which apps are used and for how long would provide a more granular picture of the behavior.

This research has practical implications for mental health and education. It suggests that treating technology addiction requires looking at the underlying personality structure. A one-size-fits-all approach to “digital detox” may not work.

Interventions might need to target the specific emotional deficits of the user. For instance, helping someone manage fear of abandonment or improve impulse control could be more effective than simply taking the phone away.

Understanding the “vulnerable” side of dark personality traits helps humanize those struggling with digital dependency. It shifts the narrative from one of bad habits to one of unmet psychological needs. As digital lives become increasingly intertwined with psychological well-being, this nuance is essential for developing better support systems.

The study, “The vulnerable side of technology addiction: Pathways linking the Vulnerable Dark Triad to problematic smartphone and social media use,” was authored by Marco Giancola, Laura Piccardi, Raffaella Nori, Simonetta D’Amico, and Massimiliano Palmiero.

Half of the racial mortality gap is explained by stress and inflammation

3 February 2026 at 03:00

Disparities in life expectancy between Black and White populations in the United States remain a persistent public health crisis. A new analysis suggests that a lifetime of accumulated stress and resulting bodily inflammation drives a large portion of this racial mortality gap. The findings appeared in a paper published in JAMA Network Open.

Researchers have sought to understand why Black Americans experience higher rates of chronic illness and earlier death. One prevailing theory involves the concept of “weathering.” This hypothesis posits that constant exposure to social and economic adversity physically erodes health over time. Black Americans often face systemic disadvantages and discrimination that generate chronic psychological pressure. This burden is thought to disrupt the immune system and accelerate aging.

Isaiah D. Spears, a graduate student at Washington University in St. Louis, led the new investigation. Spears worked alongside senior author Ryan Bogdan, who directs the BRAIN lab within the university’s Department of Psychological and Brain Sciences. They aimed to move beyond looking at single stressful events. Instead, they sought to measure the total weight of stress a person carries from childhood into old age.

Spears noted the motivation behind the work in a statement. He said he “saw the stark difference between the rate in which our Black participants in the sample have been dying relative to the white participants.” This observation prompted the team to investigate the biological mechanisms that might connect social experience to physical survival.

The team analyzed data from the St. Louis Personality and Aging Network (SPAN). This longitudinal project recruited late middle-aged adults from the St. Louis metropolitan area. The researchers followed these individuals for a period stretching up to seventeen years. The total sample included 1,554 participants. Approximately one-third of the group identified as Black, and two-thirds identified as White.

The researchers created a cumulative stress score for each person to capture the breadth of their life experiences. This score was not based on a single survey. It combined answers from multiple assessments regarding adverse life events. The team looked at exposure to maltreatment during childhood. They included traumatic events experienced during adulthood. They also accounted for specific stressful life episodes and reported experiences of discrimination.

Socioeconomic status served as another component of the stress score. The researchers factored in household income and education levels. They also looked at the education levels of the participants’ parents. This approach allowed the team to build a comprehensive model of the strain placed on an individual throughout their entire lifespan.

The study also required biological evidence of physical wear and tear. The researchers analyzed blood samples collected from the participants. They specifically looked for two biomarkers of inflammation. One is called C-reactive protein, or CRP. The other is Interleukin-6, or IL-6. These proteins are immune system messengers.

High levels of these markers indicate that the body is in a state of chronic inflammation. Short-term inflammation helps the body heal from injury or fight infection. Chronic inflammation, however, damages tissues and organs over time. It is a known risk factor for heart disease, cancer, and other age-related conditions.

The researchers then consulted the National Death Index to track mortality. They recorded which participants died during the study period and the cause of death. This allowed them to calculate survival times for Black and White participants.

The data revealed a clear pattern regarding survival. Black participants in the study died sooner than White participants. This aligned with national trends regarding excess death in minority populations. The Black participants also had higher scores for cumulative lifespan stress. Their blood tests showed higher levels of the inflammatory markers CRP and IL-6.

The researchers used statistical models to test whether these factors were connected. They found that the higher stress levels and subsequent inflammation were not merely coincidental. These factors statistically explained a large amount of the difference in survival rates.

The model suggested a specific pathway. Identifying as Black was associated with higher cumulative stress. This stress was associated with higher inflammation. Finally, that inflammation was associated with an increased risk of earlier death.

The combined effect of lifespan stress and inflammation accounted for 49.3 percent of the racial disparity in mortality. This means that roughly half of the excess mortality risk observed in Black participants could be attributed to these specific biological and environmental factors. The researchers found that stress alone and inflammation alone also played roles, but the combined pathway explained the largest share.
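The arithmetic behind a "proportion mediated" figure like this is simply the indirect effect divided by the total effect. A minimal sketch with invented numbers, not the study's actual estimates:

```python
# Toy "proportion mediated" calculation; the effect sizes below are
# invented for illustration, not taken from the study.
total_effect = 0.230      # hypothetical total race-mortality association
indirect_effect = 0.1134  # hypothetical portion routed through stress and inflammation
proportion_mediated = indirect_effect / total_effect
print(f"{proportion_mediated:.1%}")  # → 49.3%
```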

Ryan Bogdan explained the biological logic in a press statement. He noted, “If stress becomes chronic, that could be incorporated into one’s homeostasis; you may become less able to mount your biological systems to respond to acute stress challenges and you may be less able to return to a bodily state that promotes regeneration and restoration.”

The study supports the idea that social inequality becomes biological reality. The stress measured in the study likely stems from structural racism. This includes factors such as unequal access to resources, neighborhood segregation, and economic barriers. These systemic issues create a constant background hum of adversity for many Black Americans.

Lead author Isaiah Spears emphasized the physical toll of this environment. He stated, “Over time continued chronic exposure to stress leads to dysregulation and an earlier breakdown of some of the biological systems in the human body.” This breakdown manifests as the chronic diseases that disproportionately kill Black adults.

The authors noted several limitations to their work. The study took place in the St. Louis region. The specific social dynamics and health disparities there might not perfectly represent every part of the United States. The results might differ in regions with different economic or social structures.

The researchers also pointed out that their study is observational. They used statistical methods to infer a pathway from race to stress to death. However, they cannot definitively prove causation. Other unmeasured variables could be influencing the results.

The findings leave approximately 50 percent of the mortality gap unexplained. The authors suggest that other factors must be involved. These could include exposure to environmental toxicants like air pollution or lead. Differences in access to quality healthcare or trust in medical institutions could also play a role. Genetic or epigenetic factors that are influenced by ancestral stress might also contribute.

The study has implications for public policy and healthcare. It suggests that medical interventions alone cannot solve racial health disparities. Treating the downstream effects, such as high blood pressure or heart disease, is necessary but insufficient. The root causes of stress must be addressed.

Bogdan suggested that the work points toward the need for broader societal changes. He said, “Addressing large-scale societal issues requires concerted efforts enacted over time. That needle can be extremely hard to move.”

Policies that reduce structural discrimination could lower the stress burden on Black communities. This might involve economic reforms, housing policies, or changes in the criminal justice system. Reducing the sources of stress could prevent the chronic inflammation that leads to early death.

The researchers also see a need for better medical treatments for stress. Interventions that help the body manage the physiological response to adversity could save lives. This would be valuable while long-term societal changes are being implemented. Bogdan noted, “Stress exposure will always be there – so we need to devote more efforts to understand the mechanisms through which stress contributes to adverse health outcomes so that factors could be targeted to minimize health risks among those exposed.”

The study, “Cumulative Lifespan Stress, Inflammation, and Racial Disparities in Mortality Between Black and White Adults,” was authored by Isaiah D. Spears, Aaron J. Gorelik, Sara A. Norton, Michael J. Boudreaux, Megan W. Wolk, Jayne Siudzinski, Sarah E. Paul, Mary A. Cox, Cynthia E. Rogers, Thomas F. Oltmanns, Patrick L. Hill, and Ryan Bogdan.

For romantic satisfaction, quantity of affection beats similarity

3 February 2026 at 01:00

A new study suggests that the total amount of warmth shared between partners matters more than whether they express it equally. While similarity often breeds compatibility in many areas of life, researchers found that maximizing affectionate communication yields better relationship quality than simply matching a partner’s lower output. These results were recently published in the journal Communication Studies.

Relationship science often relies on two competing ideas regarding how couples succeed. One concept, known as assortative mating, suggests that people gravitate toward partners with similar traits, backgrounds, and behaviors. This principle implies that a reserved partner might feel most comfortable with an equally quiet companion.

Under that theory, a mismatch in expressiveness could lead to friction or misunderstanding. The logic holds that if one person is highly demonstrative and the other is stoic, the gap could cause dissatisfaction.

Conversely, a framework called affection exchange theory posits that expressing fondness is a fundamental human need that directly fuels bonding. This theory argues that affection acts as a resource that promotes survival and reproductive success.

Kory Floyd, a researcher at Washington State University, led the investigation to resolve which mechanism plays a larger role in romantic satisfaction. Floyd and his colleagues sought to determine if mismatched couples suffer from imbalance or if the sheer volume of warmth compensates for disparity.

The research team recruited 141 heterosexual couples from across the United States to participate in the study. These pairs represented a diverse range of ages, ethnic backgrounds, and socioeconomic levels. The researchers looked at the couple as a unit, rather than just surveying isolated individuals.

Each participant completed detailed surveys designed to measure their typical behaviors and feelings. They reported their “trait” affectionate communication, which refers to their general tendency to express and receive warmth. This included verbal affirmation, nonverbal gestures like holding hands, and acts of support.

Participants also rated the quality of their relationship across several specific dimensions. These metrics included feelings of trust, intimacy, passion, and overall satisfaction. The researchers then utilized complex statistical models to analyze how these factors influenced one another.

They examined “actor effects,” which measure how a person’s own behavior influences their own happiness. The analysis revealed that for both men and women, being affectionate predicted higher personal satisfaction. When an individual expressed more warmth, they generally felt better about the relationship.

The team also looked for “partner effects,” determining how one person’s actions change their partner’s experience. The study produced evidence that an individual’s expressions of warmth positively impacted their partner’s view of the relationship in about half of the categories tested.

However, the primary focus was comparing the absolute level of affection against the relative similarity of affection. The researchers created a mathematical comparison to pit the “birds of a feather” hypothesis against the “more is better” hypothesis.

The data showed that the absolute level of affectionate communication was a far stronger predictor of relationship health than the relative difference between partners. In simpler terms, a couple where one person is highly demonstrative and the other is moderate scores higher on satisfaction than a couple where both are equally reserved.

Similarity did not drag relationship scores down, but it also did not provide the boost that high overall warmth did. The results indicated that for most metrics of quality, the total volume of affection matters more than who fills the bucket.

This challenges the notion that finding a “mirror image” partner is the key to happiness. Colin Hesse, a co-author from Oregon State University, noted the distinction in the team’s press release.

Hesse stated, “The study does not discount the importance of similarity in many aspects of romantic relationships but instead highlights once again the specific importance of affectionate communication to the success and development of those relationships.”

The benefits appear to stem from the stress-relieving properties of positive touch and verbal affirmation. A high-affection environment creates a buffer against conflict and builds a reservoir of goodwill.

Hesse explained, “Generally speaking, affectionate communication is beneficial both for the partner who gives it and the partner receiving it.” This suggests that even if one partner does the heavy lifting, the union still thrives.

The findings offer reassurance to couples who worry about having different love languages or expressive styles. If one partner enjoys public displays of affection and the other prefers quiet support, the relationship is likely still healthy as long as the total affection remains high.

There were, however, specific exceptions in the data regarding feelings of love and commitment. For these two specific variables, the total amount of affection was not more influential than the similarity between partners. This nuance suggests that while satisfaction and passion are driven by volume, the core sense of commitment might operate differently.

While the study offers strong evidence for the power of affection, there are limitations to consider. The sample consisted entirely of heterosexual couples, meaning the dynamics might differ in LGBTQ+ relationships. The researchers relied on self-reported perceptions, which can sometimes be biased by a person’s current mood or memory.

Additionally, the study captures a snapshot in time rather than following couples over years. Future research could investigate how these dynamics shift over decades of marriage. It would be useful to see if the need for matched affection levels increases as a relationship matures.

Scientists might also look at specific types of affection to see if verbal or physical expressions carry different weights. For now, the message to couples is that increasing warmth is rarely a bad strategy.

Hesse concluded in the press release, “We would not prescribe specific affectionate behaviors but would in general counsel people to engage in affectionate communication.”

The study, “Affectionate Communication in Romantic Relationships: Are Relative Levels or Absolute Levels More Consequential?,” was authored by Kory Floyd, Lisa van Raalte, and Colin Hesse.


Brain scans reveal neural connectivity deficits in Long COVID and ME/CFS

2 February 2026 at 19:00

New research suggests that the brains of people with Long COVID and Myalgic Encephalomyelitis/Chronic Fatigue Syndrome (ME/CFS) struggle to communicate effectively during mentally tiring tasks. While healthy brains appear to tighten their neural connections when fatigued, these patients show disrupted or weakened signals between key brain areas. This study was published in the Journal of Translational Medicine.

ME/CFS and Long COVID are chronic conditions that severely impact the quality of life for millions of people. Patients often experience extreme exhaustion and “brain fog,” which refers to persistent difficulties with memory and concentration.

A defining feature of these illnesses is post-exertional malaise. This describes a crash in energy and a worsening of symptoms that follows even minor physical or mental effort. Doctors currently lack a definitive biological test to diagnose these conditions. This makes it difficult to distinguish them from one another or from other disorders with similar symptoms.

The research team sought to identify objective biological markers of these illnesses. Maira Inderyas, a PhD candidate at the National Centre for Neuroimmunology and Emerging Diseases at Griffith University in Australia, led the investigation. She worked alongside senior researchers including Professor Sonya Marshall-Gradisnik. They aimed to understand how the brain behaves when pushed to the limit of its cognitive endurance.

Professor Marshall-Gradisnik noted the shared experiences of these patient groups. “The symptoms include cognitive difficulties, such as memory problems, difficulties with attention and concentration, and slowed thinking,” she said. The team hypothesized that these subjective feelings of brain fog would correspond to visible changes in brain activity.

To test this, the researchers utilized a 7 Tesla MRI scanner. This device is much more powerful than the standard scanners found in most hospitals. The high magnetic field allows for extremely detailed imaging of deep brain structures. It can detect subtle changes in blood flow that weaker scanners might miss.

The study involved nearly eighty participants. These included thirty-two individuals with ME/CFS and nineteen with Long COVID. A group of twenty-seven healthy volunteers served as a control group for comparison.

While inside the scanner, participants performed a cognitive challenge known as the Stroop task. This is a classic psychological test that requires focus and impulse control. Participants must identify the color of a word’s ink while ignoring the actual word written. For example, the word “RED” might appear on the screen written in blue ink. The participant must select “blue” despite their brain automatically reading the word “red.”
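The incongruent-trial logic can be sketched in a few lines; this is an illustrative mock-up, not the software used in the scanner:

```python
import random

# Minimal sketch of an incongruent Stroop trial: the word names one color,
# the ink is a different color, and the correct answer is the ink.
COLORS = ["red", "blue", "green", "yellow"]

def make_trial(rng: random.Random) -> dict:
    word = rng.choice(COLORS)
    ink = rng.choice([c for c in COLORS if c != word])  # force a conflict
    return {"word": word, "ink": ink, "correct_answer": ink}

trial = make_trial(random.Random(42))
print(trial["correct_answer"] != trial["word"])  # → True: the answer is the ink, never the word
```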

“The task, called a Stroop task, was displayed to the participants on a screen during the scan, and required participants to ignore conflicting information and focus on the correct response, which places high demands on the brain’s executive function and inhibitory control,” Ms. Inderyas said.

The researchers structured the test to induce mental exhaustion. Participants performed the task in two separate sessions. The first session was designed to build up cognitive fatigue. The second session took place ninety seconds later, after fatigue had fully set in. This “Pre” and “Post” design allowed the scientists to see how the brain adapts to sustained mental effort.

The primary measurement used in this study was functional connectivity. This concept refers to how well different regions of the brain synchronize their activity. When two brain areas activate at the same time, it implies they are communicating or working together.

The results revealed clear differences between the healthy volunteers and the patient groups. In healthy participants, the brain responded to the fatigue of the second session by increasing its connectivity. Connections between deep brain regions and the cerebellum became stronger. This suggests that a healthy brain actively recruits more resources to maintain performance when it gets tired. It becomes more efficient and integrated under pressure.

The pattern was markedly different for patients with Long COVID. They displayed reduced connectivity between the nucleus accumbens and the cerebellum. The nucleus accumbens is a central part of the brain’s reward and motivation system. A lack of connection here might explain the sense of apathy or lack of mental drive patients often report.

Long COVID patients also showed an unusual increase in connectivity between the hippocampus and the prefrontal cortex. The researchers interpret this as a potential compensatory mechanism. The brain may be trying to bypass damaged networks to keep functioning. It is attempting to use memory centers to help with executive decision-making.

Patients with ME/CFS showed their own distinct patterns of dysfunction. They exhibited increased connectivity between specific areas of the brainstem, such as the cuneiform nucleus and the medulla. These regions are responsible for controlling automatic body functions. This finding aligns with the autonomic nervous system issues frequently seen in ME/CFS patients.

The researchers also looked at how these brain patterns related to the patients’ medical history. In the ME/CFS group, the length of their illness correlated with specific connectivity changes. As the duration of the illness increased, communication between the hippocampus and cerebellum appeared to weaken. This suggests a progressive change in brain function over time.

Direct comparisons between the groups highlighted the extent of the impairment. When compared to the healthy controls, both patient groups showed signs of neural disorganization. The healthy brain creates a “tight” network to handle stress. The patient brains appeared unable to form these robust connections.

Instead of tightening up, the networks in sick patients became looser or dysregulated. This failure to adapt dynamically likely contributes to the cognitive dysfunction known as brain fog. The brain cannot summon the necessary energy or coordination to process information efficiently.

“The scans show changes in the brain regions which may contribute to cognitive difficulties such as memory problems, difficulty concentrating, and slower thinking,” Ms. Inderyas said. This provides biological validation for symptoms that are often dismissed as psychological.

The study does have some limitations that must be considered. The number of participants in each group was relatively small. This is common in studies using such advanced and expensive imaging technology. However, it means the results should be replicated in larger groups to ensure accuracy.

The researchers also noted that they lacked complete medical histories regarding prior COVID-19 infections for the ME/CFS group. It is possible that some ME/CFS patients had undiagnosed COVID-19 in the past. This could potentially blur the lines between the two conditions.

Future studies will need to follow patients over a longer period. Longitudinal research would help determine if these brain changes evolve or improve over time. It would also help clarify if these connectivity issues are a cause of the illness or a result of it.

Despite these caveats, the use of 7 Tesla fMRI offers a promising new direction for research. It has revealed abnormalities that standard imaging could not detect. These findings could eventually lead to new diagnostic tools. Identifying specific broken circuits may also help researchers target treatments more effectively.

The study, “Distinct functional connectivity patterns in myalgic encephalomyelitis and long COVID patients during cognitive fatigue: a 7 Tesla task-fMRI study,” was authored by Maira Inderyas, Kiran Thapaliya, Sonya Marshall-Gradisnik, and Leighton Barnden.

New findings challenge assumptions about men’s reading habits

2 February 2026 at 01:00

A longstanding belief in the publishing world suggests that men avoid reading fiction that centers on the lives of women. However, new research indicates that a protagonist’s gender has almost no impact on whether a man wants to continue reading a story. These findings appear in the Anthology of Computers and the Humanities.

The literary marketplace has historically skewed heavily toward men. For roughly two centuries, men wrote the majority of published novels. These books focused their narrative attention primarily on male characters.

That dynamic has shifted in recent years. Women now constitute the majority of published authors. In addition, women are now more likely to purchase and read books than men are.

This demographic change has sparked concern among some cultural commentators. There is an anxiety that literary fiction is becoming a pursuit exclusive to women. This worry often centers on the idea that boys and men are losing interest in reading as the representation of women increases.

Data from the industry shows a strong division between authors and readers based on gender. Men tend to read books written by men. Conversely, women tend to read books written by women.

Industry stakeholders often attribute this separation to a specific reader preference. They assume men are simply less willing to read books featuring women protagonists. This assumption suggests that publishers should release more stories centering on men to maximize their potential audience.

Federica Bologna, a doctoral student in information science at Cornell University, led a team to investigate this assumption. Co-authors included Ian Lundberg from the University of California, Los Angeles, and Matthew Wilkens from Cornell University. They noted that previous research on this topic was scarce.

Earlier studies on reader preferences often relied on small groups or interviews rather than large-scale data. Some of these smaller studies suggested that men prefer male protagonists. Others suggested that women were indifferent to character gender.

Bologna and her colleagues sought to determine if the gender of a character actually causes a reader to stop reading. They designed an experiment to isolate gender as a single variable. The team recruited approximately 3,000 participants living in the United States.

The participant pool was evenly split between men and women to ensure balanced data. The researchers excluded participants who identified as non-binary due to data limitations. The resulting sample size provided high statistical power for the analysis.

Participants read two short stories written specifically for the study. The researchers created original fiction to ensure no participant had seen the text before. One story focused on a character named Sam who goes hiking in the desert.

The second story depicted a character named Alex sketching in a coffee shop. The authors chose the names Sam and Alex because they are gender-neutral. This allowed the researchers to swap the genders of the characters without changing their names.

Crucially, the team randomized the pronouns used in each version of the stories. Half the participants read a version where Sam the hiker was a woman using “she/her” pronouns. In this version, Alex the artist was a man using “he/him” pronouns.

The other half of the participants read a version where the genders were swapped. For them, Sam was a man and Alex was a woman. This design ensured that the plot, setting, and dialogue remained identical for all readers.

Only the perceived gender of the main character changed between the groups. This approach is known as a vignette experiment. It allows researchers to attribute any difference in reader response directly to the specific variable they manipulated.
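A sketch of how such pronoun randomization might look in code; the sentence, names of conditions, and template are invented for illustration, not the study's actual materials:

```python
import random

# Hypothetical story template with a gender-neutral name; only the
# possessive pronoun changes between experimental conditions.
STORY = "{name} laced up {poss} boots and set off into the desert."

def render(name: str, gender: str) -> str:
    poss = {"woman": "her", "man": "his"}[gender]
    return STORY.format(name=name, poss=poss)

rng = random.Random(2026)
sam_gender = rng.choice(["woman", "man"])                  # randomized per participant
alex_gender = "man" if sam_gender == "woman" else "woman"  # other character is always the opposite
print(sam_gender, "|", render("Sam", sam_gender))
```

Because the plot and wording are held constant, any difference in which story readers choose can be attributed to the character's gender alone.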

After reading the passages, participants had to answer comprehension questions. This step verified that they had actually read and understood the text. They were then asked to choose which of the two stories they would prefer to continue reading.

The researchers compared the probability of a reader selecting a story based on the protagonist’s gender. If the industry assumption were correct, men would be much less likely to choose the story when the protagonist was a woman. The results contradicted this prevailing wisdom.

When the protagonist was a woman, men chose the hiking story 76 percent of the time. When the protagonist was a man, men selected the hiking story 75 percent of the time. The statistical difference between these two numbers was effectively zero.

The presence of a female protagonist did not reduce the men’s desire to read the story. Being randomly assigned a female character increased the probability of a man choosing that story by only 0.8 percentage points. This result was not statistically distinguishable from having no effect at all.

Matthew Wilkens, an associate professor of information science, noted the clarity of the result. “This supposed preference among men for reading about men as characters just isn’t true. That doesn’t exist,” said Wilkens.

He emphasized that these findings challenge the anecdotes often cited in the publishing world. “That is contrary to the limited existing literature and contrary to widespread industry assumptions,” Wilkens added.

Women participants showed a different pattern than the men. They displayed a modest preference for stories featuring women. Women selected the hiking story 77 percent of the time when it featured a woman.

This probability dropped to 70 percent when the character was a man. The data suggests that while women leaned toward characters of their own gender, men remained indifferent. The gender of the character did not appear to be a deciding factor for male readers.

The authors acknowledged certain limitations in their experimental design. The study relied on just two specific short stories. It is possible that the genre of the story influences reader preferences in ways this experiment did not capture.

For instance, men might read more mysteries or thrillers. Those genres often feature male protagonists. If the study had used a different genre, the results might have differed.

Future research would need to randomize genre to see if that changes the outcome. Additionally, the use of unpublished fiction limits how well the study mimics real-world bookstores. In a bookstore, fame and marketing play a large role in what people choose.

However, using unpublished text provided strong internal validity. It prevented participants from recognizing the story or guessing the study’s intent. This ensures the responses were genuine reactions to the text itself.

Another limitation involved the demographics of the participants. The researchers excluded respondents with gender identities other than man or woman. This was necessary because they could not gather enough data on those groups to reach a statistical conclusion.

Bologna and her colleagues hope to include nonbinary readers in future work. Understanding how gender-nonconforming readers interact with character gender is a gap in the current science.

The study leaves open the question of why men predominantly read books by men. Since character gender is not the cause, other factors must be at play. The authors suggest that socialization or gendered expectations may influence reading habits.

Society may condition boys to view reading as a feminine activity. This could discourage them from reading at rates equal to girls. Alternatively, men may simply prefer the specific topics or writing styles found in books authored by men.

Despite these open questions, the study offers a clear message to publishers. The fear that writing about women will alienate male readers appears unfounded. Fiction editors need not reserve female protagonists for books marketed solely to women.

“Readers are pretty flexible,” Wilkens said. “Give them interesting stories, and they will want to read them.”

Bologna hopes this work will encourage the publishing industry to promote more books featuring a wider variety of girl and woman characters. The team suggests that the industry creates a self-fulfilling prophecy by assuming men will not read about women. By breaking this cycle, publishers could offer a more diverse range of stories to all readers.

In future work, the researchers hope to explore whether these findings apply to other media. They question whether similar assumptions drive creators to avoid female protagonists in video games. If the same pattern holds, it would suggest that content creators across media are underestimating their male audience.

The study, “Causal Effect of Character Gender on Readers’ Preferences,” was authored by Federica Bologna, Ian Lundberg, and Matthew Wilkens.

What brain scans reveal about people who move more

1 February 2026 at 21:00

New research indicates that physical movement may help preserve the ability to recall numbers over short periods by maintaining the structural integrity of the brain. These findings highlight potential biological pathways connecting an active lifestyle to cognitive health in later life. The analysis was published in the European Journal of Neuroscience.

As the global population ages, the prevalence of cognitive impairment and dementia has emerged as a primary public health concern. Memory decline compromises daily independence and social engagement. Medical experts have identified physical inactivity as a modifiable risk factor for this deterioration.

Prior investigations have consistently linked exercise to better cognitive performance. Researchers have found that older adults who maintain active lifestyles often exhibit preserved memory and executive function. However, the biological mechanisms driving this protective effect remain only partially understood.

The brain undergoes physical changes as it ages. These changes often include a reduction in volume and the accumulation of damage. Neuroscientists categorize brain tissue into gray matter and white matter.

Gray matter consists largely of neuronal cell bodies and is essential for processing information. White matter comprises the nerve fibers that transmit signals between different brain regions. The integrity of these tissues is essential for optimal cognitive function.

Another marker of brain health is the presence of white matter hyperintensities. These are small lesions that appear as bright spots on magnetic resonance imaging scans. They frequently indicate disease in the small blood vessels of the brain and are associated with cognitive decline.

Previous studies attempting to link activity with brain structure often relied on self-reported data. Surveys asking participants to recall their exercise habits are prone to inaccuracies and bias. People may not remember their activity levels correctly or may overestimate their exertion.

To address these limitations, a team of researchers conducted a large-scale analysis using objective data. The study was led by Xiaomin Wu and Wenzhe Yang from the Department of Epidemiology and Biostatistics at Tianjin Medical University in China. They utilized data from the UK Biobank, a massive biomedical database containing genetic and health information.

The researchers aimed to determine if objectively measured physical activity was associated with specific memory functions. They also sought to understand if structural markers in the brain could explain this relationship statistically. They focused on a sample of middle-aged and older adults.

The final analysis included 19,721 participants. The subjects ranged in age from 45 to 82 years. The study population was predominantly white and had a relatively high level of education.

Physical activity was measured using wrist-worn accelerometers. Participants wore these devices continuously for seven days. This method captured all movement intensity, frequency, and duration without relying on human memory.

The researchers assessed memory function using three distinct computerized tests. The first was a numeric memory test. Participants had to memorize a string of digits and enter them after they disappeared from the screen.

The second assessment was a visual memory test involving pairs of cards. Participants viewed the cards briefly and then had to match pairs from memory. The third was a prospective memory test, which required participants to remember to perform a specific action later in the assessment.
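A digit-span trial like the numeric memory test can be mocked up in a few lines; this is a toy sketch, not the UK Biobank's actual test software:

```python
import random

# Toy digit-span trial: generate a digit string to display briefly,
# then score the participant's typed response against it.
def make_digit_string(length: int, rng: random.Random) -> str:
    return "".join(str(rng.randint(0, 9)) for _ in range(length))

def score(target: str, response: str) -> bool:
    return response == target

rng = random.Random(7)
target = make_digit_string(6, rng)  # shown on screen, then hidden
response = target                   # assume perfect recall, for illustration
print(len(target), score(target, response))  # → 6 True
```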

A subset of 14,718 participants also underwent magnetic resonance imaging scans. These scans allowed the researchers to measure total brain volume and the volumes of specific tissues. They specifically examined gray matter, white matter, and the hippocampus.

The hippocampus is a seahorse-shaped structure deep in the brain known to be vital for learning and memory. The researchers also quantified the volume of white matter hyperintensities. They then used statistical models to look for associations between activity, brain structure, and memory.

The study found a clear positive association between physical activity and performance on the numeric memory test. Individuals who moved more tended to recall longer strings of digits. This association held true even after adjusting for factors like age, education, and smoking status.

The results for the other memory tests were less consistent. Physical activity was not strongly linked to prospective memory. The link to visual memory was weak and disappeared in some sensitivity analyses.

When examining brain structure, the researchers observed that higher levels of physical activity correlated with larger brain volumes. Active participants had greater total brain volume. They also possessed higher volumes of both gray and white matter.

The scans also revealed that increased physical activity was associated with a larger hippocampus. This was observed in both the left and right sides of this brain region. Perhaps most notably, higher activity levels were linked to a lower volume of white matter hyperintensities.

The researchers then performed a pathway analysis to understand the mechanism. This statistical method estimates how much of the link between two variables is explained by a third variable. They tested whether the brain structures mediated the relationship between activity and numeric memory.

The analysis showed that brain structural markers explained a substantial portion of the memory benefits. Total brain volume, white matter volume, and gray matter volume all acted as mediators. White matter hyperintensities played a particularly strong role.

Specifically, the reduction in white matter hyperintensities accounted for nearly 30 percent of the total effect of activity on memory. This suggests that physical activity may protect memory partly by maintaining blood vessel health in the brain. Preventing small vessel damage appears to be a key pathway.
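
The arithmetic behind a "proportion mediated" figure like this can be sketched with synthetic data. The snippet below is an illustrative toy, not the study's actual model (which used a much larger sample and adjusted for covariates such as age, education, and smoking): the indirect effect is the product of the activity-to-mediator path and the mediator-to-memory path, divided by the total effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical standardized variables: activity (x), white matter
# hyperintensity volume (m), and memory score (y).
x = rng.normal(size=n)                        # physical activity
m = -0.5 * x + rng.normal(size=n)             # more activity -> fewer hyperintensities
y = -0.4 * m + 0.3 * x + rng.normal(size=n)   # fewer hyperintensities -> better memory

def ols(outcome, predictors):
    """Least-squares coefficients for outcome ~ intercept + predictors."""
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

c_total = ols(y, [x])[1]      # total effect of activity on memory
a = ols(m, [x])[1]            # path a: activity -> mediator
b = ols(y, [x, m])[2]         # path b: mediator -> memory, adjusting for activity

proportion_mediated = (a * b) / c_total
print(f"total: {c_total:.2f}, indirect: {a * b:.2f}, "
      f"proportion mediated: {proportion_mediated:.0%}")
```

With these invented coefficients the indirect route accounts for roughly 40 percent of the total effect; the study's near-30-percent figure for white matter hyperintensities reflects the same kind of decomposition.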

The findings indicate that physical activity helps maintain the overall “hardware” of the brain. By preserving the volume of processing tissue and connection fibers, movement supports the neural networks required for short-term memory. The preservation of white matter integrity seems particularly relevant.

The researchers encountered an unexpected result regarding the hippocampus. Although physical activity was linked to a larger hippocampus, this volume increase did not explain the improvement in numeric memory. The pathway analysis did not find a significant mediating effect for this specific structure.

The authors suggest this may be due to the nature of the specific memory task. Recalling a string of numbers is a short-term working memory task. This type of cognitive effort relies heavily on frontoparietal networks rather than the hippocampus.

The hippocampus is more closely associated with episodic memory, or the recollection of specific events and experiences. The numeric test used in the UK Biobank may simply tap into different neural circuits. Consequently, the structural benefits to the hippocampus might support other types of memory not fully captured by this specific test.

The study provides evidence that the benefits of exercise are detectable in the physical structure of the brain. It supports the idea that lifestyle choices can buffer against age-related degeneration. The protective effects were observed in a non-demented population, suggesting benefits for generally healthy adults.

There are several important caveats to consider regarding this research. The study was cross-sectional in design. This means data on activity, brain structure, and memory were collected at roughly the same time.

Because of this design, the researchers cannot definitively prove causality. It is possible that people with healthier brains find it easier to be physically active. Longitudinal studies tracking changes over time are necessary to confirm the direction of the effect.

Another limitation is the composition of the study group. The UK Biobank participants tend to be healthier and wealthier than the general population. This “healthy volunteer” bias might limit how well the findings apply to broader, more diverse groups.

The measurement of physical activity, while objective, was limited to a single week. This snapshot might not perfectly reflect a person’s long-term lifestyle habits. However, it is generally considered more reliable than retrospective questionnaires.

Future research should explore these relationships in more diverse populations. Studies including participants with varying levels of cardiovascular health would be informative. Additionally, using a wider array of memory tests could help map specific brain changes to specific cognitive domains.

Despite these limitations, the study reinforces the importance of moving for brain health. It suggests that physical activity does not just improve mood or heart health. It appears to physically preserve the brain tissue required for cognitive function.

The preservation of white matter and the reduction of vascular damage markers stand out as key findings. These structural elements provide the connectivity and health necessary for the brain to operate efficiently. Simple daily movement may serve as a defense against the structural atrophy that often accompanies aging.

The study, “Association Between Physical Activity and Memory Function: The Role of Brain Structural Markers in a Cross-Sectional Study,” was authored by Xiaomin Wu, Wenzhe Yang, Yu Li, Luhan Zhang, Chenyu Li, Weili Xu, and Fei Ma.

Alcohol shifts the brain into a fragmented and local state

1 February 2026 at 17:00

A standard glass of wine or beer does more than just relax the body; it fundamentally alters the landscape of communication within the brain. New research suggests that acute alcohol consumption shifts neural activity from a flexible, globally integrated network to a more segmented, local structure. These changes in brain architecture appear to track with how intoxicated a person feels. The findings were published in the journal Drug and Alcohol Dependence.

For decades, neuroscientists have worked to map how alcohol affects human behavior. Traditional studies often look at specific brain regions in isolation. Researchers might observe that activity in the prefrontal cortex dampens, which explains why inhibition lowers. Alternatively, they might see changes in the cerebellum, which accounts for the loss of physical coordination.

However, the brain does not operate as a collection of independent islands. It functions as a massive, interconnected web. Information must travel constantly between different areas to process sights, sounds, and thoughts. Understanding how alcohol impacts the traffic patterns of this web requires a different mathematical approach known as graph theory.

Graph theory allows scientists to treat the brain like a vast map of cities and highways. The “cities” are distinct brain regions, referred to as nodes. The “highways” are the functional connections between them, known as edges. By analyzing the flow of traffic across these highways, researchers can determine how efficiently the brain is sharing information.

Leah A. Biessenberger and her colleagues at the University of Minnesota and the University of Florida sought to apply this network-level analysis to social drinkers. Biessenberger, the study’s lead author, worked alongside senior author Jeff Boissoneault and a wider team. They aimed to fill a gap in the scientific literature regarding acute alcohol use.

While previous research has examined how chronic, heavy drinking reshapes the brain over years, less is known about the immediate network effects of a single drinking session. The researchers wanted to observe the brain in a “resting state.” This is the baseline activity that occurs when a person is awake but not performing a specific task.

To investigate this, the team recruited 107 healthy adults between the ages of 21 and 45. The participants were social drinkers without a history of alcohol use disorder. The study utilized a double-blind, placebo-controlled design. This method is the gold standard for removing bias from clinical experiments.

Each participant visited the laboratory for two separate sessions. During one visit, they consumed a beverage containing alcohol mixed with a sugar-free mixer. The dose was calculated to bring their breath alcohol concentration to 0.08 grams per deciliter, which is the legal driving limit in the United States.

During the other visit, they received a placebo drink. This beverage contained only the mixer but was misted with a small amount of alcohol on the surface and rim to mimic the smell and taste of a real cocktail. Neither the participants nor the research staff knew which drink was administered on a given day.

Approximately 30 minutes after drinking, the participants entered an MRI scanner. They were instructed to keep their eyes open and let their minds wander. The scanner recorded the blood oxygen levels in their brains, which serves as a proxy for neural activity.

The researchers then used computational tools to analyze the functional connectivity between 106 different brain regions. They looked for specific patterns in the data described by graph theory metrics. These metrics included “global efficiency” and “local efficiency.”

Global efficiency measures how easily information travels across the entire network. A network with high global efficiency has many long-distance shortcuts, allowing distant regions to communicate quickly. Local efficiency measures how well neighbors talk to neighbors. It reflects the tendency of brain regions to form tight-knit clusters that process information among themselves.

The analysis revealed distinct shifts in the brain’s topology following alcohol consumption. When participants drank alcohol, their brains moved toward a more “grid-like” state. The network became less random and more clustered.

Specifically, the study found that global efficiency decreased in several areas. This was particularly evident in the occipital lobe, the part of the brain responsible for processing vision. The reduction suggests that alcohol makes it harder for visual information to integrate with the rest of the brain’s operations.

Simultaneously, local efficiency increased. Regions in the frontal and temporal cortices began to communicate more intensely with their immediate neighbors. The brain appeared to fracture into smaller, self-contained communities. This structure requires less energy to maintain but hinders the rapid integration of complex information.

The researchers also examined a metric called “clustering coefficient.” This value reflects the likelihood that a node’s neighbors are also connected to each other. Alcohol increased the clustering coefficient across the network. This further supports the idea that the intoxicated brain relies more on local processing than global integration.
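
These three quantities are standard graph-theory metrics. As a toy illustration (using the networkx library, not the study's analysis pipeline or data), a lattice-like network of purely local links scores high on local efficiency and clustering but low on global efficiency, while a randomly wired network with long-range shortcuts shows the opposite pattern:

```python
import networkx as nx

# Toy stand-ins for the two brain states: a random graph with many
# long-range shortcuts (integrated) vs. a ring lattice where each node
# only connects to its nearest neighbors (segregated).
integrated = nx.erdos_renyi_graph(60, 0.15, seed=1)
segregated = nx.watts_strogatz_graph(60, 4, 0.0, seed=1)  # p=0 keeps it a pure lattice

for name, g in [("integrated", integrated), ("segregated", segregated)]:
    print(name,
          "global eff:", round(nx.global_efficiency(g), 3),
          "local eff:", round(nx.local_efficiency(g), 3),
          "clustering:", round(nx.average_clustering(g), 3))
```

The segregated lattice comes out more clustered and locally efficient, but information must hop through many intermediate nodes to cross the network, which is the topological shift the study describes after alcohol.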

The team also looked at the “insula,” a region deeply involved in sensing the body’s internal state. Under the influence of alcohol, the insula showed increased connections with its local neighbors. It also communicated more strongly with the broader network than it did in the placebo condition.

These architectural changes were not merely abstract mathematical observations. The researchers found a statistical link between the network shifts and the participants’ subjective experiences. Before the scan, participants rated how intoxicated they felt on a scale of 0 to 100.

The results showed that the degree of network reorganization predicted the intensity of the subjective “buzz.” Participants whose brains showed the largest drop in global efficiency and the largest rise in local clustering tended to report feeling the most intoxicated. The structural breakdown of long-range communication tracked with the feeling of impairment.
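
As a toy illustration of this kind of brain-behavior correlation (with entirely simulated numbers, not the study's data), one can correlate each participant's drop in global efficiency against a hypothetical self-rated intoxication score:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 107  # matches the study's sample size

# Hypothetical per-participant values: drop in global efficiency
# (alcohol session minus placebo session) and a 0-100 "buzz" rating.
eff_drop = rng.uniform(0.0, 0.2, n)
buzz = 30 + 300 * eff_drop + rng.normal(0, 8, n)  # bigger drop -> stronger buzz

r = np.corrcoef(eff_drop, buzz)[0, 1]
print(f"network change vs. felt intoxication: r = {r:.2f}")
```

In this simulation the correlation is strong by construction; the study's point is that the real-world version of this relationship was statistically detectable, not that it was this tight.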

This correlation offers new insight into why individuals react differently to the same amount of alcohol. Even at the same blood alcohol concentration, people experience varying levels of intoxication. The study suggests that individual differences in how the brain network fragments may underlie these varying subjective responses.

The findings also highlighted disruptions in the visual system. The decrease in efficiency within the occipital regions was marked. This aligns with well-known effects of drunkenness, such as blurred vision or difficulty tracking moving objects. The network analysis provides a neural basis for these sensory deficits.

While the study offers robust evidence, the authors note certain limitations. The MRI scans did not capture the cerebellum consistently for all participants. The cerebellum is vital for balance and motor control. Because it was not included in the analysis, the picture of alcohol’s effect on the whole brain remains incomplete.

Additionally, the study focused on young, healthy adults. The brain changes observed here might differ in older adults or individuals with a history of substance abuse. Aging brains already show some reductions in global efficiency. Alcohol could compound these effects in older populations.

The researchers also point out that the participants were in a resting state. The brain rearranges its network when actively solving problems or processing emotions. Future research will need to determine if these topological shifts persist or worsen when an intoxicated person tries to perform a complex task, like driving.

This investigation provides a nuanced view of acute intoxication. It moves beyond the idea that alcohol simply “dampens” brain activity. Instead, it reveals that alcohol forces the brain into a segregated state. Information gets trapped in local cul-de-sacs rather than traveling the superhighways of the mind.

By connecting these mathematical patterns to the subjective feeling of being drunk, the study helps bridge the gap between biology and behavior. It illustrates that the sensation of intoxication is, in part, the feeling of a brain losing its global coherence.

The study, “Acute alcohol intake disrupts resting state network topology in healthy social drinkers,” was authored by Leah A. Biessenberger, Adriana K. Cushnie, Bethany Stennett-Blackmon, Landrew S. Sevel, Michael E. Robinson, Sara Jo Nixon, and Jeff Boissoneault.

Memories of childhood trauma may shift depending on current relationships

1 February 2026 at 05:00

Most people assume their memories of growing up are fixed, much like a file stored in a cabinet, but new research suggests the way we remember our childhoods might actually shift depending on how we feel about our relationships today. A study published in Child Abuse & Neglect reveals that young adults report fewer adverse childhood experiences during weeks when they feel more supported by their parents. This suggests that standard measures of early trauma may reflect a person’s current state of mind as much as their historical reality.

Adverse childhood experiences, or ACEs, refer to traumatic events such as abuse, neglect, and household dysfunction that occur before the age of 18. Medical professionals and psychologists frequently use questionnaires to tally these events because a high number of ACEs is associated with poor mental and physical health outcomes later in life. These screenings rely on the assumption that an adult’s memory of the past is stable and reliable over time.

However, human memory is not a static playback device. It is a reconstructive process that can be influenced by current moods, identity development, and social contexts. This is particularly true for emerging adults, who are navigating the transition from dependence on parents to establishing their own independent identities. This developmental period often requires young people to re-evaluate their family dynamics.

Annika Jaros, a researcher at Michigan State University, led an investigation into this phenomenon alongside co-author William Chopik. They sought to determine if fluctuations in current social relationships or stress levels corresponded with changes in how young adults remembered early adversity. They hypothesized that recollections of the past might wax and wane alongside the quality of a person’s present-day interactions.

The team recruited 938 emerging adults, largely undergraduate students, to complete three identical surveys. These surveys were spaced four weeks apart over a two-month period. At each interval, participants completed the Childhood Trauma Questionnaire, a standard tool used to identify histories of emotional, physical, and sexual abuse, as well as physical and emotional neglect.

In addition to recalling the past, participants rated the current quality of their close relationships. They reported on levels of support and strain with their parents, friends, and romantic partners. They also rated their current levels of academic stress to see if general life pressure affected their memories.

The researchers used statistical models to separate the data into two distinct categories of variance. They looked at differences between people, such as whether a person with a generally happy childhood reports better adult relationships. They also looked at variations within the same person over the course of the eight weeks.
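
Separating between-person from within-person variance is commonly done by person-mean centering before fitting a multilevel model. A minimal pandas sketch with invented ratings (not the study's data or its exact model) looks like this:

```python
import pandas as pd

# Hypothetical repeated-measures data: three monthly surveys per participant.
df = pd.DataFrame({
    "person":  [1, 1, 1, 2, 2, 2],
    "support": [4.0, 5.0, 6.0, 2.0, 2.5, 1.5],  # parental support that wave
    "aces":    [3, 3, 2, 6, 6, 7],              # adversity score reported that wave
})

# Between-person component: each participant's own average across waves.
df["support_between"] = df.groupby("person")["support"].transform("mean")

# Within-person component: this wave's deviation from that personal average.
df["support_within"] = df["support"] - df["support_between"]

print(df)
```

The between column captures stable differences (person 2 generally reports less support), while the within column captures week-to-week fluctuation around each person's own baseline, which is the part the study linked to shifting recollections.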

The results showed that reports of childhood adversity were largely consistent over the two months. However, there was measurable variability in the answers provided by the same individuals from month to month. The analysis revealed that this variability was not random but tracked with changes in parental relationships.

When participants reported receiving higher-than-usual support from their parents, they reported fewer instances of childhood adversity. Conversely, during weeks when parental strain was higher than their personal average, recollections of emotional abuse, sexual abuse, and emotional neglect increased. This pattern suggests that a positive shift in a current relationship can soften the recollection of past adversity.

The influence of friends and romantic partners was less pronounced than that of parents. While supportive friendships were generally associated with fewer reported ACEs on average, changes in friendship quality did not strongly predict fluctuations in memory from week to week. Romantic partners showed a similar pattern, where high support correlated with fewer retrospective reports of sexual abuse, but the effect was limited.

Academic stress also played a minor role in how participants viewed their pasts. While higher stress was linked to slight increases in reports of emotional abuse and physical neglect, the impact was small compared to the influence of family dynamics. The primary driver of change in these memories appeared to be the quality of the bond with caregivers.

The authors noted several limitations to the study that contextualize the results. The sample consisted primarily of university students, meaning the results may not apply to older adults or those with different socioeconomic backgrounds. The study covered only an eight-week period, leaving it unclear if these fluctuations persist or change over years.

There was also a pattern of attrition that affected the data. Participants with more severe histories of trauma were more likely to stop responding to the surveys over time. This may have reduced the study’s ability to capture the full range of variability in how trauma is recalled by those with the most difficult histories.

Despite these caveats, the findings have practical implications for therapists and researchers. A single screening for childhood adversity may capture a snapshot influenced by the patient’s current state of mind rather than a definitive history. Assessing these experiences multiple times could provide a more accurate picture of a patient’s background and current psychological state.

The study challenges the idea that retrospective reports are purely factual records. Instead, they appear to be dynamic interpretations that serve a function in the present. As young adults work to integrate their pasts into their life stories, their memories seem to breathe in time with their current emotional needs.

“People are generally consistent in how they recall their past, but the small shifts in reporting are meaningful,” said Chopik. “It doesn’t mean people are unreliable, it means that memory is doing what it does — integrating past experiences with present meaning.”

The study, “Record of the past or reflection of the present? Fluctuations in recollections of childhood adversity and fluctuations in adult relationship circumstances,” was authored by Annika Jaros and William J. Chopik.

Long-term antidepressant effects of psilocybin linked to functional brain changes

31 January 2026 at 23:00

A new study suggests that the long-term antidepressant effects of psychedelics may be driven by persistent changes in how neurons fire rather than by the permanent growth of new brain cell connections. Researchers found that a single dose of psilocybin altered the electrical properties of brain cells in rats for months, even after physical changes to the neurons had disappeared. These findings were published in the journal Neuropsychopharmacology.

Depression is a debilitating condition that is often treated with daily medications. These standard treatments can take weeks to work and do not help every patient. Psilocybin, a compound found in certain mushrooms, has emerged as a potential alternative therapy. Clinical trials indicate that one or two doses of psilocybin can alleviate symptoms of depression for months or even years. However, scientists do not fully understand the biological mechanisms that allow a single treatment to produce such enduring results.

Researchers have previously focused on the concept of neuroplasticity to explain these effects. This term generally refers to the brain’s ability to reorganize itself. One specific type is structural plasticity, which involves the physical growth of new connection points between neurons, known as dendritic spines. Short-term studies conducted days or weeks after drug administration often show an increase in these spines. The question remained whether these physical structures persist long enough to account for relief lasting several months.

To investigate this, a team of researchers led by Hannah M. Kramer, Meghan Hibicke, and Charles D. Nichols at LSU Health Sciences Center designed an experiment using rats. They chose Wistar Kyoto rats for the study. This specific breed is often used in research because the animals naturally exhibit behaviors analogous to stress and depression in humans.

The investigators sought to compare the effects of psilocybin against another compound called 25CN-NBOH. Psilocybin interacts with various serotonin receptors in the brain. In contrast, 25CN-NBOH is a synthetic drug designed to target only one specific receptor known as the 5-HT2A receptor. This is the receptor believed to be primarily responsible for the psychedelic experience. By using both drugs, the team hoped to isolate the role of this specific receptor in creating long-term behavioral changes.

The study began with the administration of a single dose of either psilocybin, 25CN-NBOH, or a saline placebo to the male rats. The researchers then waited for a substantial period before testing the animals. They assessed the rats’ behavior at five weeks and again at twelve weeks after the injection. This timeline allowed the team to evaluate effects that persist well beyond the immediate aftermath of the drug experience.

The primary method for assessing behavior was the forced swim test. In this standard procedure, rats are placed in a tank of water from which they cannot escape. Researchers measure how much time the animals spend swimming versus floating motionless. In this context, high levels of immobility are interpreted as a passive coping strategy, which is considered a marker for depressive-like behavior. Antidepressant drugs typically cause rats to spend more time swimming and struggling.

The behavioral results indicated a lasting change. Rats treated with either psilocybin or 25CN-NBOH showed reduced immobility compared to the control group. This antidepressant-like effect was evident at the five-week mark. It remained equally strong at the twelve-week mark. The persistence of the effect suggests that the single dose induced a stable, long-term shift in behavior.

After the twelve-week behavioral tests, the researchers examined the brains of the animals. They focused on the medial prefrontal cortex. This brain region is involved in mood regulation and decision-making. The team utilized high-resolution microscopy to count the density of dendritic spines on the neurons. They specifically looked for the physical evidence of new connections that previous short-term studies had identified.

The microscopic analysis revealed that the number of dendritic spines in the treated rats was no different from that of the control group. The structural growth seen in other studies shortly after treatment appeared to be transient. The physical architecture of the neurons had returned to its baseline state after three months. The researchers also analyzed the expression of genes related to synaptic structure. They found no difference in gene activity between the groups.

Since structural changes could not explain the lasting behavioral shift, the team investigated functional plasticity. This refers to changes in how neurons process and transmit electrical signals. They prepared thin slices of the rats’ brain tissue. Using a technique called electrophysiology, they inserted microscopic glass pipettes into individual neurons to record their electrical activity.

The researchers classified the neurons into two types based on their firing patterns: adapting neurons and bursting neurons. Adapting neurons typically slow down their firing rate after an initial spike. Bursting neurons fire in rapid clusters of signals. The recordings showed that the drugs had altered the intrinsic electrical properties of these cells.

In the group treated with psilocybin, adapting neurons sat at a resting voltage that was closer to the threshold for firing. This state is known as depolarization. It means the cells are primed to activate more easily. The bursting neurons in psilocybin-treated rats also showed increased excitability. They required less input to trigger a signal and fired at faster rates than neurons in untreated rats.

The rats treated with 25CN-NBOH also exhibited functional changes, though the specific electrical alterations differed slightly from the psilocybin group. For instance, the bursting neurons in this group were not as easily triggered as those in the psilocybin group. However, the overall pattern confirmed that the drug had induced a lasting shift in neuronal function.

These electrophysiological findings provide a potential explanation for the behavioral results. While the physical branches of the neurons may have pruned back to normal levels, the cells “remembered” the treatment through altered electrical tuning. This functional shift allows the neural circuits to operate differently long after the drug has left the body.

The study implies that the 5-HT2A receptor is sufficient to trigger these long-term changes. The synthetic drug 25CN-NBOH produced lasting behavioral effects similar to psilocybin. This suggests that activating this single receptor type can initiate the cascade of events leading to persistent antidepressant-like effects.

There are limitations to this study that provide context for the results. The researchers used only male rats. Female rats may exhibit different biological responses to psychedelics or stress. Future research would need to include both sexes to ensure the findings are universally applicable.

Additionally, the forced swim test is a proxy for human depression but does not capture the full complexity of the human disorder. While it is a standard tool for screening antidepressant drugs, it measures a specific type of coping behavior. The translation of these specific neural changes to human psychology remains a subject for further investigation.

The researchers also noted that while spine density returned to baseline, this does not mean structural plasticity plays no role. It is possible that a rapid, temporary growth of connections acts as a trigger. This early phase might set the stage for the permanent electrical changes that follow. The exact molecular switch that locks in these functional changes remains to be identified.

Future studies will likely focus on the period between the initial dose and the three-month mark. Scientists need to map the transition from structural growth to functional endurance. Understanding this timeline could help optimize how these therapies are delivered.

The study, “Psychedelics produce enduring behavioral effects and functional plasticity through mechanisms independent of structural plasticity,” was authored by Hannah M. Kramer, Meghan Hibicke, Jason Middleton, Alaina M. Jaster, Jesper L. Kristensen, and Charles D. Nichols.

Scientists identify key brain structure linked to bipolar pathology

31 January 2026 at 19:00

Recent analysis of human brain tissue suggests that a small and often overlooked region deep within the brain may play a central role in bipolar disorder. Researchers found that neurons in the paraventricular thalamic nucleus are depleted and genetically altered in people with the condition. These results point toward potential new targets for diagnosis and treatment. The findings were published in the journal Nature Communications.

Bipolar disorder is a mental health condition characterized by extreme shifts in mood and energy levels. It affects approximately one percent of the global population and can severely disrupt daily life. While medications such as lithium and antipsychotics exist, they do not work for every patient. These drugs also frequently carry difficult side effects that cause patients to stop taking them. To develop better therapies, medical experts need a precise map of what goes wrong in the brain.

Past research has largely focused on the outer layer of the brain known as the cortex. This area is responsible for higher-level thinking and processing. However, brain scans using magnetic resonance imaging have hinted that deeper structures also shrink in size during the course of the illness. One such structure is the thalamus. This central hub acts as a relay station for sensory information and emotional regulation.

Within the thalamus lies a specific cluster of cells called the paraventricular thalamic nucleus. This area is rich in chemical messengers and has connections to parts of the brain involved in emotion. Despite these clues, the molecular details of this region remained largely unmapped in humans. A team led by Masaki Nishioka and Tadafumi Kato from Juntendo University Graduate School of Medicine in Tokyo launched an investigation to bridge this gap. They collaborated with researchers including Mie Sakashita-Kubota to analyze postmortem brain tissue.

The researchers aimed to determine if the genetic activity in this deep brain region differed from healthy brains. They examined brain samples from 21 individuals who had been diagnosed with bipolar disorder and 20 individuals without psychiatric conditions. They looked at two specific areas: the frontal cortex and the paraventricular thalamic nucleus. To do this, they used a technique called single-nucleus RNA sequencing.

This technology allows researchers to catalog the genetic instructions being used by individual cells. By analyzing thousands of nuclei, the team could identify different cell types and see which genes were active or inactive. This provided a high-resolution view of the cellular landscape. They compared the data from the thalamus against the data from the cortex to see which region was more affected.

The analysis revealed that the thalamus had undergone substantial changes. Specifically, the paraventricular thalamic nucleus contained far fewer excitatory neurons in the samples from people with bipolar disorder. The researchers estimated a reduction of roughly 50 percent in these cells compared to the control group. This loss was specific to the neurons that send stimulating signals to other parts of the brain.
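
A reduction estimate like this comes from comparing the fraction of annotated nuclei assigned to a cell type across groups. The sketch below uses invented per-donor counts purely for illustration; the study's actual estimate came from its own samples and statistical models:

```python
import pandas as pd

# Hypothetical nucleus counts per donor after cell-type annotation.
counts = pd.DataFrame({
    "donor":      ["bd1", "bd2", "bd3", "ct1", "ct2", "ct3"],
    "group":      ["bipolar"] * 3 + ["control"] * 3,
    "excitatory": [210, 190, 230, 420, 400, 440],   # PVT excitatory neurons
    "total":      [1000, 950, 1100, 1020, 980, 1050],
})

counts["frac_excitatory"] = counts["excitatory"] / counts["total"]
group_means = counts.groupby("group")["frac_excitatory"].mean()
reduction = 1 - group_means["bipolar"] / group_means["control"]
print(f"excitatory-neuron fraction reduced by {reduction:.0%}")
```

Working with per-donor fractions rather than raw counts guards against differences in how many nuclei were captured from each brain sample.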

In contrast, the changes observed in the frontal cortex were much more subtle. While there were some alterations in the cortical cells, they were not as extensive as those seen in the deep brain. This suggests that the thalamus might be a primary site of pathology in the disorder. The team validated these findings by staining proteins in the tissue to visually confirm the lower cell density.

Inside the remaining thalamic neurons, the genetic machinery was also behaving differently. The study identified a reduced activity of genes responsible for maintaining connections between neurons. These genes are essential for the flow of chemical and electrical signals. Among the affected genes were CACNA1C and SHISA9. These specific segments of DNA have been flagged in previous genetic studies as potential risk factors for the illness.

Another gene called KCNQ3, which helps regulate electrical channels in cells, was also less active. These channels act like gates that let electrically charged potassium or calcium ions flow in and out of the cell. This flow is what allows a neuron to fire a signal. When the genes controlling these gates are turned down, the neuron may become unstable or fail to communicate.

The specific combination of affected genes suggests a vulnerability in how these cells handle calcium and electrical activity. High-frequency firing of neurons requires tight regulation of calcium levels. If the proteins that manage this process are missing, the cells might become damaged over time. This could explain why so many of these neurons were missing in the patient samples.

The team also looked at non-neuronal cells called microglia. These are the immune cells of the brain that help maintain healthy synapses. Synapses are the junction points where neurons pass signals to one another. The data showed that the communication between the thalamic neurons and these immune cells was disrupted.

A specific pattern of gene expression that usually coordinates the interaction between excitatory neurons and microglia was weaker in the bipolar disorder samples. This breakdown could contribute to the loss of synapses or the death of neurons. It represents a failure in the support system that keeps brain circuits healthy. The simultaneous decline in both neuron and microglia function suggests a coordinated failure in the region.

The researchers note that the paraventricular thalamic nucleus is distinct from other brain regions. It contains a high density of receptors for dopamine, a neurotransmitter involved in reward and motivation. This makes it a likely target for antipsychotic medications that act on the dopamine system. The specific genetic profile of these neurons aligns with biological processes previously linked to the disorder.

There are limitations to consider regarding these results. The study relied on postmortem tissue, so it represents a snapshot of the brain at the end of life. It is difficult to know for certain if the cell loss caused the disorder or if the disorder caused the cell loss. The sample size was relatively small, with only 41 donors in total.

Additionally, the patients had been taking various medications throughout their lives. These drugs can influence gene expression. The researchers checked for medication effects and found little overlap between drug signatures and their findings. However, they could not rule out medication influence entirely.

Looking ahead, the authors suggest that the paraventricular thalamic nucleus could be a target for new drugs. Therapies that aim to protect these neurons or restore their function might offer relief where current treatments fail. Advanced imaging could also focus on this region to help diagnose the condition earlier.

Associate Professor Nishioka emphasized the importance of looking beyond the usual suspects in brain research. “This study highlights the need to extend research to the subcortical regions of the brain, which may harbor critical yet underexplored components of BD pathophysiology,” Nishioka stated. The team hopes that integrating these molecular findings with neuroimaging will lead to better patient outcomes.

Professor Kato added that the findings could reshape how scientists view the origins of the illness. “We finally identified that PVT is the brain region causative for BD,” Kato said. “This discovery will lead to the paradigm shift of BD research.”

The study, “Disturbances of paraventricular thalamic nucleus neurons in bipolar disorder revealed by single-nucleus analysis,” was authored by Masaki Nishioka, Mie Sakashita-Kubota, Kouichirou Iijima, Yukako Hasegawa, Mizuho Ishiwata, Kaito Takase, Ryuya Ichikawa, Naguib Mechawar, Gustavo Turecki & Tadafumi Kato.

New research links psychopathy to a proclivity for upskirting

31 January 2026 at 03:00

The unauthorized taking of intimate images, a practice often referred to as “upskirting,” has emerged as a distinct form of sexual abuse in the digital age. New research indicates that the likelihood of someone committing this offense, as well as how society judges the victims, is heavily influenced by demographic factors such as age and gender.

The study found that older individuals and men are generally more inclined to blame the victim and less likely to perceive the act as a serious criminal offense. These findings on the psychology behind image-based sexual abuse were published in the journal Sexual Abuse.

As smartphones with high-quality cameras have become ubiquitous, the barriers to committing digital sex crimes have lowered. One such offense is upskirting, which involves positioning a camera underneath a person’s clothing to photograph or film their genitals or buttocks without their consent.

This behavior is often done to obtain sexual gratification or to cause humiliation. While England and Wales formally criminalized this specific act under the Voyeurism (Offences) Act in 2019, legal frameworks around the world remain inconsistent. Some jurisdictions treat it as a breach of privacy rather than a sexual crime, while others lack specific legislation altogether.

To better address this issue, it is necessary to understand the psychological motivations of the perpetrators and the societal attitudes that might minimize the harm caused to victims. Dean Fido, a psychologist at the University of Derby, led a research team to investigate these factors.

Fido and his colleagues, Craig A. Harper, Simon Duff, and Thomas E. Page, aimed to identify which personality traits predict a willingness to commit upskirting. They also sought to determine if the physical characteristics of the victim affect how the public judges the severity of the crime.

The researchers recruited 490 participants from the United Kingdom to complete an online study. To assess social judgments, the team presented participants with a written vignette describing a fictional scenario at a spa.

In the story, a character named Taylor is relaxing on a poolside lounger. Taylor notices another character, named Ashley, lying on a lounger opposite. Taylor observes that Ashley’s robe has parted, revealing their genitals. Without Ashley noticing, Taylor uses a mobile phone to take a photograph of Ashley’s private area before leaving the premises.

The researchers manipulated the details of this story to create four different versions. In some versions, the victim, Ashley, was described as a woman, while in others, Ashley was a man.

Additionally, the researchers included a photograph of “Ashley” to manipulate perceived attractiveness. These photos depicted either an attractive or unattractive individual, based on ratings from previous psychological datasets. After reading the assigned scenario, participants answered questions about how much blame Ashley deserved, whether police intervention was necessary, and how much harm the incident caused.

The results revealed a distinct double standard regarding the gender of the victim. When the victim in the scenario was a woman, participants assigned significantly less blame compared to when the victim was a man.

Participants were also more likely to believe that the police should be involved and that the victim would suffer harm if the target was female. This aligns with broader patterns in society where sexual violence is often viewed primarily as a crime against women. The victimization of men in this context was viewed with less severity.

Physical appearance also influenced these judgments, particularly for male victims. The study found that when the male victim was depicted as attractive, participants perceived the lowest levels of victim harm. This suggests a specific bias where attractive men are less likely to be viewed as vulnerable or traumatized by non-consensual sexual attention.

For female victims, attractiveness did not play a statistically significant role in how much blame was assigned, contradicting some historical research suggesting attractive women are often blamed more for sexual victimization.

One of the strongest predictors of social attitudes was the age of the participant. The data showed that older participants consistently held more negative views toward the victim than younger participants did. Regardless of the victim’s gender or attractiveness, older respondents assigned more blame to the person who was photographed. They also perceived the act as less criminal and believed it caused less harm than their younger counterparts.

The researchers suggest this generational divide may stem from differences in technological familiarity. Younger generations, who have grown up with the internet and smartphones, may be more acutely aware of the permanence and reach of digital images. They may perceive the violation of digital privacy as a more profound threat. Conversely, older individuals might view the scenario through a different lens, potentially minimizing the severity of an act that does not involve physical contact.

Beyond judging the scenario, participants were asked about their own potential behavior. The survey included a question measuring proclivity, or willingness, to commit the crime. Participants were asked how likely they would be to take intimate pictures of an attractive person if they were guaranteed not to get caught. To understand who might answer “yes” to this question, the researchers administered standard psychological questionnaires measuring the “Dark Tetrad” of personality traits.

The Dark Tetrad consists of four distinct but related personality traits: narcissism, Machiavellianism, psychopathy, and sadism. Narcissism involves a sense of entitlement and grandiosity. Machiavellianism is characterized by manipulation and a focus on self-interest. Psychopathy involves a lack of empathy and high impulsivity. Sadism is the enjoyment of inflicting cruelty or suffering on others.

The study found that a willingness to engage in upskirting was not randomly distributed. Men were more likely to express a proclivity for the behavior than women.

Additionally, participants who admitted to past voyeuristic behaviors—such as secretly watching people undress—were more likely to say they would commit upskirting. Among the personality traits, higher levels of psychopathy emerged as a primary predictor. Individuals scoring high in psychopathy were more likely to endorse taking the non-consensual photos.

This connection to psychopathy makes theoretical sense. Upskirting requires a person to violate social norms and the rights of another person for immediate gratification, often without concern for the distress it causes the victim.

This aligns with the callousness and lack of empathy central to psychopathy. The researchers also noted that older age predicted a higher self-reported likelihood of committing the act, which mirrors the finding that older participants viewed the act as less criminal.

The study also measured “belief in a just world,” which is the psychological tendency to believe that people get what they deserve. In many studies on sexual violence, a strong belief in a just world correlates with victim blaming.

In this study, however, those with a stronger belief in a just world were less likely to express a willingness to commit upskirting. This suggests that for this specific crime, a belief in moral fairness might act as a deterrent against perpetration, even if it does not always prevent victim blaming.

There are limitations to this research that require context. The sample was drawn exclusively from the United Kingdom, meaning the results reflect British cultural and legal norms. Attitudes might differ in countries with different laws regarding privacy and sexual offenses. Additionally, the study relied on a single specific scenario in a spa. Upskirting frequently occurs in public spaces like public transit or escalators, and public perceptions might shift depending on the setting.

The measurement of proclivity relied on self-reports. Participants had to admit they might commit a crime, which can lead to underreporting due to social desirability bias. However, the anonymity of the online survey format was designed to encourage honest responses. The researchers also point out that while they found statistical links, they cannot definitively say one factor causes another, only that they are related.

Despite these caveats, the findings have implications for the legal and justice systems. The observation that older individuals are more likely to minimize the harm of upskirting and blame the victim is relevant for jury selection and judicial training. If older jurors or judges hold implicit biases that view this form of abuse as trivial, it could affect the outcomes of trials and the sentences handed down to offenders.

For mental health practitioners, the strong link between voyeurism and upskirting provides a pathway for intervention. Therapists working with individuals who have committed these offenses might focus on addressing underlying voyeuristic compulsions and deficits in empathy associated with psychopathic traits. Treating upskirting not just as a privacy violation but as a manifestation of voyeuristic disorder could lead to more effective rehabilitation strategies.

The study, “Understanding Social Judgments of and Proclivities to Commit Upskirting,” was authored by Dean Fido, Craig A. Harper, Simon Duff, and Thomas E. Page.

Genetic risk for depression maps to specific structural brain changes

30 January 2026 at 21:00

A new comprehensive analysis has revealed that major depressive disorder alters both the physical architecture and the electrical activity of the brain in the same specific regions. By mapping these overlapping changes, researchers identified a distinct set of genes that likely drives these abnormalities during early brain development. The detailed results of this investigation were published in the Journal of Affective Disorders.

Major depressive disorder is a pervasive mental health condition that affects millions of people globally. It is characterized by persistent low mood and a loss of interest in daily activities. Patients often experience difficulties with cognitive function and emotional regulation.

While the symptoms are psychological, the condition is rooted in biological changes within the brain. Researchers have sought to understand the physical mechanisms behind the disorder for decades. The goal is to move beyond symptom management toward treatments that address the root biological causes.

Most previous research has looked at brain changes in isolation. Some studies use structural magnetic resonance imaging to measure the volume of gray matter. This tissue contains the cell bodies of neurons. A reduction in gray matter volume typically suggests a loss of neurons or a shrinkage of connections between them.

Other studies use functional magnetic resonance imaging. This technique measures blood flow to track brain activity. It looks at how well different brain regions synchronize their firing patterns or the intensity of their activity while the person is resting.

Results from these single-method studies have often been inconsistent. One study might find a problem in the frontal lobe, while another points to the temporal lobe. It has been difficult to know if structural damage causes functional problems or if they occur independently. Additionally, scientists know that genetics play a large role in depression risk. However, it remains unclear how specific genetic variations translate into the physical brain changes seen in patients.

To bridge this gap, a team of researchers led by Ying Zhai, Jinglei Xu, and Zhihui Zhang from Tianjin Medical University General Hospital conducted a large-scale study. They aimed to integrate data on brain structure, brain function, and genetics. Their primary objective was to find regions where structural and functional abnormalities overlap. They also sought to identify which genes might be responsible for these simultaneous changes.

The research team began by conducting a meta-analysis. This is a statistical method that combines data from many previous studies to find patterns that are too subtle for a single study to detect. They gathered data from 89 independent studies.

These included over 3,000 patients with major depressive disorder and a similar number of healthy control subjects for the structural analysis. The functional analysis included over 2,000 patients and controls. The researchers used a technique called voxel-wise analysis. This divides the brain into thousands of tiny three-dimensional cubes to pinpoint exactly where changes occur.

The team looked for three specific markers. First, they examined gray matter volume to assess physical structure. Second, they looked at regional homogeneity. This measures how synchronized a brain region is with its immediate neighbors. Third, they analyzed the amplitude of low-frequency fluctuations. This indicates the intensity of spontaneous brain activity. By combining these metrics, the researchers created a detailed map of the “depressed brain.”

The analysis revealed widespread disruptions. The researchers found that patients with depression consistently showed reduced gray matter volume in several key areas. These included the median cingulate cortex, the insula, and the superior temporal gyrus. These regions are essential for processing emotions and sensing the body’s internal state.

The functional data showed a more complicated picture. In some areas, brain activity was lower than normal. In others, it was higher. The researchers then overlaid the structural and functional maps to find the convergence points. This multimodal analysis uncovered two distinct patterns of overlap.

The first pattern involved regions that showed both physical shrinkage and reduced functional activity. This “double hit” was observed primarily in the median cingulate cortex and the insula. The insula helps the brain interpret bodily sensations, such as heartbeat or hunger, and links them to emotions. A failure in this region could explain why depressed patients often feel physically lethargic or disconnected from their bodies. The reduced activity and volume suggest a breakdown in the neural circuits responsible for emotional and sensory integration.

The second pattern was unexpected. Some regions showed reduced gray matter volume but increased functional activity. This occurred in the anterior cingulate cortex and parts of the frontal lobe. These areas are involved in self-reflection and identifying errors. The researchers suggest this hyperactivity might be a form of compensation.

The brain may be working harder to maintain normal function despite physical deterioration. Alternatively, this high activity could represent neural noise or inefficient processing. This might contribute to the persistent rumination and negative self-focus that many patients experience.

After mapping these brain regions, the researchers investigated the genetic underpinnings. They used a large database of genetic information from over 170,000 depression cases. They applied a method called H-MAGMA to prioritize genes associated with the disorder. They identified 1,604 genes linked to depression risk. The team then used the Allen Human Brain Atlas to see where these genes are expressed in the human brain. This atlas maps gene activity across different brain tissues.

The team looked for a spatial correlation. They wanted to know if the depression-linked genes were most active in the same brain regions that showed structural and functional damage. The analysis was successful. They identified 279 genes that were spatially linked to the overlapping brain abnormalities. These genes were not randomly distributed. They were highly expressed in the specific areas where the researchers had found the “double hit” of shrinkage and altered activity.

The researchers then performed an enrichment analysis to understand what these 279 genes do. The results pointed toward biological processes that happen very early in life. The genes were heavily involved in the development of the nervous system. They play roles in neuron projection guidance, which is how neurons extend their fibers to connect with targets. They are also involved in synaptic signaling, the process by which neurons communicate.

The study also looked at when these genes are most active. The data showed that these genes are highly expressed during fetal development. They are particularly active in the cortex and hippocampus during the middle to late fetal stages. This suggests that the vulnerability to depression may be established long before birth. Disruptions in these genes during critical developmental windows could lead to the structural weak points identified in the MRI scans.

The researchers also examined which types of cells use these genes. They found that the genes were predominantly expressed in specific types of neurons in the cortex and striatum. This includes neurons that use dopamine, a chemical messenger vital for motivation and pleasure. This connects the genetic findings to the known symptoms of depression, such as anhedonia, or the inability to feel pleasure.

There are limitations to this study that should be noted. The meta-analysis relied on coordinates reported in previous papers rather than raw brain scans. This can slightly reduce the precision of the location data. Additionally, the gene expression data came from the Allen Human Brain Atlas, which is based on healthy adult brains. It does not reflect how gene expression might change in a depressed brain.

The study was also cross-sectional. This means it looked at a snapshot of patients at one point in time. It cannot prove that the brain shrinkage caused the depression or vice versa. The researchers also noted that demographic factors like age and sex influence brain structure. While they controlled for these variables statistically, future research should look at how these patterns differ between men and women or across different age groups.

Future research will need to verify these findings using longitudinal data. Scientists need to track individuals over time to see how gene expression interacts with environmental stressors to reshape the brain. The team suggests that future studies should also incorporate environmental data. Factors such as inflammation or stress exposure could modify how these risk genes affect brain structure.

This study represents a step forward in integrating different types of biological data. It moves beyond viewing depression as just a chemical imbalance or a structural deficit. Instead, it presents a cohesive model where genetic risks during development lead to specific structural and functional vulnerabilities. These physical changes then manifest as the emotional and cognitive symptoms of depression.

The study, “Neuroimaging-genetic integration reveals shared structural and functional brain alterations in major depressive disorder,” was authored by Ying Zhai, Jinglei Xu, Zhihui Zhang, Yue Wu, Qian Wu, Minghuan Lei, Haolin Wang, Qi An, Wenjie Cai, Shen Li, Quan Zhang, and Feng Liu.

A dream-like psychedelic might help traumatized veterans reset their brains

30 January 2026 at 17:00

A new study suggests that the intensity of spiritual or “mystical” moments felt during psychedelic treatment may predict how well veterans recover from trauma symptoms. Researchers found that soldiers who reported profound feelings of unity and sacredness while taking ibogaine experienced lasting relief from post-traumatic stress disorder. These findings were published in the Journal of Affective Disorders.

For decades, medical professionals have sought better ways to assist military personnel returning from combat. Many veterans suffer from post-traumatic stress disorder, or PTSD, as well as traumatic brain injuries caused by repeated exposure to blasts. These conditions often occur together and can be resistant to standard pharmaceutical treatments. The lack of effective options has led some researchers to investigate alternative therapies derived from natural sources.

One such substance is ibogaine. This psychoactive compound comes from the root bark of the Tabernanthe iboga shrub, which is native to Central Africa. Cultures in that region have used the plant for centuries in healing and spiritual ceremonies. In recent years, it has gained attention in the West for its potential to treat addiction and psychiatric distress. Unlike some other psychedelics, ibogaine often induces a dream-like state where users review their memories.

Despite anecdotal reports of success, the scientific community still has a limited understanding of how ibogaine works in the human brain. Most prior research focused on classic psychedelics like psilocybin or MDMA. The specific psychological mechanisms that might allow ibogaine to alleviate trauma symptoms remain largely unexplored.

Randi E. Brown, a researcher at the Stanford University School of Medicine and the VA Palo Alto Health Care System, led a team to investigate this question. They worked in collaboration with Nolan R. Williams and other specialists in psychiatry and behavioral sciences. The team sought to determine if the subjective quality of the drug experience mattered for recovery. They hypothesized that a “mystical experience” might be a key driver of therapeutic change.

The concept of a mystical experience in psychology is specific and measurable. It refers to a sensation of unity with the universe, a transcendence of time and space, and deeply felt peace or joy. It also includes a quality known as ineffability, meaning the experience is too profound to be described in words. The researchers wanted to know if veterans who felt these sensations more strongly would see better clinical results.

The study analyzed data from thirty male Special Operations Veterans. All participants had a history of traumatic brain injury and combat exposure. Because ibogaine is not approved for medical use in the United States, the veterans traveled to a clinic in Mexico for the treatment. This setup allowed the researchers to observe the effects of the drug in a clinical setting outside the U.S.

The treatment protocol involved a single administration of the drug. The medical staff combined ibogaine with magnesium sulfate. This addition is intended to protect the heart, as ibogaine can sometimes disrupt cardiac rhythms. The veterans received the medication orally after a period of fasting. They spent the session lying down with eyeshades, generally experiencing the effects internally rather than interacting with others.

To measure the psychological impact of the session, the researchers administered the Mystical Experiences Questionnaire. This survey asks participants to rate the intensity of various feelings, such as awe or a sense of sacredness. The researchers collected these scores immediately after the treatment concluded.

The team also assessed the veterans’ PTSD severity using a standardized clinical interview. They took these measurements before the treatment, immediately after, and again one month later. This allowed them to track changes in symptom severity over time. Additionally, the researchers used electroencephalography, or EEG, to record electrical activity in the brain.

The analysis revealed a clear statistical association between the survey responses and the clinical outcomes. Veterans who reported more intense mystical experiences showed larger reductions in PTSD severity. This pattern held true immediately after the treatment. It also persisted when the researchers checked on the participants one month later.

The researchers observed similar trends for other mental health measures. Higher scores on the mystical experience survey correlated with greater improvements in depression and anxiety. These findings align with previous research on other psychedelics, such as psilocybin, which has linked spiritual breakthroughs to improved mental health.

The study also identified changes in brain physiology. The researchers focused on a specific brain wave measurement called peak alpha frequency. This measurement reflects the speed of the brain’s electrical cycles when a person is resting but awake. High arousal states, often seen in PTSD, can be linked to faster alpha frequencies.

The data showed that more intense mystical experiences were associated with a slowing of this alpha frequency one month after treatment. This reduction suggests a shift away from the hyper-aroused state that characterizes trauma. The brain appeared to move toward a more relaxed mode of functioning.

This physiological change supports the idea that the treatment effects are biological and not just psychological. The slowing of brain rhythms may represent a lasting neural adaptation. It implies that the intense subjective experience of the drug might trigger neuroplastic changes that help the brain reset.

Brown and her colleagues suggest that the “ego death” often reported during mystical experiences may play a role. This phenomenon involves a temporary loss of the sense of self. It may allow individuals to detach from rigid, negative beliefs about themselves formed during trauma. When the sense of self returns, it may do so without the heavy burden of past guilt or fear.

The authors noted several limitations to their work. The study used an open-label design, meaning there was no placebo group for comparison. All participants knew they were receiving ibogaine. It is possible that their expectation of healing contributed to the positive results.

The sample size was also relatively small, consisting of only thirty individuals. Furthermore, the group was entirely male and composed of Special Operations Veterans. This specific demographic means the results may not apply to women or the general public. The unique training and resilience of these veterans might influence how they respond to such treatments.

The researchers also pointed out that the study relies on correlation. While the link between mystical experiences and recovery is strong, it does not prove causation. It is possible that a third, unmeasured factor causes both the mystical experience and the symptom improvement.

Despite these caveats, the research provides a foundation for future investigation. The authors recommend that subsequent studies use randomized, controlled designs to verify these effects. They also suggest exploring whether these psychological and physiological changes endure beyond the one-month mark.

Future research could also investigate the role of psychotherapy combined with the drug. In this study, the veterans received coaching but not intensive therapy during the dosing session. Combining the biological reset of ibogaine with structured psychological support might enhance the benefits.

This study adds to a growing body of evidence supporting the potential of psychedelic therapies. It highlights the importance of the subjective experience in the healing process. For veterans struggling with the aftermath of war, these findings offer a preliminary hope that treatments addressing both the brain and the spirit may offer relief.

The study, “Mystical experiences during magnesium-Ibogaine are associated with improvements in PTSD symptoms in veterans,” was authored by Randi E. Brown, Jennifer I. Lissemore, Kenneth F. Shinozuka, John P. Coetzee, Afik Faerman, Clayton A. Olash, Andrew D. Geoly, Derrick M. Buchanan, Kirsten N. Cherian, Anna Chaiken, Ahmed Shamma, Malvika Sridhar, Saron A. Hunegnaw, Noriah D. Johnson, Camarin E. Rolle, Maheen M. Adamson, and Nolan R. Williams.

Cannabis beverages may help people drink less alcohol

30 January 2026 at 03:00

Recent survey data suggests that cannabis-infused beverages may serve as an effective tool for individuals looking to curb their alcohol consumption. People who incorporated these drinks into their routines reported reducing their weekly alcohol intake and engaging in fewer episodes of binge drinking. The findings were published in the Journal of Psychoactive Drugs.

Alcohol consumption is a well-documented public health concern. It is linked to nearly 200 different health conditions. These include liver disease, cardiovascular issues, and various forms of cancer.

While total abstinence is the most effective way to eliminate these risks, many adults choose not to stop drinking entirely. This reality has led public health experts to explore harm reduction strategies. The goal of harm reduction is to minimize the negative consequences of substance use without necessarily demanding complete sobriety.

Cannabis is increasingly viewed through this harm reduction lens. It generally presents fewer physiological risks to the user compared to alcohol. The legalization of cannabis in many U.S. states has diversified the market beyond traditional smokable products. Consumers can now purchase cannabis-infused seltzers, sodas, and tonics. These products are often packaged in cans that resemble beer or hard seltzer containers.

This similarity in packaging and consumption method is notable. It allows users to participate in the social ritual of holding and sipping a drink without consuming ethanol. Jessica S. Kruger, a clinical associate professor of community health and health behavior at the University at Buffalo, led an investigation into this phenomenon. She collaborated with researchers Nicholas Felicione and Daniel J. Kruger. The team sought to understand if these new products are merely a novelty or if they serve a functional role in alcohol substitution.

The researchers designed a study to capture the behaviors of current cannabis users. They distributed an anonymous survey between August and December of 2022. Recruitment took place through various channels to reach a broad audience.

The team placed recruitment cards with QR codes in licensed dispensaries. They also utilized email lists from these businesses. Additionally, they posted links to the survey on nearly 40 cannabis-related communities on the social media platform Reddit.

The final analytic sample consisted of 438 adults. All participants had used cannabis within the past year. The survey incorporated questions from the Behavioral Risk Factor Surveillance System. This is a standard tool used by the Centers for Disease Control and Prevention to track health-related behaviors. The researchers used these questions to assess alcohol consumption frequency and intensity.

The study aimed to compare the behaviors of those who drank cannabis beverages against those who used other forms of cannabis. It also sought to compare alcohol habits before and after individuals began consuming cannabis drinks. Roughly one-third of the respondents reported using cannabis beverages. These users typically consumed one infused drink per session.

The researchers found differences in substitution behaviors between groups. Participants who consumed cannabis beverages were more likely to report substituting cannabis for alcohol than those who did not drink them. The data showed that 58.6 percent of beverage users reported this substitution. In contrast, 47.2 percent of non-beverage users reported doing so.

The study provided specific data regarding changes in alcohol intake levels. The researchers asked beverage users to recall their alcohol consumption habits prior to adopting cannabis drinks. Before trying these products, the group reported consuming an average of roughly seven alcoholic drinks per week. After they started using cannabis beverages, that average dropped to approximately 3.35 drinks per week.

Binge drinking rates also saw a decline. The researchers defined a binge drinking episode based on standard gender-specific thresholds. Before initiating cannabis beverage use, about 47 percent of the group reported binge drinking less than once a month or never. After incorporating cannabis drinks, the proportion of people reporting this low frequency of binge drinking rose to nearly 81 percent.

Most participants did not replace alcohol entirely. The survey results indicated that 61.5 percent of beverage users reduced their alcohol intake. Only about 1 percent reported stopping alcohol consumption completely.

A small minority, roughly 3 percent, reported increasing their alcohol use. This suggests that for most users, cannabis beverages act as a moderator for alcohol rather than a complete replacement.

The study also examined the potency of the beverages being consumed. Most respondents chose products with lower doses of tetrahydrocannabinol (THC). Two-thirds of the users drank beverages containing 10 milligrams of THC or less. This dosage allows for a milder experience compared to high-potency edibles. It may facilitate a more controlled social experience similar to drinking a glass of wine or a beer.

Daniel J. Kruger, a co-author of the study, noted the potential reasons for these findings. He suggests that the similarity in the method of administration plays a role. People at parties or bars are accustomed to having a drink in their hand. A cannabis beverage allows them to maintain that behavior. It fits into the social context more seamlessly than smoking a joint or taking a gummy.

There are limitations to this research that require consideration. The study relied on retrospective self-reports. Participants had to recall their past alcohol consumption. This relies on memory and can be subject to bias. The sample was also a convenience sample rather than a nationally representative one. Many respondents were recruited from New York State dispensaries or specific online communities.

The researchers also point out potential risks associated with these products. Cannabis beverages and edibles have a slower onset of effects compared to inhalation. It takes time for the digestive system to process the cannabinoids. This delay can lead inexperienced users to consume more than intended. Accidental overconsumption can result in negative physical and mental health outcomes.

Furthermore, there is the issue of dual use. Most participants continued to drink alcohol, albeit in smaller quantities. Combining alcohol and cannabis can intensify impairment. The authors note that this interaction needs further study to ensure public safety.

Future research is necessary to validate these preliminary findings. The authors suggest that longitudinal studies would be beneficial. Such studies would track individuals over time rather than relying on past recall. This would provide a clearer picture of whether the reduction in alcohol use is sustained in the long term.

Public education will be key as this market expands. Consumers need to understand the differences between alcohol and cannabis impairment. They also need accurate information regarding dosing and onset times. Policies that ensure clear labeling and child-proof packaging remain essential for harm reduction.

Despite the caveats, the study offers a new perspective on alcohol harm reduction. It highlights a potential avenue for individuals seeking to lower their alcohol intake. As the market for these beverages grows, understanding their role in consumer behavior becomes increasingly important for public health officials.

The study, “The Exploration of Cannabis Beverage Substitution for Alcohol: A Novel Harm Reduction Strategy,” was authored by Jessica S. Kruger, Nicholas Felicione, and Daniel J. Kruger.

New maps of brain activity challenge century-old anatomical boundaries

30 January 2026 at 01:00

New research challenges the century-old practice of mapping the brain based on how tissue looks under a microscope. By analyzing electrical signals from thousands of neurons in mice, scientists discovered that the brain’s command center organizes itself by information flow rather than physical structure. These findings appear in the journal Nature Neuroscience.

The prefrontal cortex acts as the brain’s executive hub. It manages complex processes such as planning, decision-making, and reasoning. Historically, neuroscientists defined the boundaries of this region by studying cytoarchitecture. This method involves staining brain tissue and observing the arrangement of cells. The assumption has been that physical differences in cell layout correspond to distinct functional jobs.

However, the connection between these static maps and the dynamic electrical firing of neurons remains unproven. A research team led by Marie Carlén at the Karolinska Institutet in Sweden sought to test this long-standing assumption. Pierre Le Merre and Katharina Heining served as the lead authors on the paper. They aimed to create a functional map based on what neurons actually do rather than just where they sit.

To achieve this, the team performed an extensive analysis of single-neuron activity. They focused on the mouse brain, which serves as a model for mammalian neural structure. The researchers implanted high-density probes known as Neuropixels into the brains of awake mice. These advanced sensors allowed them to record the electrical output of more than 24,000 individual neurons.

The study included recordings from the prefrontal cortex as well as sensory and motor areas. The investigators first analyzed spontaneous activity. This refers to the electrical firing that occurs when the animal is resting and not performing a specific task. Spontaneous activity offers a window into the intrinsic properties of a neuron and its local network.

The team needed precise ways to describe this activity. A single number, such as spikes per second, could not capture the full pattern on its own. They therefore introduced three specific mathematical metrics to characterize the firing patterns. The first metric was the firing rate, or how often a neuron sends a signal.

The second metric was “burstiness.” This describes the irregularity of the intervals between spikes. A neuron with high burstiness fires in rapid clusters followed by silence. A neuron with low burstiness fires with a steady, metronomic rhythm.

The third metric was “memory.” This measures the sequential structure of the firing. It asks whether the length of one interval between spikes predicts the length of the next one. Taken together, these three variables provided a unique “fingerprint” for every recorded neuron.
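These three metrics can be illustrated with a short computation on simulated spike trains. The sketch below is an assumption-laden toy: it uses the widely cited Goh–Barabási definitions of burstiness and memory (computed from inter-spike intervals), which may differ in detail from the formulas used in the paper.

```python
import numpy as np

def spike_train_fingerprint(spike_times, duration):
    """Toy (rate, burstiness, memory) fingerprint for one neuron.

    Burstiness and memory follow the common Goh-Barabasi definitions,
    an assumption here; the study's exact metrics may differ.
    """
    spike_times = np.asarray(spike_times, dtype=float)
    rate = len(spike_times) / duration               # spikes per second

    isi = np.diff(spike_times)                       # inter-spike intervals
    mu, sigma = isi.mean(), isi.std()
    # -1 for a perfectly regular train, ~0 for Poisson, toward +1 for bursty
    burstiness = float((sigma - mu) / (sigma + mu))

    # Memory: correlation between consecutive intervals (0 if undefined)
    memory = 0.0 if sigma == 0 else float(np.corrcoef(isi[:-1], isi[1:])[0, 1])
    return rate, burstiness, memory

# A metronomic neuron: one spike every 0.5 seconds for 10 seconds
regular = np.arange(0.0, 10.0, 0.5)
print(spike_train_fingerprint(regular, duration=10.0))  # (2.0, -1.0, 0.0)

# An irregular, Poisson-like neuron with the same average rate
rng = np.random.default_rng(0)
poisson = np.cumsum(rng.exponential(0.5, size=200))
rate, b, m = spike_train_fingerprint(poisson, duration=float(poisson[-1]))
```

On this toy scale, the "low-rate, regular-firing" prefrontal profile described in the study corresponds to a low rate combined with burstiness near the regular end of the range.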

The researchers used a machine learning technique called a Self-Organizing Map to sort these fingerprints. This algorithm grouped neurons with similar firing properties together. It allowed the scientists to visualize the landscape of neuronal activity without imposing human biases.

The analysis revealed a distinct signature for the prefrontal cortex. Neurons in this area predominantly displayed low firing rates and highly regular rhythms. They did not fire in erratic bursts. This created a “low-rate, regular-firing” profile that distinguished the prefrontal cortex from other brain regions.

The team then projected these activity profiles back onto the physical map of the brain. They compared the boundaries of their activity-based clusters with the traditional cytoarchitectural borders. The two maps did not align.

Regions that looked different under a microscope often contained neurons with identical firing patterns. Conversely, regions that looked the same structurally often hosted different types of activity. The distinct functional modules of the prefrontal cortex ignored the classical boundaries drawn by anatomists.

Instead of anatomy, the activity patterns aligned with hierarchy. In neuroscience, hierarchy refers to the order of information processing. Sensory areas that receive raw data from the eyes or ears are at the bottom of the hierarchy. The prefrontal cortex, which integrates this data to make decisions, sits at the top.

The researchers correlated their activity maps with existing maps of brain connectivity. They found that regions higher up in the hierarchy consistently displayed the low-rate, regular-firing signature. This suggests that the way neurons fire is determined by their place in the network, not by the local architecture of the cells.

This finding aligns with theories about how the brain processes information. Sensory areas need to respond quickly to changing environments, requiring fast or bursty firing. High-level areas need to integrate information over time to maintain stable plans. A slow, regular rhythm is ideal for holding information in working memory without being easily distracted by noise.

The study then moved beyond resting activity to examine goal-directed behavior. The mice performed a task where they heard a tone or saw a visual stimulus. They had to turn a wheel to receive a water reward. This allowed the researchers to see how the functional map changed during active decision-making.

The team identified neurons that were “tuned” to specific aspects of the task. Some neurons responded only to the sound. Others fired specifically when the mouse made a choice to turn the wheel.

When they mapped these task-related neurons, they again found no relation to the traditional anatomical borders. The functional activity formed its own unique territories. One specific finding presented a paradox.

The researchers had established that the hallmark of the prefrontal cortex was slow, regular firing. However, the specific neurons that coded for "choice," the act of making a decision, tended to have high firing rates. These "decider" neurons were chemically and spatially mixed in with the "integrator" neurons but behaved differently.

This implies a separation of duties within the same brain space. The general population of neurons maintains a slow, steady rhythm to provide a stable platform for cognition. Embedded within this stable network are specific, highly excitable neurons that trigger actions.

The overlap of these two populations suggests that connectivity shapes the landscape. The high-hierarchy network supports the regular firing. Within that network, specific inputs drive the high-rate choice neurons.

These results suggest that intrinsic connectivity is the primary organizing principle of the prefrontal cortex. The physical appearance of the tissue is a poor predictor of function. “Our findings challenge the traditional way of defining brain regions and have major implications for understanding brain organisation overall,” says Marie Carlén.

The study does have limitations. It relied on data from mice. While mouse and human brains share many features, the human prefrontal cortex is far more complex. Additionally, the recordings focused primarily on the deep layers of the cortex. These layers are responsible for sending output signals to other parts of the brain.

The activity in the surface layers, which receive input, might show different patterns. The study also looked at a limited set of behaviors. Future research will need to explore whether these maps hold true across different types of cognitive tasks.

Scientists must also validate these metrics in other species. If the pattern holds, it could provide a new roadmap for understanding brain disorders. Many psychiatric conditions involve dysfunction in the prefrontal cortex. Understanding the “normal” activity signature—slow and regular—could help identify what goes wrong in disease.

This data-driven approach offers a scalable framework. It moves neuroscience away from subjective visual descriptions toward objective mathematical categorization. It suggests that to understand the brain, we must look at the invisible traffic of electricity rather than just the visible roads of tissue.

The study, “A prefrontal cortex map based on single-neuron activity,” was authored by Pierre Le Merre, Katharina Heining, Marina Slashcheva, Felix Jung, Eleni Moysiadou, Nicolas Guyon, Ram Yahya, Hyunsoo Park, Fredrik Wernstal, and Marie Carlén.

Alzheimer’s patients show reduced neural integration during brain stimulation

29 January 2026 at 19:00

New research suggests that the electrical complexity of the brain diminishes in early Alzheimer’s disease, potentially signaling a breakdown in the neural networks that support conscious awareness. By stimulating the brain with magnetic pulses and recording the response, scientists found distinct differences between healthy aging adults and those with mild dementia. These findings appear online in the journal Neuroscience of Consciousness.

The human brain operates on multiple levels of awareness. Alzheimer’s disease is widely recognized for eroding memory, but the specific type of memory loss offers clues about the nature of the condition. Patients typically lose the ability to consciously recall events, facts, and conversations. This is known as explicit memory.

Yet, these same individuals often retain unconscious capabilities, such as the ability to walk, eat, or play a musical instrument. This preservation of procedural or implicit memory suggests that the disease targets the specific neural architecture required for conscious processing while leaving other automatic systems relatively intact.

Andrew E. Budson, a professor of neurology at Boston University Chobanian & Avedisian School of Medicine, has proposed that these “cortical dementias” should be viewed as disorders of consciousness. According to this theory, consciousness developed as part of the explicit memory system. As the disease damages the cerebral cortex, the physical machinery capable of sustaining complex conscious thought deteriorates. This deterioration eventually leads to a state where the individual is awake but possesses a diminishing capacity for complex awareness.

To investigate this theory, a research team led by Brenna Hagan, a doctoral candidate in behavioral neuroscience at the same institution, sought a biological marker that could quantify this decline. They turned to a metric originally developed to assess patients with severe brain injuries, such as those in comas or vegetative states. This metric is called the perturbation complexity index, specifically an analysis of state transitions.

The measurement acts somewhat like a sonar system for the brain. In a healthy, conscious brain, a stimulus should trigger a complex, long-lasting chain reaction of electrical activity that ripples across various neural networks. In a brain where consciousness is compromised, the response is expected to be simpler, local, and short-lived. The researchers hypothesized that even in the early stages of Alzheimer’s, this capacity for complex electrical integration would be reduced compared to healthy aging.

The study included 55 participants in total. The breakdown consisted of 28 individuals diagnosed with early-stage Alzheimer’s disease or mild cognitive impairment and 27 healthy older adults who served as controls. The research team employed a technique known as transcranial magnetic stimulation, or TMS, paired with electroencephalography, or EEG.

During the experiment, participants sat comfortably while wearing a cap fitted with 64 electrodes designed to detect electrical signals on the scalp. The researchers placed a magnetic coil against the participant’s head. This coil delivered a brief, focused pulse of magnetic energy through the skull and into the brain tissue. This pulse is the “perturbation” in the index’s name. It effectively rings the brain like a bell.

The researchers targeted two specific areas of the brain. The first was the left motor cortex, which controls voluntary movement on the right side of the body. The second was the left inferior parietal lobule, a region involved in integrating sensory information and language. By stimulating these distinct sites, the team hoped to determine if the loss of complexity was specific to certain areas or if it represented a global failure of the brain’s networks.

As the magnetic pulse struck the cortex, the EEG electrodes recorded the brain’s immediate reaction. This recording captured the “echo” of the stimulation as it propagated through the neural circuits. The researchers then used a complex mathematical algorithm to analyze these echoes. They looked for the number of “state transitions,” which are shifts in the spatial pattern of the electrical activity. A higher number of state transitions indicates a more complex, integrated response, implying a healthier and more connected brain.
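To build rough intuition for what counting "state transitions" means, consider a toy version that binarizes each EEG channel and counts the moments when the joint on/off spatial pattern changes. This is an illustrative simplification only; the published state-transition variant of the perturbation complexity index involves additional steps such as dimensionality reduction and noise normalization.

```python
import numpy as np

def count_state_transitions(response, threshold=0.5):
    """Toy complexity score for a (channels x time) evoked response:
    binarize each channel, then count time steps where the joint
    spatial pattern changes. Not the published PCI algorithm."""
    binary = (np.abs(response) > threshold).astype(int)
    changed = np.any(np.diff(binary, axis=1) != 0, axis=0)
    return int(changed.sum())

t = np.linspace(0.0, 1.0, 400)

# "Complex" echo: many channels oscillating at different frequencies,
# standing in for a response that ripples across networks
complex_echo = np.vstack([np.sin(2 * np.pi * (3 + k) * t) for k in range(8)])

# "Simple" echo: a single brief, local deflection on one channel
simple_echo = np.zeros((8, t.size))
simple_echo[0, 100:130] = 2.0

print(count_state_transitions(complex_echo) > count_state_transitions(simple_echo))  # True
```

The widespread, long-lasting echo produces many pattern shifts, while the local pulse produces only a handful, mirroring the contrast the researchers expected between healthy and compromised brains.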

The analysis revealed a clear distinction between the two groups. The participants with Alzheimer’s disease displayed a reduced level of brain complexity compared to the healthy controls. The average complexity score for the Alzheimer’s group was 20.1. In contrast, the healthy group averaged 28.2. This downward shift suggests that the neural infrastructure required for high-level conscious thought is compromised in the disease.

The reduction in complexity was consistent regardless of which brain area was stimulated. The scores obtained from the motor cortex were nearly identical to those from the parietal lobe. This suggests that the loss of neural complexity in Alzheimer’s is a widespread, global phenomenon rather than a problem isolated to specific regions. The disease appears to affect the brain’s overall ability to sustain complex patterns of communication.

The researchers also examined whether these complexity scores correlated with standard clinical measures. They compared the EEG data to scores from the Montreal Cognitive Assessment, a paper-and-pencil test commonly used to screen for dementia.

Within the groups, there was no strong statistical relationship between a person’s cognitive test score and their brain complexity score. This lack of correlation implies that the magnetic stimulation technique measures a fundamental physiological state of the brain that is distinct from behavioral performance on a test.

“Despite their impaired conscious memory, individuals with Alzheimer’s disease may be able to use intact implicit, unconscious forms of memory, such as procedural memory (often termed ‘muscle memory’) to continue their daily routines at home,” Budson explains. He adds that when patients leave familiar settings, “their home routines are not helpful and their dysfunctional conscious memory can lead to disorientation and distress.”

There are caveats to these findings that warrant attention. While the difference between the groups was clear, the absolute scores raised questions. A surprising number of participants in both groups scored below the threshold typically used to define consciousness in coma studies. Specifically, 70 percent of the Alzheimer’s patients and 29 percent of the healthy volunteers fell into a range usually associated with unconsciousness or minimally conscious states.

This does not mean these individuals are unconscious. Instead, it indicates that the mathematical cutoffs established for traumatic brain injury may not directly apply to neurodegenerative diseases or aging populations. The metric likely exists on a spectrum. The physiological changes in an aging brain might lower the baseline for complexity without extinguishing consciousness entirely.

The study opens new paths for future research. Scientists can now explore how this loss of complexity relates to the progression of the disease. It may be possible to use this metric to track the transition from mild impairment to severe dementia. The lack of correlation with behavioral tests suggests that this method could provide an objective, biological way to assess brain function that does not rely on a patient’s ability to speak or follow instructions.

This perspective also informs potential therapeutic strategies. If the disease is viewed as a progressive loss of conscious processing, treatments could focus on maximizing the use of preserved unconscious systems. Therapies might emphasize habit formation and procedural learning to help patients maintain independence.

“This research opens the avenue for future studies in individuals with cortical dementia to examine the relationship between conscious processes, global measures of consciousness, and their underlying neuroanatomical correlates,” Budson says. The team hopes that future work will clarify the biological mechanisms driving this loss of complexity and lead to better diagnostic tools.

The study, “Evaluating Alzheimer’s disease with the TMS-EEG perturbation complexity index,” was authored by Brenna Hagan, Stephanie S. Buss, Peter J. Fried, Mouhsin M. Shafi, Katherine W. Turk, Kathy Y. Xie, Brandon Frank, Brice Passera, Recep Ali Ozdemir, and Andrew E. Budson.

Menopause is linked to reduced gray matter and increased anxiety

29 January 2026 at 01:00

New research suggests that menopause is accompanied by distinct changes in the brain’s structure and a notable increase in mental health challenges. While hormone replacement therapy appears to aid in maintaining reaction speeds, it does not seem to prevent the loss of brain tissue or alleviate symptoms of depression according to this specific dataset. These observations were published online in the journal Psychological Medicine.

Menopause represents a major biological transition marked by the cessation of menstruation and a steep decline in reproductive hormones. Women frequently report a variety of symptoms during this time, ranging from hot flashes to difficulties with sleep and mood regulation.

Many individuals turn to hormone replacement therapy to manage these physical and psychological obstacles. Despite the common use of these treatments, the medical community still has questions about how these hormonal shifts affect the brain itself. Previous research has yielded mixed results regarding whether hormone treatments protect the brain or potentially pose risks.

To clarify these effects, a team of researchers from the University of Cambridge undertook a large-scale analysis. Katharina Zuhlsdorff, a researcher in the Department of Psychology at the University of Cambridge, served as the lead author on the project.

She worked alongside senior author Barbara J. Sahakian and colleagues from the Departments of Psychiatry and Psychology. Their objective was to provide a clearer picture of how the end of fertility influences mental well-being, thinking skills, and the physical architecture of the brain.

The team utilized data from the UK Biobank, a massive biomedical database containing genetic and health information from half a million participants. For this specific investigation, they selected a sample of nearly 125,000 women.

The researchers divided these participants into three distinct groups to allow for comparison. These groups included women who had not yet gone through menopause, post-menopausal women who had never used hormone therapy, and post-menopausal women who were users of such therapies.

The investigation first assessed psychological well-being across the different groups. The data showed that women who had passed menopause reported higher levels of anxiety and depression compared to those who had not.

Sleep quality also appeared to decline after this biological transition. The researchers observed that women taking hormone replacement therapy actually reported more mental health challenges than those who did not take it. This group also reported higher levels of tiredness.

This result initially seemed counterintuitive, as hormone therapy is often prescribed to help with mood. To understand this, the authors looked backward at the medical history of the participants. They found that women prescribed these treatments were more likely to have had depression or anxiety before they ever started the medication. This suggests that doctors may be prescribing the hormones specifically to women who are already struggling with severe symptoms.

The study also tested how quickly the participants could think and process information. The researchers found that reaction times typically slow down as part of the aging process.

However, menopause seemed to speed up this decline in processing speed. In this specific domain, hormone therapy appeared to offer a benefit. Post-menopausal women taking hormones had reaction times that were faster than those not taking them, effectively matching the speeds of pre-menopausal women.

Zühlsdorff noted the nuance in these cognitive findings. She stated, “Menopause seems to accelerate this process, but HRT appears to put the brakes on, slowing the ageing process slightly.”

While reaction times varied, the study did not find similar differences in memory performance. The researchers administered tasks designed to test prospective memory, which is the ability to remember to perform an action later. They also used a digit-span task to measure working memory capacity. Across all three groups, performance on these memory challenges remained relatively comparable.

A smaller subset of about 11,000 women underwent magnetic resonance imaging scans to measure brain volume. The researchers focused on gray matter, the tissue containing the cell bodies of neurons. They specifically looked at regions involved in memory and emotional regulation. These included the hippocampus, the entorhinal cortex, and the anterior cingulate cortex.

The hippocampus is a seahorse-shaped structure deep in the brain that is essential for learning and memory. The entorhinal cortex functions as a gateway, channeling information between the hippocampus and the rest of the brain. The anterior cingulate cortex plays a primary role in managing emotions, impulse control, and decision-making.

The scans revealed that post-menopausal women had reduced gray matter volume in these key areas compared to pre-menopausal women. This reduction helps explain the higher rates of mood issues in this demographic. Unexpectedly, the group taking hormone therapy showed the lowest brain volumes of all. The treatment did not appear to prevent the loss of brain tissue associated with the end of reproductive years.

The specific regions identified in the study are often implicated in neurodegenerative conditions. Professor Barbara Sahakian highlighted the potential long-term importance of this observation. She explained, “The brain regions where we saw these differences are ones that tend to be affected by Alzheimer’s disease. Menopause could make these women vulnerable further down the line.”

While the sample size was large, the study design was observational rather than experimental. This means the researchers could identify associations but cannot definitively prove that menopause or hormone therapy caused the changes.

The UK Biobank population also tends to be wealthier and healthier than the general public, which may skew the results. Additionally, the study relied on self-reported data for some measures, which can introduce inaccuracies.

The finding regarding hormone therapy and lower brain volume is difficult to interpret without further research. It remains unclear if the medication contributes to the reduction or if the women taking it had different brain structures to begin with.

The researchers emphasize that more work is needed to disentangle these factors. Future studies could look at genetic factors or other health conditions that might influence how hormones affect the brain.

Despite these limitations, the research highlights the biological reality of menopause. It confirms that the transition involves more than just reproductive changes.

Christelle Langley emphasized the need for broader support systems. She remarked, “We all need to be more sensitive to not only the physical, but also the mental health of women during menopause, however, and recognise when they are struggling.”

The study, “Emotional and cognitive effects of menopause and hormone replacement therapy,” was authored by Katharina Zuhlsdorff, Christelle Langley, Richard Bethlehem, Varun Warrier, Rafael Romero Garcia, and Barbara J Sahakian.

Having a close friend with a gambling addiction increases personal risk, study finds

28 January 2026 at 23:00

Having a close relationship with someone who suffers from a gambling problem increases the likelihood that an individual will develop similar issues over time. A new longitudinal analysis published in the Journal of Gambling Studies has found that while strong family bonds can shield adults from this risk, close friendships do not appear to offer the same protection. These findings suggest that the social transmission of gambling behaviors operates differently depending on the nature of the relationship.

For decades, researchers have recognized that addiction often ripples through social networks. This phenomenon is well-documented in the study of alcohol and substance use. Scientists refer to this as the transmission of problem behavior. The impact of a person’s addiction extends beyond themselves, affecting family members, partners, and friends. In Finland, where this research took place, estimates suggest that approximately 20 percent of adults identify as “affected others” of someone else’s gambling. These individuals often bear significant emotional, financial, and health-related burdens.

Past inquiries into gambling transmission have predominantly focused on intergenerational transmission. Studies have frequently examined how parents influence their children or how peer pressure impacts adolescents. Far less is known about how these dynamics function among adults. It has remained unclear whether adult gambling is primarily an individual trait or a behavior continuously shaped by social interactions. The protective potential of different types of social connections has also been an open question.

Emmi Kauppila, a doctoral researcher at the Faculty of Social Sciences at Tampere University in Finland, led the new investigation. She collaborated with a team of scholars from the University of Helsinki, the University of Turku, and the University of Bath in the United Kingdom. The researchers sought to determine if exposure to problem gambling in adulthood predicts an increase in one’s own gambling severity. They also aimed to test whether having strong, supportive relationships could act as a buffer against this potential harm.

The team employed a longitudinal survey design to answer these questions. They recruited 1,530 adults residing in mainland Finland to participate in the study. The data collection spanned from April 2021 to September 2024. Participants completed surveys across eight separate waves, with each wave occurring at six-month intervals. This repeated-measures design allowed the scientists to track changes within specific individuals over time, rather than relying on a single snapshot of the population.

The researchers assessed gambling severity using the Problem Gambling Severity Index. This is a standard screening tool in which respondents answer questions about their gambling behaviors and consequences, yielding a total score from zero to 27. Higher scores indicate a greater risk of problem gambling. Participants also reported whether they had a family member or a close friend who had experienced gambling problems. To measure the quality of these relationships, the study used the Social and Emotional Loneliness Scale for Adults. This metric evaluated how connected to and supported by their families and friends the participants felt.

To analyze the data, the team used a statistical technique known as hybrid multilevel regression modeling. This method is particularly useful for longitudinal data. It allows researchers to distinguish between differences among people and changes that happen to a specific person. The model could determine if a person’s gambling habits changed during the specific six-month periods when they reported exposure to a problem gambler.
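The within/between split at the heart of a hybrid model can be sketched in a few lines. The toy panel below uses invented numbers and is not the authors' model (which also includes controls and random effects); it only illustrates how a predictor is decomposed so the within-person coefficient isolates change over time.

```python
import numpy as np
import pandas as pd

# Hypothetical toy panel: 3 people observed across 4 waves, with a binary
# exposure indicator and a gambling-severity score. Values are invented.
df = pd.DataFrame({
    "person": [1] * 4 + [2] * 4 + [3] * 4,
    "exposure": [0, 1, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0],
    "severity": [1, 3, 4, 2, 0, 1, 3, 4, 1, 1, 2, 1],
})

# Hybrid ("within-between") decomposition: split the predictor into each
# person's mean across waves (between-person part) and the wave-specific
# deviation from that mean (within-person part).
df["exposure_between"] = df.groupby("person")["exposure"].transform("mean")
df["exposure_within"] = df["exposure"] - df["exposure_between"]

# Regress severity on both parts; the within-person coefficient captures how
# a person's own score shifts in the waves when their exposure changes.
X = np.column_stack([
    np.ones(len(df)),
    df["exposure_between"],
    df["exposure_within"],
])
beta, *_ = np.linalg.lstsq(X, df["severity"].to_numpy(), rcond=None)
print(f"between-person effect: {beta[1]:.2f}, within-person effect: {beta[2]:.2f}")
```

Because the within-person deviations sum to zero for each participant, the within coefficient is driven only by change inside individuals, which is what lets the model link exposure during a specific six-month window to a change in that same person's severity score.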

The analysis revealed that exposure to problem gambling within a social circle predicted a rise in an individual’s own gambling issues. When a participant reported that a family member had a gambling problem, their own score on the severity index increased by a measurable margin. This “within-person” effect suggests that the change in the social environment directly influenced the individual’s behavior. A similar pattern was observed regarding friends. Individuals who had friends with gambling problems tended to have higher severity scores themselves.

However, a distinct difference emerged when the researchers examined the protective role of relationship quality. The data showed that positive family relationships moderated the risk. Participants who reported strong, supportive connections with their family members were less likely to see their gambling increase, even when a family member had a gambling problem. The emotional support and connectedness provided by the family unit appeared to act as a buffer. This suggests that a supportive family environment can mitigate the transmission of harmful behaviors.

The same protective effect was not found for friendships. Strong emotional bonds with friends did not reduce the risk of acquiring gambling problems from a peer. The analysis indicated that close friendships did not buffer the impact of exposure. In some cases, high-quality friendships with problem gamblers were associated with higher risks for the individual. The researchers propose several explanations for this discrepancy.

One possibility is that peer groups often normalize risky behaviors. If gambling is a shared activity among friends, it may be viewed as a standard form of social interaction. In such contexts, a close friendship might reinforce the behavior rather than discourage it. This mirrors findings in alcohol research, where “drinking buddies” may encourage consumption. The authors also suggest that individuals might select friends who share similar attitudes toward risk. Consequently, the social environment maintains the habit rather than disrupting it.

Another interpretation involves social withdrawal. People who are affected by a loved one’s gambling often experience shame or stigma. This can lead them to isolate themselves from broader social support networks. They might feel that friends would not understand their situation. This isolation can prevent friends from acting as a protective resource. In contrast, family members are often already embedded in the dynamic and may be better positioned to offer support or monitoring.

Richard Velleman, an emeritus professor at the University of Bath and co-author of the paper, highlighted the broader implications of these results. He stated, “It has long been known that alcohol-related problems run in families – this study demonstrates that this is also the case with gambling.” He noted the importance of recognizing the severity of the issue. Velleman added, “This is an important discovery, as many people don’t see gambling problems as equivalent to alcohol or drug problems, as gamblers don’t ‘ingest’ anything, yet gambling can equally lead to serious problems which cause serious harm to individuals and families.”

The findings support the idea that gambling harm is not solely an individual pathology. It is a systemic issue that clusters in social networks. Emmi Kauppila noted, “In this paper, we demonstrate that gambling-related problems cluster within families and close relationships in ways similar to alcohol- and other substance-related harms.” She emphasized that the mechanism of transmission involves “shared environments, stressors and social dynamics.”

This perspective suggests that prevention and treatment strategies need to evolve. Interventions that focus exclusively on the individual gambler may miss a vital component of the recovery process. The study advocates for family-oriented approaches. Therapies that include family members could help strengthen the protective bonds that buffer against transmission. By addressing the needs of “affected others,” clinicians may be able to break the cycle of harm.

Several limitations help contextualize the results. The research was conducted in Finland, a nation with a specific cultural relationship to gambling. Gambling is widely accepted in Finland and is integrated into the funding of the welfare state. This cultural normalization might influence how gambling behaviors are shared and perceived. The results might differ in countries with more restrictive gambling laws or different cultural attitudes.

Additionally, the study relied on participants to report the gambling problems of their family and friends. These reports reflect the participants’ perceptions and were not clinically verified diagnoses. It is possible that some participants overestimated or underestimated the severity of their loved ones’ problems. The data also did not specify which family member was the source of the exposure. The influence of a partner might differ from that of a parent or sibling. The sample size for specific family roles was too small to analyze separately.

Future research could benefit from a more granular approach. Identifying specific family roles would clarify the transmission dynamics. Verifying the gambling status of the social network members would also strengthen the evidence. Comparative studies in other countries would help determine if these patterns are universal or culture-specific.

Despite these caveats, the study provides robust evidence that adult gambling behavior is deeply intertwined with social relationships. It challenges the view of the solitary gambler. The people surrounding an individual play a role in either amplifying risk or providing protection. Recognizing the power of these social bonds may be key to developing more effective harm reduction strategies.

The study, “Problem Gambling Transmission. An Eight-wave Longitudinal Study on Problem Gambling Among Affected Others,” was authored by Emmi Kauppila, Sari Hautamäki, Iina Savolainen, Sari Castrén, Richard Velleman and Atte Oksanen.

The psychology behind why we pay to avoid uncertainty

28 January 2026 at 19:00

Most people are familiar with the feeling of anxiety while waiting for the result of a medical test or a job interview. A new study suggests that this feeling of dread is far more powerful than the excitement of looking forward to a positive outcome.

The research indicates that the intensity of this dread drives people to avoid risks and demand immediate results. This behavior explains why impatience and risk-avoidance often appear together in the same individuals. The findings were published in the journal Cognitive Science.

Economists have traditionally viewed risk-taking and patience as separate character traits. A person could theoretically be a daring risk-taker while also being very patient. However, researchers have frequently observed that these two traits tend to correlate. People who are unwilling to take risks are often the same people who are unwilling to wait for a reward.

Chris Dawson of the University of Bath and Samuel G. B. Johnson of the University of Waterloo sought to explain this connection. They proposed that the link lies in the emotions people feel while waiting for an outcome. They distinguished between the feelings experienced after an event occurs and the feelings experienced beforehand.

When an event happens, we feel “reactive” emotions. We feel pleasure when we win money or displeasure when we lose it. But before the event occurs, we experience “anticipatory” emotions. We might savor the thought of a win or dread the possibility of a loss.

The researchers hypothesized that these anticipatory emotions are not symmetrical. They suspected that the dread of a future loss is much stronger than the savoring of a future gain. If this is true, it would create a psychological cost to waiting.

To test this theory, Dawson and Johnson analyzed a massive dataset from the United Kingdom. They used the British Household Panel Survey and the Understanding Society study. These surveys followed approximately 14,000 individuals over a period spanning from 1991 to 2024.

The team needed a way to measure dread and savoring without asking participants directly. They developed a novel method using data on financial expectations and general well-being. The survey asked participants if they expected their financial situation to get better or worse over the next year.

The researchers then looked at how these expectations affected the participants’ current happiness. If a person expected to be worse off and their happiness dropped, that drop represented dread. If they expected to be better off and their happiness rose, that rise represented savoring.
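As a rough illustration of this inference, one could regress changes in well-being on indicators of financial expectations; the two coefficients then stand in for dread and savoring. The sketch below uses invented numbers purely for demonstration and is not the study's data or its full model.

```python
import numpy as np

# Toy data: each row is (expects_worse, expects_better, happiness_change).
# All values are invented to illustrate the asymmetry, not taken from the study.
data = np.array([
    [1, 0, -0.90], [1, 0, -0.70], [1, 0, -0.80],   # anticipating a loss
    [0, 1,  0.15], [0, 1,  0.10], [0, 1,  0.12],   # anticipating a gain
    [0, 0,  0.00], [0, 0,  0.02], [0, 0, -0.01],   # no change expected
])

# Regress happiness change on the two expectation indicators.
X = np.column_stack([np.ones(len(data)), data[:, 0], data[:, 1]])
beta, *_ = np.linalg.lstsq(X, data[:, 2], rcond=None)

dread = -beta[1]    # drop in happiness when expecting to be worse off
savoring = beta[2]  # rise in happiness when expecting to be better off
print(f"dread/savoring asymmetry: {dread / savoring:.1f}x")
```

With this toy data the drop associated with expecting a loss dwarfs the lift from expecting a gain, mirroring the kind of asymmetry the researchers report.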

The analysis revealed a dramatic imbalance between these two emotional states. The negative impact of anticipating a loss was more than six times stronger than the positive impact of anticipating a gain. This suggests that the human brain weighs future pain much more heavily than future pleasure.

The researchers also measured “reactive” emotions using the same method. They looked at how participants felt after they actually experienced a financial loss or gain. As expected, losses hurt more than gains felt good.

However, the imbalance in reactive emotions was much smaller than the imbalance in anticipatory emotions. Realized losses were about twice as impactful as realized gains, meaning the anticipatory asymmetry was roughly three times larger than the reactive one.

This finding implies that the waiting period itself is a major source of distress. The researchers describe this phenomenon as “dread aversion.” It is distinct from the more famous concept of loss aversion.

The study then connected these emotional patterns to economic preferences. The survey included questions about the participants’ willingness to take risks in general. It also measured their patience through a delayed gratification scale.

The results showed a strong correlation between high levels of dread and risk-avoidance. People who experienced intense dread were much less likely to take risks. This makes sense within the researchers’ framework.

Taking a gamble creates a situation where a negative outcome is possible. This possibility triggers dread. By avoiding the risk entirely, the individual removes the source of the dread.

The results also showed a strong connection between dread and impatience. People who felt high levels of dread were less willing to wait for rewards. This also aligns with the researchers’ model.

Waiting for an uncertain outcome prolongs the experience of dread. A person who hates waiting may simply be trying to shorten the time they spend feeling anxious. They choose immediate rewards to stop the emotional wheel from spinning.

The study found that savoring plays a much smaller role in decision-making. The pleasure of imagining a good outcome is generally weak. This may be because positive anticipation is often mixed with the fear that the good event might not happen.

The authors checked to see if these results were simply due to personality traits. For example, a person with high neuroticism might naturally be both anxious and risk-avoidant. The researchers controlled for the “Big Five” personality traits in their analysis.

Even after accounting for neuroticism and other traits, the effect of dread remained. This suggests that the asymmetry of anticipatory emotions is a distinct psychological mechanism. It is not just a symptom of being a generally anxious person.

This research offers a unified explanation for economic behavior. It suggests that risk preferences and time preferences are not independent. They are both shaped by the desire to manage anticipatory emotions.

The authors use the analogy of a roulette wheel to explain their findings. When a person bets on roulette, they are not just weighing the odds of winning or losing. They are also deciding if they can endure the feeling of watching the wheel spin.

If the dread of losing is overwhelming, the person will not bet at all. If they do bet, they will want the wheel to stop as quickly as possible. The act of betting creates a stream of emotional discomfort that lasts until the result is known.

There are some limitations to this study. It relies on observational data rather than a controlled experiment. The researchers inferred emotions from survey responses rather than measuring them physiologically.

Additionally, the study assumes that changes in well-being are caused by financial expectations. It is possible that other unmeasured factors influenced both happiness and expectations. However, the use of longitudinal data helps to account for stable individual differences.

The findings have implications for various sectors. In healthcare, patients might avoid screening tests because the dread of a bad result outweighs the benefit of knowing. Reducing the waiting time for results could encourage more people to get tested.

In finance, investors might choose low-return savings accounts over stocks to avoid the anxiety of market fluctuations. This “dread premium” could explain why safe assets are often overvalued. Investors pay a price for emotional tranquility.

Future research could investigate how to modify these anticipatory emotions. If people can learn to reduce their dread, they might make better long-term decisions. Techniques from cognitive behavioral therapy could potentially help investors and patients manage their anxiety.

The study provides a new lens through which to view human irrationality. We often make choices that look bad on paper because we are optimizing for our current emotional state. We are willing to pay a high price to avoid the shadow of the future.

The study, “Asymmetric Anticipatory Emotions and Economic Preferences: Dread, Savoring, Risk, and Time,” was authored by Chris Dawson and Samuel G. B. Johnson.
