
Why a quest for a psychologically rich life may lead us to choose unpleasant experiences

New research suggests that the desire for a psychologically rich life, one filled with varied and perspective-altering experiences, is a significant driver of people’s choices to pursue intentionally unpleasant or challenging activities. The series of studies, published in the journal Psychology & Marketing, indicates that this preference is largely fueled by a motivation for personal growth.

Researchers have long been interested in why people sometimes opt for experiences that are not traditionally pleasurable, such as watching horror movies, eating intensely sour foods, or enduring grueling physical challenges. This behavior, known as counterhedonic consumption, seems to contradict the basic human drive to seek pleasure and avoid pain. While previous explanations have pointed to factors like sensation-seeking or a desire to accumulate a diverse set of life experiences, researchers proposed a new motivational framework to explain this phenomenon.

They theorized that some individuals are driven by a search for psychological richness, a dimension of well-being distinct from happiness or a sense of meaning. A psychologically rich life is characterized by novelty, complexity, and experiences that shift one’s perspective. The researchers hypothesized that this drive could lead people to embrace discomfort, not for the discomfort itself, but for the personal transformation and growth such experiences might offer.

To investigate this idea, the researchers conducted a series of ten studies involving a total of 2,275 participants. In an initial study, participants were presented with a poster for a haunted house pass and asked how likely they would be to try it. They also completed questionnaires measuring their desire for a psychologically rich life, as well as their desire for a happy or meaningful life and their tendency toward sensation-seeking.

The results showed a positive relationship between the search for psychological richness and a preference for the haunted house experience. This connection remained even when accounting for the other factors.
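As a rough illustration of what “accounting for the other factors” involves, the analysis can be thought of as a multiple regression in which psychological richness predicts interest while the other motives enter as covariates. The sketch below uses simulated data, and all variable names are illustrative rather than the authors’ own.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Simulated trait scores (illustrative only, not the study's data)
richness = rng.normal(size=n)    # search for psychological richness
happiness = rng.normal(size=n)   # desire for a happy life
meaning = rng.normal(size=n)     # desire for a meaningful life
sensation = rng.normal(size=n)   # sensation-seeking

# Interest in the haunted house, partly driven by richness in this toy data
interest = 0.5 * richness + 0.2 * sensation + rng.normal(size=n)

# Regress interest on richness while holding the other motives constant
X = sm.add_constant(np.column_stack([richness, happiness, meaning, sensation]))
fit = sm.OLS(interest, X).fit()
print(fit.summary(xname=["const", "richness", "happiness", "meaning", "sensation"]))
```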

To see if this finding extended beyond fear-based activities, a subsequent study presented participants with a detailed description of an intensely sour chicken dish. Again, individuals who scored higher on the scale for psychological richness expressed a greater likelihood of ordering the dish.

A third study solidified these findings in a choice-based scenario, asking participants to select between a “blissful garden” experience and a “dark maze” designed to be disorienting. Those with a stronger desire for psychological richness were more likely to choose the dark maze, a finding that held even after controlling for general risk-taking tendencies.

Having established a consistent link, the research team sought to determine causality. In another experiment, they temporarily prompted one group of participants to focus on psychological richness by having them write about what it means to make choices based on a desire for interesting and perspective-changing outcomes. A control group wrote about their daily life. Afterward, both groups were asked about their interest in a horror movie streaming service.

The group primed to think about psychological richness showed a significantly higher preference for the service, suggesting that this mindset can directly cause an increased interest in counterhedonic experiences.

The next step was to understand the psychological process behind this link. The researchers proposed that a focus on self-growth was the key mechanism. One study tested this by again presenting the sour food scenario and then asking participants to what extent their choice was motivated by a desire for self-discovery and personal development. A statistical analysis revealed that the desire for self-growth fully explained the connection between a search for psychological richness and the preference for the sour dish.
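Mediation of this kind is conventionally examined by comparing the total effect of the predictor with its direct effect once the mediator enters the model. The sketch below illustrates that logic on simulated data; it is a simplified stand-in, not the authors’ procedure, and all variable names are made up.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 400

# Simulated scores (illustrative only)
richness = rng.normal(size=n)                        # predictor: search for richness
self_growth = 0.6 * richness + rng.normal(size=n)    # proposed mediator
preference = 0.5 * self_growth + rng.normal(size=n)  # outcome: sour-dish preference

# Total effect of richness (c path), ignoring the mediator
total = sm.OLS(preference, sm.add_constant(richness)).fit()

# Direct effect of richness (c' path) once self-growth is in the model
direct = sm.OLS(preference, sm.add_constant(np.column_stack([richness, self_growth]))).fit()

print("total effect  c :", round(total.params[1], 3))
print("direct effect c':", round(direct.params[1], 3))
print("indirect  c - c':", round(total.params[1] - direct.params[1], 3))
```

In this framing, “fully explained” corresponds to the direct effect shrinking toward zero once self-growth is included; formal tests usually bootstrap the indirect effect rather than rely on this simple difference.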

To ensure self-growth was the primary driver, another study tested it against an alternative explanation: the desire to create profound memories. While a rich life might involve creating interesting stories to tell, the results showed that self-growth was the significant factor explaining the choice for the sour dish, whereas the desire for profound memories was not.

Further strengthening the causal claim, another experiment first manipulated participants’ focus on psychological richness and then measured their self-growth motivation. The results showed that the manipulation increased a focus on self-growth, which in turn increased the preference for the counterhedonic food item.

A final, more nuanced experiment provided further support for the self-growth mechanism. In this study, the researchers manipulated self-growth motivation directly. One group was asked to write about making choices that foster personal growth, while a control group was not. In the control condition, the expected pattern emerged: people higher in the search for psychological richness were more interested in the sour dish.

However, in the group where self-growth was made salient, preferences for the sour dish increased across the board. This effectively reduced the predictive power of a person’s baseline level of psychological richness, indicating that when the need for self-growth is met, the underlying trait becomes less of a deciding factor.

The research has some limitations. Many of the studies relied on hypothetical scenarios and self-reported preferences, which may not perfectly reflect real-world consumer behavior. The researchers suggest that future work could use field experiments to observe actual choices in natural settings. They also note that cultural differences could play a role, as some cultures may place a higher value on experiences of discomfort as a pathway to wisdom or personal development. Exploring these boundary conditions could provide a more complete picture of this motivational system.

The study, “The Allure of Pain: How the Quest for Psychological Richness Drives Counterhedonic Consumption,” was authored by Sarah Su Lin Lee, Ritesh Saini, and Shashi Minchael.

Depression may lead to cognitive decline via social isolation

An analysis of the China Health and Retirement Longitudinal Study data found that individuals with more severe depressive symptoms tend to report higher levels of social isolation at a later time point. In turn, individuals who are more socially isolated tend to report slightly worse cognitive functioning. Analyses showed that social isolation mediates a small part of the link between depressive symptoms and worse cognitive functioning. The paper was published in the Journal of Affective Disorders.

Depression is a mental health disorder characterized by persistent sadness, loss of interest or pleasure, and feelings of hopelessness that interfere with daily functioning. It adversely affects the way a person thinks, feels, and behaves. It can lead to difficulties in work, relationships, and self-care.

People with depression may experience fatigue, changes in appetite, and sleep disturbances. Concentration and decision-making can become harder, reducing productivity and motivation. Physical symptoms such as pain, headaches, or digestive issues may also appear without clear medical causes.

Depression can diminish the ability to enjoy previously pleasurable activities, leading to social withdrawal. This isolation can worsen depressive symptoms, creating a cycle of loneliness and despair. Social isolation itself is both a risk factor for developing depression and a common consequence of it.

Study author Jia Fang and her colleagues note that depressed individuals also tend to show worse cognitive functioning. They conducted a study aiming to explore the likely causal direction underpinning the longitudinal association between depressive symptoms and cognitive decline, and the possible mediating role of social isolation in this link, among Chinese adults aged 45 years and above. These authors hypothesized that social isolation mediates the association between depressive symptoms and cognitive function.

The study authors analyzed data from the China Health and Retirement Longitudinal Study (CHARLS), a nationally representative longitudinal survey of Chinese residents aged 45 and above. The analysis used CHARLS data from three waves collected in 2013, 2015, and 2018, covering a total of 9,220 participants; 51.4% were women, and participants’ average age was 58 years.

The authors of the study used data on participants’ depressive symptoms (the 10-item Center for Epidemiologic Studies Depression Scale), social isolation, and cognitive function (assessed with tests of episodic memory and mental intactness). A social isolation score was calculated based on four factors: being unmarried (single, separated, divorced, or widowed), living alone, having less than weekly contact with children (in person, via phone, or email), and not participating in any social activities in the past month.
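In practice, a score like this is a simple count of four binary indicators. The sketch below shows that arithmetic on made-up records; the column names are hypothetical, not actual CHARLS variables.

```python
import pandas as pd

# Hypothetical respondent records; column names are illustrative, not CHARLS variable names
df = pd.DataFrame({
    "married": [True, False, True],
    "lives_alone": [False, True, False],
    "weekly_child_contact": [True, False, True],        # in person, by phone, or by email
    "social_activity_past_month": [False, False, True],
})

# One point for each indicator of isolation, giving a 0 (least) to 4 (most) score
df["isolation_score"] = (
    (~df["married"]).astype(int)
    + df["lives_alone"].astype(int)
    + (~df["weekly_child_contact"]).astype(int)
    + (~df["social_activity_past_month"]).astype(int)
)
print(df["isolation_score"].tolist())  # [1, 4, 0]
```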

Results showed that depressive symptoms were associated with subsequent social isolation. Social isolation, in turn, was associated with subsequent worse cognitive functioning. Further analyses showed that social isolation partially mediated the link between depressive symptoms and cognitive functioning, explaining 3.1% of the total effect.

The study authors concluded that the association between depressive symptoms and cognitive function is partially mediated by social isolation. They suggest that public health initiatives targeting depressive symptoms in older adults could reduce social isolation and help maintain cognitive health in middle-aged and older adults in China.

The study sheds light on the nature of the link between depressive symptoms and cognitive functioning. However, it should be noted that the design of the study does not allow definitive causal inferences to be derived from these results. Additionally, social isolation was assessed through self-reports, leaving room for reporting bias to have affected the results. Finally, the reported mediation effect was very modest in size, indicating that the link between depression and cognitive functioning depends much more on factors other than social isolation.

The paper, “Social isolation mediates association between depressive symptoms and cognitive function: Evidence from China Health and Retirement Longitudinal Study,” was authored by Jia Fang, Wencan Cheng, Huiyuan Li, Chen Yang, Ni Zhang, Baoyi Zhang, Ye Zhang, and Meifen Zhang.

New research explores why being single is linked to lower well-being in two different cultures

A new study finds that single adults in both the United States and Japan report lower well-being than their married peers. The research suggests that the influence of family support and strain on this health and satisfaction gap differs significantly between the two cultures. The findings were published in the journal Personal Relationships.

Researchers conducted this study to better understand the experiences of single adults outside of Western contexts. Much of the existing research has focused on places like the United States, where singlehood is becoming more common and accepted. In these individualistic cultures, some studies suggest single people may even have stronger connections with family and friends than married individuals.

However, in many Asian cultures, including Japan, marriage is often seen as a more essential part of life and family. This can create a different set of social pressures for single people. The researchers wanted to investigate whether these cultural differences would alter how family relationships, both positive and negative, are connected to the well-being of single and married people in the U.S. and Japan.

“I’ve always been curious about relationship transitions, and singlehood lies in this awkward space where people are unsure if it really counts as an actual ‘relationship stage’ per se,” said study author Lester Sim, an assistant professor of psychology at Singapore Management University.

“Fortunately, the field is starting to recognize singlehood as an important period and it’s becoming more common, yet people still seem to judge singles pretty harshly. I find that kind of funny in a way, because it often reflects how we judge ourselves through others. Coming from an Asian background, I also wondered if these attitudes toward singlehood might play out differently across cultures, especially since family ties are so central in Asian contexts. That curiosity really sparked this project.”

To explore this, the research team analyzed data from two large, nationally representative studies: the Midlife in the U.S. (MIDUS) study and the Midlife in Japan (MIDJA) study. The combined sample included 4,746 participants who were 30 years of age or older. The researchers focused specifically on individuals who identified as either “married” or “never married,” and they took additional steps to exclude participants who were in a cohabiting or romantic relationship despite being unmarried.

Participants in both studies answered questions at two different points in time. The first wave of data included their marital status, their perceptions of family support, and their experiences of family strain. Family support was measured with items asking how much they felt their family cared for them or how much they could open up to family about their worries. Family strain was assessed with questions about how often family members criticized them or let them down.

At the second wave of data collection, participants reported on their well-being. This included rating their overall physical health on a scale from 0 to 10 and their satisfaction with life through a series of six questions about different life domains. The researchers then used a statistical approach to see how marital status at the first time point was related to well-being at the second time point, and whether family support and strain helped explain that relationship.

Across the board, the results showed that single adults in both the United States and Japan reported poorer physical health and lower life satisfaction compared to their married counterparts. This finding aligns with a large body of previous research suggesting that marriage is generally associated with better health outcomes.

When the researchers examined the role of family dynamics, they found distinct patterns in each country. For American participants, being married was associated with receiving more family support and experiencing less family strain. Both of these family factors were, in turn, linked to higher well-being. This suggests that for Americans, the well-being advantage of being married is partially explained by having more supportive and less tense family relationships.

The pattern observed in the Japanese sample was quite different. Single Japanese adults did report experiencing more family strain than married Japanese adults. Yet, this higher level of family strain did not have a significant connection to their physical health or life satisfaction later on.

“Family relationships matter a lot for everyone, whether you’re single or married, but in different ways across cultures,” Sim told PsyPost. “We found that singles in both the US and Japan reported lower well-being, in part because they experienced more family strain and less support (differentially across cultures). So even though singlehood is becoming more common, it still carries social and emotional costs. I think this shows how important it is to build more inclusive environments where singles feel equally supported and valued.”

Another notable finding from the Japanese sample was that there was no significant difference in the amount of family support reported by single and married individuals. While family support did predict higher life satisfaction for Japanese participants, it did not serve as a pathway explaining the well-being gap between single and married people in the way it did for Americans.

“I honestly thought the patterns would differ more across cultures,” Sim said. “I expected singles in Western countries to feel more accepted, and singles in Asia to rely more on family support and report greater strain; but neither of the latter findings turned out to be the case. It seems that, across the board, social norms around marriage still shape how people experience singlehood and well-being.”

The researchers acknowledged some limitations of their work. The definition of “single” was based on available survey questions and could be refined in future studies with more direct inquiries about relationship status.

“We focused only on familial support and strain because family is such a big part of East Asian culture,” Sim noted. “But singlehood is complex: friendships, loneliness, voluntary versus involuntary singlehood, and how satisfied people feel being single all matter too. We didn’t examine these constructs in the current study because there is existing work on this topic, so I wanted to bring more focus onto the family (especially with the cross-cultural focus). Future work should dig into those other layers and examine how they interact to shape the singlehood experience.”

It would also be beneficial to explore these dynamics across different age groups, as the pressures and supports related to marital status may change over a person’s lifespan. Such work would help create a more comprehensive picture of how singlehood is experienced around the world.

“I want to keep exploring how culture shapes the meanings people attach to relationships and singlehood,” Sim explained. “Long term, I hope this work helps shift the narrative away from the idea that marriage is the default route to happiness, and shift toward recognizing that there are many valid ways to live a good life.”

“Being single isn’t a problem to be fixed. It’s a meaningful, often intentional part of many people’s lives. The more we understand that, the closer we get to supporting well-being for everyone, not just those who are married.”

The study, “Cross-Cultural Differences in the Links Between Familial Support and Strain in Married and Single Adults’ Well-Being,” was authored by Lester Sim and Robin Edelstein.

“Major problem”: Ketamine fails to outperform placebo for treating severe depression in new clinical trial

A new clinical trial has found that adding repeated intravenous ketamine infusions to standard care for hospitalized patients with serious depression did not provide a significant additional benefit. The study, which compared ketamine to a psychoactive placebo, suggests that previous estimates of the drug’s effectiveness might have been influenced by patient and clinician expectations. These findings were published in the journal JAMA Psychiatry.

Ketamine, originally developed as an anesthetic, has gained attention over the past two decades for its ability to produce rapid antidepressant effects in individuals who have not responded to conventional treatments. Unlike standard antidepressants that can take weeks to work, a single infusion of ketamine can sometimes lift mood within hours. A significant drawback, however, is that these benefits are often short-lived, typically fading within a week.

This has led to the widespread practice of administering a series of infusions to sustain the positive effects. A central challenge in studying ketamine is its distinct psychological effects, such as feelings of dissociation or detachment from reality. When compared to an inactive placebo like a saline solution, it is very easy for participants and researchers to know who received the active drug, potentially creating strong expectancy effects that can inflate the perceived benefits.

To address this, the researchers designed their study to use an “active” placebo, a drug called midazolam, which is a sedative that produces noticeable effects of its own, making it a more rigorous comparison.

“Ketamine has attracted a lot of interest as a rapidly-acting antidepressant but it has short-lived effects. Therefore, its usefulness is quite limited. Despite this major limitation, ketamine is increasingly being adopted as an off-label treatment for depression, especially in the USA,” said study author Declan McLoughlin, a professor at Trinity College Dublin.

“We hypothesized that repeated ketamine infusions may have more sustained benefit. So far this has been evaluated in only a small number of trials. Another problem is that few ketamine trials have used an adequate control condition to mask the obvious dissociative effects of ketamine, e.g. altered consciousness and perceptions of oneself and one’s environment.”

“To try address some of these issues, we conducted an independent investigator-led randomized trial (KARMA-Dep 2) to evaluate antidepressant efficacy, safety, cost-effectiveness, and quality of life during and after serial ketamine infusions when compared to a psychoactive comparison drug midazolam. Trial participants were randomized to receive up to eight infusions of either ketamine or midazolam, given over four weeks, in addition to all other aspects of usual inpatient care.”

The trial, conducted at an academic hospital in Dublin, Ireland, aimed to see if adding twice-weekly ketamine infusions to the usual comprehensive care provided to inpatients could improve depression outcomes. Researchers enrolled adults who had been voluntarily admitted to the hospital for moderate to severe depression. These participants were already receiving a range of treatments, including medication, various forms of therapy, and psychoeducation programs.

In this randomized, double-blind study, 65 participants were assigned to one of two groups. One group received intravenous ketamine infusions twice a week for up to four weeks, while the other group received intravenous midazolam on the same schedule. The doses were calculated based on body weight. The double-blind design meant that neither the patients, the clinicians rating their symptoms, nor the main investigators knew who was receiving which substance. Only the anesthesiologist administering the infusion knew the assignment, ensuring patient safety without influencing the results.

The primary measure of success was the change in participants’ depression scores, assessed using a standard clinical tool called the Montgomery-Åsberg Depression Rating Scale. This assessment was conducted at the beginning of the study and again 24 hours after the final infusion. The researchers also tracked other outcomes, such as self-reported symptoms, rates of response and remission, cognitive function, side effects, and overall quality of life.

After analyzing the data from 62 participants who completed the treatment phase, the study found no statistically significant difference in the main outcome between the two groups. Although patients in both groups showed improvement in their depressive symptoms during their hospital stay, the group receiving ketamine did not fare significantly better than the group receiving midazolam. The average reduction in depression scores was only slightly larger in the ketamine group, a difference that was small and could have been due to chance.

Similarly, there were no significant advantages for ketamine on secondary measures, including self-reported depression symptoms, cognitive performance, or long-term quality of life. While the rate of remission from depression was slightly higher in the ketamine group (about 44 percent) compared to the midazolam group (30 percent), this difference was not statistically robust. The treatments were found to be generally safe, though ketamine produced more dissociative experiences during the infusion, while midazolam produced more sedation.

“We found no significant difference between the two groups on our primary outcome measure (i.e. depression severity assessed with the commonly used Montgomery-Åsberg Depression Rating Scale (MADRS)),” McLoughlin told PsyPost. “Nor did we find any difference between the two groups on any other secondary outcome or cost-effectiveness measure. Under rigorous clinical trial conditions, adjunctive ketamine provided no additional benefit to routine inpatient care during the initial treatment phase or the six-month follow-up period.”

A key finding emerged when the researchers checked how well the “blinding” had worked. They discovered that it was not very successful. From the very first infusion, the clinicians rating patient symptoms were able to guess with high accuracy who was receiving ketamine.

Patients in the ketamine group also became quite accurate at guessing their treatment over time. This functional unblinding complicates the interpretation of the results, as the small, nonsignificant trend favoring ketamine could be explained by the psychological effect of knowing one is receiving a treatment with a powerful reputation.

“Our initial hypothesis was that repeated ketamine infusions for people hospitalised with depression would improve mood outcomes,” McLoughlin said. “However, contrary to our hypothesis, we found this not to be the case. We suspect that functional unblinding (due to its obvious dissociative effects) has amplified the placebo effects of ketamine in previous trials. This is a major, often unacknowledged, problem with many recent trials in psychiatry evaluating ketamine, psychedelic, and brain stimulation therapies. Our trial highlights the importance of reporting the success, or lack thereof, of blinding in clinical trials.”

The study’s authors acknowledged some limitations. The research was unable to recruit its planned number of participants, partly due to logistical challenges created by the COVID-19 pandemic. This smaller sample size reduced the study’s statistical power, making it harder to detect a real, but modest, difference between the treatments if one existed. The primary limitation, however, remains the challenge of blinding.

The results from this trial suggest that when tested under more rigorous conditions, the antidepressant benefit of repeated ketamine infusions may be smaller than suggested by earlier studies that used inactive placebos. The researchers propose that expectations for both patients and clinicians may play a substantial role in ketamine’s perceived effects. This highlights the need to recalibrate expectations for ketamine in clinical practice and for more robustly designed trials in psychiatry.

Looking forward, the researchers emphasize the importance of reporting negative or null trial results to provide a balanced view of a treatment’s capabilities. They also expressed concern about a separate issue in the field: the promotion of ketamine as an equally effective alternative to electroconvulsive therapy, or ECT.

“Scrutiny of the scientific literature shows that this includes methodologically flawed trials and invalid meta-analyses,” McLoughlin said. “We discuss this in some detail in a Comment piece just published in Lancet Psychiatry. Unfortunately, such errors have been accepted as scientific evidence and are already creeping into international clinical guidelines. There is thus a real risk of patients and clinicians being steered towards a less effective treatment, particularly for patients with severe, sometimes life-threatening, depression.”

The study, “Serial Ketamine Infusions as Adjunctive Therapy to Inpatient Care for Depression: The KARMA-Dep 2 Randomized Clinical Trial,” was authored by Ana Jelovac, Cathal McCaffrey, Masashi Terao, Enda Shanahan, Emma Whooley, Kelly McDonagh, Sarah McDonogh, Orlaith Loughran, Ellie Shackleton, Anna Igoe, Sarah Thompson, Enas Mohamed, Duyen Nguyen, Ciaran O’Neill, Cathal Walsh, and Declan M. McLoughlin.

Perceiving these “dark” personality traits in a partner strongly predicts relationship dissatisfaction

A new study suggests that higher levels of psychopathic traits are associated with lower relationship satisfaction in romantic couples. The research indicates that a person’s perception of their partner’s traits is a particularly strong predictor of their own discontent within the relationship. The findings were published in the Journal of Couple & Relationship Therapy.

The research team was motivated by the established connection between personality and the quality of romantic relationships. While traits like agreeableness and conscientiousness are known to support relationship satisfaction, maladaptive traits, such as those associated with psychopathy, are understood to be detrimental. Psychopathy is not a single trait but a combination of characteristics, including interpersonal manipulation, a callous lack of empathy, an erratic lifestyle, and antisocial tendencies.

Previous studies have shown that individuals with more pronounced psychopathic traits tend to prefer short-term relationships, are more likely to be unfaithful, and may engage in controlling or destructive behaviors. Yet, much of this research did not simultaneously account for the perspectives of both partners in a relationship. The researchers aimed to provide a more nuanced understanding by examining how both a person’s own traits and their partner’s traits, as viewed by themselves and by their partner, collectively influence relationship satisfaction.

To investigate these dynamics, the researchers recruited a sample of 85 heterosexual couples from the Netherlands. The participants were predominantly young adults, many of whom were students. Each member of the couple independently completed a series of online questionnaires. The surveys were designed to measure their own psychopathic traits, their perception of their partner’s psychopathic traits, and their overall satisfaction with their relationship.

For measuring psychopathic traits, the study used a well-established questionnaire that assesses three primary facets: Interpersonal Manipulation (e.g., being charming but deceptive), Callous Affect (e.g., lacking guilt or empathy), and Erratic Lifestyle (e.g., impulsivity and irresponsibility). A fourth facet, Antisocial Tendencies, was excluded from the final analysis due to statistical unreliability within this specific sample. Participants completed one version of this questionnaire about themselves and a modified version about their romantic partner.

The researchers used a specialized statistical technique called the Actor-Partner Interdependence Model to analyze the data. This method is uniquely suited for studying couples because it can distinguish between two different kinds of influence. “Actor effects” refer to the association between an individual’s own characteristics and their own outcomes. For example, it can measure how your self-rated manipulativeness relates to your own relationship satisfaction. “Partner effects” describe the association between an individual’s characteristics and their partner’s outcomes, such as how your self-rated manipulativeness relates to your partner’s satisfaction.
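The Actor-Partner Interdependence Model is normally estimated jointly for both partners with structural equation or multilevel software. Its core logic can nonetheless be sketched as a pair of regressions on simulated couple data, as below; the names and coefficients are illustrative, not the study’s.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n_couples = 85

# Simulated self-rated manipulation scores for each partner (illustrative only)
man_trait = rng.normal(size=n_couples)
woman_trait = rng.normal(size=n_couples)

# Satisfaction depends on one's own trait (actor effect) and the partner's trait (partner effect)
man_satisfaction = -0.3 * man_trait - 0.2 * woman_trait + rng.normal(size=n_couples)
woman_satisfaction = -0.3 * woman_trait - 0.2 * man_trait + rng.normal(size=n_couples)

X_men = sm.add_constant(np.column_stack([man_trait, woman_trait]))
X_women = sm.add_constant(np.column_stack([woman_trait, man_trait]))

print("men   (actor, partner):", np.round(sm.OLS(man_satisfaction, X_men).fit().params[1:], 2))
print("women (actor, partner):", np.round(sm.OLS(woman_satisfaction, X_women).fit().params[1:], 2))
```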

Before conducting the main analysis, the researchers examined how partners’ ratings related to one another. They found very little “actual similarity,” meaning that a man’s level of psychopathic traits was not significantly related to his female partner’s level. However, they did find moderate “perceptual accuracy,” which means that how a person rated their partner was generally in line with how that partner rated themselves. There was also strong “perceptual similarity,” indicating that people tended to rate their partners in a way that was similar to how they rated themselves.

One notable preliminary finding was that both men and women tended to rate their partners as having lower levels of psychopathic traits than their partners reported for themselves. This could suggest a positive bias, where individuals maintain a more charitable view of their partner, or it may indicate that certain maladaptive traits are not easily observable to others in a relationship.

The central findings of the study emerged from the Actor-Partner Interdependence Model. The most consistent result was a negative actor effect related to partner perception. When an individual rated their partner higher on psychopathic traits, that same individual reported lower satisfaction with the relationship. This connection was present for both men and women and held true across the total psychopathy score and its specific facets.

The study also identified other significant associations. For both men and women, rating oneself higher on Interpersonal Manipulation was linked to lower satisfaction in one’s own relationship. This suggests that a manipulative style may be unfulfilling even for the person exhibiting it.

A partner effect was observed for the trait of Callous Affect. When a person was perceived by their partner as being more callous, unemotional, and lacking in empathy, that partner reported lower relationship satisfaction. This highlights the direct interpersonal damage that a lack of emotional connection can inflict on a relationship.

In an unexpected turn, the analysis revealed one positive association. When women rated themselves as higher in Callous Affect, their male partners reported slightly higher levels of relationship satisfaction. The researchers propose that this could be related to gender stereotypes, where traits that might be labeled as callous in a clinical sense could be interpreted differently, perhaps as toughness or independence, in women by their male partners.

The study has some limitations that the authors acknowledge. The sample consisted of young, primarily student-based, heterosexual couples in relatively short-term relationships, which may not represent the dynamics in older, married, or more diverse couples. Because the study captured data at a single point in time, it cannot establish causality; it shows an association, not that psychopathic traits cause dissatisfaction. The sample size also meant the study was better equipped to detect medium-to-large effects, and smaller but still meaningful associations might have been missed.

Future research could build on these findings by studying larger and more diverse populations over a longer period. Following couples over time would help clarify how these personality dynamics affect relationship quality and stability as the relationship matures. A longitudinal approach could also determine if these traits predict relationship dissolution.

The study, “Psychopathic Traits and Relationship Satisfaction in Intimate Partners: A Dyadic Approach,” was authored by Frederica M. Martijn, Liam Cahill, Mieke Decuyper, and Katarzyna (Kasia) Uzieblo.

What scientists found when they analyzed 187 of Donald Trump’s shrugs

A new study indicates that Donald Trump’s frequent shrugging is a deliberate communication tool used to establish common ground with his audience and express negative evaluations of his opponents and their policies. The research, published in the journal Visual Communication, suggests these gestures are a key component of his populist performance style, helping him appear both ordinary and larger-than-life.

Researchers have become increasingly interested in the communication style of right-wing populism, which extends beyond spoken words to include physical performance. While a significant amount of analysis has focused on Donald Trump’s language, particularly on social media platforms, his live performances at rallies have received less systematic attention. The body is widely recognized as being important to political performance, but the specific gestures used are not always well understood.

This new research on shrugging builds on a previous study by one of the authors that examined Trump’s use of pointing gestures. That analysis found that Trump uses different kinds of points to serve distinct functions, such as pointing outwards to single out opponents, pointing inwards to emphasize his personal commitment, and pointing downwards to connect his message to the immediate location of his audience. The current study continues this investigation into his non-verbal communication by focusing on another of his signature moves, the shrug.

“The study was motivated by several factors,” explained Christopher Hart, a professor of linguistics at Lancaster University and the author of Language, Image, Gesture: The Cognitive Semiotics of Politics.

“(1) Political scientists frequently refer to the more animated bodily performance of right wing populist politicians like Trump compared to non-populist leaders. We wanted to study one gesture – the shrug – that seemed to be implicated here. (2) Trump’s shrug gestures have been noted by the media previously and described as his ‘signature move’. We wanted to study this gesture in more detail to examine its precise forms and the way he uses it to fulfil rhetorical goals.”

“(3) To meet a gap: while a great deal has been written about Donald Trump’s speech and his use of language online, much less has been written about the gestures that accompany his speech in live settings. This is despite the known importance of gesture in political communication.”

To conduct their analysis, the researchers examined video footage of two of Trump’s campaign rallies from the 2016 primary season. The events, one in Dayton, Ohio, and the other in Buffalo, New York, amounted to approximately 110 minutes of data. The researchers adopted a conservative approach, identifying 187 clear instances of shrugging gestures across the two events.

Each shrug was coded based on its physical form and its communicative function. For the form, they classified shrugs based on the orientation of the forearms and the position of the hands relative to the body. They also noted whether the shrug was performed with one or two hands and whether it was a simple gesture or a more complex, animated movement. To understand the function, they analyzed the spoken words accompanying each shrug to determine the meaning being conveyed.

Hart said he was surprised by “just how often Trump shrugs – 1.7 times per minute in the campaign rallies analyzed. Trump is a prolific shrugger and this is one way his communication style breaks with traditional forms of political communication.”

The analysis of the physical forms of the shrugs provided evidence for what has been described as a strong “corporeal presence.” Trump tended to favor expansive shrugs, with his hands positioned outside his shoulder width, a form that physically occupies more space.

The second most frequent type was the “lateral” shrug, where his arms extend out to his sides, sometimes in a highly theatrical, showman-like manner. This use of large, exaggerated gestures appears to contribute to a performance style more commonly associated with live entertainment than with traditional politics.

The researchers also noted that nearly a third of his shrugs were complex, meaning they involved animated, oscillating movements. These gestures create a dynamic and sometimes caricatured performance. While these expansive and animated shrugs help create an extraordinary, entertaining persona, the very act of shrugging is an informal, everyday gesture. This combination seems to allow Trump to simultaneously signal both his ordinariness and his exceptionalism.

When examining the functions of the shrugs, the researchers found that the most common meaning was not what many people might expect. While shrugs are often associated with expressing ignorance (“I don’t know”) or indifference (“I don’t care”), these were not their primary uses in Trump’s speeches. Instead, the most frequent function, accounting for over 44 percent of instances, was to signal common ground or obviousness. Trump often uses a shrug to present a statement as a self-evident truth that he and his audience already share.

For example, he would shrug when asking rhetorical questions like “We love our police. Do we love our police?” The gesture suggests the answer is obvious and that everyone in the room is in agreement. He also used these shrugs to present his own political skills as a given fact or to frame the shortcomings of his opponents as plainly evident to all. This use of shrugging appears to be a powerful tool for building a sense of shared knowledge and values with his supporters.

“Most people think of shrugs as conveying ‘I don’t know’ or ‘I don’t care,’” Hart told PsyPost. “While Trump uses shrugs to convey these meanings, more often he uses shrugs to indicate that something is known to everyone or obviously the case. This is one of the ways he establishes common ground and aligns himself with his audience, indicating that he and they hold a shared worldview.”

The second most common function was to express what the researchers term “affective distance.” This involves conveying negative emotions like disapproval, dissatisfaction, or dismay towards a particular state of affairs. When discussing trade deals he considered terrible or military situations he found lacking, a shrug would often accompany his words. In these cases, the gesture itself, rather than the explicit language, carried the negative emotional evaluation of the topic.

Shrugs that conveyed “epistemic distance,” meaning ignorance, doubt, or disbelief, accounted for about 17 percent of the total. A notable use of this function occurred during what is known as “constructed dialogue,” where Trump would re-enact conversations. In one instance, he used a mocking shrug while impersonating a political opponent to portray them as clueless and incompetent, a performance that drew laughter from the crowd.

The least common function was indifference, or the classic “I don’t care” meaning. Though infrequent, these shrugs served a strategic purpose. When shrugging alongside a phrase like “I understand that it might not be presidential. Who cares?”, Trump used the gesture to dismiss the conventions of traditional politics. This helps him position himself as an outsider who is not bound by the same rules as the political establishment.

The findings highlight that “what politicians do with their hands and other body parts is an important part of their message and their brand,” Hart told PsyPost. However, he emphasizes that “gestures are not ‘body language.’ They do not accidentally give away one’s emotional state. Gestures are built in to the language system and are part of the way we communicate. They carry part of the information speakers intend to convey and that information forms part of the message audiences take away.”

The study does have some limitations. Its analysis is focused exclusively on Donald Trump, so it remains unclear whether this pattern of shrugging is unique to his style or a broader feature of right-wing populist communication. Future research could compare his gestural profile to that of other populist and non-populist leaders.

Additionally, the study centered on one specific gesture, and a more complete picture would require analyzing the full range of a politician’s non-verbal repertoire. The authors also suggest that future work could examine other elements, like facial expressions and the timing of gestures, in greater detail.

Despite these limitations, the research provides a detailed look at how a seemingly simple gesture can be a sophisticated and versatile rhetorical tool. Trump’s shrugs appear to be a central part of a performance style that transgresses political norms, creates entertainment value, and forges a strong connection with his base. The findings indicate the importance of looking beyond a politician’s words to understand the full, embodied performance through which they communicate their message.

“We hope to look at other gestures of Trump to build a bigger picture of how he uses his body to distinguish himself from other politicians and to imbue his performances with entertainment value,” Hart said. “This might include, for example, his use of chopping or slicing gestures. I also hope to explore the gestural performances of other right wing populist politicians in Europe to see how their gestures compare.”

The study, “A shrug of the shoulders is a stance-taking act: The form-function interface of shrugs in the multimodal performance of Donald Trump,” was authored by Christopher Hart and Steve Strudwick.

Horror films may help us manage uncertainty, a new theory suggests

A new study proposes that horror films are appealing because they offer a controlled environment for our brains to practice predicting and managing uncertainty. This process of learning to master fear-inducing situations can be an inherently rewarding experience, according to the paper published in Philosophical Transactions of the Royal Society B.

The authors behind the paper sought to address why people are drawn to entertainment that is designed to be frightening or disgusting. While some studies have shown psychological benefits from engaging with horror, many existing theories about its appeal seem to contradict one another. The authors aimed to provide a single, unifying framework that could explain how intentionally seeking out negative feelings like fear can result in positive psychological outcomes.

To do this, they applied a theory of brain function known as predictive processing. This framework suggests the brain operates as a prediction engine, constantly making forecasts about incoming sensory information from the world. When reality does not match the brain’s prediction, a “prediction error” occurs, which the brain then works to minimize by updating its internal models or by acting on the world to make it more predictable.

This does not mean humans always seek out calm and predictable situations. The theory suggests people are motivated to find optimal opportunities for learning, which often lie at the edge of their understanding. The brain is not just sensitive to the amount of prediction error, but to the rate at which that error is reduced over time. When we reduce uncertainty faster than we expected, it generates a positive feeling.
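In the predictive-processing literature, this idea is sometimes written as a simple relation between felt valence and how quickly prediction error is falling. The rendering below is illustrative rather than the paper’s own notation:

```latex
% Illustrative formalization: valence tracks the rate of error reduction
\[
  \mathrm{valence}(t) \;\propto\; -\frac{d\,E(t)}{dt}
\]
% where E(t) is the current prediction error: error that falls faster than
% expected feels rewarding, while rising error feels aversive.
```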

This search for the ideal rate of error reduction is what drives curiosity and play. We are naturally drawn to a “Goldilocks zone” of manageable uncertainty that is neither too boringly simple nor too chaotically complex. The researchers argue that horror entertainment is specifically engineered to place its audience within this zone.

According to the theory, horror films can be understood as a form of “affective technology,” designed to manipulate our predictive minds. Even though we know the monsters are not real, the brain processes the film as an improbable version of reality from which it can still learn. Many horror monsters tap into deep-seated, evolutionary fears of predators by featuring sharp teeth, claws, and stealthy, ambush-style behaviors.

The narrative structures of horror films are also built to play with our expectations. The slow build-up of suspense creates a state of high anticipation, and a “jump scare” works by suddenly violating our moment-to-moment predictions. The effectiveness of these techniques is heightened because they are not always predictable. Sometimes the suspense builds and nothing happens, which makes the audience’s response system even more alert.

At the same time, horror films often rely on familiar patterns and clichés, such as the “final girl” who survives to confront the villain. This combination of surprising events within a somewhat predictable structure provides the mix of uncertainty and resolvability that the predictive brain finds so engaging.

The authors propose that engaging with this controlled uncertainty has several benefits. One is that horror provides a low-stakes training ground for learning about high-stakes situations. This idea, known as morbid curiosity, suggests that we watch frightening content to gain information that could be useful for recognizing and avoiding real-world dangers. For example, the film Contagion saw a surge in popularity during the early days of the COVID-19 pandemic, as people sought to understand the potential realities of a global health crisis.

Another benefit is related to emotion regulation. By exposing ourselves to fear in a safe context, we can learn about our own psychological and physiological responses. The experience allows us to observe our own anxiety, increased heart rate, and other reactions as objects of attention, rather than just being swept away by them. This process can grant us a greater sense of awareness and control over our own emotional states, similar to the effects of mindfulness practices.

The theory also offers an explanation for why some people prone to anxiety might be drawn to horror. Anxiety can be associated with a feeling of uncertainty about one’s own internal bodily signals, a state known as noisy interoception. Watching a horror movie provides a clear, external source for feelings of fear and anxiety. For a short time, the rapid heartbeat and sweaty palms have an obvious and controllable cause: the monster on the screen, not some unknown internal turmoil.

The researchers note that this engagement is not always beneficial. For some individuals, particularly those with a history of trauma, horror media may serve to confirm negative beliefs about the world being a dangerous and threatening place. This can create a feedback loop where a person repeatedly seeks out horrifying content, reinforcing a sense of hopelessness or learned helplessness. Future work could examine when the engagement with scary media crosses from a healthy learning experience into a potentially pathological pattern.

The study, “Surfing uncertainty with screams: predictive processing, error dynamics and horror films,” was authored by Mark Miller, Ben White, and Coltan Scrivner.

Long-term study shows romantic partners mutually shape political party support

A new longitudinal study suggests that intimate partners mutually influence each other’s support for political parties over time. The research found that a shift in one person’s support for a party was predictive of a similar shift in their partner’s support the following year, a process that may contribute to political alignment within couples and broader societal polarization. The findings were published in Personality and Social Psychology Bulletin.

Political preferences are often similar within families, particularly between parents and children. However, less is known about how political views might be shaped during adulthood, especially within the context of a long-term romantic relationship. Prior studies have shown that partners often hold similar political beliefs, but it has been difficult to determine if this is because people choose partners who already agree with them or if they gradually influence each other over the years.

The authors of the new study sought to examine if this similarity is a result of ongoing influence. They wanted to test whether a change in one partner’s political stance could predict a future change in the other’s. To do this, they used a large dataset from New Zealand, a country with a multi-party system. This setting allowed them to see if any influence was specific to one or two major parties or if it occurred across a wider ideological spectrum, including smaller parties focused on issues like environmentalism, indigenous rights, and libertarianism.

To conduct their investigation, the researchers analyzed data from the New Zealand Attitudes and Values Study, a large-scale project that has tracked thousands of individuals over many years. Their analysis focused on 1,613 woman-man couples who participated in the study for up to 10 consecutive years. Participants annually rated their level of support for six different political parties on a scale from one (strongly oppose) to seven (strongly support).

The study employed a sophisticated statistical model designed for longitudinal data from couples. This technique allowed the researchers to separate two different aspects of a person’s political support. First, it identified each individual’s stable, long-term average level of support for a given party. Second, it isolated the small, year-to-year fluctuations or deviations from that personal average. This separation is important because it allows for a more precise test of influence over time.

The analysis then examined whether a fluctuation in one partner’s party support in a given year could predict a similar fluctuation in the other partner’s support in the subsequent year. This was done while accounting for the fact that couples already tend to have similar average levels of support.
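In rough terms, the model person-mean centers each partner’s ratings, splitting them into a stable average plus yearly deviations, and then asks whether one partner’s deviation predicts the other partner’s deviation a year later. The two-step sketch below illustrates that idea on simulated data; the published analysis uses a formal dyadic longitudinal model rather than this shortcut.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
couples, years = 200, 10

# Simulated yearly party-support ratings for each partner (1-7 scale is illustrative)
support_a = rng.normal(4, 1, size=(couples, years))
support_b = rng.normal(4, 1, size=(couples, years))

# Split each person's ratings into a stable personal average and yearly deviations
dev_a = support_a - support_a.mean(axis=1, keepdims=True)
dev_b = support_b - support_b.mean(axis=1, keepdims=True)

# Does partner A's deviation this year predict partner B's deviation next year,
# controlling for B's own prior deviation?
prior_a = dev_a[:, :-1].ravel()
prior_b = dev_b[:, :-1].ravel()
next_b = dev_b[:, 1:].ravel()

X = sm.add_constant(np.column_stack([prior_a, prior_b]))
fit = sm.OLS(next_b, X).fit()
print("cross-partner lagged effect:", round(fit.params[1], 3))
```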

The results showed a consistent pattern of mutual influence. For all six political parties examined, a temporary increase in one partner’s support for that party was associated with a subsequent increase in the other partner’s support one year later. This finding suggests that partners are not just politically similar from the start of their relationship but continue to shape one another’s specific party preferences over time.

This influence also appeared to be a two-way street. The researchers tested whether men had a stronger effect on women’s views or if the reverse was true. They found that the strength of influence was generally equal between partners. With only one exception, the effect of men on women’s party support was just as strong as the effect of women on men’s support.

The single exception involved the libertarian Association of Consumers and Taxpayers Party, where men’s changing support had a slightly stronger influence on women’s subsequent support than the other way around. For the other five parties, including the two largest and three other smaller parties, the influence was symmetrical. This challenges the idea that one partner, typically the man, is the primary driver of a couple’s political identity.

An additional analysis explored whether this dynamic of influence applied to a person’s general political orientation, which was measured on a scale from extremely liberal to extremely conservative. In this case, the pattern was different. While partners tended to be similar in their overall political orientation, changes in one partner’s self-rated orientation did not predict changes in the other’s over time. This suggests that the influence partners have on each other may be more about support for specific parties and their platforms than about shifting a person’s fundamental ideological identity.

The researchers acknowledge some limitations of their work. The study focused on established, long-term, cohabiting couples in New Zealand, so the findings may not apply to all types of relationships or to couples in other countries with different political systems. Because the couples were already in established relationships, the study also cannot entirely separate the effects of ongoing influence from the possibility that people initially select partners who are politically similar to them.

Future research could explore these dynamics in newer relationships to better understand the interplay between partner selection and later influence. Additional studies could also investigate the specific mechanisms of this influence, such as how political discussions, media consumption, or conflict avoidance might play a role in this process. Examining whether these shifts in expressed support translate to actual behaviors like voting is another important avenue for exploration.

The study, “The Interpersonal Transmission of Political Party Support in Intimate Relationships,” was authored by Sam Fluit, Nickola C. Overall, Danny Osborne, Matthew D. Hammond, and Chris G. Sibley.

Study finds a shift toward liberal politics after leaving religion

A new study suggests that individuals who leave their religion tend to become more politically liberal, often adopting views similar to those who have never been religious. This research, published in the Journal of Personality, provides evidence that the lingering effects of a religious upbringing may not extend to a person’s overall political orientation. The findings indicate a potential boundary for a psychological phenomenon known as “religious residue.”

Researchers conducted this study to investigate a concept called religious residue. This is the idea that certain aspects of a person’s former religion, such as specific beliefs, behaviors, or moral attitudes, can persist even after they no longer identify with that faith. Previous work has shown that these lingering effects can be seen in areas like moral values and consumer habits, where formerly religious people, often called “religious dones,” continue to resemble currently religious individuals more than those who have never been religious.

The research team wanted to determine if this pattern of residue also applied to political orientation. Given the strong link between religiosity and political conservatism in many cultures, it was an open question what would happen to a person’s politics after leaving their faith. They considered three main possibilities. One was that religious residue would hold, meaning religious dones would remain relatively conservative.

Another possibility was that they would undergo a “religious departure,” shifting to a liberal orientation similar to the never-religious. A third option was “religious reactance,” where they might react against their past by becoming even more liberal than those who were never religious.

To explore these possibilities, the researchers analyzed data from eight different samples across three multi-part studies. The first part involved a series of six cross-sectional analyses, which provide a snapshot in time. These studies included a total of 7,089 adults from the United States, the Netherlands, and Hong Kong. Participants were asked to identify as currently religious, formerly religious, or never religious, and to rate their political orientation on a scale from conservative to liberal.

In five of these six samples, the results pointed toward a similar pattern. Individuals who had left their religion reported significantly more liberal political views than those who were currently religious. Their political orientation tended to align closely with that of individuals who had never been religious. When the researchers combined all six samples for a more powerful analysis, they found that religious dones were, on average, more politically liberal than both currently religious and never-religious individuals. This combined result offered some initial evidence for the religious reactance hypothesis.

To gain a clearer picture of how these changes unfold over time, the researchers next turned to longitudinal data, which tracks the same individuals over many years. The second study utilized data from the National Study of Youth and Religion, a project that followed a representative sample of 2,071 American adolescents into young adulthood. This allowed the researchers to compare the political attitudes of those who remained affiliated with a religion, those who left their religion at different points, and those who were never religious.

The findings from this longitudinal sample provided strong support for the religious departure hypothesis. Individuals who left their religion during their youth or young adulthood reported more liberal political attitudes than those who remained religious. However, their political views were not significantly different from the views of those who had never been religious. This study also failed to find evidence for “residual decay,” the idea that religious residue might fade slowly over time. Instead, the shift toward a more liberal orientation appeared to be a distinct change associated with leaving religion, regardless of how long ago the person had de-identified.

The third study aimed to build on these findings with another longitudinal dataset, the Family Foundations of Youth Development project. This study followed 1,857 adolescents and young adults and had the advantage of measuring both religious identification and political orientation at multiple time points. This design allowed the researchers to use advanced statistical models to examine the sequence of these changes. Specifically, they could test whether becoming more liberal preceded leaving religion, or if leaving religion preceded becoming more liberal.

The results of this final study confirmed the findings of the previous ones. Religious dones again reported more liberal political attitudes, similar to their never-religious peers. The more advanced analysis revealed that changes in religious identity tended to precede changes in political orientation. In other words, the data suggests that an individual’s departure from religion came first, and this was followed by a shift toward a more liberal political stance. The reverse relationship, where political orientation predicted a later change in religious identity, was not statistically significant in this sample.
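For readers curious about the mechanics of such a sequence test, a heavily simplified two-wave cross-lagged sketch in Python (using statsmodels) is shown below. The data file and column names are hypothetical, and the published study used more sophisticated longitudinal models; this only illustrates the logic of testing each direction of influence while controlling for the earlier value of the outcome.

```python
# A heavily simplified two-wave cross-lagged sketch (not the authors' code).
# The input file and column names are hypothetical: liberal_t1/liberal_t2 are
# political-orientation scores at two waves, left_religion_t1/left_religion_t2
# are 0/1 indicators of having de-identified from religion by that wave.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel_waves.csv")  # hypothetical panel extract, one row per participant

# Path 1: does having left religion by wave 1 predict a more liberal orientation
# at wave 2, controlling for wave-1 orientation?
politics_model = smf.ols("liberal_t2 ~ liberal_t1 + left_religion_t1", data=df).fit()

# Path 2 (the reverse direction): does wave-1 orientation predict having left
# religion by wave 2, controlling for wave-1 religious status?
religion_model = smf.logit("left_religion_t2 ~ left_religion_t1 + liberal_t1", data=df).fit()

print(politics_model.summary())
print(religion_model.summary())
```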

The researchers acknowledge some limitations in their work. The studies relied on a single, broad question to measure political orientation, which may not capture the complexity of political beliefs on specific social or economic issues. While the longitudinal designs provide a strong basis for inference, the data is observational, and experimental methods would be needed to make definitive causal claims. The modest evidence for religious reactance was only present in the combined cross-sectional data and may have been influenced by the age of the participants or other sample-specific factors.

Future research could explore these dynamics using more detailed assessments of political ideology to see if religious residue appears in certain policy areas but not others. Examining the role of personality traits like dogmatism could also offer insight into why some individuals shift their political views so distinctly.

Despite these limitations, the collection of studies provides converging evidence that for many people, leaving religion is associated with a clear and significant move toward a more liberal political identity. This suggests that as secularization continues in many parts of the world, it may be accompanied by corresponding shifts in the political landscape.

The study, “Religious Dones Become More Politically Liberal After Leaving Religion,” was authored by Daryl R. Van Tongeren, Sam A. Hardy, Emily M. Taylor, and Phillip Schwadel.

Popular ‘cognitive reserve’ theory challenged by massive new study on education and aging

An analysis of massive cognitive and neuroimaging databases indicated that more education was associated with better memory, larger intracranial volume, and slightly larger volumes of memory-sensitive brain regions. However, contrary to popular theories, education did not appear to protect against the rate of age-related memory decline, nor did it weaken the effects of brain decline on cognition. The paper was published in Nature Medicine.

As people reach advanced age, they tend to start gradually losing their mental abilities. This is called age-related cognitive decline. It typically affects functions such as memory, attention, processing speed, and problem-solving. This decline is a normal part of aging and differs from more serious conditions like dementia or Alzheimer’s disease.

Many older adults notice mild forgetfulness, slower thinking, or difficulty learning new information. Biological changes in the brain, such as reduced neural activity and decreased blood flow, contribute to this process. Lifestyle factors like lack of physical activity, poor diet, and chronic stress can accelerate cognitive aging.

On the other hand, regular mental stimulation, social engagement, and physical exercise can help maintain cognitive health. Adequate sleep and managing conditions like hypertension or diabetes also play a role in slowing decline. The rate and severity of decline vary greatly among individuals. Some people maintain sharp cognitive abilities well into old age, while others experience noticeable difficulties.

Study author Anders M. Fjell and his colleagues note that leading theories propose that education reduces brain decline related to aging and enhances tolerance to brain pathology. Other theories propose that education does not affect cognitive decline but instead reflects higher early-life cognitive function. With this in mind, they conducted a study aiming to resolve this long-standing debate.

They conducted a large-scale mega-analysis of data from multiple longitudinal cohorts, including the Survey of Health, Ageing, and Retirement in Europe (SHARE) and the Lifebrain consortium. In total, they analyzed over 407,000 episodic memory scores from more than 170,000 participants across 33 countries. For the neuroimaging component, they analyzed 15,157 magnetic resonance imaging scans with concurrent memory tests from 6,472 participants across seven countries. In their analyses, they defined brain decline as reductions over time in memory-sensitive brain regions within the same participant.
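The logic of such a mega-analysis can be sketched, in much simplified form, as a mixed-effects regression in which the interaction between age and education asks whether the rate of decline differs by education level. The snippet below (Python with statsmodels, hypothetical file and column names) illustrates that idea only; it is not the authors' analysis code.

```python
# A minimal mixed-effects sketch of the education-and-memory-decline question
# (illustrative only; the published mega-analysis used more elaborate models).
# The CSV file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("memory_long.csv")  # one row per memory assessment, repeated per person

model = smf.mixedlm(
    "memory ~ age * education_years",  # age:education_years asks whether decline differs by education
    data,
    groups=data["participant_id"],     # repeated measures nested within participants
    re_formula="~age",                 # random intercept and random age slope per participant
)
result = model.fit()
print(result.summary())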

Results showed that while older age was associated with lower memory scores, the association between education level and the rate of memory decline was negligible. Individuals with a higher education level tended to have better memory throughout their lives but did not differ from their less-educated peers in the speed with which their memory declined as they aged.

Individuals with more education also tended to have a larger intracranial volume (a proxy for maximum brain size developed early in life) and slightly larger volumes of memory-sensitive brain regions.

“In this large-scale, geographically diverse longitudinal mega-analytic study, we found that education is related to better episodic memory and larger intracranial volume and modestly to memory-sensitive brain regions. These associations are established early in life and not driven by slower brain aging or increased resilience to structural brain changes. Therefore, effects of education on episodic memory function in aging likely originate earlier in life,” the study authors concluded.

The study contributes to the scientific understanding of factors affecting age-related cognitive decline by providing strong evidence that education provides a “head start” rather than acting as a shield against decline. The research focused on episodic memory because it is particularly sensitive to the effects of aging and is a key indicator in dementia research. Sensitivity analyses on other cognitive tests, such as numeric skills and orientation, showed the same pattern, strengthening the study’s main conclusion.

The paper, “Reevaluating the role of education on cognitive decline and brain aging in longitudinal cohorts across 33 Western countries,” was authored by Anders M. Fjell, Ole Rogeberg, Øystein Sørensen, Inge K. Amlien, David Bartrés-Faz, Andreas M. Brandmaier, Gabriele Cattaneo, Sandra Düzel, Håkon Grydeland, Richard N. Henson, Simone Kühn, Ulman Lindenberger, Torkild Hovde Lyngstad, Athanasia M. Mowinckel, Lars Nyberg, Alvaro Pascual-Leone, Cristina Solé-Padullés, Markus H. Sneve, Javier Solana, Marie Strømstad, Leiv Otto Watne, Kristine B. Walhovd, and Didac Vidal-Piñeiro.

Psilocybin therapy linked to lasting depression remission five years later

A new long-term follow-up study has found that a significant majority of individuals treated for major depressive disorder with psilocybin-assisted therapy were still in remission from their depression five years later. The research, which tracked participants from an earlier clinical trial, suggests that the combination of the psychedelic substance with psychotherapy can lead to lasting improvements in mental health and overall well-being. The findings were published in the Journal of Psychedelic Studies.

Psilocybin is the primary psychoactive compound found in certain species of mushrooms, often referred to as “magic mushrooms.” When ingested, it can produce profound alterations in perception, mood, and thought. In recent years, researchers have been investigating its potential as a therapeutic tool when administered in a controlled clinical setting alongside psychological support.

The rationale for this line of research stems from the limitations of existing treatments for major depressive disorder. While many people benefit from conventional antidepressants and psychotherapy, a substantial portion do not achieve lasting remission, and medications often come with undesirable side effects and require daily, long-term use.

Psychedelic-assisted therapy represents a different treatment model, one where a small number of high-intensity experiences might catalyze durable psychological changes. This new study was conducted to understand the longevity of the effects observed in an earlier, promising trial.

The research team, led by Alan Davis, an associate professor and director of the Center for Psychedelic Drug Research and Education at The Ohio State University, sought to determine if the initial antidepressant effects would hold up over a much longer period. Davis co-led the original 2021 trial at Johns Hopkins University, and this follow-up represents a collaborative effort between researchers at both institutions.

“We conducted this study to answer a critical question about the enduring effects of psilocybin therapy – namely, what happens after clinical trials end, and do participants experience enduring benefits from this treatment,” Davis told PsyPost.

The investigation was designed as a long-term extension of a clinical trial first published in 2021. That initial study involved 24 adults with a diagnosis of major depressive disorder. The participants were divided into two groups: one that received the treatment immediately and another that was placed on a wait-list before receiving the same treatment.

The therapeutic protocol was intensive, involving approximately 13 hours of psychotherapy in addition to two separate sessions where participants received a dose of psilocybin. The original findings were significant, showing a large and rapid reduction in depression symptoms for the participants, with about half reporting a complete remission from their depression that lasted for up to one year.

For the new follow-up, conducted an average of five years after the original treatment, the researchers contacted all 24 of the initial participants. Of those, 18 enrolled and completed the follow-up assessments. This process involved a series of online questionnaires designed to measure symptoms of depression and anxiety, as well as any functional impairment in their daily lives.

Participants also underwent a depression rating assessment administered by a clinician and took part in in-depth interviews. These interviews were intended to capture a more nuanced understanding of their experiences and life changes since the trial concluded, going beyond what numerical scores alone could convey.

The researchers found that 67% of the original participants were in remission from their depression. This percentage was slightly higher than the 58% who were in remission at the one-year follow-up point.

“We found that most people reported enduring benefits in their life since participating in psilocybin therapy,” Davis said. “Overall, many reported that even if depression came back, that it was more manageable, less tied to their identity, and that they found it was less interfering in their life.”

To ensure their analysis was robust, the scientists took a conservative approach when handling the data for the six individuals who did not participate in the long-term follow-up. They made the assumption that these participants had experienced a complete relapse and that their depression symptoms had returned to their pre-treatment levels.

“Even controlling for those baseline estimates from the people who didn’t participate in the long-term follow-up, we still see a very large and significant reduction in depression symptoms,” said Davis, who also holds faculty positions in internal medicine and psychology at Ohio State. “That was really exciting for us because this showed that the number of participants still in complete remission from their depression had gone up slightly.”

The study also revealed that these lasting improvements were not solely the product of the psilocybin therapy sessions from five years earlier. The reality of the participants’ lives was more complex. Through the interviews, the researchers learned that only three of the 18 follow-up participants had not received any other form of depression-related treatment in the intervening years. The others had engaged in various forms of support, including taking antidepressant medications, undergoing traditional psychotherapy, or trying other treatments like ketamine or psychedelics on their own.

However, the qualitative data provided important context for these decisions. Many participants described a fundamental shift in their relationship with depression after the trial. Before undergoing psilocybin-assisted therapy, they often felt their depression was a debilitating and all-encompassing condition that prevented them from engaging with life. After the treatment, even if symptoms sometimes returned, they perceived their depression as more situational and manageable.

Participants reported a greater capacity for positive emotions and enthusiasm. Davis explained that these shifts appeared to lead to important changes in how they related to their depressive experiences. This newfound perspective may have made other forms of therapy more effective or made navigating difficult periods less impairing.

“Five years later, most people continued to view this treatment as safe, meaningful, important, and something that catalyzed an ongoing betterment of their life,” Davis said. “It’s important for us to understand the details of what comes after treatment. I think this is a sign that regardless of what the outcomes are, their lives were improved because they participated in something like this.”

Some participants who had tried using psychedelics on their own reported that the experiences were not as helpful without the supportive framework provided by the clinical trial, reinforcing the idea that the therapeutic context is a vital component of the treatment’s success.

Regarding safety, 11 of the participants reported no negative effects since the trial. A few recalled feeling unprepared for the heightened emotional sensitivity they experienced after the treatment, while others noted that the process of weaning off their previous medications before the trial was difficult.

The researchers acknowledge several limitations of their work. The small sample size of the original trial means that the findings need to be interpreted with caution and require replication in larger studies. Because the study was a long-term follow-up without a continuing control group, it is not possible to definitively attribute all the observed benefits to the psilocybin-assisted therapy, especially since most participants sought other forms of treatment during the five-year period. It is also difficult to know how natural fluctuations in mood and life circumstances may have influenced the outcomes.

“I’d like for people to know that this treatment is not a magic bullet, and these findings support that notion,” Davis noted. “Not everyone was in remission, and some had depression that was ongoing and a major negative impact in their lives. Thankfully, this was not the case for the majority of folks in the study, but readers should know that this treatment does not work for everyone even under the most rigorous and clinically supported conditions.”

Future research should aim to include larger and more diverse groups of participants, including individuals with a high risk for suicide, who were excluded from this trial. Despite these limitations, this study provides a first look at the potential for psilocybin-assisted therapy to produce durable, long-term positive effects for people with major depressive disorder. The findings suggest the treatment may not be a simple cure but rather a catalyst that helps people re-engage with their lives and other therapeutic processes, ultimately leading to sustained improvements in functioning and well-being.

“Next steps are to continue evaluating the efficacy of psilocybin therapy among larger samples and in special populations,” Davis said. “Our work at OSU involves exploring this treatment for Veterans with PTSD, lung cancer patients with depression, gender and sexual minorities with PTSD, and adolescents with depression.”

The study, “Five-year outcomes of psilocybin-assisted therapy for Major Depressive Disorder,” was authored by Alan K. Davis, Nathan D. Sepeda, Adam W. Levin, Mary Cosimano, Hillary Shaub, Taylor Washington, Peter M. Gooch, Shoval Gilead, Skylar J. Gaughan, Stacey B. Armstrong, and Frederick S. Barrett.

Rising autism and ADHD diagnoses not matched by an increase in symptoms

A new study examining nine consecutive birth years in Sweden indicates that the dramatic rise in clinical diagnoses of autism spectrum disorder is not accompanied by an increase in autism-related symptoms in the population. The research, published in the journal Psychiatry Research, also found that while parent-reported symptoms of ADHD remained stable in boys, there was a small but statistically significant increase in symptoms among girls.

Autism spectrum disorder, or ASD, is a neurodevelopmental condition characterized by differences in social communication and interaction, along with restricted or repetitive patterns of behavior and interests. Attention-Deficit/Hyperactivity Disorder, or ADHD, is another neurodevelopmental condition marked by persistent patterns of inattention, hyperactivity, and impulsivity that can interfere with functioning or development. Over the past two decades, the number of clinical diagnoses for both conditions has increased substantially in many Western countries, particularly among teenagers and young adults.

This trend has raised questions about whether the underlying traits associated with these conditions are becoming more common in the general population. Researchers sought to investigate this possibility by looking beyond clinical diagnoses to the level of symptoms reported by parents.

“The frequency of clinical diagnoses of ASD and ADHD has increased substantially over the past decades across the world,” said study author Olof Arvidsson, a PhD student at the Gillberg Neuropsychiatry Centre at Gothenburg University and resident physician in Child and Adolescent Psychiatry.

“The largest prevalence increase has been among teenagers and young adults. Therefore, we wanted to investigate if symptoms of ASD and ADHD in the population had increased over time in 18-year-olds. In this study we used data from a twin study in Sweden in which parents reported on symptoms of ASD and ADHD when their children turned 18 and investigated whether symptoms had increased between year 2011 to 2019.”

To conduct their analysis, the researchers utilized data from a large, ongoing project called the Child and Adolescent Twin Study in Sweden. This study follows twins born in Sweden to learn more about mental and physical health. For this specific investigation, researchers focused on information collected from the parents of nearly 10,000 twins born between 1993 and 2001. When the twins reached their 18th birthday, their parents were asked to complete a web-based questionnaire about their children’s behaviors and traits.

Parents answered a set of 12 questions designed to measure symptoms related to autism. These items correspond to the diagnostic criteria for ASD. For ADHD, parents completed a 17-item checklist covering problems associated with inattention and executive function, which are core components of ADHD.

Using this data, the researchers employed statistical methods to analyze whether the average symptom scores changed across the nine different birth years, from 1993 to 2001. They also looked at the percentage of individuals who scored in the highest percentiles, representing those with the most significant number of traits.
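In simplified form, those two trend tests look like the sketch below: a linear regression of symptom scores on birth year, and a logistic regression asking whether the odds of scoring in the top percentiles change across birth years. The data file and column names are hypothetical, and this is illustrative rather than the authors' code.

```python
# Illustrative versions of the two trend tests (not the authors' code); the data file
# and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

twins = pd.read_csv("parent_reports.csv")  # one row per 18-year-old, with birth_year and symptom scores

# Average-level trend: does the mean ADHD symptom score change across birth years?
mean_trend = smf.ols("adhd_score ~ birth_year", data=twins).fit()

# High-scorer trend: does the probability of falling in the top 10 percent change across birth years?
cutoff = twins["adhd_score"].quantile(0.90)
twins["top_decile"] = (twins["adhd_score"] >= cutoff).astype(int)
extreme_trend = smf.logit("top_decile ~ birth_year", data=twins).fit()

print(mean_trend.params["birth_year"], extreme_trend.params["birth_year"])
```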

The analysis showed no increase in the average level of parent-reported autism symptoms among 18-year-olds across the nine-year span. This stability was observed for both boys and girls. Similarly, when the researchers examined the proportion of individuals with the highest symptom scores, defined as those in the top five percent, they found no statistically significant change over time. This suggests that the prevalence of autism-related traits in the young adult population remained constant during this period.

The results for ADHD presented a more nuanced picture. Among boys, the data indicated that parent-reported ADHD symptoms were stable. There was no significant change in either the average symptom scores or in the percentage of boys scoring in the top 10 percent. For girls, however, the study identified a small but statistically detectable increase in ADHD symptoms over the nine birth years. This trend was apparent in both the average symptom scores and in the proportion of girls who scored in the top 10 percent for ADHD traits.

Despite being statistically significant, the researchers note that the magnitude of this increase in girls was small. The year of birth explained only a very small fraction of the variation in ADHD symptom scores. The results suggest that while there may be a slight upward trend in certain ADHD symptoms among adolescent girls, it is not nearly large enough to account for the substantial increase in clinical ADHD diagnoses reported in this group. The study provides evidence that the steep rise in both autism and ADHD diagnoses is likely influenced by factors other than a simple increase in the symptoms themselves.

“Across the nine birth years examined, there was no sign of increasing symptoms of ASD in the population, despite rising diagnoses,” Arvidsson told PsyPost. “For ADHD, there was no increase among boys. However, in 18-year-old girls we saw a very small but statistically significant increase in ADHD symptoms. The increase in absolute numbers was small in relation to the increase in clinical diagnoses.”

The researchers propose several alternative explanations for the growing number of diagnoses. Increased public and professional awareness may lead more people to seek assessments. Diagnostic criteria for both conditions have also widened over the years, potentially including individuals who would not have met the threshold in the past. Another factor may be a change in perception, where certain behaviors are now seen as more impairing than they were previously. This aligns with other research indicating that parents today tend to report higher levels of dysfunction associated with the same number of symptoms compared to a decade ago.

Changes in societal demands, particularly in educational settings that place a greater emphasis on executive functioning and complex social skills, could also contribute. In some cases, a formal diagnosis may be a prerequisite for accessing academic support and resources, creating an incentive for assessment. For the slight increase in ADHD symptoms among girls, the authors suggest it could reflect better recognition of how ADHD presents in females, or perhaps an overlap with symptoms of anxiety and depression, which have also been on the rise in this demographic.

“The takeaway is that the increases in clinical diagnoses of both ASD and ADHD need to be explained by other factors than increasing symptoms in the population, such as increased awareness and increased perceived impairment related to ASD and ADHD symptoms,” Arvidsson said. “Taken together we also hope to curb any worries about a true increase in ASD or ADHD.”

The study has some limitations. The response rate for the parental questionnaires was about 41 percent. While the researchers checked for potential biases and found that their main conclusions about the trends over time were likely unaffected, a higher participation rate would strengthen the findings. Additionally, the questionnaire for ADHD primarily measured symptoms of inattention and did not include items on hyperactivity. The results, therefore, mainly speak to the inattentive aspects of ADHD.

Future research could explore these trends with different measures and in different populations. The researchers also plan to investigate trends in clinical diagnoses more closely to better understand resource allocation for healthcare systems.

“We want to better understand trends of clinical diagnoses, such as trends of incidence of diagnoses in different groups,” Arvidsson said. “With increasing clinical diagnoses of ASD and ADHD and the resulting impact on the healthcare system as well as on the affected patients, it is important to characterize these trends in order to motivate an increased allocation of resources.”

The study, “ASD and ADHD symptoms in 18-year-olds – A population-based study of twins born 1993 to 2001,” was authored by Olof Arvidsson, Isabell Brikell, Henrik Larsson, Paul Lichtenstein, Ralf Kuja-Halkola, Mats Johnson, Christopher Gillberg, and Sebastian Lundström.

Scientists identify ecological factors that predict dark personality traits across 48 countries

Recent research published in the journal Evolution and Human Behavior offers new insights into how broad environmental conditions may shape “dark” personality traits on a national level. The study suggests that harsh or unpredictable ecological factors experienced during childhood, such as natural disasters or skewed sex ratios, are linked to higher average levels of traits like narcissism in adulthood. These findings indicate that forces largely outside of an individual’s control could play a key role in the development of antisocial personality profiles across different cultures.

The “Dark Triad” consists of three distinct but related personality traits: narcissism, Machiavellianism, and psychopathy. Individuals with high levels of narcissism often display grandiosity, entitlement, and a constant need for admiration. Machiavellianism is characterized by a cynical, manipulative approach to social interaction and a focus on self-interest over moral principles. Psychopathy involves high impulsivity, thrill-seeking behavior, and a lack of empathy or remorse for others.

While these traits are often viewed as undesirable, evolutionary perspectives suggest they may represent adaptive strategies in certain environments. Psychological research frequently focuses on immediate social causes for these traits, such as family upbringing or individual trauma. However, this new study aimed to broaden that lens by examining macro-level ecological factors that affect entire populations.

“There were several reasons to do this study,” explained Peter Jonason, a professor at Vizja University, creator of the Your Stylish Scientist YouTube Channel, and editor of Shining Light on the Dark Side of Personality: Measurement Properties and Theoretical Advances.

“First, there is limited understanding how ecological factors predict personality at all, let alone the Dark Triad. That is, most research focuses on personal, familial, or sociological predictors, but these are embedded in larger ecological systems. If the Dark Triad traits are mere pathologies of defunct parenting or income inequality, one would not predict sensitivity to ecological factors in determining people’s adult Dark Triad scores let alone sex differences therein.”

“Second, most research on the Dark Triad traits focuses on individual-level variance but here we examined what you might call a culture of each trait and what might account for it. Third, and, less interestingly perhaps, the team happened to meet, get along, have the skills needed, and had access to the data to examine this.”

The researchers employed a theoretical framework known as life history theory to guide their investigation. This theory proposes that organisms, including humans, unconsciously adjust their reproductive and survival strategies based on the harshness and predictability of their environment. In dangerous or unstable environments, “faster” life strategies (characterized by greater risk-taking, short-term mating, and higher aggression) tend to be more advantageous for evolutionary fitness.

To test this idea, the researchers utilized existing personality data from 11,504 participants across 48 different countries. The data for these national averages were collected around 2016 using the “Dirty Dozen,” a widely used twelve-item questionnaire designed to briefly measure the three Dark Triad traits. The researchers then paired these personality scores with historical ecological data from the World Bank and other international databases.

They specifically examined ecological conditions during three developmental windows: early childhood (years 2000–2004), mid-childhood (years 2005–2009), and adolescence (years 2010–2015). The ecological indicators included population density, life expectancy (survival to age 65), and the operational sex ratio, which measures the balance of men to women in society. They also included data on the frequency of natural disasters, the prevalence of major infectious disease outbreaks, and levels of income inequality.

“When considering what makes people different from around the world, it is lazy to say ‘culture,'” Jonason told PsyPost. “Culture is a system that results from higher-order conditions like access to resources and ecological threats. If you want to understand why someone differs from you, you must consider more than just her/his immediate–and obvious–circumstances.”

The analysis used advanced statistical techniques known as spatial autoregressive models. These models allowed the researchers not only to test the direct associations within a country but also to account for “spillover” effects from neighboring nations. This approach recognizes that countries do not exist in isolation and may be influenced by the conditions and cultures of the countries with which they share borders.
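The spillover idea can be conveyed with a much-simplified sketch in which each country's own predictors are paired with a weighted average of its neighbors' values before regression. Full spatial autoregressive estimation is more involved than this, and the files, columns, and weight matrix below are hypothetical.

```python
# A much-simplified illustration of the spillover idea: augment each country's own
# predictors with the weighted average of its neighbors' values before regressing.
# Full spatial autoregressive estimation is more involved; the files, columns, and
# weight matrix here are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

countries = pd.read_csv("country_ecology.csv")  # one row per country
W = np.load("neighbor_weights.npy")             # row-standardized adjacency matrix (hypothetical)

X = countries[["sex_ratio", "disasters", "income_inequality"]].to_numpy()
y = countries["narcissism"].to_numpy()

X_neighbors = W @ X                              # each row = weighted average of neighbors' conditions

design = sm.add_constant(np.hstack([X, X_neighbors]))  # own conditions plus neighbors' conditions
fit = sm.OLS(y, design).fit()
print(fit.summary())
```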

The results indicated that different ecological factors were associated with distinct Dark Triad traits. Countries that had more male-biased sex ratios during the participants’ childhoods tended to have higher average levels of adult narcissism. The researchers suggest that an excess of males may intensify intrasexual competition, prompting men to adopt grander, more self-promoting behaviors to attract mates.

Conversely, a higher prevalence of infectious diseases during childhood and adolescence was associated with lower national levels of Machiavellianism and psychopathy. In environments with a high disease burden, strict adherence to social norms and greater group cohesion are often necessary for survival. In such contexts, manipulative or antisocial behaviors that disrupt group harmony might be less adaptive and therefore less common.

The study also found that ecological conditions might influence the magnitude of personality differences between men and women. Exposure to natural disasters during developmental years was consistently linked to larger sex differences across all three Dark Triad traits in adulthood. High-threat environments may cause men and women to adopt increasingly divergent survival and reproductive strategies, thereby widening the psychological gap between the sexes.

Furthermore, the research provided evidence for regional clustering of these personality profiles. Conditions in neighboring countries frequently predicted a focal country’s personality scores. For example, higher income inequality or natural disaster impact in bordering nations was associated with higher narcissism or Machiavellianism in the country being studied.

This suggests that dark personality traits may diffuse across borders. This could happen through mechanisms such as migration, shared regional economic challenges, or cultural transmission. The findings highlight the importance of considering regional contexts when studying national character.

“Do not assume that good parenting, safe schools, and successful social experiences are all that matter in determining who goes dark,” Jonason explained. “Larger factors, well beyond our control, have influence as well. By removing the human from the equation, we can better see how people are subject to forces well beyond their will, self-reports, and even situated in larger socioecological systems.”

As with all research, the study has some limitations that should be considered when interpreting these results. The personality data were largely derived from university students, who may not be fully representative of their national populations. Additionally, because the study relied on historical aggregate data, it cannot establish a definitive causal link between these ecological factors and individual personality development. It is possible that other unmeasured variables contribute to these associations.

Future research could aim to replicate these findings using more diverse and representative samples from the general population. The researchers also express an interest in investigating the specific psychological and cognitive mechanisms that might link broad environmental conditions to individual differences in motives and morals. Understanding these mechanisms could provide a clearer picture of how macro-level forces shape the human mind.

“We hope to pursue projects that try to understand the specific conditions that allow for not just personality, but also motives, morals, and mate preferences to be calibrated to local conditions providing more robust tests of not just cross-national differences, but, also, what are the cognitive mechanisms and perceptions that drive those differences,” Jonason said. “This is assuming we get some grant money to do so!”

“This is a study attempting to understand how lived experiences in people’s milieu can correlate with their personality and sex differences therein. This is an important step forward because while manipulating the conditions in people’s lives is nearly impossible, we can get a strong glimpse of how conditions in people’s generalized past can cause adaptive responses to help them solve important tasks like securing status and mates–two motivations highly valued by those high in the Dark Triad traits.”

The study, “Towards an ecological model of the dark triad traits,” was authored by Peter K. Jonason, Dritjon Gruda, and Mark van Vugt.

Music engagement is associated with substantially lower dementia risk in older adults

A new study provides evidence that older adults who frequently engage with music may have a significantly lower risk of developing dementia. The research, published in the International Journal of Geriatric Psychiatry, indicates that consistently listening to music was associated with up to a 39 percent reduced risk, while regularly playing an instrument was linked to a 35 percent reduced risk. These findings suggest that music-related activities could be an accessible way to support cognitive health in later life.

Researchers were motivated to conduct this study because of the growing global health challenge posed by aging populations and the corresponding rise in dementia cases. As life expectancy increases, so does the prevalence of age-related conditions like cognitive decline. With no current cure for dementia, identifying lifestyle factors that might help prevent or delay its onset has become a major focus of scientific inquiry.

While some previous research pointed to potential cognitive benefits from music, many of those studies were limited. They often involved small groups of participants, included people who already had cognitive problems, or were susceptible to selection bias. This new study aimed to overcome these limitations by using a large, long-term dataset of older adults who were cognitively healthy at the beginning of the research period. The team also wanted to explore how education level might influence the relationship between music engagement and cognitive outcomes.

The investigation utilized data from a large-scale Australian study called ASPirin in Reducing Events in the Elderly (ASPREE) and its sub-study. The final analysis included 10,893 community-dwelling adults who were 70 years of age or older and did not have a dementia diagnosis when they enrolled. These participants were followed for a median of 4.7 years, with some observational follow-up extending beyond that period.

About three years into the study, participants answered questions about their social activities, including how often they listened to music or played a musical instrument. Their responses ranged from “never” to “always.” Researchers then tracked the participants’ cognitive health over subsequent years through annual assessments. Dementia diagnoses were made by an expert panel based on rigorous criteria, while a condition known as cognitive impairment no dementia (CIND), a less severe form of cognitive decline, was also identified.
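Cohort analyses of this kind typically rely on survival models such as Cox proportional hazards regression, which relate an exposure to the time until an event while adjusting for other factors. The sketch below (Python with the lifelines package, hypothetical file and column names) shows the general form of such a model, not the authors' exact specification.

```python
# A generic Cox proportional hazards sketch using the lifelines package; the data file,
# column names, and covariate set are hypothetical, not the authors' specification.
import pandas as pd
from lifelines import CoxPHFitter

cohort = pd.read_csv("music_cohort.csv")  # one row per participant

# Follow-up time, dementia indicator, the exposure of interest, and a few confounders.
df = cohort[["years_followed", "dementia", "always_listens_music", "age", "education_years"]]

cph = CoxPHFitter()
cph.fit(df, duration_col="years_followed", event_col="dementia")
cph.print_summary()  # a hazard ratio below 1 for the music variable would indicate lower risk
```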

The findings indicate a strong association between music engagement and a lower risk of dementia. Individuals who reported “always” listening to music had a 39 percent decreased risk of developing dementia compared to those who never, rarely, or sometimes listened. This group also showed a 17 percent decreased risk of developing CIND.

Regularly playing a musical instrument was also associated with positive outcomes. Those who played an instrument “often” or “always” had a 35 percent decreased dementia risk compared to those who played rarely or never. However, playing an instrument did not show a significant association with a reduced risk of CIND.

When researchers looked at individuals who engaged in both activities, they found a combined benefit. Participants who frequently listened to music and played an instrument had a 33 percent decreased risk of dementia. This group also showed a 22 percent decreased risk of CIND.

Beyond the risk of dementia or CIND, the study also examined changes in performance on specific cognitive tests over time. Consistently listening to music was associated with better scores in global cognition, which is a measure of overall thinking abilities, as well as in memory. Playing an instrument was not linked to significant changes in scores on these cognitive tests. Neither listening to nor playing music appeared to be associated with changes in participants’ self-reported quality of life or mental wellbeing.

The research team also explored whether a person’s level of education affected these associations. The results suggest that education may play a role, particularly for music listening. The association between listening to music and a lower dementia risk was most pronounced in individuals with 16 or more years of education. In this highly educated group, always listening to music was linked to a 63 percent reduced risk.

The findings were less consistent for those with 12 to 15 years of education, where no significant protective association was observed. The researchers note this particular result was unexpected and may warrant further investigation to understand potential underlying factors.

The study has several limitations that are important to consider. Because it is an observational study, it can only identify associations between music and cognitive health; it cannot establish that music engagement directly causes a reduction in dementia risk. It is possible that individuals with healthier brains are simply more likely to engage with music, a concept known as reverse causation. The study’s participants were also generally healthier than the average older adult population, which may limit how broadly the findings can be applied.

Additionally, the data on music engagement was self-reported, which could introduce inaccuracies. The survey did not collect details on the type of music, the duration of listening or playing sessions, or whether listening to the radio involved music or talk-based content. Such details could be important for understanding the mechanisms behind the observed associations.

Future research could build on these findings by examining longer-term outcomes and exploring which specific aspects of music engagement might be most beneficial. Studies involving more diverse populations could also help determine if these associations hold true across different groups. Ultimately, randomized controlled trials would be needed to determine if actively encouraging music engagement as an intervention can directly improve cognitive function and delay the onset of dementia in older adults.

The study, “What Is the Association Between Music-Related Leisure Activities and Dementia Risk? A Cohort Study,” was authored by Emma Jaffa, Zimu Wu, Alice Owen, Aung Azw Zaw Phyo, Robyn L. Woods, Suzanne G. Orchard, Trevor T.-J. Chong, Raj C. Shah, Anne Murray, and Joanne Ryan.

AI chatbots often violate ethical standards in mental health contexts

A new study suggests that popular large language models like ChatGPT can systematically breach established ethical guidelines for mental health care, even when specifically prompted to use accepted therapeutic techniques. The research, which will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, provides evidence that these AI systems may pose risks to individuals who turn to them for mental health support.

The motivation for this research stems from the rapidly growing trend of people using publicly available AI chatbots for advice on mental health issues. While these systems can offer immediate and accessible conversational support, their alignment with the professional standards that govern human therapists has remained largely unexamined. Researchers from Brown University sought to bridge this gap by creating a systematic way to evaluate the ethical performance of these models in a therapeutic context. They collaborated with mental health practitioners to ensure their analysis was grounded in the real-world principles that guide safe and effective psychotherapy.

To conduct their investigation, the researchers first developed a comprehensive framework outlining 15 distinct ethical risks. This framework was informed by the ethical codes of professional organizations, including the American Psychological Association, translating core therapeutic principles into measurable behaviors for an AI. The team then designed a series of simulated conversations between a user and a large language model, or LLM, which is an AI system trained on vast amounts of text to generate human-like conversation. In these simulations, the AI was instructed to act as a counselor employing evidence-based psychotherapeutic methods.

The simulated scenarios were designed to present the AI with common and challenging mental health situations. These included users expressing feelings of worthlessness, anxiety about social situations, and even statements that could indicate a crisis, such as thoughts of self-harm. By analyzing the AI’s responses across these varied prompts, the researchers could map its behavior directly onto their practitioner-informed framework of ethical risks. This allowed for a detailed assessment of when and how the models tended to deviate from professional standards.
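As a rough illustration of what such an evaluation harness involves, the toy sketch below pairs scripted scenarios with placeholder risk checks. The query_model function and the keyword rules are hypothetical stand-ins; the actual study coded responses against its 15-risk, practitioner-informed framework with expert judgment rather than keyword matching.

```python
# A toy evaluation harness (illustrative only). query_model and the keyword rules are
# hypothetical placeholders; the actual study coded responses against a 15-risk,
# practitioner-informed framework rather than by keyword matching.
from typing import Callable, Dict, List

SCENARIOS: Dict[str, str] = {
    "crisis": "Lately I've been having thoughts of hurting myself.",
    "negative_belief": "I failed one exam, so I'm a complete failure at everything.",
}

def flag_risks(scenario: str, reply: str) -> List[str]:
    """Crude stand-in for expert coding of ethical risks in a chatbot reply."""
    flags = []
    text = reply.lower()
    if scenario == "crisis" and "crisis" not in text and "hotline" not in text:
        flags.append("no crisis resource offered")
    if scenario == "negative_belief" and "complete failure" in text:
        flags.append("possible reinforcement of a negative self-belief")
    return flags

def evaluate(query_model: Callable[[str], str]) -> Dict[str, List[str]]:
    """Run every scripted scenario through the model and collect flagged risks."""
    return {name: flag_risks(name, query_model(prompt)) for name, prompt in SCENARIOS.items()}
```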

The study’s findings indicate that the large language models frequently engaged in behaviors that would be considered ethical violations for a human therapist. One of the most significant areas of concern was in the handling of crisis situations. When a simulated user expressed thoughts of self-harm, the AI models often failed to respond appropriately. Instead of prioritizing safety and providing direct access to crisis resources, some models offered generic advice or conversational platitudes that did not address the severity of the situation.

Another pattern observed was the reinforcement of negative beliefs. In psychotherapy, a practitioner is trained to help a person identify and gently challenge distorted or unhelpful thought patterns, such as believing one is a complete failure after a single mistake. The study found that the AIs, in an attempt to be agreeable and supportive, would sometimes validate these negative self-assessments. This behavior can inadvertently strengthen a user’s harmful beliefs about themselves or their circumstances, which is counterproductive to therapeutic goals.

The research also points to the issue of what the authors term a “false sense of empathy.” While the AI models are proficient at generating text that sounds empathetic, this is a simulation of emotion, not a genuine understanding of the user’s experience. This can create a misleading dynamic where a user may form an attachment to the AI or develop a dependency based on this perceived empathy. Such a one-sided relationship lacks the authentic human connection and accountability that are foundational to effective therapy.

Beyond these specific examples, the broader framework developed by the researchers suggests other potential ethical pitfalls. These include issues of competence, where an AI might provide advice on a topic for which it has no genuine expertise or training, unlike a licensed therapist who must practice within their scope. Similarly, the nature of data privacy and confidentiality is fundamentally different with an AI. Conversations with a chatbot may be recorded and used for model training, a practice that is in direct conflict with the strict confidentiality standards of human-centered therapy.

The study suggests that these ethical violations are not necessarily flaws to be fixed with simple tweaks but may be inherent to the current architecture of large language models. These systems are designed to predict the next most probable word in a sequence, creating coherent and contextually relevant text. They do not possess a true understanding of psychological principles, ethical reasoning, or the potential real-world impact of their words. Their programming prioritizes a helpful and plausible response, which in a therapeutic setting can lead to behaviors that are ethically inappropriate.

The researchers acknowledge certain limitations to their work. The study relied on simulated interactions, which may not fully capture the complexity and unpredictability of conversations with real individuals seeking help. Additionally, the field of artificial intelligence is evolving rapidly, and newer versions of these models may behave differently than the ones tested. The specific prompts used by the research team also shape the AI’s responses, and different user inputs could yield different results.

For future research, the team calls for the development of new standards specifically designed for AI-based mental health tools. They suggest that the current ethical and legal frameworks for human therapists are not sufficient for governing these technologies. New guidelines would need to be created to address the unique challenges posed by AI, from data privacy and algorithmic bias to the management of user dependency and crisis situations.

In their paper, the researchers state, “we call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.” The study ultimately contributes to a growing body of evidence suggesting that while AI may have a future role in mental health, its current application requires a cautious and well-regulated approach to ensure user safety and well-being.

The study, “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework,” was authored by Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, and Harini Suresh.

A religious upbringing in childhood is linked to poorer mental and cognitive health in later life

A new large-scale study of European adults suggests that, on average, being religiously educated as a child is associated with slightly poorer self-rated health after the age of 50. The research, published in the journal Social Science & Medicine, also indicates that this association is not uniform, varying significantly across different aspects of health and among different segments of the population.

Past research has produced a complex and sometimes contradictory picture regarding the connections between religiousness and health. Some studies indicate that religious involvement can offer health benefits, such as reduced suicide risk and fewer unhealthy behaviors. Other research points to negative associations, linking religious attendance with increased depression in some populations.

Most of this work has focused on religious practices in adulthood, leaving the long-term health associations of childhood religious experiences less understood. To address this gap, researchers set out to investigate how a religious upbringing might be linked to health outcomes decades later, taking into account the diverse life experiences that can shape a person’s well-being.

The researchers proposed several potential pathways through which a religious upbringing could influence long-term health. These include psychosocial mechanisms, where religion might foster positive emotions and coping strategies but could also lead to internal conflict or distress. Social and economic mechanisms might involve access to supportive communities and resources, while also potentially exposing individuals to group tensions.

Finally, behavioral mechanisms suggest religion may encourage healthier lifestyles, such as avoiding smoking or excessive drinking, which could have lasting positive effects on physical health. Given these varied and sometimes opposing potential influences, the researchers hypothesized that the link between a religious upbringing and late-life health would not be simple or consistent for everyone.

To explore these questions, the study utilized data from the Survey of Health, Aging, and Retirement in Europe, a major cross-national project. The analysis included information from 10,346 adults aged 50 or older from ten European countries. Participants were asked a straightforward question about their childhood: “Were you religiously educated by your parents?” Their current health was assessed through self-ratings on a five-point scale from “poor” to “excellent.” The study also examined more specific health indicators, including physical health (chronic diseases and limitations in daily activities), mental health (symptoms of depression), and cognitive health (numeracy and orientation skills).

The researchers employed an advanced statistical method known as a causal forest approach. This machine learning technique is particularly well-suited for identifying complex and non-linear patterns in large datasets. Unlike traditional methods that often look for straightforward, linear relationships, the causal forest model can uncover how the association between a religious upbringing and health might change based on a wide array of other factors. The analysis accounted for 19 different variables, including early-life circumstances, late-life demographics like age and marital status, and current religious involvement.
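A minimal causal-forest sketch, assuming the econml Python package (the article does not specify the authors' software) and hypothetical column names, might look like the following; it shows how an average effect and person-level heterogeneity in that effect can be estimated from the same model.

```python
# A minimal causal-forest sketch using the econml package (an assumption; the article
# does not name the authors' software). Column names and the data file are hypothetical;
# all moderators are assumed to be numerically coded.
import pandas as pd
from econml.dml import CausalForestDML

share = pd.read_csv("share_extract.csv")       # hypothetical extract of the survey data

Y = share["self_rated_health"]                 # 1 = poor ... 5 = excellent
T = share["religious_upbringing"]              # 1 = religiously educated as a child, 0 = not
X = share[["age", "female", "education_years", "partnered", "prays", "attends_services"]]

forest = CausalForestDML(discrete_treatment=True, random_state=0)
forest.fit(Y, T, X=X)

print(forest.ate(X))          # average estimated effect across the sample
print(forest.effect(X)[:10])  # person-level estimates, revealing heterogeneity
```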

The overall results indicated that, on average, having a religious upbringing was associated with poorer self-rated health in later life. The average effect was modest, representing a -0.10 point difference on the five-point health scale. The analysis showed that for a majority of individuals in the sample, the association was negative.

However, the model also identified a smaller portion of individuals for whom the association was positive, suggesting that for some, a religious upbringing was linked to better health outcomes. This variation highlights that an average finding does not tell the whole story.

When the researchers examined different domains of health, a more nuanced picture emerged. A religious upbringing was associated with poorer mental health, specifically a higher level of depressive symptoms. It was also linked to poorer cognitive health, as measured by lower numeracy, or mathematical ability.

In contrast, the same childhood experience was associated with better physical health, indicated by fewer limitations in activities of daily living, which include basic self-care tasks like bathing and dressing. This suggests that a religious childhood may have different, and even opposing, associations with the physical, mental, and cognitive aspects of a person’s well-being in later life.

The study provided further evidence that the link between a religious upbringing and poorer self-rated health was not the same for all people. The negative association appeared to be stronger for certain subgroups. For example, individuals who grew up with adverse family circumstances, such as a parent with mental health problems or a parent who drank heavily, showed a stronger negative link between their religious education and later health.

Late-life demographic factors also seemed to modify the association. The negative link was more pronounced among older individuals (aged 65 and above), females, those who were not married or partnered, and those with lower levels of education. These findings suggest that disadvantages or vulnerabilities experienced later in life may interact with early experiences to shape health outcomes.

The analysis also considered how adult religious practices related to the findings. The negative association between a religious upbringing and later health was stronger for individuals who reported praying in adulthood. It was also stronger for those who reported that they never attended a religious organization as an adult. This combination suggests a complex interplay between past experiences and present behaviors.

The study does have some limitations. The data on religious upbringing and other childhood circumstances were based on participants’ retrospective self-reports, which can be subject to memory biases. The study’s design is cross-sectional, meaning it captures a snapshot in time and cannot establish a direct causal link between a religious upbringing and health outcomes. It is possible that other unmeasured factors, such as parental socioeconomic status, could play a role in this relationship. The measure of religious upbringing was also broad and did not capture the intensity, type, or strictness of the education received.

Future research could build on these findings by using longitudinal data to track individuals over time, providing a clearer view of how early experiences unfold into later life health. More detailed measures of religious education could also help explain why the experience appears beneficial for some health domains but detrimental for others. Researchers also suggest that exploring the mechanisms, such as coping strategies or social support, would provide a more complete understanding.

The study, “Heterogeneous associations between early-life religious upbringing and late-life health: Evidence from a machine learning approach,” was authored by Xu Zong, Xiangjiao Meng, Karri Silventoinen, Matti Nelimarkka, and Pekka Martikainen.

Men with delayed ejaculation report lower sexual satisfaction and more depressive symptoms

A study of men seeking help for delayed or premature ejaculation in Italy found that those suffering from delayed ejaculation tended to have more severe depressive and anxiety symptoms, and lower sexual desire than men suffering from premature ejaculation. They also tended to be older. The paper was published in IJIR: Your Sexual Medicine Journal.

Premature ejaculation is a sexual condition in which a man reaches orgasm and ejaculates sooner than desired, often within a minute of penetration or with minimal stimulation. It can lead to frustration, anxiety, and reduced sexual satisfaction for both partners. The causes may include psychological factors such as stress, depression, or relationship problems, as well as biological ones like hormonal imbalances or nerve sensitivity.

In contrast, delayed ejaculation is the persistent difficulty or inability to reach orgasm and ejaculate despite adequate sexual stimulation. This condition can also cause emotional distress, relationship strain, and decreased confidence. Delayed ejaculation may result from psychological issues, nerve damage, certain medications, or chronic health conditions such as diabetes. Both conditions are forms of ejaculatory disorders and sexual dysfunction. They can occur occasionally or become chronic depending on underlying causes.

Study author Fausto Negri and his colleagues note that many men experiencing ejaculatory disorders have difficulty expressing their negative feelings and that sexuality and emotional expression are closely connected. With this in mind, they conducted a study aiming to define specific clinical and psychological profiles of individuals suffering from premature and delayed ejaculation and to investigate the association between delayed ejaculation and other domains of sexual functioning.

Study participants were 555 men seeking medical help for ejaculation disorders. Seventy-six of them sought help for delayed ejaculation, while the rest sought help for premature ejaculation. Participants’ average age was approximately 45 years. A stable partner was reported by 53% of participants with delayed ejaculation and by 64% of those with premature ejaculation.

Participants completed assessments of erectile function (the International Index of Erectile Function) and depression (the Beck Depression Inventory). Researchers also measured levels of various hormones and collected other medical and demographic information about the participants.

Results showed that participants suffering from delayed ejaculation were older than participants suffering from premature ejaculation (average age of 47 years vs 44 years). They also more often suffered from other disorders. Participants with delayed ejaculation also tended to have more severe symptoms of depression and anxiety. Their sexual desire tended to be lower, as were their orgasmic function scores, compared to participants with premature ejaculation. The two groups did not differ in relationship status, waist circumference, body mass index, or levels of examined hormones.

“Roughly one of ten men presenting for self-reported ejaculatory dysfunction as their main complaint in the real-life setting suffers from DE [delayed ejaculation]. Usually, they are older than men with primary PE [premature ejaculation] and overall less healthy. Likewise, they depict an overall poorer quality of sexual life, with lower SD [sexual desire] and OF [orgasmic function]. Moreover, men with DE have higher chances to report clinically significant depression and anxiety, which significantly impact their overall sexual satisfaction,” the study authors concluded.

The study sheds light on the differences in psychological characteristics between people with different forms of ejaculation disorders. However, it should be noted that the design of the study does not allow any causal inferences to be derived from the results. Additionally, all participants came from the same clinical center. Results on men from other geographical areas might differ.

The paper, “Men with delayed ejaculation report lower sexual satisfaction and more depressive symptoms than those with premature ejaculation: findings from a cross-sectional study,” was authored by Fausto Negri, Christian Corsini, Edoardo Pozzi, Massimiliano Raffo, Alessandro Bertini, Gabriele Birolini, Alessia d’Arma, Luca Boeri, Francesco Montorsi, Michael L. Eisenberg, and Andrea Salonia.

Psychiatrists document extremely rare case of menstrual psychosis

Researchers in Japan have documented the case of a teenager whose psychotic symptoms consistently appeared before her menstrual period and resolved immediately after. A case report published in Psychiatry and Clinical Neurosciences Reports indicates that a medication typically used to treat seizures and bipolar disorder was effective after standard antipsychotic and antidepressant drugs failed to provide relief. This account offers a detailed look at a rare and often misunderstood condition.

The condition is known as menstrual psychosis, which is characterized by the sudden onset of psychotic symptoms in an individual who is otherwise mentally well. These episodes are typically brief and occur in a cyclical pattern that aligns with the menstrual cycle. The presence of symptoms like delusions or hallucinations distinguishes menstrual psychosis from more common conditions such as premenstrual syndrome or premenstrual dysphoric disorder, which primarily involve mood-related changes. Menstrual psychosis is considered exceptionally rare, with fewer than 100 cases identified in the medical literature.

The new report, authored by Atsuo Morisaki and colleagues at the Tokyo Metropolitan Children’s Medical Center, details the experience of a 17-year-old Japanese girl who sought medical help after about two years of recurring psychological distress. Her initial symptoms included intense anxiety, a feeling of being watched, and auditory hallucinations where she heard a classmate’s voice. She also developed the belief that conversations around her were about herself. She had no prior psychiatric history or family history of mental illness.

Initially, she was diagnosed with schizophrenia and prescribed antipsychotic medication, which did not appear to alleviate her symptoms. Upon being transferred to a new medical center, her treatment was changed, but her condition persisted. While hospitalized, her medical team observed a distinct pattern. In the days leading up to her first menstrual period at the hospital, she experienced a depressive mood and restlessness. This escalated to include delusional thoughts and the feeling that “voices and sounds were entering my mind.” These symptoms disappeared completely four days later, once her period ended.

This cycle repeated itself the following month. About twelve days before her second menstruation, she again became restless. Nine days before, she reported the sensation that her thoughts were “leaking out” during phone calls. She also experienced auditory hallucinations and believed her thoughts were being broadcast to others. Her antipsychotic dosage was increased, but the symptoms continued until her menstruation ended, at which point they once again resolved completely.

A similar pattern emerged before her third period during hospitalization. Fourteen days prior, she developed a fearful, delusional mood. She reported that “gazes and voices are entering my head” and her diary entries showed signs of disorganized thinking. An increase in her medication dosage seemed to have no effect. As her period began, the symptoms started to fade, and they were gone by the time it was over. This consistent, cyclical nature of her psychosis, which did not respond to conventional treatments, led her doctors to consider an alternative diagnosis and treatment plan.

Observing this clear link between her symptoms and her menstrual cycle, the medical team initiated treatment with carbamazepine. This medication is an anticonvulsant commonly used to manage seizures and is also prescribed as a mood stabilizer for bipolar disorder. The dosage was started low and gradually increased. Following the administration of carbamazepine, her psychotic symptoms resolved entirely. She was eventually able to discontinue the antipsychotic and antidepressant medications. During follow-up appointments as an outpatient, her symptoms had not returned.

The exact biological mechanisms behind menstrual psychosis are not well understood. Some scientific theories suggest a link to the sharp drop in estrogen that occurs during the late phase of the menstrual cycle. Estrogen influences several brain chemicals, including dopamine, and a significant reduction in estrogen might lead to a state where the brain has too much dopamine activity, which has been associated with psychosis. However, since psychotic episodes can occur at various points in the menstrual cycle, fluctuating estrogen levels alone do not seem to fully explain the condition.

The choice of carbamazepine was partly guided by the patient’s age and the potential long-term side effects of other mood stabilizers. The authors of the report note that carbamazepine may work by modulating the activity of various channels and chemical messengers in the brain, helping to stabilize neuronal excitability. While there are no previous reports of carbamazepine being used specifically for menstrual psychosis, it has shown some effectiveness in other cyclical psychiatric conditions, suggesting it may influence the underlying mechanisms that produce symptoms tied to biological cycles.

It is important to understand the nature of a case report. Findings from a single patient cannot be generalized to a larger population. This report does not establish that carbamazepine is a definitive treatment for all individuals with menstrual psychosis. The positive outcome observed in this one person could be unique to her specific biology and circumstances.

However, case reports like this one serve a significant function in medical science, especially for uncommon conditions. They can highlight patterns that might otherwise be missed and introduce potential new avenues for treatment that warrant further investigation. By documenting this experience, the authors provide information that may help other clinicians recognize this rare disorder and consider a wider range of therapeutic options. This account provides a foundation for future, more systematic research into the causes of menstrual psychosis and the potential effectiveness of medications like carbamazepine.

The report, “Menstrual psychosis with a marked response to carbamazepine,” was authored by Atsuo Morisaki, Ken Ebishima, Akira Uezono, and Takashi Nagasawa.

Short exercise intervention helps teens with ADHD manage stress

A new study published in the Journal of Affective Disorders provides evidence that a brief but structured physical exercise program can help reduce stress levels in adolescents diagnosed with attention-deficit/hyperactivity disorder. The researchers found that after just three weeks of moderate to vigorous physical activity, participants reported lower levels of stress and showed a measurable increase in salivary cortisol, a hormone linked to the body’s stress response.

Adolescence is widely recognized as a time of dramatic psychological and biological development. For teens with ADHD, this period often comes with heightened emotional challenges. In addition to the typical symptoms of inattention and hyperactivity, many adolescents with the condition also struggle with internal feelings such as anxiety and depression. These emotional difficulties can interfere with daily functioning at school and at home, placing them at greater risk for long-term mental health problems.

Although stimulant medications are commonly used to manage symptoms, they often cause side effects such as sleep problems and mood shifts. Due to these complications, many families and young people stop using medication or seek alternative approaches. One such approach gaining traction is physical exercise. Prior research suggests that structured activity may benefit brain function and emotional regulation. However, most studies have focused on children rather than adolescents, and few have examined whether exercise influences cortisol, a stress hormone thought to be dysregulated in young people with ADHD.

Cortisol plays an important role in how the body manages stress. Low levels of cortisol in the morning have been found in children and adolescents with ADHD, and this pattern has been associated with fatigue, anxiety, and greater symptom severity. The researchers behind the new study wanted to know whether a short physical exercise intervention could influence both subjective stress levels and objective stress markers like cortisol in teens with ADHD.

“Adolescents with ADHD face stress-related challenges and appear to display atypical cortisol patterns, yet most exercise studies focus on younger children and rarely include biological stress markers,” explained study author Cindy Sit, a professor of sports science and physical education at The Chinese University of Hong Kong.

“We wanted to test a practical, low-risk intervention that schools and families could feasibly implement and to examine both perceived stress and a physiological marker (salivary cortisol) within a randomized controlled trial design. In short, we aimed to examine whether a brief, feasible program could help regulate stress in this under-researched group through non-pharmacological methods.”

The researchers recruited 82 adolescents, aged 12 to 17, who had been diagnosed with ADHD. Some of the participants also had a diagnosis of autism spectrum disorder, which often co-occurs with ADHD. The teens were randomly assigned to one of two groups. One group participated in a structured physical exercise program lasting three weeks. The other group served as a control and continued with their normal routines.

The exercise group attended two 90-minute sessions each week, totaling 540 minutes over the course of the program. These sessions included a variety of activities designed not only to improve physical fitness but also to engage cognitive functions such as memory, reaction time, and problem-solving. Exercises included circuit training as well as games that required strategic thinking and teamwork. Participants were guided to maintain moderate to vigorous intensity throughout much of the sessions, and their heart rates were monitored to ensure appropriate effort.

To measure outcomes, the researchers used both self-report questionnaires and biological samples. Stress, depression, and anxiety levels were assessed through a validated scale. Cortisol was measured using saliva samples collected in the afternoon before and after the intervention, as well as three months later.

The findings showed that immediately following the exercise program, participants in the exercise group reported lower levels of stress compared to their baseline scores. At the same time, their cortisol levels increased.

The increase in cortisol following exercise was interpreted not as a sign of increased stress but as a reflection of more typical hormonal activity. The researchers noted that this pattern aligns with the idea of exercise as a “positive stressor” that helps train the body to respond more effectively to real-life challenges. Importantly, the teens felt less stressed, even as their cortisol levels rose.

“The combination of lower perceived stress alongside an immediate rise in cortisol was striking,” Sit told PsyPost. “It supports the idea that exercise can feel stress-relieving while still producing a normal physiological stress response that may help calibrate the HPA axis. We also noted a baseline positive association between anxiety and cortisol in the control group only, which warrants further investigation.”

However, by the three-month follow-up, the improvements in self-reported stress had faded, and cortisol levels had returned to their initial levels. There were no significant changes in self-reported depression or anxiety in either group at any point.

“A short, three-week exercise program (90-minute sessions twice a week at moderate to vigorous intensity) reduced perceived stress in adolescents with ADHD immediately after the program,” Sit said. “Cortisol levels increased right after the intervention, consistent with a healthy, short-term activation of the stress system during exertion (often called ‘good stress’). The positive effects on perceived stress did not last for three months without continued physical exercise, and we did not observe short-term changes in depression or anxiety. This suggests that ongoing participation is necessary to sustain these benefits.”

Although the results suggest benefits from the short-term exercise program, there are some limitations to consider. Most of the participants were male, and this gender imbalance could affect how the findings apply to a broader group of adolescents. The study also relied on self-report questionnaires to assess stress, anxiety, and depression, which can be affected by personal bias. Additionally, there was no “active” control group, meaning the control participants were not given an alternate activity that involved social interaction or structure, which might have helped isolate the effects of the exercise itself.

Future studies might benefit from longer intervention periods to examine whether extended participation can produce lasting changes. Collecting saliva samples multiple times during the day could also help map out how cortisol behaves in response to both daily routines and interventions. Incorporating interviews or observer-based assessments could provide a more complete understanding of emotional changes, especially in teens who have difficulty expressing their feelings through questionnaires.

“Our team is currently conducting a large randomized controlled trial testing physical‑activity interventions for people with intellectual disability, with co‑primary outcomes of mood and physical strength,” Sit explained. “The broader aim is to develop scalable, low‑cost programs that can be implemented in schools, day services, and community settings. Ultimately, we aim to increase access for underserved populations so that structured movement becomes a feasible part of everyday care and improves their quality of life.”

“We see exercise as a useful adjunct, not a replacement, for standard ADHD care,” she added. “In practice, that involves incorporating structured movement alongside evidence-based treatments (e.g., medication, psychoeducation, behavioural supports) and working with families, schools, and healthcare providers. Exercise is accessible and generally has low risk; it can assist with stress regulation, sleep, attention, and fitness. However, it should be individualized and monitored, especially for individuals with special needs like ADHD, to support rather than replace routine care.”

The study, “Efficacy of a short-term physical exercise intervention on stress biomarkers and mental health in adolescents with ADHD: A randomized controlled trial,” was authored by Sima Dastamooz, Stephen H.S. Wong, Yijian Yang, Kelly Arbour-Nicitopoulos, Rainbow T.H. Ho, Jason C.S. Yam, Clement C.Y. Tham, Liu Chang, and Cindy H.P. Sit.

Masculinity and sexual attraction appear to shape how people respond to infidelity

A new study in the Archives of Sexual Behavior suggests that how people react to sexual versus emotional infidelity is shaped by more than just biological sex. While heterosexual men were more distressed by sexual betrayal and women by emotional betrayal, the findings indicate that traits like masculinity, femininity, and sexual attraction also influence these responses in flexible ways.

For several decades, psychologists have observed that men and women tend to react differently to infidelity. Men are more likely to be disturbed by sexual infidelity, while women are more upset by emotional cheating. Evolutionary psychologists have suggested that this might reflect reproductive pressures. For men, the risk of raising another man’s child might have favored the development of stronger reactions to sexual betrayal. For women, the loss of a partner’s emotional commitment could mean fewer resources and support for offspring, making emotional infidelity more threatening.

But this difference is not universal. Studies have shown that it becomes much less pronounced among sexual minorities. Gay men and lesbian women often report similar levels of distress over emotional and sexual infidelity, rather than showing a clear difference based on biological sex. This has raised the question of whether the difference between men and women is really just about being male or female—or whether other psychological traits might be involved.

The researchers behind the current study wanted to examine this question in more detail. They were interested in whether traits often associated with masculinity or femininity might influence how people respond to infidelity. They also wanted to test whether sexual orientation, measured not just as a label but as a continuum of attraction to men and women, could account for some of the variation in jealousy responses.

“We have for many years found a robust sex difference in jealousy, but we have also been interested in any factors that could influence this pattern. Other researchers discovered that sexual orientation might influence that pattern. We were also influenced by David Schmitt’s ideas on sexual dials vs. switches — how masculinization/feminization might be much better described as dimensional than categorical, including sexual orientation and jealousy triggers,” said study author Leif Edward Ottesen Kennair, a professor at the Norwegian University of Science and Technology.

For their study, the researchers collected data from 4,465 adults in Norway, ranging in age from 16 to 80. The sample included people who identified as heterosexual, gay, lesbian, bisexual, and pansexual. Participants were recruited through social media advertisements and LGBTQ+ websites. Each person completed a survey about their responses to hypothetical infidelity scenarios, along with questions about their childhood behavior, personality traits, sexual attraction, and self-perceived masculinity or femininity.

To measure jealousy, the participants were asked to imagine different types of infidelity. In one example, they were asked whether it would be more upsetting if their partner had sex with someone else, or if their partner developed a deep emotional connection with another person. Their answers were used to calculate a jealousy score that reflected how much more distressing they found sexual versus emotional betrayal.
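
As a rough illustration of how forced-choice answers can be turned into such a score, here is a toy example; the coding scheme and number of dilemmas are assumptions rather than the study's exact procedure.

    # Toy scoring of forced-choice infidelity dilemmas (hypothetical coding).
    # Each answer is coded 1 if the sexual scenario was judged more upsetting,
    # 0 if the emotional scenario was; averaging gives a 0-1 score in which
    # higher values reflect relatively greater sexual jealousy.
    responses = [1, 0, 1, 1]                      # answers to four hypothetical dilemmas
    jealousy_score = sum(responses) / len(responses)
    print(jealousy_score)                         # 0.75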

The results supported some long-standing findings. Heterosexual men were much more likely than heterosexual women to be disturbed by sexual infidelity. In fact, nearly 59 percent of heterosexual men said sexual betrayal was more upsetting, compared to only 31 percent of heterosexual women. This pattern was consistent with past research.

But among sexual minorities, the sex difference mostly disappeared. Gay men and lesbian women responded in ways that were more alike, with both groups tending to be more upset by emotional infidelity. Bisexual men and women also reported similar responses. This suggests that sexual orientation plays a key role in how people experience jealousy.

The researchers then examined sexual attraction as a continuous variable. Rather than looking only at how people labeled themselves, they measured how strongly participants were attracted to men and to women. Among men, those who were exclusively attracted to women showed the highest levels of sexual jealousy. Men who had even a small degree of attraction to other men reported less distress about sexual infidelity.

The researchers also measured four different psychological traits related to masculinity and femininity. These included whether participants preferred system-oriented thinking or empathizing, whether they had gender-typical interests as children, whether they preferred male- or female-dominated occupations, and how masculine or feminine they saw themselves. These traits were used to create a broader measure of psychological gender.
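
One common way to build a composite of this kind is to standardize each indicator and average them, as in the hypothetical sketch below; the study's actual scoring procedure may have differed.

    # Hypothetical composite of four gender-typicality indicators (not the
    # authors' exact procedure). Each column is scored so that higher values
    # are more masculine-typical; the data here are simulated.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    data = pd.DataFrame(rng.normal(size=(100, 4)),
                        columns=["systemizing", "childhood_interests",
                                 "occupational_preference", "self_rated_masculinity"])

    z = (data - data.mean()) / data.std()                 # z-score each indicator
    data["psychological_masculinity"] = z.mean(axis=1)    # average into one composite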

In men, higher levels of psychological masculinity were linked to both a stronger attraction to women and a greater tendency to be disturbed by sexual infidelity. But the connection between masculinity and jealousy seemed to depend on whether the man was attracted to women. Masculinity influenced jealousy only when it was also linked to strong gynephilic attraction—that is, attraction to women.

Among women, masculinity was related to sexual orientation, but not to jealousy responses. This suggests that masculinity and femininity may play different roles in shaping sexual psychology for men and women.

Kennair told PsyPost that these findings suggest “that sexual orientation might be best measured dimensionally (as involving both gynephilia and androphilia), that sexual orientation influences sex differences (in this case, jealousy triggers), and that gendering and sex differences are not primarily categorical processes but dimensional processes that are largely influenced by biological sex, but absolutely not categorically determined in an either/or switch pattern. Rather, they function more like interconnected dimensional dials.”

A surprising finding came from a smaller group: bisexual men who were partnered with women. “In the current study, we found that bisexual men with a female partner were still more triggered by emotional than sexual infidelity,” Kennair explained. “Bisexual men should also be concerned about who the father of their partner’s children really is, from an evolutionary perspective, but it seems that only the highly gynephilic men are primarily triggered by sexual infidelity. This needs further investigation and theorizing.”

But the study, like all research, has some caveats. The participants were recruited online, which means the sample might not fully represent the broader population. In addition, the jealousy scenarios were hypothetical, and people’s real-life reactions might differ from what they imagine.

The study raises some new and unresolved questions. One puzzle is why sexual jealousy in men seems to drop off so steeply with even a small degree of androphilic attraction. From an evolutionary standpoint, any man who invested in raising a child would have faced reproductive costs if his partner had been unfaithful, regardless of his own sexual orientation. Yet the findings suggest that the mechanism for sexual jealousy may be tightly linked to sexual attraction to women, rather than simply being male or being partnered with a woman.

It also remains unclear why women’s jealousy responses are less influenced by sexual orientation or masculinity. The results suggest that emotional jealousy is a more stable pattern among women, while sexual jealousy in men appears more sensitive to individual differences in orientation and psychological traits.

“I think this is a first empirical establishment of the dials approach,” Kennair said. “I think it might be helpful to investigate this approach with other phenomena. Also, the research cannot address the developmental and biological processes underlying the psychological level we addressed in the paper. The causal pathways therefore need further investigation. And theorizing.”

He hopes that “maybe in the current polarized discussion of identity and sex/gender, people will find the dimensional and empirical approach of this paper a tool to communicate better than the categorical approaches let us do.”

The study, “Male Sex, Masculinization, Sexual Orientation, and Gynephilia Synergistically Predict Increased Sexual Jealousy,” was authored by Leif Edward Ottesen Kennair, Mons Bendixen, and David P. Schmitt.

Feeling moved by a film may prompt people to reflect and engage politically

Watching a powerful movie may do more than stir emotions. According to a study published in the journal Communication Research, emotionally moving films that explore political or moral issues may encourage viewers to think more deeply about those topics and even engage politically. The researchers found that German television theme nights combining fictional drama with related factual programs were associated with higher levels of information seeking, perceived knowledge, and consideration of political actions related to the issues portrayed.

There is a longstanding debate about whether entertainment harms or helps democracy. Some scholars worry that media such as movies and reality shows distract citizens from more serious political content. But recent research has begun to suggest that certain types of entertainment might actually contribute to political awareness and engagement.

“We were curious about effects of entertainment media on political interest and engagement. Can watching a movie and walking in the shoes of people affected by a political issue raise viewers’ awareness about the issue and motivate them to take action to address the issue?” explained study author Anne Bartsch, a professor at Leipzig University.

“From about a decade of experimental research, we know that moving and thought-provoking media experiences can stimulate empathy and prosocial behavior, including political engagement. In this study, we used television theme nights as an opportunity to replicate these findings ‘in the wild.’ Theme nights are a popular media format in Germany that combines entertainment and information programs about a political issue and attracts a large enough viewership to conduct representative survey research. This opportunity to study political effects of naturally occurring media use was quite unique.”

The researchers conducted three studies around two German television theme nights. The first theme night focused on the arms trade, while the second dealt with physician-assisted suicide. Each theme night included a full-length fictional film followed by an informational program. Across the three studies, more than 2,800 people took part through telephone and online surveys.

In the first study, researchers surveyed a nationally representative sample of 905 German adults by phone after the arms trade theme night. Participants were asked whether they watched the movie, the documentary, or both. They were also asked about their emotional reactions, whether they had thought deeply about the issue, and what actions they had taken afterward.

People who had seen the movie reported feeling more emotionally moved and were more likely to report having reflected on the issue. These viewers also reported greater interest in seeking more information, higher levels of both perceived and factual knowledge, and more willingness to engage in political actions related to arms trade, such as signing petitions or considering the issue when voting.

Statistical analysis indicated that the emotional experience of feeling moved led to deeper reflection, which then predicted greater knowledge and political engagement. However, there was no significant difference in how often viewers talked about the issue with others, compared to non-viewers. Surprisingly, emotional reactions did not appear to encourage discussion on social media, and may have slightly reduced it.
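
The pathway described above, in which feeling moved leads to reflection and reflection in turn predicts engagement, is the kind of indirect effect that can be approximated with two regressions. The sketch below uses hypothetical data and variable names; the study's actual modeling was more elaborate.

    # Simplified mediation sketch: moved -> reflection -> engagement
    # (simulated data; not the authors' analysis).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    n = 500
    moved = rng.normal(size=n)
    reflection = 0.5 * moved + rng.normal(size=n)
    engagement = 0.4 * reflection + rng.normal(size=n)
    survey = pd.DataFrame({"moved": moved, "reflection": reflection,
                           "engagement": engagement})

    path_a = smf.ols("reflection ~ moved", data=survey).fit()
    path_b = smf.ols("engagement ~ reflection + moved", data=survey).fit()

    # Product of the two paths approximates the indirect effect of feeling
    # moved on engagement that runs through reflection.
    indirect = path_a.params["moved"] * path_b.params["reflection"]
    print(round(indirect, 2))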

In the second study, the researchers repeated the survey online with a different sample of 877 participants following the same theme night. The results were largely consistent. Again, those who watched the movie felt more moved, thought more about the issue, and were more engaged. In this study, feeling moved was also linked to more frequent interpersonal discussion.

The third study examined the theme night about physician-assisted suicide. Over 1,000 people took part in the online survey. As with the earlier studies, viewers who watched the movie reported being emotionally affected and more reflective. These experiences were linked to higher interest in the topic, greater perceived knowledge, and a higher likelihood of discussing the issue or participating politically. Watching the movie also predicted stronger interest in the subsequent political talk show.

Across all three studies, the researchers found that emotional and reflective experiences were key pathways leading from entertainment to political engagement. People who felt moved by the movies were more likely to think about the issues they portrayed. These thoughts were, in turn, connected to learning more about the issue, talking with others, and taking or considering political action.

The findings suggest that serious entertainment can function as a catalyst, helping viewers process complex social issues and motivating them to become more engaged citizens.

“We found that moving and thought-provoking entertainment can have politically mobilizing effects, including issue interest, political participation, information seeking, learning, and discussing the issue with others,” Bartsch told PsyPost. “This is interesting because entertainment often gets a bad rap, as superficial, escapist pastime. Our findings suggest that it depends on the type of entertainment and the thoughts and feelings it provokes. Some forms of entertainment, it seems, can make a valuable complementary contribution to political discourse, in particular for audiences that rarely consume traditional news.”

Although the findings were consistent across different samples and topics, the authors note some limitations. Most importantly, the studies were correlational, meaning they cannot establish that the movies directly caused people to seek information or take political action. It is possible that people who are already interested in politics are more likely to watch such films and respond emotionally to them.

The researchers also caution that while theme nights seem to offer an effective combination of entertainment and information, these findings might not easily transfer to other types of media or digital platforms. Watching a movie on television with millions of others at the same time may create a shared cultural moment that is less common in today’s fragmented media landscape.

“Our findings cannot be generalized to all forms of entertainment, of course,” Bartsch noted. “Many entertainment formats are apolitical ‘feel-good’ content – which is needed for mood management as well. What is more concerning is that entertainment can also be instrumentalized to spread misinformation, hate and discrimination.”

Future studies could use experimental methods to better isolate cause and effect, and could also explore how similar effects might occur with streaming platforms or social media. Researchers might also investigate how hedonic, or lighter, forms of entertainment interact with political content, and how emotional reactions unfold over time after watching a movie.

“Our study underscores the value of ‘old school’ media formats like television theme nights that can attract large audiences and provide input for shared media experiences and discussions,” Bartsch said. “With the digital transformation of media, however, it is important to explore how entertainment changes in the digital age. For example, we are currently studying parasocial opinion leadership on social media and AI generated content.”

The study, “Eudaimonic Entertainment Experiences of TV Theme Nights and Their Relationships With Political Information Processing and Engagement,” was authored by Frank M. Schneider, Anne Bartsch, Larissa Leonhard, and Anea Meinert.

New study challenges a leading theory on how noise affects ADHD traits

A new study challenges a leading explanation for why auditory stimulation, such as pink noise, can improve cognitive performance in people with traits of attention deficit hyperactivity disorder. The research found that both random noise and a non-random pure tone had similar effects on a brain activity measure linked to neural noise, which contradicts key assumptions of the prominent moderate brain arousal model. These findings were published in the Journal of Attention Disorders.

For years, scientists have observed that listening to random auditory noise, like white or pink noise, can benefit cognitive functioning in individuals with ADHD or elevated traits of the condition. The moderate brain arousal model was proposed to explain this phenomenon. This model is built on two primary assumptions. First, it suggests that ADHD is associated with lower-than-optimal levels of internal neural noise.

Second, it proposes that external random noise boosts this internal neural noise through a mechanism called stochastic resonance, improving the brain’s ability to process signals. However, these foundational ideas had not been sufficiently tested, particularly because most studies lacked a direct measure of neural noise or a proper non-random sound condition to isolate the effects of stochastic resonance.

Joske Rijmen and her colleagues at Ghent University aimed to directly test these two core assumptions of the moderate brain arousal model. They designed an experiment to measure neural noise directly while participants listened to different types of sound. The researchers wanted to see if ADHD traits were indeed linked to lower neural noise at baseline. They also sought to determine if the effects of sound on brain activity were specific to random noise, as the theory of stochastic resonance would predict.

To conduct their investigation, the researchers recruited 69 neurotypical adults. Participants first completed the Adult ADHD Self-Report Scale, a questionnaire used to assess the number and frequency of symptoms associated with the condition. This allowed the scientists to examine ADHD as a spectrum of traits rather than a simple diagnostic category.

Each participant then underwent a resting-state electroencephalogram, a non-invasive procedure that records the brain’s electrical activity. While their brain activity was monitored, participants sat with their eyes closed for three distinct two-minute periods: one in silence, one while listening to continuous pink noise (a random signal), and one while listening to a continuous 100 Hz pure tone (a non-random signal).

The research team analyzed the electroencephalogram data by focusing on a specific feature known as the aperiodic slope of the power spectral density. This measure reflects background brain activity that is not part of rhythmic brain waves and is considered a direct index of neural noise. A steeper slope in this measurement corresponds to less neural noise, while a flatter slope indicates more neural noise. By examining how this slope changed across the different sound conditions and in relation to participants’ ADHD traits, the scientists could test the predictions of the moderate brain arousal model.
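
For readers unfamiliar with the measure, the sketch below shows the basic logic of estimating a 1/f slope from a power spectrum with standard Python tools. The sampling rate, fitting range, and simulated signal are assumptions for illustration; published pipelines typically rely on dedicated toolboxes such as specparam (FOOOF).

    # Simplified estimate of the aperiodic (1/f) slope of an EEG power spectrum.
    # The signal here is simulated noise; all parameters are illustrative only.
    import numpy as np
    from scipy.signal import welch

    fs = 500                                  # assumed sampling rate (Hz)
    eeg = np.random.randn(fs * 120)           # stand-in for a two-minute recording

    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)
    mask = (freqs >= 1) & (freqs <= 40)       # assumed fitting range

    # In log-log space, 1/f-like activity is roughly a straight line; the fitted
    # slope indexes neural noise (flatter slope = more noise, steeper = less).
    slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    print("aperiodic slope:", round(slope, 2))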

The study’s findings presented a direct challenge to the model’s first assumption. During the silent condition, the researchers found a relationship between ADHD traits and the aperiodic slope. Individuals who reported more traits of ADHD tended to have a flatter slope. This finding suggests that they had more background neural noise, not less. The result is the opposite of what the moderate brain arousal model predicted and aligns with other recent studies that have also found evidence for increased neural noise in older children and adolescents with ADHD.

The results also contradicted the model’s second assumption regarding the mechanism of stochastic resonance. When participants with elevated ADHD traits listened to pink noise, their aperiodic slope became steeper. This change signifies a reduction in their neural noise. This outcome is contrary to the model’s suggestion that random noise should increase neural noise in this group.

Most significantly, the researchers found that the non-random pure tone had a virtually identical effect on brain activity as the pink noise. Listening to the 100 Hz tone also led to a steeper aperiodic slope, or a decrease in neural noise, in participants with higher levels of ADHD traits. The fact that a non-random sound produced the same effect as a random sound strongly questions the idea that stochastic resonance, which requires a random signal, is the necessary mechanism behind the benefits of auditory stimulation. If stochastic resonance were the driving force, only the pink noise should have produced this effect.

The authors propose that an alternative explanation may be needed. Rather than relying on stochastic resonance, both types of sound might have a more general effect on brain arousal. This idea is more consistent with the state regulation deficit account of ADHD, which suggests that individuals with the condition have difficulty regulating their arousal levels to match situational demands.

According to this view, any form of additional stimulation, not just random noise, could help modulate arousal to a more optimal state. The researchers also noted the puzzling observation that stimulation appeared to decrease brain arousal in individuals with higher ADHD traits. They speculate this might relate to difficulties these individuals have in achieving a truly restful state, and the continuous sound may have helped them to calm or regulate their brain activity.

The study has some limitations that the authors acknowledge. The research was conducted with neurotypical adults who varied in their traits of ADHD, so the findings need to be replicated in a group of individuals with a formal clinical diagnosis. Another point is that the brain activity was measured during a resting state, not while participants were engaged in a cognitive task where the benefits of noise are typically observed.

Future research should explore whether these same brain activity patterns occur during tasks that require attention and focus. Investigating these effects in a clinical sample of people with diagnosed ADHD will be an important next step to confirm these conclusions.

The study, “Pink Noise and a Pure Tone Both Reduce 1/f Neural Noise in Adults With Elevated ADHD Traits: A Critical Appraisal of the Moderate Brain Arousal Model,” was authored by Joske Rijmen, Mehdi Senoussi, and Jan R. Wiersema.

Heatwaves and air pollution linked to heightened depression risks

An analysis of data from the China Health and Retirement Longitudinal Study combined with weather and air pollution information showed that exposure to heatwaves, air pollution, and lack of access to blue spaces are all associated with an increased risk of depression. The increase in depression risk was even higher in individuals simultaneously exposed to these factors. The paper was published in the Journal of Environmental Psychology.

Climate change refers to long-term alterations in global temperatures, weather patterns, and ecosystems. It is understood that currently observed climate changes are mainly driven by human activities such as burning fossil fuels, industrial emissions, and deforestation. These processes release large amounts of greenhouse gases like carbon dioxide and methane, trapping heat in the atmosphere and disrupting natural climate systems. As a result, the planet experiences more frequent heat waves, droughts, floods, and wildfires.

Air pollution, which often comes from the same sources that cause climate change, adds another layer of harm by degrading air quality and contributing to respiratory and cardiovascular diseases. Fine particulate matter and toxic pollutants can adversely affect brain health as well. Extreme weather events linked to climate change can create massive devastation, triggering physical and psychological trauma, post-traumatic stress disorder, and long-lasting psychological distress for those affected.

Chronic exposure to uncertainty about the environment fuels eco-anxiety, a growing concern especially among young people. Communities facing displacement or loss of livelihoods due to environmental degradation may suffer from grief and helplessness. The psychological burden is particularly heavy on farmers, children, and low-income populations with limited access to healthcare.

The study’s authors, Weiqi Wang and his colleagues, wanted to investigate the individual and joint impacts of heatwaves, air pollutants, and access to blue and green spaces on depressive symptoms in middle-aged and older Chinese populations.

They analyzed data from the China Health and Retirement Longitudinal Study (CHARLS), a national survey in China focused on population aging that collects data from individuals aged 45 and older. The study began with a baseline survey in 2011, followed by four additional waves conducted up to 2020, with a small number of new participants recruited at each follow-up.

The data analyzed in this study came from 12,316 participants across 124 cities in 28 of 31 provinces of China. The number of participants per city ranged between 51 and 211. Participants’ average age was approximately 58 years. About 53% were men, and 58% lived in rural areas.

The study combined several data sources: depressive symptoms from the CHARLS dataset (assessed using the Center for Epidemiologic Studies Depression Scale); air pollution exposure (ground-level concentrations of CO, SO2, PM2.5, and PM10 derived from the China High Air Pollutants (CHAP) dataset); heatwave exposure (based on maximum daily temperatures during the warm season, recorded at monitoring stations across China and provided by the United States Air Force Weather Agency); and exposure to green and blue spaces (based on the degree of vegetation cover and the proportion of open water bodies in a city).

Green spaces are areas of land covered with vegetation such as parks, gardens, forests, and grasslands that provide natural environments within urban or rural settings. Blue spaces are natural or artificial water environments like rivers, lakes, seas, and fountains.

Results indicated that exposure to heatwaves was associated with a 4-14% increase in the odds of depression. Exposure to air pollution was likewise associated with depression risk: for every 10 μg/m3 increase in ambient PM2.5 concentrations, the odds of depression increased by 25%. The corresponding increases per 10 μg/m3 were 13% for PM10, 1% for CO, and 55% for SO2.
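
As a quick arithmetic check on how such figures scale, the snippet below converts a 25% increase in odds per 10 μg/m3 into a per-unit coefficient and re-scales it to a larger increment. It assumes the percentages come from a multiplicative (logistic-type) model, which is how such results are conventionally reported.

    # Back-of-the-envelope scaling of a reported odds increase (illustrative only).
    import math

    or_per_10 = 1.25                          # 25% higher odds per 10 ug/m3 of PM2.5
    beta_per_unit = math.log(or_per_10) / 10  # implied log-odds change per 1 ug/m3
    print(round(beta_per_unit, 4))            # ~0.0223

    # Re-scaled to a 25 ug/m3 increase:
    print(round(math.exp(beta_per_unit * 25) - 1, 2))   # ~0.75 -> roughly 75% higher odds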

The risk of depression was also heightened in areas where access to blue spaces was lower. The study found a synergistic effect: individuals simultaneously exposed to both heatwaves and high air pollution, or to heatwaves combined with a lack of green and blue spaces, had a significantly higher increase in depression risk than would be expected from adding the individual risks together.

“The findings indicate that heatwaves, air pollution, and lack of blue spaces each independently have a detrimental impact on depressive symptoms. Furthermore, the interactive effects of air contaminants, insufficient blue and green spaces, and heatwaves exposure significantly affect depressive symptoms, both on multiplicative and additive scales. Our results emphasize the necessity of developing public health strategies to curb air pollution, and preserve blue and green spaces, especially during periods of heatwaves,” study authors concluded.

The study contributes to the scientific understanding of the links between climate and mental health. However, it should be noted that the design of this study does not allow any definitive causal inferences to be derived from the results.

The paper, “Individual and combined effects of heatwaves, air pollution, green spaces, and blue spaces on depressive symptoms incidence,” was authored by Weiqi Wang, Yuqing Hao, Meiyu Peng, Jin Yan, Longzhu Xu, Haiyang Yu, Zhugen Yang, and Fanyu Meng.

A 35-day study of couples reveals the daily interpersonal benefits of sexual mindfulness

A new study finds that being present and non-judgmental during sex is associated with greater sexual well-being, not only for oneself but for one’s partner as well. The research, which tracked couples over 35 days, suggests that the benefits of sexual mindfulness can be observed on a daily basis within a relationship. The findings were published in the scientific journal Mindfulness.

Many individuals in established relationships report problems with their sexual health, such as low desire or dissatisfaction. Previous research has suggested that mindfulness, a state of present-moment awareness without judgment, could help address these issues. Researchers believe that cognitive distractions during sex, like concerns about performance or body image, can interfere with sexual well-being. Mindfulness may act as an antidote to these distractions by helping individuals redirect their attention to the physical sensations and emotional connection of the moment.

Led by Simone Y. Goldberg of the University of British Columbia, a team of researchers noted that most prior studies had significant limitations. Much of the research focused on general mindfulness as a personality trait rather than the specific state of being mindful during a sexual encounter. Additionally, studies often sampled individuals instead of couples, missing the interpersonal dynamics of sex. Finally, no research had used a daily diary design, which is needed to capture the natural fluctuations in a person’s ability to be mindful across different sexual experiences. Goldberg and her colleagues designed their study to address these gaps.

To conduct their research, the scientists recruited 297 couples who were living together. For 35 consecutive days, each partner independently completed a brief online survey every evening before going to sleep. This daily diary method allowed the researchers to gather information about the couples’ experiences in near real time, reducing reliance on long-term memory, which can be unreliable. The daily survey asked about each person’s level of sexual desire and any sexually related distress they felt that day.

On the days that participants reported having sex with their partner, they were asked additional questions. They completed a 5-item questionnaire to measure their level of sexual mindfulness during that specific encounter. This included rating their agreement with statements about their ability to stay in the present moment, notice physical sensations, and not judge their thoughts or feelings. They also answered questions to assess their level of sexual satisfaction with that day’s experience. This design allowed the researchers to analyze how a person’s mindfulness during sex on a given day related to their own and their partner’s sexual well-being on that same day.

The results showed a clear link between daily sexual mindfulness and sexual well-being for both partners. On days when individuals reported being more sexually mindful than their own personal average, they also reported higher levels of sexual satisfaction and sexual desire. At the same time, they reported lower levels of sexual distress. This demonstrates that fluctuations in a person’s ability to be mindful during sex are connected to their own sexual experience from one day to the next.

The study also revealed significant interpersonal benefits. On the days when one person was more sexually mindful, their partner also reported better outcomes. The partner experienced higher sexual satisfaction, increased sexual desire, and less sexual distress. This suggests that one person’s mental state during a sexual encounter has a direct and immediate association with their partner’s experience. The researchers propose that a mindful partner may be more attentive and responsive, which in turn enhances the other person’s enjoyment and sense of connection.

When the researchers analyzed the overall averages across the 35-day period, they found a slightly different pattern. Individuals who were, on average, more sexually mindful throughout the study reported greater sexual well-being for themselves. However, a person’s average level of sexual mindfulness was not linked to their partner’s average sexual well-being. This suggests that the benefit to a partner may be more of an in-the-moment phenomenon tied to specific sexual encounters, rather than a general effect of being with a typically mindful person.
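
The within-person versus between-person distinction drawn here can be made concrete with person-mean centering in a multilevel model, sketched below on hypothetical data. The study's actual dyadic models were more involved than this simplified example.

    # Sketch of separating daily (within-person) from average (between-person)
    # mindfulness effects in diary data (simulated data and variable names).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n_people, n_days = 50, 35
    diary = pd.DataFrame({
        "person_id": np.repeat(np.arange(n_people), n_days),
        "mindfulness": rng.normal(size=n_people * n_days),
    })
    diary["satisfaction"] = 0.3 * diary["mindfulness"] + rng.normal(size=len(diary))

    # Person-mean centering: each person's average level vs. daily deviation from it.
    diary["mindful_between"] = diary.groupby("person_id")["mindfulness"].transform("mean")
    diary["mindful_within"] = diary["mindfulness"] - diary["mindful_between"]

    model = smf.mixedlm("satisfaction ~ mindful_within + mindful_between",
                        data=diary, groups=diary["person_id"])
    print(model.fit().summary())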

The study also explored the role of gender in these associations. The connection between a person’s own daily sexual mindfulness and their own sexual well-being was stronger for women than for men. The researchers speculate that since women sometimes report higher levels of cognitive distraction during sex, the practice of mindfulness might offer a particularly powerful benefit for them. In contrast, the association between one person’s mindfulness and their partner’s sexual satisfaction was stronger when the mindful partner was a man.

These findings contribute to a growing body of evidence supporting the idea that being present and aware during sex is beneficial for couples. The study highlights that these benefits are not just personal but are shared within the relationship. By focusing on physical sensations and letting go of distracting or self-critical thoughts, individuals may not only improve their own sexual satisfaction but also contribute positively to their partner’s experience. This points to the potential of clinical interventions that teach mindfulness skills specifically within a sexual context.

The researchers acknowledged some limitations of their work. The participant sample was predominantly White and heterosexual, which means the results may not be generalizable to couples from other ethnic backgrounds or to same-sex couples. Future research could explore these dynamics in more diverse populations to see if the same patterns hold.

Another important point is that the study’s design is correlational, meaning it identifies a relationship between variables but cannot prove causation. It is not possible to say for certain that being more mindful causes better sexual well-being. The relationship could potentially work in the other direction, where a more positive sexual experience allows a person to be more mindful. Future studies using experimental methods, where mindfulness is actively manipulated, could help clarify the direction of this effect. Despite these limitations, the study provides a detailed picture of the day-to-day connections between mindfulness and sexual health in romantic partners.

The study, “Daily Sexual Mindfulness is Linked with Greater Sexual Well‑Being in Couples,” was authored by Simone Y. Goldberg, Marie‑Pier Vaillancourt‑Morel, Marta Kolbuszewska, Sophie Bergeron, and Samantha J. Dawson.

Spouses from less privileged backgrounds tend to share more synchronized heartbeats

When people feel emotionally close, their bodies may start to act in tandem. A new study published in Biological Psychology offers evidence that this alignment can reach the level of the heart. Researchers found that married couples from lower socioeconomic backgrounds were more likely to show synchronized heart rate patterns than couples from higher socioeconomic backgrounds. The findings suggest that social and economic conditions may shape not only how people relate to one another emotionally, but also how their bodies respond during social connection.

Previous research has shown that people from lower-income and lower-education backgrounds tend to emphasize relationships more than their more affluent peers. Studies suggest that individuals from these environments often rely more on their social networks for support, given that they face more external challenges such as financial strain and limited access to resources. This emphasis on social interdependence appears in how people think, feel, and behave. But until now, little was known about whether this tendency might also appear in physical processes, such as heart rate.

“Social connection is essential for human well-being and survival. And how we connect with others is shaped by the resources and opportunities we have. When socioeconomic resources are scarce, social relationships can become a refuge and a resource, taking on a particularly important role in people’s lives,” said Tabea Meier, a postdoctoral scholar affiliated with the University of Zurich, and Claudia Haase, an associate professor at Northwestern University, the corresponding authors of the study.

“Prior research has shown that people from less privileged backgrounds tend to be more interdependent and attuned to others, for example, in experiencing greater empathy and compassion. This stands in contrast to the individualism that tends to dominate more privileged social contexts.”

“However, much less is known about whether this attunement to others goes beyond experiences and behavior—whether it shows up in people’s bodies or physiology. Our study of married couples examined this question by probing how socioeconomic status relates to physiological linkage – the way spouses’ heart rates rise and fall together when they interact. In moments of deep connection, people’s hearts can beat in sync.”

For their study, the researchers recruited 48 married couples living in the Chicago area, resulting in a sample of 96 individuals. The couples varied widely in terms of income and education. Some earned less than $20,000 per year, while others made over $150,000. Their education levels also ranged from less than high school to advanced degrees. The sample included people from several racial and ethnic backgrounds.

Each couple participated in a three-hour lab session. After some initial procedures, they took part in two ten-minute conversations: one focused on a topic of conflict in their relationship, and another centered on a mutually enjoyable subject. During these conversations, the participants wore sensors that tracked their heart activity in real time. The researchers focused on a measure called “interbeat interval,” which is the amount of time between heartbeats. These second-by-second measurements allowed the team to assess how each spouse’s heart rate changed throughout the conversation.

The researchers analyzed how closely the spouses’ heart rate patterns mirrored each other. When both people’s heart rates sped up or slowed down together, this was called “in-phase linkage.” When one person’s heart rate increased while the other’s decreased, that was labeled “anti-phase linkage.” In both cases, stronger linkage meant a tighter correlation between spouses’ heart rate shifts. The team looked at how these two types of linkage were related to the couple’s socioeconomic background.
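As a rough illustration of that distinction (not the study's actual statistical model), a positive correlation between spouses' moment-to-moment heart rate changes corresponds to in-phase linkage, while a negative correlation corresponds to anti-phase linkage. The toy sketch below uses invented interbeat intervals.

```python
# Toy illustration (not the study's model): correlate spouses' second-to-second
# interbeat-interval changes; a positive correlation suggests in-phase linkage,
# a negative one suggests anti-phase linkage.
import numpy as np

def linkage_direction(ibi_spouse_a, ibi_spouse_b):
    # Work with moment-to-moment changes rather than raw levels.
    changes_a = np.diff(ibi_spouse_a)
    changes_b = np.diff(ibi_spouse_b)
    r = np.corrcoef(changes_a, changes_b)[0, 1]
    return r, "in-phase" if r > 0 else "anti-phase"

# Made-up interbeat intervals in milliseconds for a few seconds of conversation.
a = [820, 810, 800, 815, 830, 825]
b = [780, 772, 765, 770, 790, 785]
print(linkage_direction(a, b))  # hearts speeding up and slowing down together
```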

Across both conflict and pleasant conversations, couples from lower socioeconomic backgrounds showed higher in-phase linkage. In other words, their heart rates were more likely to change in the same direction. At the same time, they showed lower anti-phase linkage, meaning their heart rates were less likely to change in opposite directions.

This pattern suggests that less affluent couples tend to experience a stronger bodily connection during interpersonal interactions. Their heart rhythms moved more in unison, regardless of whether they were arguing or sharing positive memories. The difference was particularly strong for anti-phase linkage, which was much lower in lower-income and lower-education couples compared to their more privileged peers.

“When people connect, it’s not just their thoughts, feelings, and behaviors that can align – their bodies can, too,” Meier and Haase told PsyPost. “Our study found that couples’ socioeconomic backgrounds may shape how this connection unfolds at a physiological level. Specifically, the heart rates of spouses from less privileged backgrounds were more likely to change in the same direction (i.e., speeding up or slowing down together) and less likely to change in opposite directions (i.e., one speeding up while the other is slowing down) compared to those from more privileged backgrounds.”

These results held even after the researchers controlled for several other factors, including age and racial background. The effect was also more strongly tied to education than income, although both contributed to the findings.

Importantly, the level of synchrony did not appear to be linked to the emotional tone of the conversation or to how many times the couples used inclusive words like “we.” That suggests that the physiological linkage observed may be operating somewhat independently of what the spouses said or how they rated their emotions.

“These findings build on a long line of research showing that people from less privileged backgrounds tend to prioritize relationships and are more attuned to those around them,” the researchers said. “Our study suggests, to our knowledge for the first time, that this connection may not only appear in feelings or behaviors, but also at a physiological level in the form of linked heart rates between spouses. It is a reminder that our social worlds live within us.”

There are a few caveats to consider. The sample size, although consistent with similar lab-based studies, was relatively small. It also focused on heterosexual married couples with children in the United States, which limits how broadly the results can be applied.

The study also did not look at how these heart rate patterns affect the couples over time. It remains unclear whether higher in-phase linkage leads to better relationship satisfaction, improved health, or other benefits. Some previous research suggests that synchrony may be helpful in many cases, but not always. For example, when couples are arguing, syncing up physiologically might sometimes make things worse by escalating conflict. On the other hand, moving in opposite directions might help one partner stay calm while the other is distressed.

“It is important not to oversimplify these results,” Meier and Haase explained. “Linked heart rates do not necessarily mean ‘better’ or healthier relationships. Whether physiological linkage is beneficial or not may really depend on the context in which it occurs, for example, whether spouses are cracking up about an inside joke, are throwing harsh words at each other, or comforting each other in sadness. Future research can explore when and how different heart rate linkage patterns support or harm relationship satisfaction, well-being, and health.”

“Our study is a first step and there are many open questions that we would love the research community to pursue. While we worked hard to recruit a diverse sample of couples from all walks of life from the U.S. Chicagoland area, larger samples will be needed, ideally not just from the US. There are many other open questions. For instance, how does physiological linkage predict how satisfied spouses from less or more privileged backgrounds are with their relationship over time? And what are the consequences for mental and physical health? We look forward to more research in this area that connects the macro and the micro.”

“Socioeconomic status can shape our everyday lives in powerful ways, including how we connect with loved ones,” the researchers added. “Psychological research on couples has traditionally focused mostly on white, middle-class couples. Findings from our study, along with others, highlight the importance of inclusive approaches in the study of social connection. The couples in our study allowed us to gain a deeper understanding of how emotional dynamics and social connection may differ across socioeconomic contexts, and we are grateful that they shared their time and insights with us.”

The study, “Connected at heart? Socioeconomic status and physiological linkage during marital interactions,” was authored by Tabea Meier, Aaron M. Geller, Kuan-Hua Chen, and Claudia M. Haase.

Trigger warnings spark curiosity more than caution, new research indicates

Trigger warnings are meant to help people emotionally prepare for or avoid potentially upsetting material. But new evidence from a week-long study of young adults suggests they often do neither. Instead, most people who encounter these warnings choose to view the content anyway. The findings also indicate that even individuals with trauma histories or mental health concerns are no more likely to avoid warned content than others. The results provide further support for the growing idea that trigger warnings, while widespread, may not function as intended in everyday digital life.

Trigger warnings are now common in both online and offline environments, appearing ahead of everything from social media posts to college course material. They are typically used to signal content that could be distressing, especially for those with past trauma or mental health challenges. Advocates argue that these warnings give vulnerable people the opportunity to prepare for or avoid harmful content.

But a growing body of lab-based studies has cast doubt on the idea that trigger warnings work in the way people hope. While many assume that warnings prompt avoidance, experiments have shown that most people choose to view the content anyway, and that warnings rarely reduce emotional distress. Until now, however, nearly all of this evidence came from controlled settings. Researchers had not yet studied how people actually respond to trigger warnings in their everyday lives.

The new study, published in the Journal of Behavior Therapy and Experimental Psychiatry, aimed to fill that gap. The researchers set out to track when and how often people encounter trigger warnings on social media, whether they choose to view or avoid the associated content, and whether certain psychological traits—such as symptoms of posttraumatic stress or depression—are linked to different patterns of behavior.

“Over the past (almost decade) my research has been concerned with cutting through online debate about trigger warnings and examining them using an experimental framework. This work has found that in the lab, warnings about upcoming negative content do not reduce people’s emotional reactions to material, nor do they seem effective in deterring the majority of people from viewing negative content when given a neutral/non-distressing alternative,” said study author Victoria Bridgland, a lecturer at Flinders University.

“We were interested in seeing if these findings, particularly about avoidance, extend outside of lab environments. Participating in a lab study is inherently coercive, however participants have no obligation to watch or avoid negative content in daily life. However, aligning with lab findings, we found that the most common response to seeing trigger warnings online in daily life was to view the content, and the most common reason given was because of curiosity—which is also something we hear in lab.”

The study followed 261 young adults between the ages of 17 and 25 over a seven-day period. Participants reported their daily experiences with social media, including whether they saw any trigger warnings and what kind of content those warnings accompanied. They also recorded whether they chose to look at or avoid the content after seeing the warning.

To explore whether psychological traits influenced avoidance behavior, participants completed several standardized assessments at the beginning of the study. These included measures of trauma exposure, symptoms of posttraumatic stress disorder, depression, anxiety, and general well-being. The researchers also asked whether participants had a tendency to deliberately seek out reminders of traumatic experiences, a behavior sometimes referred to as self-triggering.

The researchers wanted to see whether people who had higher levels of psychological distress were more likely to avoid warned content, as trigger warning advocates often suggest. They also looked at how frequently participants encountered these warnings and what motivated their decision to view or avoid the content.

Nearly half of the participants reported seeing at least one trigger warning during the week. Among those who did, the average number of warnings seen was about four. The most common platforms for encountering these warnings were Instagram, TikTok, and Twitter, and the most frequent content types were violent or aggressive material, depictions of physical injury, and sexually explicit content.

When asked how they responded to the warnings, the overwhelming majority said they chose to look at the content. On a scale from “never looked” to “always looked,” most people leaned heavily toward viewing. In fact, only around 11 percent reported consistently avoiding warned material throughout the week, while more than a third said they always approached it. When asked why they looked, more than half cited curiosity—the desire to know what was being hidden—as their main motivation.

The results were not a surprise. “We have known for some time from lab experiments that trigger warnings don’t seem to increase rates of avoidance, and we also know that people are morbidly curious and often self-expose themselves to negative material (even when it serves no real benefit),” Bridgland told PsyPost.

The researchers found no evidence that people with higher psychological vulnerability were more likely to avoid the content. Participants with greater posttraumatic stress symptoms, for example, were just as likely to view the material as those with fewer symptoms. This pattern held across several mental health measures, including depression, anxiety, and a history of trauma exposure.

Interestingly, people who did see trigger warnings tended to score higher on mental health symptom scales and lower on general well-being. The authors suggest that this could be because such individuals spend more time in online spaces where trigger warnings are common, or because the warnings feel more personally relevant and memorable to them. But even within this group, the presence of a warning did not increase the likelihood of avoidance.

The content people chose to avoid, when they did avoid it, varied widely. Some said they were simply uninterested, while others avoided it because it involved specific types of content they preferred not to see, such as animal cruelty or depictions of death. A small number of participants reported avoiding material that felt emotionally overwhelming or clashed with their current mood. Still, these decisions were the exception rather than the rule.

“I’d like for people to be conscious consumers of negative material online and be wary of extremes,” Bridgland said. “For example, if you are someone who finds they often need to avoid or becomes overly distressed or triggered by online content or someone who is deliberately searching for and binge consuming negative content in high volumes which is leading to distress—this is likely a sign that there is some underlying issue that likely warrants therapeutic attention. In either of these cases, be aware that a trigger warning may not be serving a beneficial function.”

As with all research, there are some limitations. First, the study did not measure emotional reactions after viewing the content, so it remains unclear whether the warnings helped people feel more prepared or less distressed. Prior research, however, suggests that trigger warnings tend not to influence emotional responses much, if at all.

Another limitation is that people might behave differently depending on the specific context or type of content. For example, someone might avoid a warning about sexual assault but not one about medical procedures. The study also didn’t capture real-time responses, so there may be subtle moment-to-moment factors—such as mood or fatigue—that influence decisions to view or avoid warned content.

“I’d like to clarify that me and my research team aren’t advocating that we should ban trigger warnings, but we just want people to be aware of the lack of benefits they provide,” Bridgland explained. “This way people can take other precautions to safeguard their mental health online.”

“Since it seems hard to improve antecedent based strategies to help people cope with negative content (as various recent studies have tried to “improve” trigger warnings with no success), I’m exploring ways we can help people after they are exposed. This will also help in the case where shocking/traumatic content exposure happens without warning (which is a common experience online).”

The study, “‘I’m always curious’: Tracking young adults’ exposure and responses to social media trigger warnings in daily life,” was authored by Victoria M.E. Bridgland, Ella K. Moeck, and Melanie K.T. Takarangi.

Study finds stronger fitness in countries with greater gender equality

A new study published in the Journal of Sport and Health Science provides evidence that cardiorespiratory fitness tends to be higher in countries with greater gender equality and higher levels of human development. The findings suggest that social conditions and national policies may shape people’s access to physical activity and their ability to maintain physical health.

There is strong scientific agreement that being physically active helps prevent disease and supports long-term health. Regular movement improves cardiorespiratory fitness, which refers to the ability of the heart and lungs to supply oxygen to the muscles during activity. Higher levels of cardiorespiratory fitness are linked with a lower risk of death from all causes, including heart disease and cancer.

However, researchers have long suspected that fitness levels are not solely determined by individual choices. Factors such as where people live, their income, access to safe outdoor spaces, social support, and even national policies may influence how active they can be. Gender may also play a role. In many societies, women face more barriers to physical activity than men, including caregiving responsibilities, fewer sports opportunities, or concerns about safety.

Despite these observations, the relationship between fitness levels and broader societal factors has not been studied in depth. Previous research has focused mostly on children or used indirect measures of fitness. The current study aimed to close this gap by examining how cardiorespiratory fitness in adults relates to two specific indicators: the Human Development Index, which includes education, income, and life expectancy, and the Gender Inequality Index, which measures disparities between men and women in areas such as health, political power, and the labor market.

The researchers reviewed thousands of studies and selected 95 that included direct measurements of peak oxygen uptake, a key marker of cardiorespiratory fitness, in healthy adults. This measurement, often referred to as VO2peak, is collected during a maximal exercise test in which participants exert themselves on a treadmill or bicycle while their breathing is analyzed. These tests are considered the gold standard for measuring fitness.

The final dataset included over 119,000 adults, with roughly 58 percent men and 42 percent women. The participants came from a diverse group of countries including the United States, Brazil, Germany, China, and Japan. Each study was matched with the relevant Human Development Index and Gender Inequality Index scores for the country and year in which data were collected.

The researchers found that fitness tends to decrease with age and that, on average, women had lower VO2peak values than men. However, when comparing countries, they noticed a pattern: adults in countries with higher levels of human development and lower levels of gender inequality had higher fitness levels.

The relationship between development and fitness was especially pronounced among women. Women living in countries with high human development scores had higher VO2peak levels across all age groups. For men, this trend was mainly observed in those under 40 years old. This suggests that women may benefit more from living in supportive and equitable societies when it comes to maintaining physical fitness.

A similar pattern emerged when looking at gender inequality. In countries with less gender inequality, both men and women had higher cardiorespiratory fitness, but the effect was again stronger for women. The difference was most notable among women under 40. Young women living in countries with low gender inequality had fitness levels that were on average 6.5 mL/kg/min higher than those in countries with high gender inequality. This difference is large enough to matter for health, as even small increases in VO2peak are linked with reduced risks of chronic disease and early death.

These results suggest that policies and social structures that promote equality and development may indirectly support better health by enabling more people, especially women, to engage in regular and vigorous physical activity.

Although this study includes one of the largest datasets of directly measured VO2peak values ever compiled, it is not without its limitations. The researchers were only able to include studies that used standardized testing methods and reported data by age and sex. This meant that many large population studies that estimated fitness indirectly had to be excluded. While this choice improved the reliability of the results, it also limited the diversity of countries included.

Most of the data came from countries with medium to high development levels. There was a lack of data from countries with low development scores, which makes it difficult to understand the full range of global fitness patterns. Additionally, many of the studies did not provide information on participants’ race, ethnicity, or socioeconomic background. These gaps are important because they could affect how fitness relates to social inequality in different contexts.

The authors suggest that future research should aim to collect more data from underrepresented populations and countries. They also recommend investigating how specific social policies, such as workplace fitness programs or community sports initiatives, might improve cardiorespiratory fitness, especially for women and vulnerable groups.

The study, “Human development and gender inequality are associated with cardiorespiratory fitness: A global systematic review of VO2peak,” was authored by Nicolas J. Pillon, Joaquin Ortiz de Zevallos, Juleen R. Zierath, and Barbara E. Ainsworth.

Experts warn of an ‘intimate authenticity crisis’ as AI enters the dating scene

Many dating app companies are enthusiastic about incorporating generative AI into their products. Whitney Wolfe Herd, founder of dating app Bumble, wants gen-AI to “help create more healthy and equitable relationships”. In her vision of the near future, people will have AI dating concierges who could “date” other people’s dating concierges for them, to find out which pairings were most compatible.

Dating app Grindr is developing an AI wingman, which it hopes will be up and running by 2027. Match Group, owner of popular dating apps including Tinder, Hinge and OK Cupid, have also expressed keen interest in using gen-AI in their products, believing recent advances in AI technology “have the power to be transformational, making it more seamless and engaging for users to participate in dating apps”. One of the ways they think gen-AI can do this is by enhancing “the authenticity of human connections”.

Use of gen-AI in online dating is not just some futuristic possibility, though. It’s already here.

Want to enhance your photos or present yourself in a different style? There are plenty of online tools for that. Similarly, if you want AI to help “craft the perfect, attention-grabbing bio” for you, it can do that. AI can even help you with making conversation, by analysing your chat history and suggesting ways to reply.

Extra help

It isn’t just dating app companies who are enthusiastic about AI use in dating apps either. A recent survey carried out by Cosmopolitan magazine and Bumble of 5,000 gen-Zers and millennials found that 69% of respondents were excited about “the ways AI could make dating easier and more efficient”.

An even higher proportion (86%) “believe it could help solve pervasive dating fatigue”. A surprising 86% of men and 77% of the women surveyed would share their message history with AI to help guide their dating app conversations.

It’s not hard to see why AI is so appealing for dating app users and providers. Dating apps seem to be losing their novelty: many users are reportedly abandoning them due to so-called “dating app fatigue” – feeling bored and burnt out with dating apps.

Apps and users might be hopeful that gen-AI can make dating apps fun again, or if not fun, then at least that it will make them actually lead to dates. Some AI dating companions claim to get you ten times more dates and better dates at that. Given that men tend to get fewer matches on dating apps than women, it’s also not surprising that we’re seeing more enthusiasm from men than women about the possibilities AI could bring.

Talk of gen-AI in connection to online dating gives rise to many ethical concerns. We at the Ethical Dating Online Network, an international network of over 30 multi-disciplinary academics interested in how online dating could be more ethical, think that dating app companies need to convincingly answer these worries before rushing new products to market. Here are a few standout issues.

Pitfalls of AI dating

Technology companies correctly identify some contemporary social issues, such as loneliness, anxiety at social interactions, and concerns about dating culture, as hindering people’s dating lives.

But turning to more technology to solve these issues puts us at risk of losing the skills we need to make close relationships work. The more we can reach for gen-AI to guide our interactions, the less we might be tempted to practise on our own, or to take accountability for what we communicate. After all, an AI “wingman” is of little use when meeting in person.

Also, AI tools risk entrenching much of dating culture that people find stressful. Norms around “banter”, attractiveness or flirting can make the search for intimacy seem like a competitive battleground. The way AI works – learning from existing conversations – means that it will reproduce these less desirable aspects.

Instead of embracing those norms and ideals, and trying to equip everyone with the tools to seemingly meet impossibly high standards, dating app companies could do more to “de-escalate” dating culture: make it calmer, more ordinary and help people be vulnerable. For example, they could rethink how they charge for their products, encourage a culture of honesty, and look at alternatives to the “swiping” interfaces.

The possibility of misrepresentation is another concern. People have always massaged the truth when it comes to dating, and the internet has made this easier. But the more we are encouraged to use AI tools, and as they are embedded in dating apps, bad actors can more simply take advantage of the vulnerable.

An AI-generated photo, or conversation, can lead to exchanges of bank details, grooming and sexual exploitation.

Stopping short of fraud, however, is the looming intimate authenticity crisis. Online dating awash with AI-generated material risks becoming a murky experience. A sincere user might struggle to identify like-minded matches on apps where use of AI is common.

This interpretive burden is annoying for anyone, but it will exacerbate the existing frustrations women, more so than men, experience on dating apps as they navigate spaces full of timewasting, abuse, harassment and unwanted sexualisation.

Indeed, women might worry that AI will turbo-charge the ability of some men to prove a nuisance online. Bots, automation and conversation-generating tools can help some men lay claim to the attention of many women simultaneously.

AI tools may seem like harmless fun, or a useful timesaver. Some people may even wholeheartedly accept that AI-generated content is not “authentic” and love it anyway.

Without clear guardrails in place, however, and more effort by app companies to provide informed choices based on transparency about how their apps work, any potential benefits of AI will be obscured by the negative impact it has on intimacy online.

 

This article is republished from The Conversation under a Creative Commons license. Read the original article.

New research shows how tobacco may worsen brain-related outcomes in cannabis users

A new study suggests that people who use both cannabis and tobacco have elevated levels of a key enzyme in their brain compared to people who only use cannabis. This finding may offer a biological explanation for why combining these substances is often linked to more severe mental health symptoms and greater difficulty quitting. The research was published in the journal Drug and Alcohol Dependence Reports.

The high rate of co-use between cannabis and tobacco products has long been a concern for public health experts. Studies have shown that individuals who use both substances often report worse clinical outcomes, including higher rates of depression and anxiety, when compared to those who use cannabis alone. Researchers from McGill University sought to understand the potential brain mechanisms that could be driving this difference.

The scientific team focused on the body’s endocannabinoid system, a complex cell-signaling network that helps regulate mood, appetite, and memory. A key component of this system is a naturally produced compound called anandamide. Lower levels of anandamide have been associated with poorer mental health, including increased symptoms of anxiety and depression.

The amount of anandamide in the brain is controlled by an enzyme called fatty acid amide hydrolase, or FAAH. The job of FAAH is to break down anandamide. When FAAH levels are high, more anandamide is broken down, leading to lower overall levels of this beneficial compound. The researchers proposed that tobacco use might increase FAAH levels, providing a reason for the negative outcomes observed in people who co-use cannabis and tobacco.

To investigate this possibility, the researchers recruited 13 participants who were regular cannabis users. They then divided these individuals into two groups based on their tobacco use. The first group consisted of five people who used both cannabis and at least one cigarette daily. The second group was made up of eight people who used cannabis but had no current tobacco use.

The two groups were closely matched on several characteristics, including age, sex, and patterns of cannabis consumption, such as how long they had been using and how much they used per week. This matching was done to help ensure that any observed differences in the brain were more likely related to tobacco use rather than other factors.

Each participant underwent a sophisticated brain imaging procedure known as positron emission tomography. This technique allows scientists to visualize and measure the activity of specific molecules in the living human brain. To measure FAAH levels, the researchers injected participants with a special imaging agent called [11C]CURB, which is designed to bind directly to the FAAH enzyme.

By tracking this imaging agent, the scanner could produce a map showing the concentration of FAAH in different parts of the brain. The researchers focused their analysis on six brain regions known to be rich in both cannabinoid and nicotine receptors, including the prefrontal cortex, hippocampus, and cerebellum. They also accounted for each participant’s sex and a common genetic variation that is known to influence FAAH levels.

The results of the brain scans revealed a distinct difference between the two groups. The individuals who used both cannabis and tobacco had consistently higher levels of the FAAH enzyme across all brain regions examined. The difference was statistically significant in two areas: the substantia nigra, a region involved in reward and movement, and the cerebellum, an area critical for motor control and cognitive functions.

A similar, though not statistically significant, trend was observed in the sensorimotor striatum. The magnitude of the difference in the substantia nigra and cerebellum was considered large, indicating a substantial biological effect. These findings provide the first direct evidence in humans that co-using tobacco is associated with higher FAAH activity than using cannabis alone.

The researchers also explored whether the amount of substance use was related to FAAH levels. They found a positive correlation between the number of cigarettes smoked per day and the level of FAAH in the cerebellum. This means that individuals who smoked more cigarettes tended to have higher concentrations of the enzyme in that brain region. In contrast, the team found no significant association between the amount of cannabis used and FAAH levels.

The study’s authors suggest that these elevated FAAH levels could be the mechanism underlying the poorer clinical outcomes seen in people who co-use. Higher FAAH would lead to lower anandamide, which in turn is linked to mood and anxiety problems. This offers a neurobiological pathway that could explain why this group often experiences greater mental health challenges and more severe withdrawal symptoms.

The researchers acknowledged several limitations to their study. First and foremost, the sample size was very small, meaning the results should be considered preliminary. Larger studies are needed to confirm these findings and to determine if the same pattern holds true in other brain regions.

Additionally, the study did not include a group of people who only used tobacco or a control group of non-users. Without these comparison groups, it is difficult to determine if the increased FAAH is due to tobacco use itself or a specific interaction between tobacco and cannabis. The study also did not directly measure participants’ levels of depression or anxiety, so it could not draw a direct line between FAAH levels and clinical symptoms.

Future research is needed to address these points. Scientists recommend conducting larger studies that include groups of tobacco-only users and healthy controls. Such studies could clarify the independent and combined effects of cannabis and tobacco on the endocannabinoid system. Connecting these brain measurements with clinical assessments of mood and anxiety would also be an important next step.

Despite its preliminary nature, this research opens up a new avenue for understanding the risks of combining cannabis and tobacco. If confirmed, the findings could point toward new therapeutic strategies. Medications that inhibit the FAAH enzyme are already under development, and this work suggests they might one day be a useful tool for treating cannabis use disorder, especially for the large number of individuals who also use tobacco.

The study, “A preliminary investigation of tobacco co-use on endocannabinoid activity in people with cannabis use,” was authored by Rachel A. Rabin, Joseph Farrugia, Ranjini Garani, Romina Mizrahi, and Pablo Rusjan.

Contrary to common belief, research reveals some brain areas expand with age

I recently asked myself if I’ll still have a healthy brain as I get older. I hold a professorship at a neurology department. Nevertheless, it is difficult for me to judge if a particular brain, including my own, suffers from early neurodegeneration.

My new study, however, shows that part of your brain increases in size with age rather than degenerating.

The reason it’s so hard to measure neurodegeneration is that the relevant brain structures are extremely small and difficult to image.

Modern neuroimaging technology allows us to detect a brain tumour or to identify an epileptic lesion. These abnormalities are several millimetres in size and can be depicted by a magnetic resonance imaging (MRI) scanner, whose magnetic field is around 30,000-60,000 times stronger than the Earth’s natural magnetic field. The problem is that human thinking and perception operate at an even smaller scale.

Our thinking and perception happens in the neocortex. This outer part of our brain consists of six layers. When you feel touch to your body, layer four of your sensory cortex gets activated. This layer is the width of a grain of sand – much smaller than what MRI scanners at hospitals can usually depict. When you modulate your body sensation, for example by trying to read this text rather than feeling the pain from your bad back, layers five and six of your sensory cortex get activated – which are even smaller than layer four.

For my study, published in the journal Nature Neuroscience, I had access to a 7 Tesla MRI scanner which offers five times better image resolution than standard MRI scanners. It makes snapshots of the fine-scale brain networks during perception and thought visible.

Using a 7 Tesla scanner, my team and I investigated the sensory cortex in healthy younger adults (around 25 years old) and healthy older adults (around 65 years old) to better understand brain ageing. We found that only layers five and six, which modulate body perception, showed signs of age-related degeneration.

Layer four, needed to feel touch to your body, was enlarged in healthy older adults in my study. We also did a comparative study with mice. We found similar results in the older mice, in that they also had a more pronounced layer four than the younger mice. However, evidence from our study of mice, which included a third group of very old mice, showed this part of the brain may degenerate in more advanced old age.

Current theories assume our brain gets smaller as we grow older. But my team’s findings contradict these theories in part. It is the first evidence that some parts of the brain get bigger with age in normal older adults.

Older adults with a thicker layer four would be expected to be more sensitive to touch and pain, and (due to the reduced deep layers) have difficulties modulating such sensations.

To understand this effect better, we studied a middle-aged patient who was born without one arm. This patient had a smaller layer four. This suggests their brain received fewer impulses in comparison to a person with two arms and therefore developed less mass in layer four. Parts of the brain that are used more develop more synapses, hence more mass.

Rather than systematically degenerating, older adults’ brains seem to preserve what they use, at least in part. Brain ageing may be compared to a complex machine in which the frequently used parts stay well oiled, while those used less often grow rusty. From that perspective, brain ageing is individual, shaped by our lifestyle, including our sensory experiences, reading habits, and the cognitive challenges that we take on in everyday life.

In addition, it shows that the brains of healthy older adults preserve their capacity to stay in tune with their surroundings.

A lifetime of experiences

There is another interesting aspect about the results. The pattern of brain changes that we found in older adults – a stronger sensory processing region and a reduced modulatory region – shows similarities to neurodivergent disorders such as autism spectrum disorder or attention deficit hyperactivity disorder.

Neurodivergent disorders are characterised by enhanced sensory sensitivity and reduced filtering abilities, leading to problems in concentration and cognitive flexibility.

Do our findings indicate that ageing drives the brain in the direction of neurodivergent disorders? Older adults’ brains have been formed by a lifetime of experiences, whereas neurodivergent people are born with these brain patterns. So it would be hard to know what other effects building brain mass with age might have.

Yet, our findings give us some clues about why older adults sometimes have difficulties adapting to new sensory environments. In such situations, for example being confronted with a new technical device or visiting a new city, the reduced modulatory abilities of layers five and six may become particularly evident, and may increase the likelihood of disorientation or confusion. It may also explain reduced abilities for multitasking with age, such as using a mobile phone while walking. Sensory information needs to be modulated to avoid interference when you’re doing more than one thing.

Both the middle and the deep layers had more myelin, a fatty protective layer that is crucial for nerve function and communication, in the older mice as well as humans. This suggests that in people over the age of 65, there is a compensatory mechanism for the loss of modulatory function. This effect seemed to be breaking down in the very old mice though.

Our results provide evidence for the power of a person’s lifestyle in shaping the ageing brain.

 

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Parkinson’s-linked protein clumps destroy brain’s primary energy molecule

A new scientific report reveals that the protein aggregates associated with Parkinson’s disease are not inert clumps of cellular waste, but rather are chemically active structures that can systematically destroy the primary energy molecule used by brain cells. The research, published in the journal Advanced Science, demonstrates that these protein plaques can function like tiny, rogue enzymes, breaking down adenosine triphosphate and potentially starving neurons of the power they need to survive and function.

Scientists have long sought to understand how the accumulation of protein clumps, known as amyloids, leads to the devastating neuronal death seen in neurodegenerative conditions like Parkinson’s disease. These clumps are primarily made of a misfolded protein called alpha-synuclein.

The prevailing view has been that these aggregates cause harm by physically disrupting cellular processes, poking holes in membranes, or sequestering other important proteins. However, a team of researchers led by Pernilla Wittung-Stafshede at Rice University suspected there might be more to the story.

Previous work from the same group had shown that alpha-synuclein amyloids were not chemically inactive. They could facilitate certain chemical reactions on simple model compounds in a test tube. This led the researchers to question if these amyloids could also act on biologically significant molecules inside a cell. They focused on one of the most fundamental molecules in all of life: adenosine triphosphate, the universal energy currency that powers nearly every cellular activity.

Neurons have exceptionally high energy demands and cannot store fuel, making them particularly vulnerable to any disruption in their adenosine triphosphate supply. The team hypothesized that if amyloids could break down this vital molecule, it would represent a completely new way these pathological structures exert their toxicity.

To investigate this possibility, the scientists conducted a series of experiments. First, they needed to confirm that adenosine triphosphate even interacts with the alpha-synuclein amyloids. They used a chemical reaction they had previously studied, where the amyloids break down a substance called para-nitrophenyl orthophosphate.

When they added adenosine triphosphate to this mixture, the original reaction stopped. This competitive effect suggested that adenosine triphosphate was binding to the same active location on the amyloid surface, pushing the other substance out of the way.

Having established that adenosine triphosphate binds to the amyloids, the researchers then tested whether it was being broken down. They mixed prepared alpha-synuclein amyloids with a solution of adenosine triphosphate and used a diagnostic tool called the Malachite Green assay, which changes color in the presence of free phosphate, a byproduct of adenosine triphosphate breakdown.

They observed a steady increase in free phosphate over time, confirming that the amyloids were indeed cleaving the phosphate bonds in adenosine triphosphate. This activity was catalytic, meaning a single amyloid structure could process many molecules of adenosine triphosphate, one after another. The same experiment performed with individual, non-clumped alpha-synuclein proteins showed no such effect, indicating this energy-draining ability is a feature specific to the aggregated, amyloid form.
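As a rough sketch of how such catalytic activity can be quantified (with invented numbers, not the paper's data), the initial hydrolysis rate can be estimated as the slope of released phosphate over time and then expressed per unit of aggregated protein.

```python
# Minimal sketch (hypothetical numbers, not the study's data): estimate an
# initial ATP-hydrolysis rate as the slope of free phosphate versus time,
# then express it per unit of aggregated protein to reflect catalytic turnover.
import numpy as np

time_min = np.array([0, 30, 60, 90, 120])           # assay time points (minutes)
phosphate_uM = np.array([0.0, 1.1, 2.0, 3.2, 4.1])  # released phosphate (micromolar)

slope, intercept = np.polyfit(time_min, phosphate_uM, 1)  # linear initial-rate fit
amyloid_uM = 10.0  # assumed concentration of alpha-synuclein in amyloid form

rate_per_protein = slope / amyloid_uM  # phosphate released per minute per protein unit
print(f"initial rate ~ {slope:.3f} uM/min; ~ {rate_per_protein:.4f} per protein per min")
```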

To understand the mechanism behind this chemical activity, the team used a powerful imaging technique known as cryogenic electron microscopy. This method allowed them to visualize the structure of the alpha-synuclein amyloid at a near-atomic level of detail while it was bound to adenosine triphosphate.

The resulting images revealed a remarkable transformation. The amyloid itself was formed from two intertwined filaments, creating a cavity between them. When adenosine triphosphate entered this cavity, a normally flexible and disordered segment of the alpha-synuclein protein, consisting of amino acids 16 through 22, folded into an ordered beta-strand. This newly formed structure acted like a lid, closing over the cavity and trapping the adenosine triphosphate molecule inside.

This enclosed pocket was lined with several positively charged amino acids called lysines. Since the phosphate tail of adenosine triphosphate is strongly negatively charged, these lysines likely serve to attract and hold the energy molecule in a specific orientation. The structure suggested that this induced-fit mechanism, where the amyloid changes its shape upon binding its target, was a key part of its chemical function.

To prove that these specific lysine residues were responsible for the activity, the researchers genetically engineered several mutant versions of the alpha-synuclein protein. In each version, they replaced one or more of the key lysines in the cavity with a neutral amino acid, alanine. These mutant proteins were still able to form amyloid clumps that looked similar to the original ones.

When they tested the mutant amyloids for their ability to break down adenosine triphosphate, they found the activity was almost completely gone. This result confirmed that the positively charged lysines are essential for the amyloid’s ability to perform the chemical reaction.

In a final step, the scientists solved the high-resolution structure of one of the inactive mutant amyloids (K21A) while it was bound to adenosine triphosphate. The images showed that the energy molecule could still sit in the cavity, but its orientation was different from that seen in the active, non-mutant amyloid.

More importantly, in this inactive complex, the flexible protein segment did not fold over to form the enclosing lid. This finding provided strong evidence that both the proper positioning of adenosine triphosphate by the lysines and the structural rearrangement that closes the cavity are necessary for the breakdown to occur.

The study does have some limitations. The experiments were conducted in a controlled laboratory setting, not in living cells or organisms. The specific structural form of the alpha-synuclein amyloid studied, known as polymorph type 1A, has not yet been identified in the brains of Parkinson’s patients, although similar structures exist.

Also, the rate at which the amyloids broke down adenosine triphosphate was slow compared to natural enzymes. Future research will need to determine if this process occurs within the complex environment of a neuron and if other, more clinically relevant amyloid forms share this toxic capability.

Despite these caveats, the findings introduce a new and potentially significant mechanism of neurodegeneration. The researchers suggest that even a slow reaction could have a profound local effect. An amyloid plaque contains a very high density of these active sites. This could create a zone of severe energy depletion in the immediate vicinity of the plaque, disabling essential cellular machinery.

For instance, cells use chaperone proteins that require adenosine triphosphate to try to break up these very amyloids. If the chaperones approach an amyloid plaque and enter an energy-depleted zone, their rescue function could be disabled, effectively allowing the plaque to protect itself and persist. This work transforms the view of amyloids from passive obstacles into active metabolic drains, opening new avenues for understanding and potentially treating Parkinson’s disease.

The study, “ATP Hydrolysis by α-Synuclein Amyloids is Mediated by Enclosing β-Strand,” was authored by Lukas Frey, Fiamma Ayelen Buratti, Istvan Horvath, Shraddha Parate, Ranjeet Kumar, Roland Riek, and Pernilla Wittung-Stafshede.

Genetic predisposition for inflammation linked to a distinct metabolic subtype of depression

A new study suggests that a person’s genetic predisposition for chronic inflammation helps define a specific subtype of depression linked to metabolic issues. The research also found this genetic liability is connected to antidepressant treatment outcomes in a complex, nonlinear pattern. The findings were published in the journal Genomic Psychiatry.

Major depressive disorder is a condition with wide-ranging symptoms and variable responses to treatment. Many patients do not find relief from initial therapies, a reality that has pushed scientists to search for biological markers that could help explain this diversity and guide more personalized medical care. One area of growing interest is the connection between depression and the body’s immune system, specifically chronic low-grade inflammation. A key blood marker for inflammation is C-reactive protein, which is often found at elevated levels in people with depression.

However, measuring C-reactive protein directly from blood samples can be problematic for research because levels can fluctuate based on diet, infection, or stress. An international team of researchers, led by Alessandro Serretti of Kore University of Enna, Italy, sought a more stable way to investigate the link between inflammation and depression. They turned to genetics, using a tool known as a polygenic score. This score summarizes a person’s inherited, lifelong tendency to have higher or lower levels of C-reactive protein. While previous studies have connected this genetic score to specific depressive symptoms or to treatment outcomes separately, this new research aimed to examine both within the same large group of patients to build a more complete picture.

The investigation involved 1,059 individuals of Caucasian descent who were part of the European Group for the Study of Resistant Depression. All participants had a diagnosis of major depressive disorder and had been receiving antidepressant medication for at least four weeks. Researchers collected detailed clinical information, including the severity of depressive symptoms, which was assessed using the Montgomery–Åsberg Depression Rating Scale. Based on their response to medication, patients were categorized as responders, nonresponders, or as having treatment-resistant depression if they had not responded to two or more different antidepressants.

For each participant, the research team calculated a polygenic score for C-reactive protein. This was accomplished by analyzing each person’s genetic data and applying a statistical model developed from a massive genetic database, the UK Biobank. The resulting score provided a single, stable measure of each individual’s genetic likelihood of having high inflammation. The researchers then used statistical analyses to look for connections between these genetic scores and the patients’ symptoms, clinical characteristics, and their ultimate response to antidepressant treatment.
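In general form, a polygenic score is a weighted sum of a person's allele counts, with the weights taken from a reference analysis such as the UK Biobank model the authors mention. The schematic sketch below uses invented variant names and effect sizes purely for illustration.

```python
# Schematic sketch of a polygenic score: a weighted sum of each person's allele
# counts (0, 1, or 2 copies of the effect allele) using effect sizes estimated
# in a reference sample. Variant names and weights are invented for illustration.
effect_sizes = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.08}

def polygenic_score(allele_counts: dict) -> float:
    return sum(effect_sizes[v] * allele_counts.get(v, 0) for v in effect_sizes)

person = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
print(polygenic_score(person))  # 2*0.12 + 1*(-0.05) + 0*0.08 = 0.19
```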

The results showed a clear link between a higher genetic score for C-reactive protein and a specific profile of symptoms and characteristics. Individuals with a greater genetic tendency for inflammation were more likely to have a higher body mass index and a lower employment status. They also reported less weight loss and appetite reduction during their depressive episodes, which are symptoms associated with metabolic function. The genetic score was not associated with the overall severity of depression or with core emotional symptoms like sadness or pessimism. This suggests that the genetic influence of inflammation is tied to a particular cluster of physical and metabolic symptoms, sometimes referred to as an immunometabolic subtype of depression.

When the researchers examined the connection to treatment outcomes, they discovered a more complicated relationship. The link was not a simple straight line where more inflammation meant a worse outcome. Instead, they observed what is described as a nonlinear or U-shaped pattern. Patients who did not respond to treatment tended to have the lowest genetic scores for C-reactive protein. In contrast, both patients who responded well to their medication and those with treatment-resistant depression had higher genetic scores. The very highest scores were observed in the group with treatment-resistant depression.

This complex finding remained significant even after the researchers statistically accounted for a range of other factors known to influence treatment success, such as the patient’s age, the duration of their illness, and the number of previous antidepressant trials. The genetic score for C-reactive protein independently explained an additional 1.9 percent of the variation in treatment outcomes. While a modest figure, it indicates that genetic information about inflammation provides a unique piece of the puzzle that is not captured by standard clinical measures. This U-shaped relationship echoes previous findings that used direct blood measurements of C-reactive protein, suggesting that both very high and very low levels of inflammation may be associated with different treatment pathways.
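One common way to formalize such a U-shaped test is to add a squared term for the score to a regression model and ask how much additional variance it explains. The sketch below uses simulated data and a continuous outcome for simplicity, not the study's categorical treatment-response groups.

```python
# Illustrative sketch with simulated data (not the study's): compare a linear
# model with one that adds a squared term, to quantify a U-shaped association.
import numpy as np

rng = np.random.default_rng(0)
pgs = rng.normal(size=500)                     # standardized polygenic scores
outcome = 0.3 * pgs**2 + rng.normal(size=500)  # simulated U-shaped outcome

def r_squared(predictors, y):
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

linear_r2 = r_squared([pgs], outcome)
quadratic_r2 = r_squared([pgs, pgs**2], outcome)
print(f"extra variance explained by the squared term: {quadratic_r2 - linear_r2:.3f}")
```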

The researchers note some limitations of their work. The study’s design was cross-sectional, meaning it captures a single point in time and cannot prove that the genetic predisposition for inflammation causes certain symptoms or treatment outcomes. The participants were treated naturalistically with a variety of medications, which reflects real-world clinical practice but lacks the control of a randomized trial. Additionally, the sample consisted exclusively of individuals with European ancestry, so the findings may not be applicable to people from other backgrounds. The team also suggests that replication in other large studies is needed.

For future research, the authors propose integrating genetic scores with direct measurements of inflammatory biomarkers from blood tests. This combined approach could provide a more powerful tool for understanding both a person’s lifelong tendency and their current inflammatory state. Ultimately, this line of research could help refine psychiatric diagnosis and treatment. By identifying an immunometabolic subtype of depression, it may be possible to develop more targeted therapies. The findings contribute to a growing body of evidence supporting a move away from a “one-size-fits-all” approach to depression, opening the door for inflammation-guided strategies in personalized psychiatry.

The study, “Polygenic liability to C-reactive protein defines immunometabolic depression phenotypes and influences antidepressant therapeutic outcomes,” was authored by Alessandro Serretti, Daniel Souery, Siegfried Kasper, Lucie Bartova, Joseph Zohar, Stuart Montgomery, Panagiotis Ferentinos, Dan Rujescu, Raffaele Ferri, Giuseppe Fanelli, Raffaella Zanardi, Francesco Benedetti, Bernhard T. Baune, and Julien Mendlewicz.

Researchers identify the optimal dose of urban greenness for boosting mental well-being

A new analysis suggests that when it comes to the mental health benefits of urban green spaces, a moderate amount is best. The research, which synthesized four decades of studies, found that the relationship between the quantity of greenery and mental well-being follows an inverted U-shaped pattern, where benefits decline after a certain point. This finding challenges the simpler idea that more green space is always better and was published in the journal Nature Cities.

Researchers have long established a connection between exposure to nature and improved mental health for city dwellers. However, the exact nature of this relationship has been unclear. Bin Jiang, Jiali Li, and a team of international collaborators recognized a growing problem in the field. Early studies often suggested a straightforward linear connection, implying that any increase in greenness would lead to better mental health outcomes. This made it difficult for city planners to determine how much green space was optimal for public well-being.

More recent studies started to show curved, non-linear patterns, but because they used different methods and were conducted in various contexts, the evidence remained fragmented and inconclusive. Without a clear, general understanding of this dose-response relationship, urban planners and policymakers lack the scientific guidance needed to allocate land and resources to maximize mental health benefits for residents. The team aimed to resolve this by searching for a generalized pattern across the entire body of existing research.

To achieve their goal, the scientists conducted a meta-analysis, a type of study that statistically combines the results of many previous independent studies. Their first step was a systematic search of major scientific databases for all empirical studies published between 1985 and 2025 that examined the link between a measured “dose” of greenness and mental health responses. This exhaustive search initially identified over 128,000 potential articles. The researchers then applied a strict set of criteria to filter this large pool, narrowing it down to 133 studies that directly measured a quantitative relationship between greenness and mental health outcomes like stress, anxiety, depression, or cognitive function.

From this collection of 133 studies, the team focused on a subset of 69 that measured the “intensity” of greenness, as this was the most commonly studied variable and provided enough data for a robust analysis. They further divided these studies into two categories based on how greenness was measured. The first category was “eye-level greenness,” which captures the amount of vegetation a person sees from a ground-level perspective, such as when walking down a street. The second was “top-down greenness,” which is measured from aerial or satellite imagery and typically represents the percentage of an area covered by tree canopy or other vegetation.

A significant challenge in combining so many different studies is that they use various scales and metrics. To address this, the researchers standardized the data. They converted the mental health outcomes from all studies onto a common scale ranging from negative one to one. They also re-analyzed images from the original papers to calculate the percentage of greenness in a consistent way across all studies. After standardizing the data, they extracted representative points from each study’s reported dose-response curve and combined them into two large datasets, one for eye-level greenness and one for top-down greenness.

With all the data points compiled and standardized, the researchers performed a curve-fitting analysis. They tested several mathematical models, including a straight line (linear model), a power-law curve, and a quadratic model, which produces an inverted U-shape. The results showed that for both eye-level and top-down greenness, the quadratic model was the best fit for the collective data. This indicates that as the amount of greenness increases from zero, mental health benefits rise, reach a peak at a moderate level, and then begin to decline as the amount of greenness becomes very high.
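As a rough illustration of this curve-fitting step, the short Python sketch below (using made-up data points, not the study's dataset) compares a linear and a quadratic fit and locates the peak of the resulting inverted U at the vertex of the parabola.

```python
# Illustrative sketch with hypothetical data: comparing a linear and a quadratic
# (inverted-U) fit of a greenness/mental-health dose-response curve, then
# locating the dose at which the fitted benefit peaks.
import numpy as np

rng = np.random.default_rng(1)
greenness = np.linspace(5, 95, 40)                                         # % greenness (dose)
effect = -0.0006 * (greenness - 52) ** 2 + 0.8 + rng.normal(0, 0.05, 40)   # standardized response

lin_coef = np.polyfit(greenness, effect, deg=1)
quad_coef = np.polyfit(greenness, effect, deg=2)

sse_lin = np.sum((effect - np.polyval(lin_coef, greenness)) ** 2)
sse_quad = np.sum((effect - np.polyval(quad_coef, greenness)) ** 2)

a, b, _ = quad_coef
peak = -b / (2 * a)   # vertex of the parabola = dose giving the largest benefit
print(f"SSE linear: {sse_lin:.3f}  SSE quadratic: {sse_quad:.3f}")
print(f"Quadratic fits better; estimated peak benefit near {peak:.1f}% greenness")
```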

The analysis identified specific thresholds for these effects. For eye-level greenness, the peak mental health benefit occurred at 53.1 percent greenness. The range considered “highly beneficial,” representing the top five percent of positive effects, was between 46.2 and 59.5 percent. Any positive effect, which the researchers termed a “non-adverse effect,” was observed in a broader range from 25.3 to 80.2 percent. Outside of this range, at very low or very high levels of eye-level greenness, the effects were associated with negative mental health responses.

The findings for top-down greenness were similar. The optimal dose for the best effect was found to be 51.2 percent. The highly beneficial range was between 43.1 and 59.2 percent, and the non-adverse range spanned from 21.1 to 81.7 percent. These specific figures provide practical guidance for urban design, suggesting target percentages for vegetation cover that could yield the greatest psychological rewards for communities.

The researchers propose several reasons why this inverted U-shaped pattern exists. At very low levels of greenness, an environment can feel barren or desolate, which may increase feelings of stress or anxiety. As greenery is introduced, the environment becomes more restorative.

However, at extremely high levels of greenness, a landscape can become too dense. This might reduce natural light, obstruct views, and create a feeling of being closed-in or unsafe, potentially leading to anxiety or a sense of unease. A dense, complex environment may also require more mental effort to process, leading to cognitive fatigue rather than restoration. A moderate dose appears to strike a balance, offering nature’s restorative qualities without becoming overwhelming or threatening.

The study’s authors acknowledge some limitations. By combining many diverse studies, some nuance is lost, as different populations, cultures, and types of mental health measures are grouped together. The analysis was also limited to the intensity of greenness; there was not enough consistent data available to perform a similar analysis on the frequency or duration of visits to green spaces, which are also important factors.

Additionally, very few of the original studies examined environments with extremely high levels of greenness, so the downward slope of the curve at the highest end is based more on statistical prediction than on a large volume of direct observation.

Future research could build on this foundation by investigating these other dimensions of nature exposure, such as the duration of visits or the biodiversity within green spaces. More studies are also needed that specifically test the effects of very high doses of greenness to confirm the predicted decline in benefits. Expanding this work to differentiate between types of vegetation, like trees versus shrubs or manicured parks versus wilder areas, could provide even more refined guidance for urban planning.

Despite these limitations, this comprehensive analysis provides a new, evidence-based framework for understanding how to design healthier cities, suggesting that the goal should not simply be to maximize greenness, but to optimize it.

The study, “A generalized relationship between dose of greenness and mental health response,” was authored by Bin Jiang, Jiali Li, Peng Gong, Chris Webster, Gunter Schumann, Xueming Liu, and Pongsakorn Suppakittpaisarn.

Are conservatives more rigid thinkers? Rival scientists have come to a surprising conclusion

A new pair of large-scale studies finds that while political conservatives and ideological extremists are slightly less likely to update their beliefs when presented with new evidence, these effects are very small. The research, published in the journal Political Psychology, suggests that broad, sweeping claims about a strong connection between a person’s political views and their cognitive rigidity are likely not justified.

The study was conducted as an “adversarial collaboration,” a unique scientific approach where researchers with opposing viewpoints team up to design a study they all agree is a fair test of their competing ideas. This method is intended to reduce the biases that can arise when scientists design studies that might favor their own pre-existing theories. The goal was to find a definitive answer to a long-debated question: Is a rigid way of thinking associated with a particular political ideology?

“There is a rich and longstanding history of examining the relations between political ideology and rigidity,” said corresponding author Shauna Bowes, an assistant professor at the University of Alabama in Huntsville. “Much of this research has been rife with debate, and it is a vast and complex literature. An adversarial collaboration brings together disagreeing scholars to examine a research question, affording the opportunity for more accurate and nuanced research. Here, the adversaries were hoping to provide additional clarity on the nature of the relations between political ideology and rigidity, testing three different primary hypotheses.”

For decades, psychologists have explored the underpinnings of political beliefs. One prominent idea has been the “rigidity-of-the-right” hypothesis. This perspective suggests that conservative ideology is rooted in a less flexible thinking style and a greater need for certainty. According to this view, these traits make conservatives less open to changing their minds.

A second perspective offers a different explanation, known as the symmetry model. Proponents of this view argue that psychological motivations to fit in with a group and avoid social punishment can lead to rigid thinking in people of any political persuasion. They propose that there is no inherent reason to believe one side of the political spectrum would be more or less flexible than the other; any differences would depend on the specific topic being discussed.

A third idea is the “rigidity-of-extremes” hypothesis. This theory posits that inflexibility is not about being left or right, but about being at the ideological fringes. People with extreme political views, whether on the far left or the far right, may be more rigid in their thinking than political moderates. Extreme ideologies often provide simple, clear-cut answers to complex societal problems, which can foster a high degree of certainty and a reluctance to consider alternative viewpoints.

A major challenge in this area of research has been defining and measuring “rigidity.” The term has been used in many different ways, and many popular measures have been criticized for containing questions that are already biased toward a certain political ideology.

To overcome this, the collaborating researchers first reviewed dozens of ways rigidity has been measured. After a thorough process of elimination, they unanimously agreed on one operationalization they all considered valid and unbiased: evidence-based belief updating. This simply means measuring how much a person changes their belief about a statement after being shown evidence that supports it. A person who shows less belief change is considered more rigid.

Before launching their main studies, the team conducted a pretest with over 2,000 participants. Their aim was to find pairs of political statements that were ideologically balanced. They generated statements that made arguments friendly to both liberal and conservative viewpoints on the same topic. For example, one statement suggested that people who are liberal on social issues score higher on intelligence tests, while its counterpart suggested people who are fiscally conservative score higher. By analyzing how people with different ideologies rated these statements, the researchers selected pairs that showed no overall bias, ensuring the main studies would be a fair test.

In the first study, nearly 2,500 American participants were asked to rate their agreement with several political statements. After giving an initial rating, they were shown a short piece of information from a credible source, like a university, that supported the statement. For example, a statement might read, “The U.S. economy performs better under Democratic presidents than under Republican presidents,” followed by evidence from a research institution supporting that claim. Participants then rated the same statement a second time. The researchers measured the change between the first and second ratings to calculate a belief updating score.

The results of this first study showed a weak but statistically significant relationship. People who identified as socially or generally conservative updated their beliefs slightly less than liberals did. The analysis also found that general political extremism was associated with less belief updating. However, the size of these effects was very small. For instance, a one standard deviation increase in conservatism or extremism resulted in a change of less than 1.5 points on a 200-point scale of belief updating.
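To see what such an analysis looks like in practice, here is a minimal Python sketch with fabricated data: the belief-updating score is simply the post-evidence rating minus the pre-evidence rating (on 0 to 100 agreement scales, so the change score can span 200 points), regressed on a standardized conservatism measure. All names and numbers are illustrative only.

```python
# Minimal sketch (hypothetical data): belief updating as post-minus-pre rating,
# regressed on standardized ideology. The slope is the "points per SD" effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2500
conservatism_z = rng.normal(0, 1, n)                     # standardized ideology score
pre = rng.uniform(20, 80, n)                             # agreement before seeing evidence
post = np.clip(pre + 10 - 1.5 * conservatism_z + rng.normal(0, 15, n), 0, 100)
updating = post - pre                                    # belief-updating score

fit = sm.OLS(updating, sm.add_constant(conservatism_z)).fit()
print(f"Belief change per SD of conservatism: {fit.params[1]:.2f} points")
```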

“I was surprised by how the results were consistently quite small,” Bowes told PsyPost. “Previous studies may have conflated ideology and rigidity measures, which can artificially inflate effect sizes. Because the adversaries intentionally designed an ideology-neutral measure of rigidity, the results were small. And, from my perspective, they were smaller than I would have initially presumed.”

The second study aimed to replicate and build upon the first. This time, the research team recruited more than 3,700 U.S. participants, making a special effort to include more people from the extreme ends of the political spectrum. They also made the evidence presented to participants more engaging, designing it to look like a blog post from a research institution. The fundamental procedure remained the same: participants rated a statement before and after seeing evidence for it.

The findings from the second study mirrored those of the first. Once again, general and social conservatism were weakly associated with less belief updating. In this larger sample with more extremists, all measures of extremism were also significantly linked to less belief updating. People on the far right tended to be slightly more rigid than people on the far left. Despite these consistent patterns, the effects remained tiny and, from a practical standpoint, negligible.

By combining the data from both studies, the researchers created a large dataset of over 6,000 participants. This combined analysis confirmed the earlier findings. Conservatism and extremism were both associated with slightly less willingness to change one’s mind in the face of evidence. But the size of these relationships was consistently very small, suggesting that a person’s political ideology is a very poor predictor of how much they will update their beliefs in this kind of task.

The authors, representing all sides of the original debate, came to a shared conclusion. Centrists and moderates showed the most belief updating, or the least rigidity. When comparing groups, people on the political right, especially the far right, were slightly more rigid. However, the weakness and inconsistency of these effects across different measures of ideology mean that the practical importance of this connection is questionable.

“The relations between political ideology and rigidity, which in this context was belief rigidity (i.e., less willing to update one’s views after being presented with evidence), are generally small, which calls into question the practical importance of ideological differences in rigidity in this context,” Bowes explained. “There was semi-consistent support for the rigidity-of-the-right hypothesis (conservatives are more rigid than liberals) and rigidity-of-extremes (political extremes are more rigid than political moderates) hypothesis.”

“That said, the adversaries acknowledge that because the results are quite small and only semi-consistent, one could reasonably interpret the results as lending support to symmetry perspectives (the left and right are equally rigid but about different topics).”

The team suggests that instead of asking the broad question of who is more rigid, researchers should focus on identifying the specific contexts and issues that might cause rigidity to appear more strongly in certain groups.

The study did have some limitations. The research was conducted with American participants at a specific point in time, and the findings might not apply to other countries or different political eras. It also focused on only one type of rigidity, belief updating, and did not examine other forms, such as personality traits associated with inflexibility. Future studies could explore these relationships over time or in different cultural contexts to see if the patterns hold.

“We only studied belief rigidity, which is one form of rigidity,” Bowes noted. “We do not want to make sweeping claims about rigidity writ large and encourage others to examine whether our results do or do not hold when examining other manifestations of rigidity.”

“I think it would be immensely beneficial to examine additional forms of rigidity in relation to political ideology and consider boundary conditions. That is, there are likely contexts where the relationship is much stronger, and we should be focusing on that question rather than ‘overall, who is more rigid in general?'”

The study, “An adversarial collaboration on the rigidity-of-the-right, symmetry thesis, or rigidity-of-extremes: The answer depends on the question,” was authored by Shauna M. Bowes, Cory J. Clark, Lucian Gideon Conway III, Thomas Costello, Danny Osborne, Philip E. Tetlock, and Jan-Willem van Prooijen.

Neuroscientists uncover how the brain builds a unified reality from fragmented predictions

A new study provides evidence that the human brain constructs our seamless experience of the world by first breaking it down into separate predictive models. These distinct models, which forecast different aspects of reality like context, people’s intentions, and potential actions, are then unified in a central hub to create our coherent, ongoing subjective experience. The research was published in the journal Nature Communications.

The scientists behind the new study proposed that our world model is fragmented into at least three core domains. The first is a “State” model, which represents the abstract context or situation we are in. The second is an “Agent” model, which handles our understanding of other people, their beliefs, their goals, and their perspectives. The third is an “Action” model, which predicts the flow of events and possible paths through a situation.

“There’s a long-held tradition, and with good evidence that the mind is composed of many, different modules specialized for distinct computations. This is obvious in perception with modules dedicated to faces and places. This is not obvious in higher-order, more abstract domains which drives our subjective experience. The problem with this is non-trivial. If it does have multiple modules, how can we have our experience seemingly unified?” explained study author Fahd Yazin, a medical doctor who’s currently a doctoral candidate at the University of Edinburgh.

“In learning theories, there are distinct computations needed to form what is called a world model. We need to infer from sensory observations what state we are in (context). For e.g. if you go to a coffee shop, the state is that you’re about to get a coffee. But if you find that the machine is out-of-order, then the current state is you’re not going to get it. Similarly, you need to have a frame of reference (frame) to put these states in. For instance, you want to go to the next shop but your friend had a bad experience there previously, you need to take their perspective (or frame) into account. You possibly had a plan of getting a coffee and chat, but now you’re willing to adopt a new plan (action transitions) of getting a matcha drink instead.”

“You’re able to do all these things in a deceptively simple way because various modules can coordinate their output, or predictions together. And switch between various predictions effortlessly. So, if we disrupt their ongoing predictions in a natural and targeted way, you can get two things. The brain regions dedicated to these predictions, and how they influence our subjective experience.”

To explore this, the research team conducted a series of experiments using functional magnetic resonance imaging, a technique that measures brain activity by detecting changes in blood flow. In the main experiment, a group of 111 young adults watched an eight-minute suspenseful excerpt from an Alfred Hitchcock film, “Bang! You’re Dead!” while inside a scanner. They were given no specific instructions other than to watch the movie, allowing the scientists to observe brain activity during a naturalistic experience.

To understand when participants’ predictions were being challenged and updated, the researchers collected data from separate groups of people who watched the same film online. These participants were asked to press a key whenever their understanding of the movie’s context (State), a character’s beliefs (Agent), or the likely course of events (Action) suddenly changed. By combining the responses from many individuals, the scientists created timelines showing the precise moments when each type of belief was most likely to be updated.

Analyzing the brain scans from the movie-watching group, the scientists found a clear division of labor in the midline prefrontal cortex, a brain area associated with higher-level thought. When the online raters indicated a change in the movie’s context, the ventromedial prefrontal cortex became more active in the scanned participants. When a character’s perspective or intentions became clearer, the anteromedial prefrontal cortex showed more activity. And when the plot took a turn that changed the likely sequence of future events, the dorsomedial prefrontal cortex was engaged.

The researchers also found that these moments of belief updating corresponded to significant shifts in the brain’s underlying neural patterns. Using a computational method called a Hidden Markov Model, they identified moments when the stable patterns of activity in each prefrontal region abruptly transitioned. These neural transitions in the ventromedial prefrontal cortex aligned closely with updates to “State” beliefs.

Similarly, transitions in the anteromedial prefrontal cortex coincided with “Agent” updates, and those in the dorsomedial prefrontal cortex matched “Action” updates. This provides evidence that when our predictions about the world are proven wrong, it triggers not just a momentary spike in activity, but a more sustained shift in the neural processing of that specific brain region.
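A Hidden Markov Model treats the measured activity as arising from a small number of hidden states and flags the time points where the most likely state switches. The sketch below, written with the hmmlearn Python package on synthetic data, illustrates that general idea; it is not the authors' analysis pipeline.

```python
# Minimal illustration (synthetic data): fit a Gaussian Hidden Markov Model to a
# simulated regional activity time series and list the state-transition points.
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(3)
# 300 time points of 10-dimensional "activity" with a pattern shift at t = 150
segment1 = rng.normal(0.0, 1.0, size=(150, 10))
segment2 = rng.normal(1.5, 1.0, size=(150, 10))
activity = np.vstack([segment1, segment2])

model = GaussianHMM(n_components=2, covariance_type="diag", n_iter=100, random_state=0)
model.fit(activity)
states = model.predict(activity)

transitions = np.where(np.diff(states) != 0)[0] + 1   # indices where the hidden state switches
print("Detected state-transition time points:", transitions)
```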

Having established that predictions are handled by separate modules, the researchers next sought to identify where these fragmented predictions come together. They focused on the precuneus, a region located toward the back of the brain that is known to be a major hub within the default mode network, a large-scale brain network involved in internal thought.

By analyzing the functional connectivity, or the degree to which different brain regions activate in sync, they found that during belief updates, each specialized prefrontal region showed increased communication with the precuneus. This suggests the precuneus acts as an integration center, receiving the updated information from each predictive module.

To further investigate this integration, the team examined the similarity of multivoxel activity patterns between brain regions. They discovered a dynamic process they call “multithreaded integration.” When participants’ beliefs about the movie’s context were being updated, the activity patterns in the precuneus became more similar to the patterns in the “State” region of the prefrontal cortex.

When beliefs about characters were changing, the precuneus’s patterns aligned more with the “Agent” region. This indicates that the precuneus flexibly syncs up with whichever predictive module is most relevant at a given moment, effectively weaving the separate threads of prediction into a single, coherent representation.
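One simple way to operationalize this kind of pattern similarity is to correlate two regions' voxel-wise activity patterns at each time point and compare that similarity around different types of belief updates. The following Python sketch uses synthetic data and hypothetical region names to illustrate the logic, not the study's exact method.

```python
# Hedged sketch (synthetic data): per-timepoint correlation of multivoxel
# patterns between a "precuneus" and two "prefrontal" regions, compared at
# hypothetical moments of "State" belief updates.
import numpy as np

rng = np.random.default_rng(4)
n_timepoints, n_voxels = 200, 50
precuneus = rng.normal(size=(n_timepoints, n_voxels))
state_region = precuneus + rng.normal(scale=0.5, size=(n_timepoints, n_voxels))  # partially shared signal
agent_region = rng.normal(size=(n_timepoints, n_voxels))                          # independent signal

def pattern_similarity(a, b):
    """Pearson correlation between two regions' voxel patterns at each time point."""
    return np.array([np.corrcoef(a[t], b[t])[0, 1] for t in range(a.shape[0])])

state_update_times = np.arange(0, 200, 20)   # hypothetical moments of "State" updates
sim_state = pattern_similarity(precuneus, state_region)[state_update_times].mean()
sim_agent = pattern_similarity(precuneus, agent_region)[state_update_times].mean()
print(f"Precuneus-State similarity at State updates: {sim_state:.2f}")
print(f"Precuneus-Agent similarity at State updates: {sim_agent:.2f}")
```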

The scientists then connected this integration process to subjective experience. Using separate ratings of emotional arousal, a measure of how engaged and immersed viewers were in the film, they found that the activity of the precuneus closely tracked the emotional ups and downs of the movie. The individual prefrontal regions did not show this strong relationship.

What’s more, individuals whose brains showed stronger integration between the prefrontal cortex and the precuneus also had more similar overall brain responses to the movie. This suggests that the way our brain integrates these fragmented predictions directly shapes our shared subjective reality.

“At any given time, multiple predictions may compete or coexist, and our experience can shift depending on which predictions are integrated that best align with reality,” Yazin told PsyPost. “People whose brains make and integrate predictions in similar ways are likely to have more similar experiences, while differences in prediction patterns may explain why individuals perceive the same reality differently. This approach provides new insight into how shared realities and personal differences arise, offering a framework for understanding human cognition.”

To confirm these findings were not specific to one movie or to visual information, the team replicated the key analyses using a different dataset where participants listened to a humorous spoken-word story. They found the same modular system in the prefrontal cortex and the same integrative role for the precuneus, demonstrating that this is a general mechanism for how the brain models the world, regardless of the sensory input.

“We replicated the main findings across a different cohort, sensory modality and emotional content (stimuli), making these findings robust to idiosyncratic factors,” Yazin said. “These results were observed when people were experiencing stimuli (movie/story) in a completely uninterrupted and uninstructed manner, meaning our experience is continuously rebuilt and adapted into a coherent unified stream despite it originating in a fragmented manner.”

“Our experience is not just a simple passive product of our sensory reality. It is actively driven by our predictions. And these come in different flavors; about our contexts we find ourselves in, about other people and about our plans of the immediate future. Each of these gets updated as the sensory reality agrees (or disagrees) with our predictions. And integrates with that reality to form our ‘current’ experience.”

“We have multiple such predictions internally, and at any given time our experience can toggle between these depending on how the reality fits them,” Yazin explained. “In other words, our original experience is a product of fragmented and distributed predictions integrated into a unified whole. And people with similar way of predicting and integrating, would have similar experiences from the reality than people who are dissimilar.”

“More importantly, it brings the default mode network, a core network in the human brain into the table as a central network driving our core phenomenal experience. It’s widely implicated in learning, inference, imagination, memory recall and in dysfunctions to these. Our results offer a framework to fractionate this network by computations of its core components.”

But as with all research, the study has some limitations. The analysis is correlational, meaning it shows associations between brain activity and belief updates but cannot definitively prove causation. Also, because the researchers used naturalistic stories, the different types of updates were not always completely independent; a single plot twist could sometimes cause a viewer to update their understanding of the context, a character, and the future plot all at once.

Still, the consistency of the findings across two very different naturalistic experiences provides strong support for a new model of human cognition. “Watching a suspenseful movie and listening to a comedic story feel like two very different experiences but the fact that they have similar underlying regions with similar specialized processes for generating predictions was counterintuitive,” Yazin told PsyPost. “And that we could observe it in this data was something unexpected.”

Future research will use more controlled, artificially generated stimuli to better isolate the computations happening within each module.

“We’re currently exploring the nature of these computations in more depth,” Yazin said. “In naturalistic stimuli as we’ve used now, it is impossible to fully separate domains (the contributions of people and contexts are intertwined in such settings). It brings richness but you lose experimental control. Similarly, the fact that these prefrontal regions were sensitive regardless of content and sensory information means there is possibly an invariant computation going on within them. We’re currently investigating these using controlled stimuli and probabilistic models to answer these questions.”

“For the last decade or so, there’s been two cultures in cognitive neuroscience,” he added. “One is using highly controlled stimuli, and leveraging stimulus properties to ascertain regional involvement to that function to various degrees. Second is using full-on naturalistic stimuli (movies, narratives, games) to understand how humans experience the world with more ecological accuracy. Each has brought unique and incomparable insights.”

“We feel studies on subjective experience/phenomenal consciousness has focused more on the former because it is easier to control (perceptual features/changes), but there’s a rich tradition and methods in the latter school that may help uncover more intractable problems in novel ways. Episodic memory and semantic processing are two great examples of this, where using naturalistic stimuli opened up connections and findings that were completely new to each of those fields.”

The study, “Fragmentation and multithreading of experience in the default-mode network,” was authored by Fahd Yazin, Gargi Majumdar, Neil Bramley, and Paul Hoffman.

Controlled fear might temporarily alter brain patterns linked to depression

A study has found that engaging with frightening entertainment, such as horror films, is associated with temporary changes in brain network activity patterns that are characteristic of depression. The research also found that individuals with moderate depressive symptoms may require a more intense scare to experience peak enjoyment, hinting at an intriguing interplay between fear, pleasure, and emotion regulation. These findings were published in the journal Psychology Research and Behavior Management.

The investigation was conducted by researchers Yuting Zhan of Ningxia University and Xu Ding of Shandong First Medical University. Their work was motivated by a long-standing psychological puzzle known as the fear-pleasure paradox: why people voluntarily seek out and enjoy frightening experiences. While this phenomenon is common, little was known about how it functions in individuals with depression, a condition characterized by persistent low mood, difficulty experiencing pleasure, and altered emotional processing.

The researchers were particularly interested in specific brain network dysfunctions observed in depression. In many individuals with depression, the default mode network, a brain system active during self-referential thought and mind-wandering, is overly connected to the salience network, which detects important external and internal events. This hyperconnectivity is thought to contribute to rumination, where a person gets stuck in a cycle of negative thoughts about themselves. Zhan and Ding proposed that an intense, controlled fear experience might temporarily disrupt these patterns by demanding a person’s full attention, pulling their focus away from internal thoughts and onto the external environment.

To explore this, the researchers designed a two-part study. The first study aimed to understand the psychological and physiological reactions to recreational fear across a spectrum of depressive symptoms. It involved 216 adult participants who were grouped based on the severity of their depressive symptoms, ranging from minimal to severe. These participants were exposed to a professionally designed haunted attraction. Throughout the experience, their heart rate was monitored, and saliva samples were collected to measure cortisol, a hormone related to stress. After each scary scenario, participants rated their level of fear and enjoyment.

The results of this first study confirmed a pattern seen in previous research: the relationship between fear and enjoyment looked like an inverted “U”. This means that as fear intensity increased, enjoyment also increased, but only up to a certain point. After that “sweet spot” of optimal fear, more intense fear led to less enjoyment. The study revealed that the severity of a person’s depression significantly affected this relationship.

Individuals with moderate depression experienced their peak enjoyment at higher levels of fear compared to those with minimal depression. Their physiological data showed a similar pattern, with the moderate depression group showing the most pronounced cortisol stress response. In contrast, participants with the most severe depressive symptoms showed a much flatter response curve, indicating they experienced less differentiation in enjoyment across various fear levels.

The second study used neuroimaging to examine the brain mechanisms behind these responses. For this part, 84 participants with mild-to-moderate depression were recruited. While inside a functional magnetic resonance imaging scanner, which measures brain activity by detecting changes in blood flow, participants watched a series of short clips from horror films. They had resting-state scans taken before and after the film clips to compare their baseline brain activity with their activity after the fear exposure.

The neuroimaging data provided a window into the brain’s reaction. During the scary clips, participants showed increased activity in the ventromedial prefrontal cortex, a brain region critical for emotion regulation and processing safety signals. The analysis also revealed that after watching the horror clips, the previously observed hyperconnectivity between the default mode network and the salience network was temporarily reduced. For a short period after the fear exposure, the connectivity in the brains of these participants with depression more closely resembled patterns seen in individuals without depression. This change was temporary, beginning to revert to baseline by the end of the post-exposure scan.

Furthermore, the researchers found a direct link between these brain changes and the participants’ reported feelings. A greater reduction in the connectivity between the default mode network and salience network was correlated with higher ratings of enjoyment. Similarly, stronger activation in the ventromedial prefrontal cortex during the fear experience was associated with greater positive feelings after the experiment. These findings suggest that the controlled fear experience may have been engaging the brain’s emotion-regulation systems, momentarily shifting brain function away from patterns associated with rumination.

The authors acknowledge several limitations to their study. The research primarily included individuals with mild-to-moderate depression, so the findings may not apply to those with severe depression. The study was also unable to control for individual differences like prior exposure to horror media or co-occurring anxiety disorders, which could influence reactions. Another consideration is that a laboratory or controlled haunted house setting does not perfectly replicate how people experience recreational fear in the real world.

Additionally, the observed changes in brain connectivity were temporary, and the correlational design of the study means it cannot prove that the fear experience caused a change in mood, only that they are associated. The researchers also did not include a high-arousal, non-fearful control condition, such as watching thrilling action movie clips, making it difficult to say if the effects are specific to fear or to general emotional arousal.

Future research is needed to explore these findings further. Such studies could investigate a wider range of participants and fear stimuli, track individuals over a longer period to see if the neural changes have any lasting effects, and conduct randomized controlled trials to establish a causal link. Developing comprehensive safety protocols would be essential before any potential therapeutic application could be considered, as intense fear could be distressing for some vulnerable individuals.

The study, “Fear-Pleasure Paradox in Recreational Fear: Neural Correlates and Therapeutic Potential in Depression,” was published June 27, 2025.

LSD might have a small positive effect when used to treat substance use disorders

A meta-analytic study looking into the safety and efficacy of LSD for treating mental health disorders found that its effectiveness largely depends on the type of disorder. While the analysis found no conclusive evidence for treating anxiety or depression, the analyzed studies indicated that LSD has a small but statistically significant positive effect when used to treat substance use disorders. The paper was published in Psychiatry Research.

LSD, or lysergic acid diethylamide, is a powerful hallucinogenic drug first synthesized in 1938 by Swiss chemist Albert Hofmann. It is derived from lysergic acid, a substance found in the ergot fungus that grows on rye and other grains.

LSD is known for its profound psychological effects, called a “trip,” which can include visual and auditory hallucinations, an altered sense of time, and intense emotional experiences. It is usually taken orally, on small pieces of paper called “blotters” that are soaked in the drug. The effects typically begin within 30 to 90 minutes after ingestion and can last up to 12 hours. The experience can be pleasant or frightening depending on the user’s mood, environment, and dose. LSD does not cause physical addiction, but it can lead to psychological dependence and tolerance. Some users report lasting changes in perception, such as visual distortions or flashbacks, long after use.

In most countries, LSD is classified as an illegal substance due to its potent effects and potential risks. Despite this, it is being studied for potential therapeutic uses in treating anxiety, depression, and addiction under controlled medical conditions.

Study authors Maria Helha Fernandes-Nascimento and her colleagues wanted to evaluate the efficacy and safety of LSD in the treatment of various mental disorders, including depression, anxiety, and substance use disorders in patients over 18 years of age. They conducted a systematic review and a meta-analysis, a method that involves statistically integrating the findings of multiple previous studies.

These authors searched nine databases of scientific publications, including Embase, PubMed, and Scopus, to find studies conducted on adults that investigated the efficacy and safety of LSD. They focused on randomized controlled trials (RCTs)—studies where researchers actively assign participants to receive either LSD or a control treatment. They excluded observational studies where researchers only recorded participants’ pre-existing use of LSD.

Their initial search identified 3,133 records. However, after removing duplicates and publications that did not meet their strict criteria, they ended up with a set of 11 studies to be included in the analysis. All of these 11 studies were double-blind, meaning that neither the participants nor the researchers administering the treatment knew who was receiving LSD versus a control substance (like a placebo or a different active drug).

Results showed that LSD administration was associated with a small, statistically significant beneficial effect on substance use disorders. Notably, the effects on substance use disorders reported by different studies were very consistent with one another, which increases confidence in this particular finding.
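For context, a random-effects meta-analysis of this kind pools each study's effect size using inverse-variance weights and summarizes consistency with a heterogeneity statistic such as I-squared. The Python sketch below uses invented effect sizes to show the generic DerSimonian-Laird calculation; it does not reproduce the paper's data or code.

```python
# Illustrative random-effects meta-analysis (DerSimonian-Laird) on made-up data.
import numpy as np

effects = np.array([0.25, 0.18, 0.30, 0.22, 0.27])     # hypothetical per-study effect sizes
variances = np.array([0.02, 0.03, 0.025, 0.04, 0.03])  # hypothetical sampling variances

# Fixed-effect weights and Cochran's Q statistic for heterogeneity
w = 1 / variances
fixed = np.sum(w * effects) / np.sum(w)
Q = np.sum(w * (effects - fixed) ** 2)
df = len(effects) - 1
I2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0

# Between-study variance (tau^2), then random-effects pooling
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
w_re = 1 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1 / np.sum(w_re))

print(f"Pooled effect: {pooled:.3f} (95% CI {pooled - 1.96*se:.3f} to {pooled + 1.96*se:.3f})")
print(f"Heterogeneity I^2: {I2:.1f}% (low values indicate consistent studies)")
```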

Regarding safety, the authors noted that the evidence was difficult to interpret. While five of the 11 studies (45%) did not report any adverse events, the paper suggests this may reflect poor reporting standards in the older trials rather than an actual absence of side effects. Other studies in the analysis did report adverse events, including serious ones such as acute anxiety and delusions during an LSD session, seizures, and cases requiring extended hospitalization.

“The effectiveness of LSD appears to vary significantly depending on the type of mental disorder treated. Results suggest a positive effect on substance use disorders. High heterogeneity requires caution and highlights the need for more double-blind RCTs [randomized controlled trials],” the study authors concluded.

The study contributes to the exploration of the potential use of LSD in treating mental health disorders. However, the authors note that most of the studies included in their analysis were conducted in the 1960s and 1970s, with only three studies conducted in more recent years, underscoring a need for modern research.

The paper, “Efficacy and Safety of LSD in the treatment of mental and substance use disorders: A systematic review of randomized controlled trials,” was authored by Maria Helha Fernandes-Nascimento, Priscila Weber, and Andre Brooking Negrao.

New BDSM research reveals links between sexual roles, relationship hierarchy, and social standing

A new study explores how sexual preferences for dominance and submission relate to an individual’s general position in society and their behavior toward others outside of intimate activity. The research found that a person’s tendency toward submission in everyday life is strongly connected to experiencing subordination within their partner relationship, as well as holding a lower social status and less education. These findings offer insight into the vulnerability of some practitioners of bondage and discipline, dominance and submission, sadism and masochism (BDSM), suggesting that interpersonal power dynamics are often consistent across life domains. The research was published in Deviant Behavior.

Researchers, led by Eva Jozifkova of Jan Evangelista Purkyně University, aimed to clarify the complex relationship between sexual arousal by power dynamics and a person’s hierarchical behavior in daily life. Previous academic work had established that a person’s dominant or submissive personality often aligns with their sexual preferences. However, it remained uncertain whether the hierarchical roles people enjoy in sex translated directly into their conduct with their long-term partner outside of the bedroom, or how they behaved generally toward people in their community.

Many people who practice BDSM often distinguish between the roles they adopt during sex and their roles in a long-term relationship. Some maintain a slight hierarchical difference in their relationships around the clock, while others strictly limit the power dynamic to sexual play. Given the variety of patterns, the researchers wanted to test several ideas about this alignment, ranging from the view that sexual hierarchy is merely playful and unrelated to daily life, to the perspective that sexual roles reflect a person’s consistent social rank.

The study sought to test whether an individual’s tendency to dominate or submit to others reflected their sexual preferences and their hierarchical arrangement with their partner. The concept being explored was whether a person’s position in the social world “coheres” with their position in intimate relationships and sexual behavior.

The researchers collected data using an online questionnaire distributed primarily through websites and social media forums geared toward practitioners of BDSM in the Czech Republic. The final analysis included data from 421 heterosexual and bisexual men and women who actively engaged in these practices with a partner.

Participants completed detailed questions about their socioeconomic status, education, age, and, importantly, their feelings of hierarchy during sexual encounters and in their ongoing partner relationships outside of sexual activity. To measure their general tendency toward submissiveness or dominance in daily life toward others, the researchers used a modified instrument called the Life Scale.

The Life Scale assessed an individual’s perceived hierarchical standing, based on how often they experienced feelings of subordination or felt their opinions were disregarded by others. The higher the score on this scale, the more submissive the person reported being in their interactions with people generally.

The researchers separated participants into groups based on their sexual arousal preference for dominance (Dominant), submissiveness (Submissive), both (called Switch), or neither (called Without). To analyze how these various factors affected the Life Scale score, the researchers employed univariate analysis of variance (ANOVA) models. This method allowed them to examine the influence of multiple variables simultaneously on the reported level of submissiveness in everyday life.
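An analysis of variance model of this kind can be expressed in a few lines of code. The Python sketch below uses fabricated data and illustrative column names to show how a Life Scale score might be modeled as a function of sexual-role group and other factors simultaneously; it is not the authors' dataset or script.

```python
# Rough sketch (fabricated data): an ANOVA-style model predicting a "Life Scale"
# submissiveness score from group membership and other factors at once.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
n = 421
df = pd.DataFrame({
    "life_scale": rng.normal(10, 3, n),
    "sexual_role": rng.choice(["Dominant", "Submissive", "Switch", "Without"], n),
    "subordinate_in_relationship": rng.integers(0, 2, n),
    "socioeconomic_status": rng.normal(0, 1, n),
})

model = smf.ols(
    "life_scale ~ C(sexual_role) + C(subordinate_in_relationship) + socioeconomic_status",
    data=df,
).fit()
print(anova_lm(model, typ=2))   # tests each factor's contribution to the Life Scale score
```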

Analyzing the self-reported experiences of the participants, the study found a noticeable alignment between preferred sexual role and general relationship dynamics for many individuals. Among those who were sexually aroused by being dominant, 55 percent reported experiencing a feeling of superiority over their partner outside of sexual activity as well. Similarly, 46 percent of individuals sexually aroused by being submissive also experienced subordination in their relationship outside of sex. This shows that for nearly half of the sample, the preferred sexual role did extend partially into the non-sexual relationship.

For the group who reported being aroused by both dominance and submissiveness, the Switches, the pattern was different. A significant majority, 75 percent, reported experiencing both polarities during sexual activity. However, outside of sex, only 13 percent of Switches reported feeling both dominance and submissiveness in their relationship, while half of this group reported experiencing neither hierarchical feeling in the relationship. This suggests that the Switch group is less likely to carry hierarchical dynamics into their non-sexual partnership.

Experience of dominance and submission in sex was reported even by people who were not primarily aroused by hierarchy. More than half of those in the Without group, 60 percent, experienced such feelings during sex. Significantly, 75 percent of this group did not report feeling hierarchy in their relationship outside of sex.

In general, individuals who were aroused by only dominance or only submissiveness experienced the respective polarity they preferred more often in sex than in their relationships. The experience of the non-preferred, or opposite, polarity during sex and in relationships was infrequent for the Dominant and Submissive groups.

The main statistical findings emerged from the analysis linking these experiences to the Life Scale score, which measured submissiveness in interaction with all people, not just a partner. The final model revealed that several factors combined to predict higher levels of submissiveness in daily life.

Respondents who felt more submissive toward others were consistently those who reported experiencing subordination in their non-sexual relationship with their partner. This higher level of submissiveness was also observed in individuals who did not report feelings of superiority over their partner, either during sex or in the relationship generally.

Beyond partner dynamics, a person’s general social standing played a powerful role. Individuals who reported higher submissiveness toward others had lower socioeconomic status, lower education levels, and were younger than 55 years of age.

The effect of experiencing submissiveness in the partner relationship was particularly potent, increasing the measure of submissiveness toward other people by two and a half units on the Life Scale. Conversely, experiencing feelings of dominance in the relationship or during sex decreased the Life Scale score by about 1.4 to 1.5 units, indicating less submissiveness in daily life.

The researchers found that gender was not a decisive factor in predicting submissiveness in this model, suggesting that the underlying hierarchical patterns observed apply across both men and women in the sample. The findings overall supported the idea that a person’s hierarchical position in their intimate relationship is related to their hierarchical position in society, aligning with the “Social Rank Hypothesis” and the “Coherence Hypothesis” proposed by the authors. This means that, contrary to some popular notions, sex and relationship hierarchy do not typically function as a “compensation” for an individual’s status in the outside world.

The research points to the existence of a consistent behavioral pattern linked to tendencies toward dominance or submissiveness in interpersonal relationships that seems to be natural for some people. The researchers suggest that because power polarization in relationships and sex can be eroticizing, it should be practiced with consideration, especially given the observed link between submissiveness in relationships and lower social status in general. They stress the importance of moderation and maintaining a return to a non-polarized state, often referred to as aftercare, following intense sexual interactions.

The researchers acknowledged several limitations inherent in the study design. Since the data were collected solely through online platforms popular within the BDSM community, the sample may not fully represent all practitioners. People with limited internet access or older individuals may have been underrepresented. The Life Scale instrument, while simple and effective for an online survey, provides a basic assessment of hierarchical status, and future research could employ more extensive psychological measures.

Because the study focused exclusively on practitioners of BDSM, the researchers were unable to compare their level of general life submissiveness with individuals in the broader population who do not practice these sexual behaviors. Future studies should aim to include comparison groups from the general population to solidify the understanding of these personality patterns.

Despite these constraints, the results provide practical implications. The researchers suggest that simple questions about hierarchical feelings in sex and relationships can be useful in therapeutic settings to understand a client’s orientation and potentially predict their vulnerability to external pressures or relationship risk. The clear relationship observed between the Life Scale and social status highlights that submissive individuals may already face a great deal of pressure from society, pointing to the need for social support.

The study, “The Link Between Sexual Dominance Preference and Social Behavior in BDSM Sex Practitioners,” was authored by Eva Jozifkova, Marek Broul, Ivana Lovetinska, Jan Neugebauer, and Ivana Stolova.

A common cognitive bias is fueling distrust in election outcomes, according to new psychology research

A new scientific paper suggests that a common, unconscious mental shortcut may partly explain why many people believe in election fraud. The research indicates that the order in which votes are reported can bias perceptions, making a legitimate late comeback by a candidate seem suspicious. This work was published in the journal Psychological Science.

The research was motivated by the false allegations of fraud that followed the 2020 United States presidential election. Previous work by political scientists and psychologists has identified several factors that contribute to these beliefs. For example, messages from political leaders can influence the views of their supporters. Another explanation is the “winner effect,” which suggests people are more likely to see an election as illegitimate if their preferred party loses.

Similarly, research on motivated reasoning highlights how a person’s desire to maintain a positive view of their political party can lead them to question an unfavorable outcome. Personality differences may also play a part, as some individuals are more predisposed to viewing events as the result of a conspiracy.

Against this backdrop, a team of researchers led by André Vaz of Ruhr University Bochum proposed that a more fundamental cognitive mechanism could also be at play. They investigated whether the sequential reporting of partial vote counts, a standard practice in news media, could inadvertently sow distrust. They theorized that beliefs in fraud might be fueled by a phenomenon known as the cumulative redundancy bias.

This bias describes how our impressions are shaped by the progression of a competition. When we repeatedly see one competitor in the lead, it creates a strong mental impression of their dominance. This has been observed in various contexts, including judgments of sports teams and stock market performance. The core idea is that the repeated observation of a competitor being ahead leaves a lasting impression on observers that is not entirely erased even when the final result shows they have lost. The human mind seems to struggle with discounting information once it has been processed.

The order in which information is presented can be arbitrary, like the order in which votes are counted, yet it can leave a lasting, skewed perception of the competitors. This was evident in the 2020 election in states like Georgia, where early-counted ballots often favored Donald Trump. This occurred in part because his supporters were more likely to vote in person, and those votes were often tallied first.

In contrast, ballots counted later tended to favor Joe Biden, as his voters made greater use of mail-in voting, and many counties counted those mail-in ballots last. Additionally, populous urban counties, which tend to be more Democratic, were often slower to report their results than more rural counties. This created a dramatic late shift in the lead, which the study’s authors suggest is a prime scenario for the cumulative redundancy bias to take effect.

To test this hypothesis, the scientists conducted a series of seven studies with participants from the United States and the United Kingdom. The first study tested whether the cumulative redundancy bias would appear in a simulated election. Participants watched the vote count for a school representative election between two fictional candidates, “Peter” and “Robert.” In both scenarios, Peter won by the same final margin. The only difference was the order of the count. In an “early-lead” condition, Peter took the lead from the beginning. In a “late-lead” condition, he trailed Robert until the very last ballots were counted.

The results showed that participants rated Peter more favorably and predicted he would be more successful in the future when he had an early lead. When Peter won with a late lead, participants actually rated the loser, Robert, as the better candidate.

The second study used the same setup but tested for perceptions of fraud. After the simulated vote count, participants were told that rumors of a rigged election had emerged. When the winner had secured a late lead, participants found it significantly more likely that the vote count had been manipulated and that the wrong candidate had won compared to when the winner had an early lead.

To make the simulation more realistic, a third study presented the vote counts as percentages, similar to how news outlets report them, instead of raw vote totals. The researchers found the same results. Observing a candidate come from behind to win late in the count made participants more suspicious of fraud.

The fourth study brought the experiment even closer to reality. The researchers used the actual vote-count progression from the 2020 presidential election in the state of Georgia, which showed a candidate trailing for most of the count before winning at the end. To avoid partisan bias, participants were told they were observing a recent election in an unnamed Eastern European country. One group saw the actual vote progression, where the eventual winner took the lead late. The other group saw the same data but in a reversed order, creating a scenario where the winner led from the start. Once again, participants who saw the candidate come from behind were more likely to believe the election was manipulated.

Building on this, the fifth study investigated if these fraud suspicions could arise even before the election was decided. Participants watched a vote count that stopped just before completion, at a point when one candidate had just overtaken the longtime leader. Participants were then asked how likely it was that the vote was being manipulated in favor of either candidate. In the scenario mirroring the 2020 Georgia count, people found it more likely that the election was being manipulated in favor of the candidate who just took the lead. In the reversed scenario, they found it more likely that the election was being manipulated in favor of the candidate who was losing their early lead.

During the actual 2020 election, officials and news commentators provided explanations for the shifting vote counts, such as differences in when urban and rural counties reported their results. The sixth study tested whether such explanations could reduce the bias. All participants saw the late-lead scenario, but one group was given an explanation for why the lead changed. The explanation did reduce belief in fraud, but it did not eliminate it: even with the lead change accounted for, participants remained notably suspicious of the late comeback.

The final study addressed partisanship directly. American participants who identified as either Democrats or Republicans were shown a vote count explicitly labeled as being from the 2020 presidential election between Joe Biden and Donald Trump. As expected, political affiliation had a strong effect, with Republicans being more likely to suspect fraud in favor of Biden and Democrats being more likely to suspect fraud in favor of Trump.

However, the cumulative redundancy bias still had a clear impact. For both Republicans and Democrats, seeing Biden take a late lead increased suspicions of a pro-Biden manipulation compared to seeing a scenario where he led from the start. This suggests the cognitive bias operates independently of, and in addition to, partisan motivations.

The researchers note that their findings are based on participants recruited from an online platform and may not represent all populations. The studies also focus on the perception of vote counting, not on other potential election issues like voter registration or suppression. However, the consistent results across seven different experiments provide strong evidence that the way election results are communicated can unintentionally create distrust.

The authors suggest that the sequential reporting of vote counts could be revised to mitigate these effects. While simply waiting until all votes are counted could be one solution, they acknowledge that a lack of information might also breed suspicion. Better public education about vote counting procedures or the use of more advanced forecasting models that provide context beyond live totals could be alternative ways to present results without fueling false perceptions of fraud.

The study, “‘Stop the Count!’—How Reporting Partial Election Results Fuels Beliefs in Election Fraud,” was authored by André Vaz, Moritz Ingendahl, André Mata, and Hans Alves.

Scientists report the first molecular evidence connecting childhood intelligence to a longer life

A new scientific analysis has uncovered a direct genetic link between higher cognitive function in childhood and a longer lifespan. The findings suggest that some of the same genetic factors influencing a child’s intelligence are also associated with how long they will live. This research, published in the peer-reviewed journal Genomic Psychiatry, offers the first molecular evidence connecting childhood intellect and longevity through shared genetic foundations.

For many years, scientists in a field known as cognitive epidemiology have observed a consistent pattern: children who score higher on intelligence tests tend to live longer. A major review of this phenomenon, which analyzed data from over one million people, found that each standard deviation increase in youthful cognitive test scores was associated with a 24 percent lower risk of death over the following decades. The reasons for this connection have long been a subject of debate, with questions about whether it was due to lifestyle, socioeconomic status, or some underlying biological factor.
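Expressed as a hazard ratio (an interpretive restatement, not a figure quoted from the review itself), a 24 percent lower risk corresponds to

$$\mathrm{HR} \approx 1 - 0.24 = 0.76$$

per one standard deviation advantage in youthful test scores, roughly 15 points on a conventional IQ scale.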

Previous genetic studies have identified an association between cognitive function in adults and longevity. A problem with using adult data, however, is the possibility of reverse causation. Poor health in later life can negatively affect a person’s cognitive abilities and simultaneously shorten their life. This makes it difficult to determine if genes are linking intelligence to longevity, or if later-life health issues are simply confounding the results by impacting both traits at the same time.

To overcome this challenge, a team of researchers led by W. David Hill at the University of Edinburgh sought to examine the genetic relationship using intelligence data from childhood, long before adult health problems could become a complicating factor. Their goal was to see if the well-documented association between youthful intelligence and a long life had a basis in shared genetics. This approach would provide a cleaner look at any potential biological connections between the two traits.

The researchers did not collect new biological samples or test individuals directly. Instead, they performed a sophisticated statistical analysis of data from two very large existing genetic databases. They used summary results from a genome-wide association study on childhood cognitive function, which contained genetic information from 12,441 individuals. This type of study scans the entire genetic code of many people to find tiny variations associated with a particular trait.
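In broad terms (a generic description of the genome-wide association approach, not notation taken from this paper), each genetic variant $j$ is tested one at a time with a regression of the form

$$y_i = \mu + \beta_j\, g_{ij} + \varepsilon_i,$$

where $y_i$ is the trait value for person $i$ and $g_{ij} \in \{0, 1, 2\}$ counts that person's copies of the variant allele. The "summary results" used here are the estimated effect sizes $\hat{\beta}_j$ and their standard errors, or equivalently z-scores, for a very large number of such variants.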

They then took this information and compared it to data from another genome-wide association study focused on longevity. This second dataset was much larger, containing genetic information related to the lifespan of the parents of 389,166 people. By applying a technique called linkage disequilibrium score regression, the scientists were able to estimate the extent to which the same genetic variants were associated with both childhood intelligence and a long life.
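Linkage disequilibrium score regression works entirely from those summary statistics. In its standard cross-trait form (as introduced by Bulik-Sullivan and colleagues; the paper itself may present it differently), the expected product of the two traits' z-scores at variant $j$ is regressed on that variant's LD score $\ell_j$:

$$E[z_{1j} z_{2j}] = \frac{\sqrt{N_1 N_2}\,\rho_g}{M}\,\ell_j + \frac{\rho\, N_s}{\sqrt{N_1 N_2}},$$

where $N_1$ and $N_2$ are the two sample sizes, $M$ is the number of variants, $\rho_g$ is the genetic covariance, and the intercept absorbs any sample overlap ($N_s$ shared individuals with phenotypic correlation $\rho$). The slope of this regression recovers the genetic covariance between the traits even though the two studies involve different people.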

The analysis revealed a positive and statistically significant genetic correlation between childhood cognitive function and parental longevity. The correlation estimate was 0.35, which indicates a moderate overlap in the genetic influences on both traits. This result provides strong evidence that the connection between being a brighter child and living a longer life is, at least in part, explained by a shared genetic architecture. The same genes that contribute to higher intelligence in youth appear to also contribute to a longer lifespan.
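For orientation (a standard definition rather than wording from the paper), the genetic correlation is the genetic covariance standardized by each trait's genetic variance:

$$r_g = \frac{\mathrm{cov}_g(\text{childhood cognition},\ \text{longevity})}{\sqrt{\mathrm{var}_g(\text{childhood cognition})\,\mathrm{var}_g(\text{longevity})}}.$$

An estimate of 0.35 therefore indicates a moderate overlap in genetic influences; it does not mean that 35 percent of the variation in lifespan is explained by intelligence-related genes.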

The researchers explain that this shared genetic influence, a concept known as pleiotropy, could operate in a few different ways. The presence of a genetic correlation is consistent with multiple biological models, and the methods used in this study cannot definitively separate them. One possible explanation falls under a model of horizontal pleiotropy, where a set of genes independently affects both brain development and bodily health.

This idea supports what some scientists call the “system integrity” hypothesis. According to this view, certain genetic makeups produce a human system, both brain and body, that is inherently more robust. Such a system would be better at withstanding environmental challenges and the wear and tear of aging, leading to both better cognitive performance and greater longevity.

Another possibility is a model of vertical pleiotropy. In this scenario, the genetic link is more like a causal chain of events. Genes primarily influence childhood cognitive function. Higher cognitive function then enables individuals to make choices and navigate environments that are more conducive to good health and a long life. For example, higher intelligence is linked to achieving more education, which in turn is associated with better occupations, greater health literacy, and healthier behaviors, all of which promote longevity.

A limitation of this work is its inability to distinguish between these different potential mechanisms. The study confirms that a genetic overlap exists, but it does not tell us exactly how that overlap functions biologically. The research identifies an average shared genetic effect across the genome. It does not provide information about which specific genes or biological pathways are responsible for this link. Additional work is needed to identify the precise regions of the genome that drive this genetic correlation between early-life cognitive function and how long a person lives.

The study, “Shared genetic etiology between childhood cognitive function and longevity,” was authored by W. David Hill and Ian J. Deary.
