Today — 28 October 2025

Horror films may help us manage uncertainty, a new theory suggests

28 October 2025 at 02:00

A new study proposes that horror films are appealing because they offer a controlled environment for our brains to practice predicting and managing uncertainty. This process of learning to master fear-inducing situations can be an inherently rewarding experience, according to the paper published in Philosophical Transactions of the Royal Society B.

The authors sought to address why people are drawn to entertainment that is designed to be frightening or disgusting. While some studies have shown psychological benefits from engaging with horror, many existing theories about its appeal seem to contradict one another. The authors aimed to provide a single, unifying framework that could explain how intentionally seeking out negative feelings like fear can result in positive psychological outcomes.

To do this, they applied a theory of brain function known as predictive processing. This framework suggests the brain operates as a prediction engine, constantly making forecasts about incoming sensory information from the world. When reality does not match the brain’s prediction, a “prediction error” occurs, which the brain then works to minimize by updating its internal models or by acting on the world to make it more predictable.

This does not mean humans always seek out calm and predictable situations. The theory suggests people are motivated to find optimal opportunities for learning, which often lie at the edge of their understanding. The brain is not just sensitive to the amount of prediction error, but to the rate at which that error is reduced over time. When we reduce uncertainty faster than we expected, it generates a positive feeling.
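
As a toy illustration of this idea (not code from the paper): a positive feeling, or "valence," tracks whether prediction error is shrinking faster than the brain expected. The numbers below are invented for demonstration.

```python
# Toy error-dynamics sketch: valence is positive when prediction error
# falls faster than anticipated, negative when progress is slower.
errors = [1.0, 0.8, 0.5, 0.45, 0.2]  # prediction error at successive moments
expected_rate = 0.10                  # anticipated reduction per step (assumed)

for t in range(1, len(errors)):
    actual_rate = errors[t - 1] - errors[t]
    valence = actual_rate - expected_rate  # better than expected -> positive
    print(f"step {t}: reduction={actual_rate:+.2f}, valence={valence:+.2f}")
```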

This search for the ideal rate of error reduction is what drives curiosity and play. We are naturally drawn to a “Goldilocks zone” of manageable uncertainty that is neither too boringly simple nor too chaotically complex. The researchers argue that horror entertainment is specifically engineered to place its audience within this zone.

According to the theory, horror films can be understood as a form of “affective technology,” designed to manipulate our predictive minds. Even though we know the monsters are not real, the brain processes the film as an improbable version of reality from which it can still learn. Many horror monsters tap into deep-seated, evolutionary fears of predators by featuring sharp teeth, claws, and stealthy, ambush-style behaviors.

The narrative structures of horror films are also built to play with our expectations. The slow build-up of suspense creates a state of high anticipation, and a “jump scare” works by suddenly violating our moment-to-moment predictions. The effectiveness of these techniques is heightened because they are not always predictable. Sometimes the suspense builds and nothing happens, which makes the audience’s response system even more alert.

At the same time, horror films often rely on familiar patterns and clichés, such as the “final girl” who survives to confront the villain. This combination of surprising events within a somewhat predictable structure provides the mix of uncertainty and resolvability that the predictive brain finds so engaging.

The authors propose that engaging with this controlled uncertainty has several benefits. One is that horror provides a low-stakes training ground for learning about high-stakes situations. This idea, known as morbid curiosity, suggests that we watch frightening content to gain information that could be useful for recognizing and avoiding real-world dangers. For example, the film Contagion saw a surge in popularity during the early days of the COVID-19 pandemic, as people sought to understand the potential realities of a global health crisis.

Another benefit is related to emotion regulation. By exposing ourselves to fear in a safe context, we can learn about our own psychological and physiological responses. The experience allows us to observe our own anxiety, increased heart rate, and other reactions as objects of attention, rather than just being swept away by them. This process can grant us a greater sense of awareness and control over our own emotional states, similar to the effects of mindfulness practices.

The theory also offers an explanation for why some people prone to anxiety might be drawn to horror. Anxiety can be associated with a feeling of uncertainty about one’s own internal bodily signals, a state known as noisy interoception. Watching a horror movie provides a clear, external source for feelings of fear and anxiety. For a short time, the rapid heartbeat and sweaty palms have an obvious and controllable cause: the monster on the screen, not some unknown internal turmoil.

The researchers note that this engagement is not always beneficial. For some individuals, particularly those with a history of trauma, horror media may serve to confirm negative beliefs about the world being a dangerous and threatening place. This can create a feedback loop where a person repeatedly seeks out horrifying content, reinforcing a sense of hopelessness or learned helplessness. Future work could examine when the engagement with scary media crosses from a healthy learning experience into a potentially pathological pattern.

The study, “Surfing uncertainty with screams: predictive processing, error dynamics and horror films,” was authored by Mark Miller, Ben White and Coltan Scrivner.

Long-term study shows romantic partners mutually shape political party support

28 October 2025 at 00:00

A new longitudinal study suggests that intimate partners mutually influence each other’s support for political parties over time. The research found that a shift in one person’s support for a party was predictive of a similar shift in their partner’s support the following year, a process that may contribute to political alignment within couples and broader societal polarization. The findings were published in Personality and Social Psychology Bulletin.

Political preferences are often similar within families, particularly between parents and children. However, less is known about how political views might be shaped during adulthood, especially within the context of a long-term romantic relationship. Prior studies have shown that partners often hold similar political beliefs, but it has been difficult to determine if this is because people choose partners who already agree with them or if they gradually influence each other over the years.

The authors of the new study sought to examine if this similarity is a result of ongoing influence. They wanted to test whether a change in one partner’s political stance could predict a future change in the other’s. To do this, they used a large dataset from New Zealand, a country with a multi-party system. This setting allowed them to see if any influence was specific to one or two major parties or if it occurred across a wider ideological spectrum, including smaller parties focused on issues like environmentalism, indigenous rights, and libertarianism.

To conduct their investigation, the researchers analyzed data from the New Zealand Attitudes and Values Study, a large-scale project that has tracked thousands of individuals over many years. Their analysis focused on 1,613 couples, each consisting of a woman and a man, who participated in the study for up to 10 consecutive years. Participants annually rated their level of support for six different political parties on a scale from one (strongly oppose) to seven (strongly support).

The study employed a sophisticated statistical model designed for longitudinal data from couples. This technique allowed the researchers to separate two different aspects of a person’s political support. First, it identified each individual’s stable, long-term average level of support for a given party. Second, it isolated the small, year-to-year fluctuations or deviations from that personal average. This separation is important because it allows for a more precise test of influence over time.

The analysis then examined whether a fluctuation in one partner’s party support in a given year could predict a similar fluctuation in the other partner’s support in the subsequent year. This was done while accounting for the fact that couples already tend to have similar average levels of support.
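
As a rough sketch of this two-step logic, with hypothetical column names (the authors fit a dedicated dyadic longitudinal model, not this simplified lagged regression):

```python
# Step 1: split each person's party support into a stable personal
# average and yearly deviations. Step 2: test whether the partner's
# deviation in one year predicts one's own deviation the next year.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("couples.csv")  # person-party-year rows (hypothetical file)
df = df.sort_values(["couple_id", "person_id", "year"])

df["person_mean"] = df.groupby(["person_id", "party"])["support"].transform("mean")
df["deviation"] = df["support"] - df["person_mean"]

# "partner_deviation" is the partner's same-year deviation (hypothetical column)
df["partner_dev_lagged"] = df.groupby(["person_id", "party"])["partner_deviation"].shift(1)

model = smf.ols("deviation ~ partner_dev_lagged", data=df).fit()
print(model.params)  # a positive coefficient would indicate influence
```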

The results showed a consistent pattern of mutual influence. For all six political parties examined, a temporary increase in one partner’s support for that party was associated with a subsequent increase in the other partner’s support one year later. This finding suggests that partners are not just politically similar from the start of their relationship but continue to shape one another’s specific party preferences over time.

This influence also appeared to be a two-way street. The researchers tested whether men had a stronger effect on women’s views or if the reverse was true. They found that the strength of influence was generally equal between partners. With only one exception, the effect of men on women’s party support was just as strong as the effect of women on men’s support.

The single exception involved the libertarian ACT Party (Association of Consumers and Taxpayers), where men’s changing support had a slightly stronger influence on women’s subsequent support than the other way around. For the other five parties, including the two largest and three other smaller parties, the influence was symmetrical. This challenges the idea that one partner, typically the man, is the primary driver of a couple’s political identity.

An additional analysis explored whether this dynamic of influence applied to a person’s general political orientation, which was measured on a scale from extremely liberal to extremely conservative. In this case, the pattern was different. While partners tended to be similar in their overall political orientation, changes in one partner’s self-rated orientation did not predict changes in the other’s over time. This suggests that the influence partners have on each other may be more about support for specific parties and their platforms than about shifting a person’s fundamental ideological identity.

The researchers acknowledge some limitations of their work. The study focused on established, long-term, cohabiting couples in New Zealand, so the findings may not apply to all types of relationships or to couples in other countries with different political systems. Because the couples were already in established relationships, the study also cannot entirely separate the effects of ongoing influence from the possibility that people initially select partners who are politically similar to them.

Future research could explore these dynamics in newer relationships to better understand the interplay between partner selection and later influence. Additional studies could also investigate the specific mechanisms of this influence, such as how political discussions, media consumption, or conflict avoidance might play a role in this process. Examining whether these shifts in expressed support translate to actual behaviors like voting is another important avenue for exploration.

The study, “The Interpersonal Transmission of Political Party Support in Intimate Relationships,” was authored by Sam Fluit, Nickola C. Overall, Danny Osborne, Matthew D. Hammond, and Chris G. Sibley.

Yesterday — 27 October 2025

Study finds a shift toward liberal politics after leaving religion

27 October 2025 at 22:00

A new study suggests that individuals who leave their religion tend to become more politically liberal, often adopting views similar to those who have never been religious. This research, published in the Journal of Personality, provides evidence that the lingering effects of a religious upbringing may not extend to a person’s overall political orientation. The findings indicate a potential boundary for a psychological phenomenon known as “religious residue.”

Researchers conducted this study to investigate a concept called religious residue. This is the idea that certain aspects of a person’s former religion, such as specific beliefs, behaviors, or moral attitudes, can persist even after they no longer identify with that faith. Previous work has shown that these lingering effects can be seen in areas like moral values and consumer habits, where formerly religious people, often called “religious dones,” continue to resemble currently religious individuals more than those who have never been religious.

The research team wanted to determine if this pattern of residue also applied to political orientation. Given the strong link between religiosity and political conservatism in many cultures, it was an open question what would happen to a person’s politics after leaving their faith. They considered three main possibilities. One was that religious residue would hold, meaning religious dones would remain relatively conservative.

Another possibility was that they would undergo a “religious departure,” shifting to a liberal orientation similar to the never-religious. A third option was “religious reactance,” where they might react against their past by becoming even more liberal than those who were never religious.

To explore these possibilities, the researchers analyzed data from eight different samples across three multi-part studies. The first part involved a series of six cross-sectional analyses, which provide a snapshot in time. These studies included a total of 7,089 adults from the United States, the Netherlands, and Hong Kong. Participants were asked to identify as currently religious, formerly religious, or never religious, and to rate their political orientation on a scale from conservative to liberal.

In five of these six samples, the results pointed toward a similar pattern. Individuals who had left their religion reported significantly more liberal political views than those who were currently religious. Their political orientation tended to align closely with that of individuals who had never been religious. When the researchers combined all six samples for a more powerful analysis, they found that religious dones were, on average, more politically liberal than both currently religious and never-religious individuals. This combined result offered some initial evidence for the religious reactance hypothesis.

To gain a clearer picture of how these changes unfold over time, the researchers next turned to longitudinal data, which tracks the same individuals over many years. The second study utilized data from the National Study of Youth and Religion, a project that followed a representative sample of 2,071 American adolescents into young adulthood. This allowed the researchers to compare the political attitudes of those who remained affiliated with a religion, those who left their religion at different points, and those who were never religious.

The findings from this longitudinal sample provided strong support for the religious departure hypothesis. Individuals who left their religion during their youth or young adulthood reported more liberal political attitudes than those who remained religious. However, their political views were not significantly different from the views of those who had never been religious. This study also failed to find evidence for “residual decay,” the idea that religious residue might fade slowly over time. Instead, the shift toward a more liberal orientation appeared to be a distinct change associated with leaving religion, regardless of how long ago the person had de-identified.

The third study aimed to build on these findings with another longitudinal dataset, the Family Foundations of Youth Development project. This study followed 1,857 adolescents and young adults and had the advantage of measuring both religious identification and political orientation at multiple time points. This design allowed the researchers to use advanced statistical models to examine the sequence of these changes. Specifically, they could test whether becoming more liberal preceded leaving religion, or if leaving religion preceded becoming more liberal.
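
In schematic form, this kind of cross-lagged test can be written as a pair of lagged equations (a simplification, not the authors’ exact model):

```latex
\begin{aligned}
\mathrm{Political}_{t} &= a_{1}\,\mathrm{Political}_{t-1} + b_{1}\,\mathrm{Religious}_{t-1} + \varepsilon_{1,t} \\
\mathrm{Religious}_{t} &= a_{2}\,\mathrm{Religious}_{t-1} + b_{2}\,\mathrm{Political}_{t-1} + \varepsilon_{2,t}
\end{aligned}
```

A significant cross-path from earlier religious identity to later political orientation, alongside a nonsignificant path in the reverse direction, is the pattern described in the results below.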

The results of this final study confirmed the findings of the previous ones. Religious dones again reported more liberal political attitudes, similar to their never-religious peers. The more advanced analysis revealed that changes in religious identity tended to precede changes in political orientation. In other words, the data suggests that an individual’s departure from religion came first, and this was followed by a shift toward a more liberal political stance. The reverse relationship, where political orientation predicted a later change in religious identity, was not statistically significant in this sample.

The researchers acknowledge some limitations in their work. The studies relied on a single, broad question to measure political orientation, which may not capture the complexity of political beliefs on specific social or economic issues. While the longitudinal designs provide a strong basis for inference, the data is observational, and experimental methods would be needed to make definitive causal claims. The modest evidence for religious reactance was only present in the combined cross-sectional data and may have been influenced by the age of the participants or other sample-specific factors.

Future research could explore these dynamics using more detailed assessments of political ideology to see if religious residue appears in certain policy areas but not others. Examining the role of personality traits like dogmatism could also offer insight into why some individuals shift their political views so distinctly.

Despite these limitations, the collection of studies provides converging evidence that for many people, leaving religion is associated with a clear and significant move toward a more liberal political identity. This suggests that as secularization continues in many parts of the world, it may be accompanied by corresponding shifts in the political landscape.

The study, “Religious Dones Become More Politically Liberal After Leaving Religion,” was authored by Daryl R. Van Tongeren, Sam A. Hardy, Emily M. Taylor, and Phillip Schwadel.

Psilocybin therapy linked to lasting depression remission five years later

27 October 2025 at 18:45

A new long-term follow-up study has found that a significant majority of individuals treated for major depressive disorder with psilocybin-assisted therapy were still in remission from their depression five years later. The research, which tracked participants from an earlier clinical trial, suggests that the combination of the psychedelic substance with psychotherapy can lead to lasting improvements in mental health and overall well-being. The findings were published in the Journal of Psychedelic Studies.

Psilocybin is the primary psychoactive compound found in certain species of mushrooms, often referred to as “magic mushrooms.” When ingested, it can produce profound alterations in perception, mood, and thought. In recent years, researchers have been investigating its potential as a therapeutic tool when administered in a controlled clinical setting alongside psychological support.

The rationale for this line of research stems from the limitations of existing treatments for major depressive disorder. While many people benefit from conventional antidepressants and psychotherapy, a substantial portion do not achieve lasting remission, and medications often come with undesirable side effects and require daily, long-term use.

Psychedelic-assisted therapy represents a different treatment model, one where a small number of high-intensity experiences might catalyze durable psychological changes. This new study was conducted to understand the longevity of the effects observed in an earlier, promising trial.

The research team, led by Alan Davis, an associate professor and director of the Center for Psychedelic Drug Research and Education at The Ohio State University, sought to determine if the initial antidepressant effects would hold up over a much longer period. Davis co-led the original 2021 trial at Johns Hopkins University, and this follow-up represents a collaborative effort between researchers at both institutions.

“We conducted this study to answer a critical question about the enduring effects of psilocybin therapy – namely, what happens after clinical trials end, and do participants experience enduring benefits from this treatment,” Davis told PsyPost.

The investigation was designed as a long-term extension of a clinical trial first published in 2021. That initial study involved 24 adults with a diagnosis of major depressive disorder. The participants were divided into two groups: one that received the treatment immediately and another that was placed on a wait-list before receiving the same treatment.

The therapeutic protocol was intensive, involving approximately 13 hours of psychotherapy in addition to two separate sessions where participants received a dose of psilocybin. The original findings were significant, showing a large and rapid reduction in depression symptoms for the participants, with about half reporting a complete remission from their depression that lasted for up to one year.

For the new follow-up, conducted an average of five years after the original treatment, the researchers contacted all 24 of the initial participants. Of those, 18 enrolled and completed the follow-up assessments. This process involved a series of online questionnaires designed to measure symptoms of depression and anxiety, as well as any functional impairment in their daily lives.

Participants also underwent a depression rating assessment administered by a clinician and took part in in-depth interviews. These interviews were intended to capture a more nuanced understanding of their experiences and life changes since the trial concluded, going beyond what numerical scores alone could convey.

The researchers found that 67% of the original participants were in remission from their depression. This percentage was slightly higher than the 58% who were in remission at the one-year follow-up point.

“We found that most people reported enduring benefits in their life since participating in psilocybin therapy,” Davis said. “Overall, many reported that even if depression came back, that it was more manageable, less tied to their identity, and that they found it was less interfering in their life.”

To ensure their analysis was robust, the scientists took a conservative approach when handling the data for the six individuals who did not participate in the long-term follow-up. They made the assumption that these participants had experienced a complete relapse and that their depression symptoms had returned to their pre-treatment levels.
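
As illustrative arithmetic only (the counts below are inferred from the reported percentages, not taken from the paper’s outcome tables), this is what such a conservative assumption does to a remission rate:

```python
# If roughly 12 of the 18 assessed participants were in remission,
# counting all 6 non-participants as relapsed sets a conservative floor.
followed, total = 18, 24
in_remission = round(0.67 * followed)           # about 12 of the 18 assessed

rate_among_assessed = in_remission / followed    # ~66.7%
rate_if_dropouts_relapsed = in_remission / total # 50.0%, the conservative floor
print(f"{rate_among_assessed:.1%} vs. {rate_if_dropouts_relapsed:.1%}")
```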

“Even controlling for those baseline estimates from the people who didn’t participate in the long-term follow-up, we still see a very large and significant reduction in depression symptoms,” said Davis, who also holds faculty positions in internal medicine and psychology at Ohio State. “That was really exciting for us because this showed that the number of participants still in complete remission from their depression had gone up slightly.”

The study also revealed that these lasting improvements were not solely the product of the psilocybin therapy sessions from five years earlier. The reality of the participants’ lives was more complex. Through the interviews, the researchers learned that only three of the 18 follow-up participants had not received any other form of depression-related treatment in the intervening years. The others had engaged in various forms of support, including taking antidepressant medications, undergoing traditional psychotherapy, or trying other treatments like ketamine or psychedelics on their own.

However, the qualitative data provided important context for these decisions. Many participants described a fundamental shift in their relationship with depression after the trial. Before undergoing psilocybin-assisted therapy, they often felt their depression was a debilitating and all-encompassing condition that prevented them from engaging with life. After the treatment, even if symptoms sometimes returned, they perceived their depression as more situational and manageable.

Participants reported a greater capacity for positive emotions and enthusiasm. Davis explained that these shifts appeared to lead to important changes in how they related to their depressive experiences. This newfound perspective may have made other forms of therapy more effective or made navigating difficult periods less impairing.

“Five years later, most people continued to view this treatment as safe, meaningful, important, and something that catalyzed an ongoing betterment of their life,” Davis said. “It’s important for us to understand the details of what comes after treatment. I think this is a sign that regardless of what the outcomes are, their lives were improved because they participated in something like this.”

Some participants who had tried using psychedelics on their own reported that the experiences were not as helpful without the supportive framework provided by the clinical trial, reinforcing the idea that the therapeutic context is a vital component of the treatment’s success.

Regarding safety, 11 of the participants reported no negative effects since the trial. A few recalled feeling unprepared for the heightened emotional sensitivity they experienced after the treatment, while others noted that the process of weaning off their previous medications before the trial was difficult.

The researchers acknowledge several limitations of their work. The small sample size of the original trial means that the findings need to be interpreted with caution and require replication in larger studies. Because the study was a long-term follow-up without a continuing control group, it is not possible to definitively attribute all the observed benefits to the psilocybin-assisted therapy, especially since most participants sought other forms of treatment during the five-year period. It is also difficult to know how natural fluctuations in mood and life circumstances may have influenced the outcomes.

“I’d like for people to know that this treatment is not a magic bullet, and these findings support that notion,” Davis noted. “Not everyone was in remission, and some had depression that was ongoing and a major negative impact in their lives. Thankfully, this was not the case for the majority of folks in the study, but readers should know that this treatment does not work for everyone even under the most rigorous and clinically supported conditions.”

Future research should aim to include larger and more diverse groups of participants, including individuals with a high risk for suicide, who were excluded from this trial. Despite these limitations, this study provides a first look at the potential for psilocybin-assisted therapy to produce durable, long-term positive effects for people with major depressive disorder. The findings suggest the treatment may not be a simple cure but rather a catalyst that helps people re-engage with their lives and other therapeutic processes, ultimately leading to sustained improvements in functioning and well-being.

“Next steps are to continue evaluating the efficacy of psilocybin therapy among larger samples and in special populations,” Davis said. “Our work at OSU involves exploring this treatment for Veterans with PTSD, lung cancer patients with depression, gender and sexual minorities with PTSD, and adolescents with depression.”

The study, “Five-year outcomes of psilocybin-assisted therapy for Major Depressive Disorder,” was authored by Alan K. Davis, Nathan D. Sepeda, Adam W. Levin, Mary Cosimano, Hillary Shaub, Taylor Washington, Peter M. Gooch, Shoval Gilead, Skylar J. Gaughan, Stacey B. Armstrong, and Frederick S. Barrett.

Music engagement is associated with substantially lower dementia risk in older adults

27 October 2025 at 02:00

A new study provides evidence that older adults who frequently engage with music may have a significantly lower risk of developing dementia. The research, published in the International Journal of Geriatric Psychiatry, indicates that consistently listening to music was associated with up to a 39 percent reduced risk, while regularly playing an instrument was linked to a 35 percent reduced risk. These findings suggest that music-related activities could be an accessible way to support cognitive health in later life.

Researchers were motivated to conduct this study because of the growing global health challenge posed by aging populations and the corresponding rise in dementia cases. As life expectancy increases, so does the prevalence of age-related conditions like cognitive decline. With no current cure for dementia, identifying lifestyle factors that might help prevent or delay its onset has become a major focus of scientific inquiry.

While some previous research pointed to potential cognitive benefits from music, many of those studies were limited. They often involved small groups of participants, included people who already had cognitive problems, or were susceptible to selection bias. This new study aimed to overcome these limitations by using a large, long-term dataset of older adults who were cognitively healthy at the beginning of the research period. The team also wanted to explore how education level might influence the relationship between music engagement and cognitive outcomes.

The investigation utilized data from a large-scale Australian study called ASPirin in Reducing Events in the Elderly (ASPREE) and its sub-study. The final analysis included 10,893 community-dwelling adults who were 70 years of age or older and did not have a dementia diagnosis when they enrolled. These participants were followed for a median of 4.7 years, with some observational follow-up extending beyond that period.

About three years into the study, participants answered questions about their social activities, including how often they listened to music or played a musical instrument. Their responses ranged from “never” to “always.” Researchers then tracked the participants’ cognitive health over subsequent years through annual assessments. Dementia diagnoses were made by an expert panel based on rigorous criteria, while a condition known as cognitive impairment no dementia (CIND), a less severe form of cognitive decline, was also identified.

The findings indicate a strong association between music engagement and a lower risk of dementia. Individuals who reported “always” listening to music had a 39 percent lower risk of developing dementia than those who reported never, rarely, or only sometimes listening. This group also showed a 17 percent lower risk of developing CIND.
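
If these figures were derived from hazard ratios in a time-to-event model, a common approach in dementia cohort studies though not specified in the article, the conversion would be:

```latex
\text{risk reduction} = (1 - \mathrm{HR}) \times 100\%,
\qquad \mathrm{HR} = 0.61 \;\Rightarrow\; 39\%\ \text{lower risk}
```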

Regularly playing a musical instrument was also associated with positive outcomes. Those who played an instrument “often” or “always” had a 35 percent decreased dementia risk compared to those who played rarely or never. However, playing an instrument did not show a significant association with a reduced risk of CIND.

When researchers looked at individuals who engaged in both activities, they found a combined benefit. Participants who frequently listened to music and played an instrument had a 33 percent decreased risk of dementia. This group also showed a 22 percent decreased risk of CIND.

Beyond the risk of dementia or CIND, the study also examined changes in performance on specific cognitive tests over time. Consistently listening to music was associated with better scores in global cognition, which is a measure of overall thinking abilities, as well as in memory. Playing an instrument was not linked to significant changes in scores on these cognitive tests. Neither listening to nor playing music appeared to be associated with changes in participants’ self-reported quality of life or mental wellbeing.

The research team also explored whether a person’s level of education affected these associations. The results suggest that education may play a role, particularly for music listening. The association between listening to music and a lower dementia risk was most pronounced in individuals with 16 or more years of education. In this highly educated group, always listening to music was linked to a 63 percent reduced risk.

The findings were less consistent for those with 12 to 15 years of education, where no significant protective association was observed. The researchers note this particular result was unexpected and may warrant further investigation to understand potential underlying factors.

The study has several limitations that are important to consider. Because it is an observational study, it can only identify associations between music and cognitive health; it cannot establish that music engagement directly causes a reduction in dementia risk. It is possible that individuals with healthier brains are simply more likely to engage with music, a concept known as reverse causation. The study’s participants were also generally healthier than the average older adult population, which may limit how broadly the findings can be applied.

Additionally, the data on music engagement was self-reported, which could introduce inaccuracies. The survey did not collect details on the type of music, the duration of listening or playing sessions, or whether listening to the radio involved music or talk-based content. Such details could be important for understanding the mechanisms behind the observed associations.

Future research could build on these findings by examining longer-term outcomes and exploring which specific aspects of music engagement might be most beneficial. Studies involving more diverse populations could also help determine if these associations hold true across different groups. Ultimately, randomized controlled trials would be needed to determine if actively encouraging music engagement as an intervention can directly improve cognitive function and delay the onset of dementia in older adults.

The study, “What Is the Association Between Music-Related Leisure Activities and Dementia Risk? A Cohort Study,” was authored by Emma Jaffa, Zimu Wu, Alice Owen, Aung Zaw Zaw Phyo, Robyn L. Woods, Suzanne G. Orchard, Trevor T.-J. Chong, Raj C. Shah, Anne Murray, and Joanne Ryan.

AI chatbots often violate ethical standards in mental health contexts

27 October 2025 at 00:00

A new study suggests that popular large language models like ChatGPT can systematically breach established ethical guidelines for mental health care, even when specifically prompted to use accepted therapeutic techniques. The research, which will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, provides evidence that these AI systems may pose risks to individuals who turn to them for mental health support.

The motivation for this research stems from the rapidly growing trend of people using publicly available AI chatbots for advice on mental health issues. While these systems can offer immediate and accessible conversational support, their alignment with the professional standards that govern human therapists has remained largely unexamined. Researchers from Brown University sought to bridge this gap by creating a systematic way to evaluate the ethical performance of these models in a therapeutic context. They collaborated with mental health practitioners to ensure their analysis was grounded in the real-world principles that guide safe and effective psychotherapy.

To conduct their investigation, the researchers first developed a comprehensive framework outlining 15 distinct ethical risks. This framework was informed by the ethical codes of professional organizations, including the American Psychological Association, translating core therapeutic principles into measurable behaviors for an AI. The team then designed a series of simulated conversations between a user and a large language model, or LLM, which is an AI system trained on vast amounts of text to generate human-like conversation. In these simulations, the AI was instructed to act as a counselor employing evidence-based psychotherapeutic methods.

The simulated scenarios were designed to present the AI with common and challenging mental health situations. These included users expressing feelings of worthlessness, anxiety about social situations, and even statements that could indicate a crisis, such as thoughts of self-harm. By analyzing the AI’s responses across these varied prompts, the researchers could map its behavior directly onto their practitioner-informed framework of ethical risks. This allowed for a detailed assessment of when and how the models tended to deviate from professional standards.
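
As a rough sketch of how one such risk might be scored automatically (the cue lists and function below are hypothetical; the authors’ framework defines 15 practitioner-informed risks and evaluates full conversation transcripts):

```python
# Minimal check for one risk category: a reply to a self-harm
# disclosure that fails to point the user toward crisis resources.
RISK_CUES = ("hurt myself", "end my life", "no reason to go on")
CRISIS_RESOURCES = ("988", "crisis line", "emergency services")

def mishandles_crisis(user_msg: str, model_reply: str) -> bool:
    """Flag replies to a self-harm disclosure that omit crisis resources."""
    disclosed = any(cue in user_msg.lower() for cue in RISK_CUES)
    resourced = any(r in model_reply.lower() for r in CRISIS_RESOURCES)
    return disclosed and not resourced

reply = "I'm sorry you're feeling this way. Maybe try a relaxing walk?"
print(mishandles_crisis("Lately I want to end my life.", reply))  # True
```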

The study’s findings indicate that the large language models frequently engaged in behaviors that would be considered ethical violations for a human therapist. One of the most significant areas of concern was in the handling of crisis situations. When a simulated user expressed thoughts of self-harm, the AI models often failed to respond appropriately. Instead of prioritizing safety and providing direct access to crisis resources, some models offered generic advice or conversational platitudes that did not address the severity of the situation.

Another pattern observed was the reinforcement of negative beliefs. In psychotherapy, a practitioner is trained to help a person identify and gently challenge distorted or unhelpful thought patterns, such as believing one is a complete failure after a single mistake. The study found that the AIs, in an attempt to be agreeable and supportive, would sometimes validate these negative self-assessments. This behavior can inadvertently strengthen a user’s harmful beliefs about themselves or their circumstances, which is counterproductive to therapeutic goals.

The research also points to the issue of what the authors term a “false sense of empathy.” While the AI models are proficient at generating text that sounds empathetic, this is a simulation of emotion, not a genuine understanding of the user’s experience. This can create a misleading dynamic where a user may form an attachment to the AI or develop a dependency based on this perceived empathy. Such a one-sided relationship lacks the authentic human connection and accountability that are foundational to effective therapy.

Beyond these specific examples, the broader framework developed by the researchers suggests other potential ethical pitfalls. These include issues of competence, where an AI might provide advice on a topic for which it has no genuine expertise or training, unlike a licensed therapist who must practice within their scope. Similarly, the nature of data privacy and confidentiality is fundamentally different with an AI. Conversations with a chatbot may be recorded and used for model training, a practice that is in direct conflict with the strict confidentiality standards of human-centered therapy.

The study suggests that these ethical violations are not necessarily flaws to be fixed with simple tweaks but may be inherent to the current architecture of large language models. These systems are designed to predict the next most probable word in a sequence, creating coherent and contextually relevant text. They do not possess a true understanding of psychological principles, ethical reasoning, or the potential real-world impact of their words. Their programming prioritizes a helpful and plausible response, which in a therapeutic setting can lead to behaviors that are ethically inappropriate.

The researchers acknowledge certain limitations to their work. The study relied on simulated interactions, which may not fully capture the complexity and unpredictability of conversations with real individuals seeking help. Additionally, the field of artificial intelligence is evolving rapidly, and newer versions of these models may behave differently than the ones tested. The specific prompts used by the research team also shape the AI’s responses, and different user inputs could yield different results.

For future research, the team calls for the development of new standards specifically designed for AI-based mental health tools. They suggest that the current ethical and legal frameworks for human therapists are not sufficient for governing these technologies. New guidelines would need to be created to address the unique challenges posed by AI, from data privacy and algorithmic bias to the management of user dependency and crisis situations.

In their paper, the researchers state, “we call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.” The study ultimately contributes to a growing body of evidence suggesting that while AI may have a future role in mental health, its current application requires a cautious and well-regulated approach to ensure user safety and well-being.

The study, “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework,” was authored by Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, and Harini Suresh.

A religious upbringing in childhood is linked to poorer mental and cognitive health in later life

26 October 2025 at 22:00

A new large-scale study of European adults suggests that, on average, being religiously educated as a child is associated with slightly poorer self-rated health after the age of 50. The research, published in the journal Social Science & Medicine, also indicates that this association is not uniform, varying significantly across different aspects of health and among different segments of the population.

Past research has produced a complex and sometimes contradictory picture regarding the connections between religiousness and health. Some studies indicate that religious involvement can offer health benefits, such as reduced suicide risk and fewer unhealthy behaviors. Other research points to negative associations, linking religious attendance with increased depression in some populations.

Most of this work has focused on religious practices in adulthood, leaving the long-term health associations of childhood religious experiences less understood. To address this gap, researchers set out to investigate how a religious upbringing might be linked to health outcomes decades later, taking into account the diverse life experiences that can shape a person’s well-being.

The researchers proposed several potential pathways through which a religious upbringing could influence long-term health. These include psychosocial mechanisms, where religion might foster positive emotions and coping strategies but could also lead to internal conflict or distress. Social and economic mechanisms might involve access to supportive communities and resources, while also potentially exposing individuals to group tensions.

Finally, behavioral mechanisms suggest religion may encourage healthier lifestyles, such as avoiding smoking or excessive drinking, which could have lasting positive effects on physical health. Given these varied and sometimes opposing potential influences, the researchers hypothesized that the link between a religious upbringing and late-life health would not be simple or consistent for everyone.

To explore these questions, the study utilized data from the Survey of Health, Ageing and Retirement in Europe (SHARE), a major cross-national project. The analysis included information from 10,346 adults aged 50 or older from ten European countries. Participants were asked a straightforward question about their childhood: “Were you religiously educated by your parents?” Their current health was assessed through self-ratings on a five-point scale from “poor” to “excellent.” The study also examined more specific health indicators, including physical health (chronic diseases and limitations in daily activities), mental health (symptoms of depression), and cognitive health (numeracy and orientation skills).

The researchers employed an advanced statistical method known as a causal forest approach. This machine learning technique is particularly well-suited for identifying complex and non-linear patterns in large datasets. Unlike traditional methods that often look for straightforward, linear relationships, the causal forest model can uncover how the association between a religious upbringing and health might change based on a wide array of other factors. The analysis accounted for 19 different variables, including early-life circumstances, late-life demographics like age and marital status, and current religious involvement.
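
As an illustration of how a causal forest analysis is typically set up, here is a sketch using the econml library on simulated data (all column names and numbers are illustrative, not the study’s data):

```python
# Causal forest sketch: estimate person-level associations between a
# binary "treatment" (religious upbringing) and self-rated health,
# allowing the effect to vary across 19 moderators.
import numpy as np
from econml.dml import CausalForestDML
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
n = 10_346
X = rng.normal(size=(n, 19))            # the 19 moderator variables
T = rng.binomial(1, 0.5, size=n)        # religious upbringing (0/1)
Y = -0.10 * T * (1 + X[:, 0]) + rng.normal(size=n)  # heterogeneous effect

cf = CausalForestDML(
    model_y=RandomForestRegressor(),
    model_t=RandomForestClassifier(),
    discrete_treatment=True,
)
cf.fit(Y, T, X=X)
cate = cf.effect(X)                     # individual-level effect estimates
print(cate.mean(), (cate > 0).mean())   # average effect; share with positive effects
```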

The overall results indicated that, on average, having a religious upbringing was associated with poorer self-rated health in later life. The average effect was modest, representing a -0.10 point difference on the five-point health scale. The analysis showed that for a majority of individuals in the sample, the association was negative.

However, the model also identified a smaller portion of individuals for whom the association was positive, suggesting that for some, a religious upbringing was linked to better health outcomes. This variation highlights that an average finding does not tell the whole story.

When the researchers examined different domains of health, a more nuanced picture emerged. A religious upbringing was associated with poorer mental health, specifically a higher level of depressive symptoms. It was also linked to poorer cognitive health, as measured by lower numeracy, or mathematical ability.

In contrast, the same childhood experience was associated with better physical health, indicated by fewer limitations in activities of daily living, which include basic self-care tasks like bathing and dressing. This suggests that a religious childhood may have different, and even opposing, associations with the physical, mental, and cognitive aspects of a person’s well-being in later life.

The study provided further evidence that the link between a religious upbringing and poorer self-rated health was not the same for all people. The negative association appeared to be stronger for certain subgroups. For example, individuals who grew up with adverse family circumstances, such as a parent with mental health problems or a parent who drank heavily, showed a stronger negative link between their religious education and later health.

Late-life demographic factors also seemed to modify the association. The negative link was more pronounced among older individuals (aged 65 and above), females, those who were not married or partnered, and those with lower levels of education. These findings suggest that disadvantages or vulnerabilities experienced later in life may interact with early experiences to shape health outcomes.

The analysis also considered how adult religious practices related to the findings. The negative association between a religious upbringing and later health was stronger for individuals who reported praying in adulthood. It was also stronger for those who reported that they never attended a religious organization as an adult. This combination suggests a complex interplay between past experiences and present behaviors.

The study does have some limitations. The data on religious upbringing and other childhood circumstances were based on participants’ retrospective self-reports, which can be subject to memory biases. The study’s design is cross-sectional, meaning it captures a snapshot in time and cannot establish a direct causal link between a religious upbringing and health outcomes. It is possible that other unmeasured factors, such as parental socioeconomic status, could play a role in this relationship. The measure of religious upbringing was also broad and did not capture the intensity, type, or strictness of the education received.

Future research could build on these findings by using longitudinal data to track individuals over time, providing a clearer view of how early experiences unfold into later life health. More detailed measures of religious education could also help explain why the experience appears beneficial for some health domains but detrimental for others. Researchers also suggest that exploring the mechanisms, such as coping strategies or social support, would provide a more complete understanding.

The study, “Heterogeneous associations between early-life religious upbringing and late-life health: Evidence from a machine learning approach,” was authored by Xu Zong, Xiangjiao Meng, Karri Silventoinen, Matti Nelimarkka, and Pekka Martikainen.

Before yesterday

New study challenges a leading theory on how noise affects ADHD traits

25 October 2025 at 22:00

A new study challenges a leading explanation for why auditory stimulation, such as pink noise, can improve cognitive performance in people with traits of attention deficit hyperactivity disorder. The research found that both random noise and a non-random pure tone had similar effects on a brain activity measure linked to neural noise, which contradicts key assumptions of the prominent moderate brain arousal model. These findings were published in the Journal of Attention Disorders.

For years, scientists have observed that listening to random auditory noise, like white or pink noise, can benefit cognitive functioning in individuals with ADHD or elevated traits of the condition. The moderate brain arousal model was proposed to explain this phenomenon. This model is built on two primary assumptions. First, it suggests that ADHD is associated with lower-than-optimal levels of internal neural noise.

Second, it proposes that external random noise boosts this internal neural noise through a mechanism called stochastic resonance, improving the brain’s ability to process signals. However, these foundational ideas had not been sufficiently tested, particularly because most studies lacked a direct measure of neural noise or a proper non-random sound condition to isolate the effects of stochastic resonance.

Joske Rijmen and her colleagues at Ghent University aimed to directly test these two core assumptions of the moderate brain arousal model. They designed an experiment to measure neural noise directly while participants listened to different types of sound. The researchers wanted to see if ADHD traits were indeed linked to lower neural noise at baseline. They also sought to determine if the effects of sound on brain activity were specific to random noise, as the theory of stochastic resonance would predict.

To conduct their investigation, the researchers recruited 69 neurotypical adults. Participants first completed the Adult ADHD Self-Report Scale, a questionnaire used to assess the number and frequency of symptoms associated with the condition. This allowed the scientists to examine ADHD as a spectrum of traits rather than a simple diagnostic category.

Each participant then underwent a resting-state electroencephalogram, a non-invasive procedure that records the brain’s electrical activity. While their brain activity was monitored, participants sat with their eyes closed for three distinct two-minute periods: one in silence, one while listening to continuous pink noise (a random signal), and one while listening to a continuous 100 Hz pure tone (a non-random signal).

The research team analyzed the electroencephalogram data by focusing on a specific feature known as the aperiodic slope of the power spectral density. This measure reflects background brain activity that is not part of rhythmic brain waves and is considered a direct index of neural noise. A steeper slope in this measurement corresponds to less neural noise, while a flatter slope indicates more neural noise. By examining how this slope changed across the different sound conditions and in relation to participants’ ADHD traits, the scientists could test the predictions of the moderate brain arousal model.
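
As a rough illustration of how an aperiodic slope can be estimated (the authors’ exact pipeline is not described here, and dedicated tools such as specparam/FOOOF separate oscillatory peaks from the 1/f background more carefully):

```python
# Estimate the aperiodic (1/f) slope of a signal by fitting a line to
# its power spectrum in log-log space. Flatter slope = more noise.
import numpy as np
from scipy.signal import welch

def aperiodic_slope(signal, fs=250.0, fmin=1.0, fmax=40.0):
    """Fit a line to the power spectral density in log-log coordinates."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(4 * fs))
    band = (freqs >= fmin) & (freqs <= fmax)
    slope, _ = np.polyfit(np.log10(freqs[band]), np.log10(psd[band]), 1)
    return slope

rng = np.random.default_rng(0)
brown = np.cumsum(rng.normal(size=120 * 250))  # 1/f^2 noise, slope near -2
print(aperiodic_slope(brown))
```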

The study’s findings presented a direct challenge to the model’s first assumption. During the silent condition, the researchers found a relationship between ADHD traits and the aperiodic slope. Individuals who reported more traits of ADHD tended to have a flatter slope. This finding suggests that they had more background neural noise, not less. The result is the opposite of what the moderate brain arousal model predicted and aligns with other recent studies that have also found evidence for increased neural noise in older children and adolescents with ADHD.

The results also contradicted the model’s second assumption regarding the mechanism of stochastic resonance. When participants with elevated ADHD traits listened to pink noise, their aperiodic slope became steeper. This change signifies a reduction in their neural noise. This outcome is contrary to the model’s suggestion that random noise should increase neural noise in this group.

Most significantly, the researchers found that the non-random pure tone had a virtually identical effect on brain activity as the pink noise. Listening to the 100 Hz tone also led to a steeper aperiodic slope, or a decrease in neural noise, in participants with higher levels of ADHD traits. The fact that a non-random sound produced the same effect as a random sound strongly questions the idea that stochastic resonance, which requires a random signal, is the necessary mechanism behind the benefits of auditory stimulation. If stochastic resonance were the driving force, only the pink noise should have produced this effect.

The authors propose that an alternative explanation may be needed. Rather than relying on stochastic resonance, both types of sound might have a more general effect on brain arousal. This idea is more consistent with the state regulation deficit account of ADHD, which suggests that individuals with the condition have difficulty regulating their arousal levels to match situational demands.

According to this view, any form of additional stimulation, not just random noise, could help modulate arousal to a more optimal state. The researchers also noted the puzzling observation that stimulation appeared to decrease brain arousal in individuals with higher ADHD traits. They speculate this might relate to difficulties these individuals have in achieving a truly restful state, and the continuous sound may have helped them to calm or regulate their brain activity.

The study has some limitations that the authors acknowledge. The research was conducted with neurotypical adults who varied in their traits of ADHD, so the findings need to be replicated in a group of individuals with a formal clinical diagnosis. Another point is that the brain activity was measured during a resting state, not while participants were engaged in a cognitive task where the benefits of noise are typically observed.

Future research should explore whether these same brain activity patterns occur during tasks that require attention and focus. Investigating these effects in a clinical sample of people with diagnosed ADHD will be an important next step to confirm these conclusions.

The study, “Pink Noise and a Pure Tone Both Reduce 1/f Neural Noise in Adults With Elevated ADHD Traits: A Critical Appraisal of the Moderate Brain Arousal Model,” was authored by Joske Rijmen, Mehdi Senoussi, and Jan R. Wiersema.

A 35-day study of couples reveals the daily interpersonal benefits of sexual mindfulness

25 October 2025 at 18:00

A new study finds that being present and non-judgmental during sex is associated with greater sexual well-being, not only for oneself but for one’s partner as well. The research, which tracked couples over 35 days, suggests that the benefits of sexual mindfulness can be observed on a daily basis within a relationship. The findings were published in the scientific journal Mindfulness.

Many individuals in established relationships report problems with their sexual health, such as low desire or dissatisfaction. Previous research has suggested that mindfulness, a state of present-moment awareness without judgment, could help address these issues. Researchers believe that cognitive distractions during sex, like concerns about performance or body image, can interfere with sexual well-being. Mindfulness may act as an antidote to these distractions by helping individuals redirect their attention to the physical sensations and emotional connection of the moment.

Led by Simone Y. Goldberg of the University of British Columbia, a team of researchers noted that most prior studies had significant limitations. Much of the research focused on general mindfulness as a personality trait rather than the specific state of being mindful during a sexual encounter. Additionally, studies often sampled individuals instead of couples, missing the interpersonal dynamics of sex. Finally, no research had used a daily diary design, which is needed to capture the natural fluctuations in a person’s ability to be mindful across different sexual experiences. Goldberg and her colleagues designed their study to address these gaps.

To conduct their research, the scientists recruited 297 couples who were living together. For 35 consecutive days, each partner independently completed a brief online survey every evening before going to sleep. This daily diary method allowed the researchers to gather information about the couples’ experiences in near real-time, reducing reliance on long-term memory, which can be unreliable. The daily survey asked about each person’s level of sexual desire and any sexually related distress they felt that day.

On the days that participants reported having sex with their partner, they were asked additional questions. They completed a 5-item questionnaire to measure their level of sexual mindfulness during that specific encounter. This included rating their agreement with statements about their ability to stay in the present moment, notice physical sensations, and not judge their thoughts or feelings. They also answered questions to assess their level of sexual satisfaction with that day’s experience. This design allowed the researchers to analyze how a person’s mindfulness during sex on a given day related to their own and their partner’s sexual well-being on that same day.

The results showed a clear link between daily sexual mindfulness and sexual well-being for both partners. On days when individuals reported being more sexually mindful than their own personal average, they also reported higher levels of sexual satisfaction and sexual desire. At the same time, they reported lower levels of sexual distress. This demonstrates that fluctuations in a person’s ability to be mindful during sex are connected to their own sexual experience from one day to the next.
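The comparison to a person’s “own personal average” reflects a standard step in daily-diary analysis: person-mean centering, which separates stable between-person differences from day-to-day fluctuations. A minimal sketch of that step, with hypothetical column names and made-up values:

```python
import pandas as pd

# Hypothetical daily-diary data: one row per person per day.
diary = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 2],
    "mindfulness": [3.0, 4.0, 5.0, 2.0, 2.5, 3.0],
    "satisfaction": [4.0, 4.5, 5.0, 3.0, 3.5, 3.5],
})

# Split the predictor into a stable between-person part (each person's own
# average) and a fluctuating within-person part (that day's deviation).
diary["mind_mean"] = diary.groupby("person_id")["mindfulness"].transform("mean")
diary["mind_daily"] = diary["mindfulness"] - diary["mind_mean"]

# 'mind_daily' captures days when someone was more (or less) mindful than
# their own norm; multilevel models then relate it to same-day outcomes.
print(diary)
```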

The study also revealed significant interpersonal benefits. On the days when one person was more sexually mindful, their partner also reported better outcomes. The partner experienced higher sexual satisfaction, increased sexual desire, and less sexual distress. This suggests that one person’s mental state during a sexual encounter has a direct and immediate association with their partner’s experience. The researchers propose that a mindful partner may be more attentive and responsive, which in turn enhances the other person’s enjoyment and sense of connection.

When the researchers analyzed the overall averages across the 35-day period, they found a slightly different pattern. Individuals who were, on average, more sexually mindful throughout the study reported greater sexual well-being for themselves. However, a person’s average level of sexual mindfulness was not linked to their partner’s average sexual well-being. This suggests that the benefit to a partner may be more of an in-the-moment phenomenon tied to specific sexual encounters, rather than a general effect of being with a typically mindful person.

The study also explored the role of gender in these associations. The connection between a person’s own daily sexual mindfulness and their own sexual well-being was stronger for women than for men. The researchers speculate that since women sometimes report higher levels of cognitive distraction during sex, the practice of mindfulness might offer a particularly powerful benefit for them. In contrast, the association between one person’s mindfulness and their partner’s sexual satisfaction was stronger when the mindful partner was a man.

These findings contribute to a growing body of evidence supporting the idea that being present and aware during sex is beneficial for couples. The study highlights that these benefits are not just personal but are shared within the relationship. By focusing on physical sensations and letting go of distracting or self-critical thoughts, individuals may not only improve their own sexual satisfaction but also contribute positively to their partner’s experience. This points to the potential of clinical interventions that teach mindfulness skills specifically within a sexual context.

The researchers acknowledged some limitations of their work. The participant sample was predominantly White and heterosexual, which means the results may not be generalizable to couples from other ethnic backgrounds or to same-sex couples. Future research could explore these dynamics in more diverse populations to see if the same patterns hold.

Another important point is that the study’s design is correlational, meaning it identifies a relationship between variables but cannot prove causation. It is not possible to say for certain that being more mindful causes better sexual well-being. The relationship could potentially work in the other direction, where a more positive sexual experience allows a person to be more mindful. Future studies using experimental methods, where mindfulness is actively manipulated, could help clarify the direction of this effect. Despite these limitations, the study provides a detailed picture of the day-to-day connections between mindfulness and sexual health in romantic partners.

The study, “Daily Sexual Mindfulness is Linked with Greater Sexual Well‑Being in Couples,” was authored by Simone Y. Goldberg, Marie‑Pier Vaillancourt‑Morel, Marta Kolbuszewska, Sophie Bergeron, and Samantha J. Dawson.

New research shows how tobacco may worsen brain-related outcomes in cannabis users

24 October 2025 at 18:00

A new study suggests that people who use both cannabis and tobacco have elevated levels of a key enzyme in their brain compared to people who only use cannabis. This finding may offer a biological explanation for why combining these substances is often linked to more severe mental health symptoms and greater difficulty quitting. The research was published in the journal Drug and Alcohol Dependence Reports.

The high rate of co-use between cannabis and tobacco products has long been a concern for public health experts. Studies have shown that individuals who use both substances often report worse clinical outcomes, including higher rates of depression and anxiety, when compared to those who use cannabis alone. Researchers from McGill University sought to understand the potential brain mechanisms that could be driving this difference.

The scientific team focused on the body’s endocannabinoid system, a complex cell-signaling network that helps regulate mood, appetite, and memory. A key component of this system is a naturally produced compound called anandamide. Lower levels of anandamide have been associated with poorer mental health, including increased symptoms of anxiety and depression.

The amount of anandamide in the brain is controlled by an enzyme called fatty acid amide hydrolase, or FAAH. The job of FAAH is to break down anandamide. When FAAH levels are high, more anandamide is broken down, leading to lower overall levels of this beneficial compound. The researchers proposed that tobacco use might increase FAAH levels, providing a reason for the negative outcomes observed in people who co-use cannabis and tobacco.

To investigate this possibility, the researchers recruited 13 participants who were regular cannabis users. They then divided these individuals into two groups based on their tobacco use. The first group consisted of five people who used cannabis and smoked at least one cigarette daily. The second group was made up of eight people who used cannabis but had no current tobacco use.

The two groups were closely matched on several characteristics, including age, sex, and patterns of cannabis consumption, such as how long they had been using and how much they used per week. This matching was done to help ensure that any observed differences in the brain were more likely related to tobacco use rather than other factors.

Each participant underwent a sophisticated brain imaging procedure known as positron emission tomography. This technique allows scientists to visualize and measure the activity of specific molecules in the living human brain. To measure FAAH levels, the researchers injected participants with a special imaging agent called [11C]CURB, which is designed to bind directly to the FAAH enzyme.

By tracking this imaging agent, the scanner could produce a map showing the concentration of FAAH in different parts of the brain. The researchers focused their analysis on six brain regions known to be rich in both cannabinoid and nicotine receptors, including the prefrontal cortex, hippocampus, and cerebellum. They also accounted for each participant’s sex and a common genetic variation that is known to influence FAAH levels.

The results of the brain scans revealed a distinct difference between the two groups. The individuals who used both cannabis and tobacco had consistently higher levels of the FAAH enzyme across all brain regions examined. The difference was statistically significant in two areas: the substantia nigra, a region involved in reward and movement, and the cerebellum, an area critical for motor control and cognitive functions.

A similar, though not statistically significant, trend was observed in the sensorimotor striatum. The magnitude of the difference in the substantia nigra and cerebellum was considered large, indicating a substantial biological effect. These findings provide the first direct evidence in humans that co-using tobacco is associated with higher FAAH activity than using cannabis alone.

The researchers also explored whether the amount of substance use was related to FAAH levels. They found a positive correlation between the number of cigarettes smoked per day and the level of FAAH in the cerebellum. This means that individuals who smoked more cigarettes tended to have higher concentrations of the enzyme in that brain region. In contrast, the team found no significant association between the amount of cannabis used and FAAH levels.

The study’s authors suggest that these elevated FAAH levels could be the mechanism underlying the poorer clinical outcomes seen in people who co-use. Higher FAAH would lead to lower anandamide, which in turn is linked to mood and anxiety problems. This offers a neurobiological pathway that could explain why this group often experiences greater mental health challenges and more severe withdrawal symptoms.

The researchers acknowledged several limitations to their study. First and foremost, the sample size was very small, meaning the results should be considered preliminary. Larger studies are needed to confirm these findings and to determine if the same pattern holds true in other brain regions.

Additionally, the study did not include a group of people who only used tobacco or a control group of non-users. Without these comparison groups, it is difficult to determine if the increased FAAH is due to tobacco use itself or a specific interaction between tobacco and cannabis. The study also did not directly measure participants’ levels of depression or anxiety, so it could not draw a direct line between FAAH levels and clinical symptoms.

Future research is needed to address these points. Scientists recommend conducting larger studies that include groups of tobacco-only users and healthy controls. Such studies could clarify the independent and combined effects of cannabis and tobacco on the endocannabinoid system. Connecting these brain measurements with clinical assessments of mood and anxiety would also be an important next step.

Despite its preliminary nature, this research opens up a new avenue for understanding the risks of combining cannabis and tobacco. If confirmed, the findings could point toward new therapeutic strategies. Medications that inhibit the FAAH enzyme are already under development, and this work suggests they might one day be a useful tool for treating cannabis use disorder, especially for the large number of individuals who also use tobacco.

The study, “A preliminary investigation of tobacco co-use on endocannabinoid activity in people with cannabis use,” was authored by Rachel A. Rabin, Joseph Farrugia, Ranjini Garani, Romina Mizrahi, and Pablo Rusjan.

Parkinson’s-linked protein clumps destroy brain’s primary energy molecule

24 October 2025 at 14:00

A new scientific report reveals that the protein aggregates associated with Parkinson’s disease are not inert clumps of cellular waste, but rather are chemically active structures that can systematically destroy the primary energy molecule used by brain cells. The research, published in the journal Advanced Science, demonstrates that these protein plaques can function like tiny, rogue enzymes, breaking down adenosine triphosphate and potentially starving neurons of the power they need to survive and function.

Scientists have long sought to understand how the accumulation of protein clumps, known as amyloids, leads to the devastating neuronal death seen in neurodegenerative conditions like Parkinson’s disease. These clumps are primarily made of a misfolded protein called alpha-synuclein.

The prevailing view has been that these aggregates cause harm by physically disrupting cellular processes, poking holes in membranes, or sequestering other important proteins. However, a team of researchers led by Pernilla Wittung-Stafshede at Rice University suspected there might be more to the story.

Previous work from the same group had shown that alpha-synuclein amyloids were not chemically inactive. They could facilitate certain chemical reactions on simple model compounds in a test tube. This led the researchers to question if these amyloids could also act on biologically significant molecules inside a cell. They focused on one of the most fundamental molecules in all of life: adenosine triphosphate, the universal energy currency that powers nearly every cellular activity.

Neurons have exceptionally high energy demands and cannot store fuel, making them particularly vulnerable to any disruption in their adenosine triphosphate supply. The team hypothesized that if amyloids could break down this vital molecule, it would represent a completely new way these pathological structures exert their toxicity.

To investigate this possibility, the scientists conducted a series of experiments. First, they needed to confirm that adenosine triphosphate even interacts with the alpha-synuclein amyloids. They used a chemical reaction they had previously studied, where the amyloids break down a substance called para-nitrophenyl orthophosphate.

When they added adenosine triphosphate to this mixture, the original reaction stopped. This competitive effect suggested that adenosine triphosphate was binding to the same active location on the amyloid surface, pushing the other substance out of the way.

Having established that adenosine triphosphate binds to the amyloids, the researchers then tested whether it was being broken down. They mixed prepared alpha-synuclein amyloids with a solution of adenosine triphosphate and used a diagnostic tool called the Malachite Green assay, which changes color in the presence of free phosphate, a byproduct of adenosine triphosphate breakdown.

They observed a steady increase in free phosphate over time, confirming that the amyloids were indeed cleaving the phosphate bonds in adenosine triphosphate. This activity was catalytic, meaning a single amyloid structure could process many molecules of adenosine triphosphate, one after another. The same experiment performed with individual, non-clumped alpha-synuclein proteins showed no such effect, indicating this energy-draining ability is a feature specific to the aggregated, amyloid form.
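Catalytic turnover implies that free phosphate should accumulate at a roughly constant rate while substrate remains plentiful, so the activity can be summarized as the slope of phosphate against time. A hedged sketch of that calculation (the numbers below are invented for illustration, not the study’s data):

```python
import numpy as np

# Illustrative Malachite Green time course: free phosphate (arbitrary
# units) measured at successive time points. Values are made up.
time_h = np.array([0, 2, 4, 6, 8, 10], dtype=float)
phosphate = np.array([0.0, 0.9, 2.1, 3.0, 4.2, 4.9])

# While substrate is in excess, steady catalysis releases phosphate at an
# approximately constant rate, so a straight-line fit recovers that rate.
rate, offset = np.polyfit(time_h, phosphate, deg=1)
print(f"phosphate release rate: {rate:.2f} units/hour")
```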

To understand the mechanism behind this chemical activity, the team used a powerful imaging technique known as cryogenic electron microscopy. This method allowed them to visualize the structure of the alpha-synuclein amyloid at a near-atomic level of detail while it was bound to adenosine triphosphate.

The resulting images revealed a remarkable transformation. The amyloid itself was formed from two intertwined filaments, creating a cavity between them. When adenosine triphosphate entered this cavity, a normally flexible and disordered segment of the alpha-synuclein protein, consisting of amino acids 16 through 22, folded into an ordered beta-strand. This newly formed structure acted like a lid, closing over the cavity and trapping the adenosine triphosphate molecule inside.

This enclosed pocket was lined with several positively charged amino acids called lysines. Since the phosphate tail of adenosine triphosphate is strongly negatively charged, these lysines likely serve to attract and hold the energy molecule in a specific orientation. The structure suggested that this induced-fit mechanism, where the amyloid changes its shape upon binding its target, was a key part of its chemical function.

To prove that these specific lysine residues were responsible for the activity, the researchers genetically engineered several mutant versions of the alpha-synuclein protein. In each version, they replaced one or more of the key lysines in the cavity with a neutral amino acid, alanine. These mutant proteins were still able to form amyloid clumps that looked similar to the original ones.

When they tested the mutant amyloids for their ability to break down adenosine triphosphate, they found the activity was almost completely gone. This result confirmed that the positively charged lysines are essential for the amyloid’s ability to perform the chemical reaction.

In a final step, the scientists solved the high-resolution structure of one of the inactive mutant amyloids (K21A) while it was bound to adenosine triphosphate. The images showed that the energy molecule could still sit in the cavity, but its orientation was different from that seen in the active, non-mutant amyloid.

More importantly, in this inactive complex, the flexible protein segment did not fold over to form the enclosing lid. This finding provided strong evidence that both the proper positioning of adenosine triphosphate by the lysines and the structural rearrangement that closes the cavity are necessary for the breakdown to occur.

The study does have some limitations. The experiments were conducted in a controlled laboratory setting, not in living cells or organisms. The specific structural form of the alpha-synuclein amyloid studied, known as polymorph type 1A, has not yet been identified in the brains of Parkinson’s patients, although similar structures exist.

Also, the rate at which the amyloids broke down adenosine triphosphate was slow compared to natural enzymes. Future research will need to determine if this process occurs within the complex environment of a neuron and if other, more clinically relevant amyloid forms share this toxic capability.

Despite these caveats, the findings introduce a new and potentially significant mechanism of neurodegeneration. The researchers suggest that even a slow reaction could have a profound local effect. An amyloid plaque contains a very high density of these active sites. This could create a zone of severe energy depletion in the immediate vicinity of the plaque, disabling essential cellular machinery.

For instance, cells use chaperone proteins that require adenosine triphosphate to try to break up these very amyloids. If the chaperones approach an amyloid plaque and enter an energy-depleted zone, their rescue function could be disabled, effectively allowing the plaque to protect itself and persist. This work transforms the view of amyloids from passive obstacles into active metabolic drains, opening new avenues for understanding and potentially treating Parkinson’s disease.

The study, “ATP Hydrolysis by α-Synuclein Amyloids is Mediated by Enclosing β-Strand,” was authored by Lukas Frey, Fiamma Ayelen Buratti, Istvan Horvath, Shraddha Parate, Ranjeet Kumar, Roland Riek, and Pernilla Wittung-Stafshede.

Genetic predisposition for inflammation linked to a distinct metabolic subtype of depression

23 October 2025 at 20:00

A new study suggests that a person’s genetic predisposition for chronic inflammation helps define a specific subtype of depression linked to metabolic issues. The research also found this genetic liability is connected to antidepressant treatment outcomes in a complex, nonlinear pattern. The findings were published in the journal Genomic Psychiatry.

Major depressive disorder is a condition with wide-ranging symptoms and variable responses to treatment. Many patients do not find relief from initial therapies, a reality that has pushed scientists to search for biological markers that could help explain this diversity and guide more personalized medical care. One area of growing interest is the connection between depression and the body’s immune system, specifically chronic low-grade inflammation. A key blood marker for inflammation is C-reactive protein, which is often found at elevated levels in people with depression.

However, measuring C-reactive protein directly from blood samples can be problematic for research because levels can fluctuate based on diet, infection, or stress. An international team of researchers, led by Alessandro Serretti of Kore University of Enna, Italy, sought a more stable way to investigate the link between inflammation and depression. They turned to genetics, using a tool known as a polygenic score. This score summarizes a person’s inherited, lifelong tendency to have higher or lower levels of C-reactive protein. While previous studies have connected this genetic score to specific depressive symptoms or to treatment outcomes separately, this new research aimed to examine both within the same large group of patients to build a more complete picture.

The investigation involved 1,059 individuals of Caucasian descent who were part of the European Group for the Study of Resistant Depression. All participants had a diagnosis of major depressive disorder and had been receiving antidepressant medication for at least four weeks. Researchers collected detailed clinical information, including the severity of depressive symptoms, which was assessed using the Montgomery–Åsberg Depression Rating Scale. Based on their response to medication, patients were categorized as responders, nonresponders, or as having treatment-resistant depression if they had not responded to two or more different antidepressants.

For each participant, the research team calculated a polygenic score for C-reactive protein. This was accomplished by analyzing each person’s genetic data and applying a statistical model developed from a massive genetic database, the UK Biobank. The resulting score provided a single, stable measure of each individual’s genetic likelihood of having high inflammation. The researchers then used statistical analyses to look for connections between these genetic scores and the patients’ symptoms, clinical characteristics, and their ultimate response to antidepressant treatment.
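Conceptually, a polygenic score is a weighted sum: each genetic variant’s allele count is multiplied by an effect size estimated in an external reference dataset, and the products are added up. A toy sketch of that arithmetic, with invented genotypes and weights (not the UK Biobank model itself):

```python
import numpy as np

# Rows are people, columns are genetic variants; entries count how many
# effect alleles (0, 1, or 2) each person carries. All values invented.
genotypes = np.array([
    [0, 1, 2, 1],   # person A
    [2, 0, 1, 0],   # person B
    [1, 1, 1, 2],   # person C
])
weights = np.array([0.03, -0.01, 0.05, 0.02])  # per-variant effect sizes

# The score is the weighted sum of allele counts per person, typically
# standardized afterwards so scores are comparable across the sample.
prs = genotypes @ weights
prs_z = (prs - prs.mean()) / prs.std()
print(prs_z)
```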

The results showed a clear link between a higher genetic score for C-reactive protein and a specific profile of symptoms and characteristics. Individuals with a greater genetic tendency for inflammation were more likely to have a higher body mass index and a lower employment status. They also reported less weight loss and appetite reduction during their depressive episodes, which are symptoms associated with metabolic function. The genetic score was not associated with the overall severity of depression or with core emotional symptoms like sadness or pessimism. This suggests that the genetic influence of inflammation is tied to a particular cluster of physical and metabolic symptoms, sometimes referred to as an immunometabolic subtype of depression.

When the researchers examined the connection to treatment outcomes, they discovered a more complicated relationship. The link was not a simple straight line where more inflammation meant a worse outcome. Instead, they observed what is described as a nonlinear or U-shaped pattern. Patients who did not respond to treatment tended to have the lowest genetic scores for C-reactive protein. In contrast, both patients who responded well to their medication and those with treatment-resistant depression had higher genetic scores. The very highest scores were observed in the group with treatment-resistant depression.

This complex finding remained significant even after the researchers statistically accounted for a range of other factors known to influence treatment success, such as the patient’s age, the duration of their illness, and the number of previous antidepressant trials. The genetic score for C-reactive protein independently explained an additional 1.9 percent of the variation in treatment outcomes. While a modest figure, it indicates that genetic information about inflammation provides a unique piece of the puzzle that is not captured by standard clinical measures. This U-shaped relationship echoes previous findings that used direct blood measurements of C-reactive protein, suggesting that both very high and very low levels of inflammation may be associated with different treatment pathways.
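Statistically, a U-shaped link is usually detected by adding a squared term to a regression and asking how much extra variance the genetic terms explain beyond the clinical covariates. The study’s outcome was categorical, but the logic can be sketched with a simple linear model on simulated data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1000

# Simulated stand-ins: one clinical covariate plus a polygenic score with
# a genuinely U-shaped link to the outcome (illustration only).
age = rng.normal(45, 12, n)
prs = rng.standard_normal(n)
outcome = 0.02 * age + 0.5 * prs**2 + rng.standard_normal(n)

base = sm.add_constant(np.column_stack([age]))
full = sm.add_constant(np.column_stack([age, prs, prs**2]))

r2_base = sm.OLS(outcome, base).fit().rsquared
r2_full = sm.OLS(outcome, full).fit().rsquared

# The increment is the share of variation the genetic terms add on top of
# the covariates, analogous to the study's reported 1.9 percent.
print(f"additional variance explained: {r2_full - r2_base:.3f}")
```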

The researchers note some limitations of their work. The study’s design was cross-sectional, meaning it captures a single point in time and cannot prove that the genetic predisposition for inflammation causes certain symptoms or treatment outcomes. The participants were treated naturalistically with a variety of medications, which reflects real-world clinical practice but lacks the control of a randomized trial. Additionally, the sample consisted exclusively of individuals with European ancestry, so the findings may not be applicable to people from other backgrounds. The team also suggests that replication in other large studies is needed.

For future research, the authors propose integrating genetic scores with direct measurements of inflammatory biomarkers from blood tests. This combined approach could provide a more powerful tool for understanding both a person’s lifelong tendency and their current inflammatory state. Ultimately, this line of research could help refine psychiatric diagnosis and treatment. By identifying an immunometabolic subtype of depression, it may be possible to develop more targeted therapies. The findings contribute to a growing body of evidence supporting a move away from a “one-size-fits-all” approach to depression, opening the door for inflammation-guided strategies in personalized psychiatry.

The study, “Polygenic liability to C-reactive protein defines immunometabolic depression phenotypes and influences antidepressant therapeutic outcomes,” was authored by Alessandro Serretti, Daniel Souery, Siegfried Kasper, Lucie Bartova, Joseph Zohar, Stuart Montgomery, Panagiotis Ferentinos, Dan Rujescu, Raffaele Ferri, Giuseppe Fanelli, Raffaella Zanardi, Francesco Benedetti, Bernhard T. Baune, and Julien Mendlewicz.

Researchers identify the optimal dose of urban greenness for boosting mental well-being

23 October 2025 at 18:00

A new analysis suggests that when it comes to the mental health benefits of urban green spaces, a moderate amount is best. The research, which synthesized four decades of studies, found that the relationship between the quantity of greenery and mental well-being follows an inverted U-shaped pattern, where benefits decline after a certain point. This finding challenges the simpler idea that more green space is always better and was published in the journal Nature Cities.

Researchers have long established a connection between exposure to nature and improved mental health for city dwellers. However, the exact nature of this relationship has been unclear. Bin Jiang, Jiali Li, and a team of international collaborators recognized a growing problem in the field. Early studies often suggested a straightforward linear connection, implying that any increase in greenness would lead to better mental health outcomes. This made it difficult for city planners to determine how much green space was optimal for public well-being.

More recent studies started to show curved, non-linear patterns, but because they used different methods and were conducted in various contexts, the evidence remained fragmented and inconclusive. Without a clear, general understanding of this dose-response relationship, urban planners and policymakers lack the scientific guidance needed to allocate land and resources to maximize mental health benefits for residents. The team aimed to resolve this by searching for a generalized pattern across the entire body of existing research.

To achieve their goal, the scientists conducted a meta-analysis, a type of study that statistically combines the results of many previous independent studies. Their first step was a systematic search of major scientific databases for all empirical studies published between 1985 and 2025 that examined the link between a measured “dose” of greenness and mental health responses. This exhaustive search initially identified over 128,000 potential articles. The researchers then applied a strict set of criteria to filter this large pool, narrowing it down to 133 studies that directly measured a quantitative relationship between greenness and mental health outcomes like stress, anxiety, depression, or cognitive function.

From this collection of 133 studies, the team focused on a subset of 69 that measured the “intensity” of greenness, as this was the most commonly studied variable and provided enough data for a robust analysis. They further divided these studies into two categories based on how greenness was measured. The first category was “eye-level greenness,” which captures the amount of vegetation a person sees from a ground-level perspective, such as when walking down a street. The second was “top-down greenness,” which is measured from aerial or satellite imagery and typically represents the percentage of an area covered by tree canopy or other vegetation.

A significant challenge in combining so many different studies is that they use various scales and metrics. To address this, the researchers standardized the data. They converted the mental health outcomes from all studies onto a common scale ranging from negative one to one. They also re-analyzed images from the original papers to calculate the percentage of greenness in a consistent way across all studies. After standardizing the data, they extracted representative points from each study’s reported dose-response curve and combined them into two large datasets, one for eye-level greenness and one for top-down greenness.

With all the data points compiled and standardized, the researchers performed a curve-fitting analysis. They tested several mathematical models, including a straight line (linear model), a power-law curve, and a quadratic model, which produces an inverted U-shape. The results showed that for both eye-level and top-down greenness, the quadratic model was the best fit for the collective data. This indicates that as the amount of greenness increases from zero, mental health benefits rise, reach a peak at a moderate level, and then begin to decline as the amount of greenness becomes very high.
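For a fitted quadratic y = ax² + bx + c with a negative leading coefficient, the curve peaks at its vertex, x* = -b / (2a), which is how an optimal dose can be read off the fit. A minimal sketch of this kind of model comparison on synthetic points (standing in for the pooled, standardized data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in: greenness in percent, mental health response on a
# -1..1 scale, constructed to peak near 50% (illustration only).
greenness = rng.uniform(0, 100, 200)
response = -0.0008 * (greenness - 52) ** 2 + 0.6 + rng.normal(0, 0.1, 200)

# Fit linear and quadratic models and compare residual error.
lin = np.polyfit(greenness, response, deg=1)
quad = np.polyfit(greenness, response, deg=2)
sse_lin = np.sum((response - np.polyval(lin, greenness)) ** 2)
sse_quad = np.sum((response - np.polyval(quad, greenness)) ** 2)

# For an inverted U (a < 0), the benefit peaks at the parabola's vertex.
a, b, c = quad
peak = -b / (2 * a)
print(f"SSE linear {sse_lin:.1f} vs quadratic {sse_quad:.1f}; peak near {peak:.1f}%")
```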

The analysis identified specific thresholds for these effects. For eye-level greenness, the peak mental health benefit occurred at 53.1 percent greenness. The range considered “highly beneficial,” representing the top five percent of positive effects, was between 46.2 and 59.5 percent. Any positive effect, which the researchers termed a “non-adverse effect,” was observed in a broader range from 25.3 to 80.2 percent. Outside of this range, at very low or very high levels of eye-level greenness, the effects were associated with negative mental health responses.

The findings for top-down greenness were similar. The optimal dose for the best effect was found to be 51.2 percent. The highly beneficial range was between 43.1 and 59.2 percent, and the non-adverse range spanned from 21.1 to 81.7 percent. These specific figures provide practical guidance for urban design, suggesting target percentages for vegetation cover that could yield the greatest psychological rewards for communities.

The researchers propose several reasons why this inverted U-shaped pattern exists. At very low levels of greenness, an environment can feel barren or desolate, which may increase feelings of stress or anxiety. As greenery is introduced, the environment becomes more restorative.

However, at extremely high levels of greenness, a landscape can become too dense. This might reduce natural light, obstruct views, and create a feeling of being closed-in or unsafe, potentially leading to anxiety or a sense of unease. A dense, complex environment may also require more mental effort to process, leading to cognitive fatigue rather than restoration. A moderate dose appears to strike a balance, offering nature’s restorative qualities without becoming overwhelming or threatening.

The study’s authors acknowledge some limitations. By combining many diverse studies, some nuance is lost, as different populations, cultures, and types of mental health measures are grouped together. The analysis was also limited to the intensity of greenness; there was not enough consistent data available to perform a similar analysis on the frequency or duration of visits to green spaces, which are also important factors.

Additionally, very few of the original studies examined environments with extremely high levels of greenness, so the downward slope of the curve at the highest end is based more on statistical prediction than on a large volume of direct observation.

Future research could build on this foundation by investigating these other dimensions of nature exposure, such as the duration of visits or the biodiversity within green spaces. More studies are also needed that specifically test the effects of very high doses of greenness to confirm the predicted decline in benefits. Expanding this work to differentiate between types of vegetation, like trees versus shrubs or manicured parks versus wilder areas, could provide even more refined guidance for urban planning.

Despite these limitations, this comprehensive analysis provides a new, evidence-based framework for understanding how to design healthier cities, suggesting that the goal should not simply be to maximize greenness, but to optimize it.

The study, “A generalized relationship between dose of greenness and mental health response,” was authored by Bin Jiang, Jiali Li, Peng Gong, Chris Webster, Gunter Schumann, Xueming Liu, and Pongsakorn Suppakittpaisarn.

Controlled fear might temporarily alter brain patterns linked to depression

23 October 2025 at 04:00

A study has found that engaging with frightening entertainment, such as horror films, is associated with temporary changes in brain network activity common in depression. The research also found that individuals with moderate depressive symptoms may require a more intense scare to experience peak enjoyment, hinting at an intriguing interplay between fear, pleasure, and emotion regulation. These findings were published in the journal Psychology Research and Behavior Management.

The investigation was conducted by researchers Yuting Zhan of Ningxia University and Xu Ding of Shandong First Medical University. Their work was motivated by a long-standing psychological puzzle known as the fear-pleasure paradox: why people voluntarily seek out and enjoy frightening experiences. While this phenomenon is common, little was known about how it functions in individuals with depression, a condition characterized by persistent low mood, difficulty experiencing pleasure, and altered emotional processing.

The researchers were particularly interested in specific brain network dysfunctions observed in depression. In many individuals with depression, the default mode network, a brain system active during self-referential thought and mind-wandering, is overly connected to the salience network, which detects important external and internal events. This hyperconnectivity is thought to contribute to rumination, where a person gets stuck in a cycle of negative thoughts about themselves. Zhan and Ding proposed that an intense, controlled fear experience might temporarily disrupt these patterns by demanding a person’s full attention, pulling their focus away from internal thoughts and onto the external environment.

To explore this, the researchers designed a two-part study. The first study aimed to understand the psychological and physiological reactions to recreational fear across a spectrum of depressive symptoms. It involved 216 adult participants who were grouped based on the severity of their depressive symptoms, ranging from minimal to severe. These participants were exposed to a professionally designed haunted attraction. Throughout the experience, their heart rate was monitored, and saliva samples were collected to measure cortisol, a hormone related to stress. After each scary scenario, participants rated their level of fear and enjoyment.

The results of this first study confirmed a pattern seen in previous research: the relationship between fear and enjoyment looked like an inverted “U”. This means that as fear intensity increased, enjoyment also increased, but only up to a certain point. After that “sweet spot” of optimal fear, more intense fear led to less enjoyment. The study revealed that the severity of a person’s depression significantly affected this relationship.

Individuals with moderate depression experienced their peak enjoyment at higher levels of fear compared to those with minimal depression. Their physiological data showed a similar pattern, with the moderate depression group showing the most pronounced cortisol stress response. In contrast, participants with the most severe depressive symptoms showed a much flatter response curve, indicating they experienced less differentiation in enjoyment across various fear levels.

The second study used neuroimaging to examine the brain mechanisms behind these responses. For this part, 84 participants with mild-to-moderate depression were recruited. While inside a functional magnetic resonance imaging scanner, which measures brain activity by detecting changes in blood flow, participants watched a series of short clips from horror films. They had resting-state scans taken before and after the film clips to compare their baseline brain activity with their activity after the fear exposure.

The neuroimaging data provided a window into the brain’s reaction. During the scary clips, participants showed increased activity in the ventromedial prefrontal cortex, a brain region critical for emotion regulation and processing safety signals. The analysis also revealed that after watching the horror clips, the previously observed hyperconnectivity between the default mode network and the salience network was temporarily reduced. For a short period after the fear exposure, the connectivity in the brains of these participants with depression more closely resembled patterns seen in individuals without depression. This change was temporary, beginning to revert to baseline by the end of the post-exposure scan.
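In resting-state analyses of this kind, connectivity between two networks is commonly quantified as the correlation between their average activity time courses, computed separately for the scans before and after the manipulation. A simplified sketch with simulated signals (hypothetical values throughout):

```python
import numpy as np

rng = np.random.default_rng(3)
n_vols = 300  # number of volumes in a resting-state fMRI scan

# Simulated mean time courses for two networks; a shared component stands
# in for default mode / salience network coupling (illustration only).
shared = rng.standard_normal(n_vols)
dmn_pre = shared + 0.5 * rng.standard_normal(n_vols)
sal_pre = shared + 0.5 * rng.standard_normal(n_vols)

# After the manipulation, a weaker shared component models reduced coupling.
shared_post = rng.standard_normal(n_vols)
dmn_post = shared_post + 1.5 * rng.standard_normal(n_vols)
sal_post = shared_post + 1.5 * rng.standard_normal(n_vols)

# Connectivity as the Pearson correlation between network time courses.
r_pre = np.corrcoef(dmn_pre, sal_pre)[0, 1]
r_post = np.corrcoef(dmn_post, sal_post)[0, 1]
print(f"DMN-salience connectivity: pre {r_pre:.2f}, post {r_post:.2f}")
```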

Furthermore, the researchers found a direct link between these brain changes and the participants’ reported feelings. A greater reduction in the connectivity between the default mode network and salience network was correlated with higher ratings of enjoyment. Similarly, stronger activation in the ventromedial prefrontal cortex during the fear experience was associated with greater positive feelings after the experiment. These findings suggest that the controlled fear experience may have been engaging the brain’s emotion-regulation systems, momentarily shifting brain function away from patterns associated with rumination.

The authors acknowledge several limitations to their study. The research primarily included individuals with mild-to-moderate depression, so the findings may not apply to those with severe depression. The study was also unable to control for individual differences like prior exposure to horror media or co-occurring anxiety disorders, which could influence reactions. Another consideration is that a laboratory or controlled haunted house setting does not perfectly replicate how people experience recreational fear in the real world.

Additionally, the observed changes in brain connectivity were temporary, and the correlational design of the study means it cannot prove that the fear experience caused a change in mood, only that they are associated. The researchers also did not include a high-arousal, non-fearful control condition, such as watching thrilling action movie clips, making it difficult to say if the effects are specific to fear or to general emotional arousal.

Future research is needed to explore these findings further. Such studies could investigate a wider range of participants and fear stimuli, track individuals over a longer period to see if the neural changes have any lasting effects, and conduct randomized controlled trials to establish a causal link. Developing comprehensive safety protocols would be essential before any potential therapeutic application could be considered, as intense fear could be distressing for some vulnerable individuals.

The study, “Fear-Pleasure Paradox in Recreational Fear: Neural Correlates and Therapeutic Potential in Depression,” was published June 27, 2025.

New BDSM research reveals links between sexual roles, relationship hierarchy, and social standing

23 October 2025 at 00:00

A new study explores how sexual preferences for dominance and submission relate to an individual’s general position in society and their behavior toward others outside of intimate activity. The research found that a person’s tendency toward submission in everyday life is strongly connected to experiencing subordination within their partner relationship, as well as holding a lower social status and less education. These findings offer insight into the vulnerability of some practitioners of bondage and discipline, dominance and submission, sadism and masochism (BDSM), suggesting that interpersonal power dynamics are often consistent across life domains. The research was published in Deviant Behavior.

Researchers, led by Eva Jozifkova of Jan Evangelista Purkyně University, aimed to clarify the complex relationship between sexual arousal by power dynamics and a person’s hierarchical behavior in daily life. Previous academic work had established that a person’s dominant or submissive personality often aligns with their sexual preferences. However, it remained uncertain whether the hierarchical roles people enjoy in sex translated directly into their conduct with their long-term partner outside of the bedroom, or how they behaved generally toward people in their community.

Many people who practice BDSM often distinguish between the roles they adopt during sex and their roles in a long-term relationship. Some maintain a slight hierarchical difference in their relationships around the clock, while others strictly limit the power dynamic to sexual play. Given the variety of patterns, the researchers wanted to test several ideas about this alignment, ranging from the view that sexual hierarchy is merely playful and unrelated to daily life, to the perspective that sexual roles reflect a person’s consistent social rank.

The study sought to test whether an individual’s tendency to dominate or submit to others reflected their sexual preferences and their hierarchical arrangement with their partner. The concept being explored was whether a person’s position in the social world “coheres” with their position in intimate relationships and sexual behavior.

The researchers collected data using an online questionnaire distributed primarily through websites and social media forums geared toward practitioners of BDSM in the Czech Republic. The final analysis included data from 421 heterosexual and bisexual men and women who actively engaged in these practices with a partner.

Participants completed detailed questions about their socioeconomic status, education, age, and, importantly, their feelings of hierarchy during sexual encounters and in their ongoing partner relationships outside of sexual activity. To measure their general tendency toward submissiveness or dominance in daily life toward others, the researchers used a modified instrument called the Life Scale.

The Life Scale assessed an individual’s perceived hierarchical standing, based on how often they experienced feelings of subordination or felt their opinions were disregarded by others. The higher the score on this scale, the more submissive the person reported being in their interactions with people generally.

The researchers separated participants into groups based on their sexual arousal preference for dominance (Dominant), submissiveness (Submissive), both (called Switch), or neither (called Without). To analyze how these various factors affected the Life Scale score, a statistical method known as univariate analysis of variance was employed. This method allowed the researchers to examine the influence of multiple variables simultaneously on the reported level of submissiveness in everyday life.
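In practice, such an analysis amounts to a linear model with the Life Scale score as the outcome and the hierarchy-experience and demographic variables entered together as predictors. A hedged sketch with invented column names and values (not the authors’ code or data):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical slice of survey data; all names and values are made up.
df = pd.DataFrame({
    "life_scale": [12, 8, 15, 6, 11, 9, 14, 7],
    "sub_in_rel": [1, 0, 1, 0, 1, 0, 1, 0],   # subordinate to partner outside sex
    "dom_in_sex": [0, 1, 0, 1, 0, 1, 0, 1],   # feels dominant during sex
    "ses":        [2, 3, 1, 3, 2, 3, 1, 3],   # socioeconomic status band
    "education":  [2, 3, 1, 3, 2, 3, 2, 3],
})

# One outcome, several predictors examined simultaneously.
model = smf.ols(
    "life_scale ~ sub_in_rel + dom_in_sex + ses + education", data=df
).fit()

# The coefficients play the role of the reported unit shifts on the Life
# Scale (e.g., subordination in the relationship raising the score).
print(model.params)
```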

Analyzing the self-reported experiences of the participants, the study found a noticeable alignment between preferred sexual role and general relationship dynamics for many individuals. Among those who were sexually aroused by being dominant, 55 percent reported experiencing a feeling of superiority over their partner outside of sexual activity as well. Similarly, 46 percent of individuals sexually aroused by being submissive also experienced subordination in their relationship outside of sex. This shows that for nearly half of the sample, the preferred sexual role did extend partially into the non-sexual relationship.

For the group who reported being aroused by both dominance and submissiveness, the Switches, the pattern was different. A significant majority, 75 percent, reported experiencing both polarities during sexual activity. However, outside of sex, only 13 percent of Switches reported feeling both dominance and submissiveness in their relationship, while half of this group reported experiencing neither hierarchical feeling in the relationship. This suggests that the Switch group is less likely to carry hierarchical dynamics into their non-sexual partnership.

Experience of dominance and submission in sex was reported even by people who were not primarily aroused by hierarchy. More than half of those in the Without group, 60 percent, experienced such feelings during sex. Significantly, 75 percent of this group did not report feeling hierarchy in their relationship outside of sex.

In general, individuals who were aroused by only dominance or only submissiveness experienced the respective polarity they preferred more often in sex than in their relationships. The experience of the non-preferred, or opposite, polarity during sex and in relationships was infrequent for the Dominant and Submissive groups.

The main statistical findings emerged from the analysis linking these experiences to the Life Scale score, which measured submissiveness in interaction with all people, not just a partner. The final model revealed that several factors combined to predict higher levels of submissiveness in daily life.

Respondents who felt more submissive toward others were consistently those who reported experiencing subordination in their non-sexual relationship with their partner. This higher level of submissiveness was also observed in individuals who did not report feelings of superiority over their partner, either during sex or in the relationship generally.

Beyond partner dynamics, a person’s general social standing played a powerful role. Individuals who reported higher submissiveness toward others had lower socioeconomic status, lower education levels, and were younger than 55 years of age.

The effect of experiencing submissiveness in the partner relationship was particularly potent, increasing the measure of submissiveness toward other people by two and a half units on the Life Scale. Conversely, experiencing feelings of dominance in the relationship or during sex decreased the Life Scale score by about 1.4 to 1.5 units, indicating less submissiveness in daily life.

The researchers found that gender was not a decisive factor in predicting submissiveness in this model, suggesting that the underlying hierarchical patterns observed apply across both men and women in the sample. The findings overall supported the idea that a person’s hierarchical position in their intimate relationship is related to their hierarchical position in society, aligning with the “Social Rank Hypothesis” and the “Coherence Hypothesis” proposed by the authors. This means that, contrary to some popular notions, sex and relationship hierarchy do not typically function as a “compensation” for an individual’s status in the outside world.

The research points to the existence of a consistent behavioral pattern linked to tendencies toward dominance or submissiveness in interpersonal relationships that seems to be natural for some people. The researchers suggest that because power polarization in relationships and sex can be eroticizing, it should be practiced with consideration, especially given the observed link between submissiveness in relationships and lower social status in general. They stress the importance of moderation and maintaining a return to a non-polarized state, often referred to as aftercare, following intense sexual interactions.

The researchers acknowledged several limitations inherent in the study design. Since the data were collected solely through online platforms popular within the BDSM community, the sample may not fully represent all practitioners. People with limited internet access or older individuals may have been underrepresented. The Life Scale instrument, while simple and effective for an online survey, provides a basic assessment of hierarchical status, and future research could employ more extensive psychological measures.

Because the study focused exclusively on practitioners of BDSM, the researchers were unable to compare their level of general life submissiveness with individuals in the broader population who do not practice these sexual behaviors. Future studies should aim to include comparison groups from the general population to solidify the understanding of these personality patterns.

Despite these constraints, the results provide practical implications. The researchers suggest that simple questions about hierarchical feelings in sex and relationships can be useful in therapeutic settings to understand a client’s orientation and potentially predict their vulnerability to external pressures or relationship risk. The clear relationship observed between the Life Scale and social status highlights that submissive individuals may already face a great deal of pressure from society, pointing to the need for social support.

The study, “The Link Between Sexual Dominance Preference and Social Behavior in BDSM Sex Practitioners,” was authored by Eva Jozifkova, Marek Broul, Ivana Lovetinska, Jan Neugebauer, and Ivana Stolova.

A common cognitive bias is fueling distrust in election outcomes, according to new psychology research

22 October 2025 at 22:00

A new scientific paper suggests that a common, unconscious mental shortcut may partly explain why many people believe in election fraud. The research indicates that the order in which votes are reported can bias perceptions, making a legitimate late comeback by a candidate seem suspicious. This work was published in the journal Psychological Science.

The research was motivated by the false allegations of fraud that followed the 2020 United States presidential election. Previous work by political scientists and psychologists has identified several factors that contribute to these beliefs. For example, messages from political leaders can influence the views of their supporters. Another explanation is the “winner effect,” which suggests people are more likely to see an election as illegitimate if their preferred party loses.

Similarly, research on motivated reasoning highlights how a person’s desire to maintain a positive view of their political party can lead them to question an unfavorable outcome. Personality differences may also play a part, as some individuals are more predisposed to viewing events as the result of a conspiracy.

Against this backdrop, a team of researchers led by André Vaz of Ruhr University Bochum proposed that a more fundamental cognitive mechanism could also be at play. They investigated whether the sequential reporting of partial vote counts, a standard practice in news media, could inadvertently sow distrust. They theorized that beliefs in fraud might be fueled by a phenomenon known as the cumulative redundancy bias.

This bias describes how our impressions are shaped by the progression of a competition. When we repeatedly see one competitor in the lead, it creates a strong mental impression of their dominance. This has been observed in various contexts, including judgments of sports teams and stock market performance. The core idea is that the repeated observation of a competitor being ahead leaves a lasting impression on observers that is not entirely erased even when the final result shows they have lost. The human mind seems to struggle with discounting information once it has been processed.

The order in which information is presented can be arbitrary, like the order in which votes are counted, yet it can leave a lasting, skewed perception of the competitors. This was evident in the 2020 election in states like Georgia, where early-counted ballots often favored Donald Trump. This occurred in part because his supporters were more likely to vote in person, and those votes were often tallied first.

In contrast, ballots counted later tended to favor Joe Biden, as his voters made greater use of mail-in voting, and many counties counted those mail-in ballots last. Additionally, populous urban counties, which tend to be more Democratic, were often slower to report their results than more rural counties. This created a dramatic late shift in the lead, which the study’s authors suggest is a prime scenario for the cumulative redundancy bias to take effect.

To test this hypothesis, the scientists conducted a series of seven studies with participants from the United States and the United Kingdom. The first study tested whether the cumulative redundancy bias would appear in a simulated election. Participants watched the vote count for a school representative election between two fictional candidates, “Peter” and “Robert.” In both scenarios, Peter won by the same final margin. The only difference was the order of the count. In an “early-lead” condition, Peter took the lead from the beginning. In a “late-lead” condition, he trailed Robert until the very last ballots were counted.
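The key feature of this design is that both conditions contain the exact same ballots; only the counting order differs. A small simulation makes that concrete (candidate names from the study, vote totals invented):

```python
# One fixed set of ballots: the final margin is identical however they are
# ordered. 'P' is Peter, 'R' is Robert; the counts are invented.
ballots = ["P"] * 550 + ["R"] * 450

def running_leader(ordered_ballots, checkpoints=10):
    """Report who leads at evenly spaced points during the count."""
    leaders = []
    step = len(ordered_ballots) // checkpoints
    for i in range(step, len(ordered_ballots) + 1, step):
        counted = ordered_ballots[:i]
        p, r = counted.count("P"), counted.count("R")
        leaders.append("P" if p >= r else "R")
    return leaders

# Early-lead condition: Peter's ballots are counted disproportionately
# early, so he leads at every checkpoint ('P' sorts before 'R').
early = sorted(ballots)
# Late-lead condition: Robert's ballots come first; Peter only overtakes
# him near the very end of the count.
late = sorted(ballots, reverse=True)

print("early-lead leaders:", running_leader(early))
print("late-lead leaders: ", running_leader(late))
```

Both orderings end in the same 550 to 450 result, yet an observer watching the late-lead count sees Robert ahead for nearly the entire tally, which is precisely the impression the cumulative redundancy bias latches onto.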

The results showed that participants rated Peter more favorably and predicted he would be more successful in the future when he had an early lead. When Peter won with a late lead, participants actually rated the loser, Robert, as the better candidate.

The second study used the same setup but tested for perceptions of fraud. After the simulated vote count, participants were told that rumors of a rigged election had emerged. When the winner had secured a late lead, participants found it significantly more likely that the vote count had been manipulated and that the wrong candidate had won compared to when the winner had an early lead.

To make the simulation more realistic, a third study presented the vote counts as percentages, similar to how news outlets report them, instead of raw vote totals. The researchers found the same results. Observing a candidate come from behind to win late in the count made participants more suspicious of fraud.

The fourth study brought the experiment even closer to reality. The researchers used the actual vote-count progression from the 2020 presidential election in the state of Georgia, which showed a candidate trailing for most of the count before winning at the end. To avoid partisan bias, participants were told they were observing a recent election in an unnamed Eastern European country. One group saw the actual vote progression, where the eventual winner took the lead late. The other group saw the same data but in a reversed order, creating a scenario where the winner led from the start. Once again, participants who saw the candidate come from behind were more likely to believe the election was manipulated.

Building on this, the fifth study investigated if these fraud suspicions could arise even before the election was decided. Participants watched a vote count that stopped just before completion, at a point when one candidate had just overtaken the longtime leader. Participants were then asked how likely it was that the vote was being manipulated in favor of either candidate. In the scenario mirroring the 2020 Georgia count, people found it more likely that the election was being manipulated in favor of the candidate who just took the lead. In the reversed scenario, they found it more likely that the election was being manipulated in favor of the candidate who was losing their early lead.

During the actual 2020 election, officials and news commentators provided explanations for the shifting vote counts, such as differences in when urban and rural counties reported their results. The sixth study tested whether such explanations could reduce the bias. All participants saw the late-lead scenario, but one group was given an explanation for why the lead changed. The results showed that while the explanation reduced the belief in fraud, it did not eliminate it: even participants who were told why the lead had shifted remained significantly suspicious of the late comeback.

The final study addressed partisanship directly. American participants who identified as either Democrats or Republicans were shown a vote count explicitly labeled as being from the 2020 presidential election between Joe Biden and Donald Trump. As expected, political affiliation had a strong effect, with Republicans being more likely to suspect fraud in favor of Biden and Democrats being more likely to suspect fraud in favor of Trump.

However, the cumulative redundancy bias still had a clear impact. For both Republicans and Democrats, seeing Biden take a late lead increased suspicions of a pro-Biden manipulation compared to seeing a scenario where he led from the start. This suggests the cognitive bias operates independently of, and in addition to, partisan motivations.

The researchers note that their findings are based on participants recruited from an online platform and may not represent all populations. The studies also focus on the perception of vote counting, not on other potential election issues like voter registration or suppression. However, the consistent results across seven different experiments provide strong evidence that the way election results are communicated can unintentionally create distrust.

The authors suggest that the sequential reporting of vote counts could be revised to mitigate these effects. While simply waiting until all votes are counted could be one solution, they acknowledge that a lack of information might also breed suspicion. Better public education about vote counting procedures or the use of more advanced forecasting models that provide context beyond live totals could be alternative ways to present results without fueling false perceptions of fraud.

The study, “‘Stop the Count!’—How Reporting Partial Election Results Fuels Beliefs in Election Fraud,” was authored by André Vaz, Moritz Ingendahl, André Mata, and Hans Alves.

Scientists report the first molecular evidence connecting childhood intelligence to a longer life

22 October 2025 at 20:00

A new scientific analysis has uncovered a direct genetic link between higher cognitive function in childhood and a longer lifespan. The findings suggest that some of the same genetic factors influencing a child’s intelligence are also associated with how long they will live. This research, published in the peer-reviewed journal Genomic Psychiatry, offers the first molecular evidence connecting childhood intellect and longevity through shared genetic foundations.

For many years, scientists in a field known as cognitive epidemiology have observed a consistent pattern: children who score higher on intelligence tests tend to live longer. A major review of this phenomenon, which analyzed data from over one million people, found that for each standard-deviation increase in cognitive test scores in youth, there was a 24 percent lower risk of death over several decades. The reasons for this connection have long been a subject of debate, with questions about whether it was due to lifestyle, socioeconomic status, or some underlying biological factor.

Previous genetic studies have identified an association between cognitive function in adults and longevity. A problem with using adult data, however, is the possibility of reverse causation. Poor health in later life can negatively affect a person’s cognitive abilities and simultaneously shorten their life. This makes it difficult to determine if genes are linking intelligence to longevity, or if later-life health issues are simply confounding the results by impacting both traits at the same time.

To overcome this challenge, a team of researchers led by W. David Hill at the University of Edinburgh sought to examine the genetic relationship using intelligence data from childhood, long before adult health problems could become a complicating factor. Their goal was to see if the well-documented association between youthful intelligence and a long life had a basis in shared genetics. This approach would provide a cleaner look at any potential biological connections between the two traits.

The researchers did not collect new biological samples or test individuals directly. Instead, they performed a sophisticated statistical analysis of data from two very large existing genetic databases. They used summary results from a genome-wide association study on childhood cognitive function, which contained genetic information from 12,441 individuals. This type of study scans the entire genetic code of many people to find tiny variations associated with a particular trait.

They then took this information and compared it to data from another genome-wide association study focused on longevity. This second dataset was much larger, containing genetic information related to the lifespan of the parents of 389,166 people. By applying a technique called linkage disequilibrium score regression, the scientists were able to estimate the extent to which the same genetic variants were associated with both childhood intelligence and a long life.
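In rough outline, cross-trait LD score regression works by regressing the product of the two studies' association z-scores at each genetic variant on that variant's LD score: the slope reflects the genetic covariance, which is then scaled by each trait's heritability to yield the genetic correlation. The following Python sketch simulates this idea with synthetic data; the SNP count, sample sizes, and heritabilities are invented for illustration, and the real LDSC software uses a weighted regression with additional corrections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative settings (not the study's real values).
M = 50_000                  # number of genetic variants (SNPs)
N1, N2 = 12_000, 390_000    # sample sizes of the two GWAS
h2_1, h2_2 = 0.25, 0.12     # assumed SNP heritabilities of the two traits
rg_true = 0.35              # true genetic correlation we try to recover

rho_g = rg_true * np.sqrt(h2_1 * h2_2)   # genetic covariance

# LD scores: how strongly each variant tags its neighbours.
ld = rng.gamma(shape=2.0, scale=30.0, size=M)

# Cross-trait LDSC model: E[z1 * z2] = sqrt(N1 * N2) * rho_g * ld / M.
expected = np.sqrt(N1 * N2) * rho_g * ld / M
z1z2 = expected + rng.normal(scale=2.0, size=M)   # add sampling noise

# Regress the z-score products on LD scores to recover the slope.
slope, intercept = np.polyfit(ld, z1z2, deg=1)
rho_g_hat = slope * M / np.sqrt(N1 * N2)
rg_hat = rho_g_hat / np.sqrt(h2_1 * h2_2)

print(f"estimated genetic correlation: {rg_hat:.2f} (true value {rg_true})")
```

The key property this sketch shows is that variants tagging more of the genome carry proportionally larger products of association statistics when the two traits share genetic influences, which lets the regression slope separate shared genetic signal from noise.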

The analysis revealed a positive and statistically significant genetic correlation between childhood cognitive function and parental longevity. The correlation estimate was 0.35, which indicates a moderate overlap in the genetic influences on both traits. This result provides strong evidence that the connection between being a brighter child and living a longer life is, at least in part, explained by a shared genetic architecture. The same genes that contribute to higher intelligence in youth appear to also contribute to a longer lifespan.

The researchers explain that this shared genetic influence, a concept known as pleiotropy, could operate in a few different ways. The presence of a genetic correlation is consistent with multiple biological models, and the methods used in this study cannot definitively separate them. One possible explanation falls under a model of horizontal pleiotropy, where a set of genes independently affects both brain development and bodily health.

This idea supports what some scientists call the “system integrity” hypothesis. According to this view, certain genetic makeups produce a human system, both brain and body, that is inherently more robust. Such a system would be better at withstanding environmental challenges and the wear and tear of aging, leading to both better cognitive performance and greater longevity.

Another possibility is a model of vertical pleiotropy. In this scenario, the genetic link is more like a causal chain of events. Genes primarily influence childhood cognitive function. Higher cognitive function then enables individuals to make choices and navigate environments that are more conducive to good health and a long life. For example, higher intelligence is linked to achieving more education, which in turn is associated with better occupations, greater health literacy, and healthier behaviors, all of which promote longevity.

A limitation of this work is its inability to distinguish between these different potential mechanisms. The study confirms that a genetic overlap exists, but it does not tell us exactly how that overlap functions biologically. The research identifies an average shared genetic effect across the genome. It does not provide information about which specific genes or biological pathways are responsible for this link. Additional work is needed to identify the precise regions of the genome that drive this genetic correlation between early-life cognitive function and how long a person lives.

The study, “Shared genetic etiology between childhood cognitive function and longevity,” was authored by W. David Hill and Ian J. Deary.
