Reading view

Maternal depression’s link to child outcomes is strongest when maternal ADHD symptoms are also high

A new study suggests that when mothers experience both depressive symptoms and symptoms of attention-deficit/hyperactivity disorder, their two-year-old children may face a heightened risk of developing their own depressive symptoms and attention difficulties. The combination of these maternal conditions appears to create a compounded challenge for early child development. The findings were published in the journal Research on Child and Adolescent Psychopathology.

Researchers have long understood that a mother’s mental health can influence her child’s development. Conditions like depression and ADHD have been studied independently, with each showing links to certain challenges in parenting and child outcomes. However, these two conditions frequently occur together in individuals, creating a more complex set of difficulties. The combined impact of these co-occurring symptoms on very young children has not been well explored.

This gap in knowledge prompted the study led by Michal Levy and a team of researchers at Ben-Gurion University of the Negev in Israel. They wanted to understand how maternal depression and ADHD symptoms might jointly predict a child’s emotional and attentional development. The researchers focused on the period from pregnancy through the first two years of a child’s life. This early stage is a time of rapid brain growth and development, where a child is highly dependent on caregivers for emotional regulation and support, making it a particularly sensitive period.

To investigate this, the researchers conducted a longitudinal study, following a group of families over an extended period. The study began with 156 mothers and their children, who were recruited during the second trimester of pregnancy. Data was collected at three different times: during pregnancy, when the infants were three months old, and again when the children reached two years of age. This multi-wave approach allowed the researchers to track how symptoms and behaviors changed over time.

During the pregnancy assessment, mothers completed questionnaires to report on their symptoms of ADHD. They also reported on their own depressive symptoms at all three time points: during pregnancy, at three months postpartum, and at the two-year follow-up. When the children were two years old, their development was assessed in two ways. First, mothers filled out a standardized checklist to report on any depressive symptoms their child might be exhibiting, such as sadness, irritability, or loss of interest in play.

Second, the children’s ability to sustain attention was measured directly through a structured play session. Each two-year-old was brought into a lab setting and given a set of colorful blocks to play with independently for up to four minutes. An experimenter was present but did not interact with the child. These play sessions were video-recorded. Later, trained research assistants watched the recordings and coded the child’s level of focused attention in five-second intervals. High attention was marked by a steady gaze and active, engaged play with the blocks, while low attention was noted by off-task glances and passive handling of the toys.
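
To make the coding procedure concrete, here is a minimal sketch of how interval codes might be aggregated into a single attention score. The binary coding scheme, interval count, and variable names are illustrative assumptions rather than details reported in the study, which may have used a graded rating instead.

```python
import numpy as np

rng = np.random.default_rng(1)

# One 4-minute session yields up to 48 five-second intervals (240 s / 5 s).
# Hypothetical codes: 1 = focused attention (steady gaze, engaged play),
# 0 = off-task glances or passive handling of the toys.
codes = rng.integers(0, 2, size=48)

# A simple summary: the proportion of intervals spent in focused attention.
focused_attention = codes.mean()
print(f"Proportion of intervals coded as focused: {focused_attention:.2f}")
```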

The analysis of the data revealed a complex interplay between the two maternal conditions. The most significant developmental difficulties in children at age two were seen when mothers reported high levels of both ADHD symptoms and depressive symptoms. The findings showed that a mother’s depressive symptoms were associated with worse outcomes for her child, but primarily when her ADHD symptoms were also elevated.

Specifically, the researchers found that higher maternal depressive symptoms at three months after birth were associated with more depressive symptoms in their two-year-old children. However, this connection was only statistically significant for mothers who also had moderate to high levels of ADHD symptoms. Among mothers with low levels of ADHD symptoms, depression did not show a significant link to their children’s depressive symptoms.

A similar pattern emerged when looking at the children’s focused attention. The study found that a mother’s depressive symptoms at three months were linked to lower focused attention in her two-year-old during the block-playing task. Again, this relationship was only present when the mother reported high levels of ADHD symptoms. The presence of both conditions in the mother appeared to create a dual risk factor that amplified the potential for challenges in the child.

The study also noted that depressive symptoms reported by mothers at three months postpartum were a stronger predictor of child outcomes than depressive symptoms reported during pregnancy. The authors suggest that this may point to the importance of the postnatal caregiving environment. After a child is born, disruptions in mother-child interactions caused by maternal mental health challenges may have a more direct effect on a child’s emerging emotional and attentional skills.

The researchers acknowledge some limitations in their work. The assessment of children’s depressive symptoms was based on reports from their mothers, which could be influenced by the mothers’ own mental state. Future research could benefit from including observations from other caregivers or clinicians to get a more comprehensive picture of the child’s emotional state.

Additionally, the study did not directly measure parenting behaviors. While it is likely that the combination of maternal depression and ADHD affects children through disruptions in parenting, such as inconsistent routines or reduced emotional availability, this study did not observe those mechanisms. Future studies could include observations of parent-child interactions to better understand how these maternal symptoms translate into behaviors that shape child development. Finally, maternal ADHD symptoms were only measured once, during pregnancy.

Despite these limitations, the research provides important insights into the compounded risks associated with co-occurring maternal mental health conditions. The findings suggest that the combination of maternal depression and ADHD symptoms may create a uniquely challenging environment for a young child. This highlights a need for more integrated approaches to maternal mental health screening and support, recognizing that addressing one condition without considering the other may not be enough to promote optimal child development.

The study, “The Interplay between Maternal Depression and ADHD Symptoms in Predicting Emotional and Attentional Functioning in Toddlerhood,” was authored by Michal Levy, Andrea Berger, Alisa Egotubov, Avigail Gordon-Hacker, Eyal Sheiner, and Noa Gueron-Sela.

For young Republicans and men, fear of mass shootings fuels opposition to gun control

A new study suggests that while a majority of young American adults worry about mass shootings, their shared fear does not unite them on the issue of gun control. Instead, for certain groups, higher levels of fear are linked to stronger opposition to firearm restrictions, a finding that complicates predictions about the nation’s future gun policy. The research was published in the journal Social Science Quarterly.

“This is a generation of people who live with significant fear and anxiety over mass violence,” said senior author Jillian Turanovic, associate professor of sociology. “But we found that those shared fears do not unite them in attitudes on gun policy. In fact, they polarize them.”

The researchers sought to investigate a common assumption about the generation of Americans aged 18 to 29. Often called the “massacre generation,” these emerging adults grew up in an era defined by high-profile school shootings and constant media coverage of mass violence. Given these formative experiences, many observers have predicted that as this generation gains political power, they will form a unified front in favor of stricter gun legislation. The research team wanted to examine if this belief held up to scrutiny, or if shared anxiety over mass violence might produce more complex and even contradictory outcomes.

To explore this question, the scientists conducted a survey of 1,674 emerging adults from all 50 states in May 2023. The survey was designed to measure participants’ attitudes toward gun control by asking their level of agreement with statements about firearm access, such as whether owning more guns enhances safety or if guns should be allowed on college campuses.

Separately, the survey assessed their fear of mass shootings by asking how much they worried about an attack occurring in different public settings, including schools, shopping malls, and large events. The researchers then used statistical analysis to determine the relationship between fear and gun policy sentiment, while also accounting for other factors like political affiliation, gender, race, education, and personal experiences with crime.
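
As an illustration of the kind of analysis described here, the sketch below fits an ordinary least squares regression with a fear-by-party interaction term on simulated data. All variable names, effect sizes, and the choice of statsmodels are assumptions for demonstration; the study’s actual model and covariates may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1674  # sample size reported in the article

# Simulated survey data; the effect sizes here are arbitrary.
df = pd.DataFrame({
    "fear": rng.normal(0, 1, n),          # fear of mass shootings
    "republican": rng.integers(0, 2, n),  # 1 = Republican/conservative
    "male": rng.integers(0, 2, n),        # example control variable
})
df["gun_control_support"] = (
    0.2 * df["fear"]
    - 0.5 * df["republican"]
    - 0.4 * df["fear"] * df["republican"]  # fear lowers support only among Republicans
    + rng.normal(0, 1, n)
)

# The interaction term (fear:republican) tests whether the fear-support
# slope differs by political identity, holding the control constant.
model = smf.ols("gun_control_support ~ fear * republican + male", data=df).fit()
print(model.params)
```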

The study confirmed that fear of mass shootings is widespread among this age group, with more than 60 percent of respondents reporting that they worry a mass shooting will affect their lives. In general, the researchers found a modest connection between higher levels of fear and greater support for gun control policies. This overall trend, however, masked deep divisions within the generation.

When the researchers analyzed the data by political identity, a starkly different pattern emerged. For young adults who identified as Republicans or conservatives, experiencing greater fear of mass shootings was associated with less support for gun control. This finding suggests that for these individuals, the fear of violence may reinforce a belief in armed self-defense, often described as the “good guy with a gun” perspective, rather than a desire for more government regulation of firearms.

A similar polarizing effect was observed among young men. While men and women with low levels of fear had similar views on gun policy, the gap between them widened as fear increased. Among young men, higher levels of fear were connected to increased opposition to gun restrictions. This may reflect cultural ideas that link masculinity with the roles of protector and provider, where owning a firearm is seen as a tool for ensuring personal and family safety.

The researchers also looked at whether the relationship between fear and gun attitudes differed by region. They found an unexpected pattern in the Northeast. In contrast to other parts of the country where fear tended to increase support for gun control, in the Northeast, higher levels of fear were associated with a slight decrease in support for such policies. The authors speculate this could be because some of the nation’s strictest gun laws are already in place in the Northeast, and high-profile attacks in the region may lead some residents to question the effectiveness of these laws.

The study did not find that race, ethnicity, or educational attainment significantly altered the relationship between fear of mass shootings and views on gun control. This indicates that political ideology and gender may be more powerful drivers of gun policy attitudes within this generation, at least when it comes to responding to the threat of mass violence.

The authors note some limits to their work. The survey provides a snapshot in time and cannot establish whether fear directly causes a shift in policy attitudes or if pre-existing attitudes shape how individuals react to fear. Because the sample, while diverse, was not perfectly representative of all young adults in the U.S., the findings should be seen as exploratory.

Future research could track individuals over time to better understand how their views evolve, particularly after they experience a mass shooting event in their community. Additional studies could also examine a broader range of specific gun policies, such as waiting periods or red flag laws, to get a more detailed picture of young adults’ preferences.

Ultimately, the research indicates that the political future of gun legislation is not as straightforward as some might assume. The shared experience of growing up under the shadow of mass shootings does not automatically create a consensus on solutions. For policymakers and advocates, these findings suggest that addressing gun violence will require acknowledging the deep-seated ideological divides that persist even within America’s youngest generation of voters.

The study, “Fear of Mass Shootings and Gun Control Sentiment: A Study of Emerging Adults in Contemporary America,” was authored by Jillian J. Turanovic, Kristin M. Lloyd, and Antonia La Tosa.

A major psychology study finds the U-shape of happiness has been turned on its head

For years, studies from around the world examining happiness across the lifespan have found a U-shape: happiness falls from a high point in youth, and then rises again after middle age. This has been mirrored in studies of unhappiness, which show a peak in middle age and a decline thereafter.

Our new research on ill-being, based on data from 44 countries including the US and UK, shows this established pattern has changed. We now see a peak of unhappiness among the young, which then declines with age. The change isn’t due to middle-aged and older people getting happier, but to a deterioration in young people’s mental health.

A closer look at data from the US shows this clearly. We used publicly available health survey data, covering more than 400,000 people each year, to identify the percentage of people in the US in despair between 1993 and 2024. We defined people as being in despair if they reported that their mental health was not good on every one of the 30 days preceding the survey.

Across most of the period, among both men and women, levels of despair were highest among the oldest age group (45-70) and higher for the middle-aged (25-44) than the young (18-24). However, the percentage of young people in despair has risen rapidly. It’s more than doubled for men, from 2.5% in 1993 to 6.6% in 2024, and almost trebled for women – from 3.2% to 9.3%.

Despair also rose markedly among the middle-aged, but less rapidly. It’s gone up from 4.2% to 8.5% for women and from 3.1% to 6.9% for men. The percentage of older men and women in despair rose only a little over the period.

As a result, by 2023-24, relative levels of despair across age groups had reversed for women. The youngest age group has the highest levels of despair, and the oldest age group the lowest. For men, the level of despair was similar for the youngest and middle-aged groups, and lowest for the oldest age group.

These trends have resulted in a very different relationship between age and ill-being over time in the US.

Between 2009 and 2018, despair is hump-shaped in age. However, the rapid rise in despair before the age of 45, and especially before the mid-20s, has fundamentally changed the lifecycle profile of despair. This means that the hump-shape is no longer apparent between 2019 and 2024.

Despair rose the most for the youngest group but also rose for those up to age 45; it remained unchanged for those aged over 45.

Our study found similar trends for Britain, based on analyses of despair in the UK Household Longitudinal Survey and anxiety in the Annual Population Survey. It also shows that the percentage in despair declines with age in another 42 countries between 2020 and 2025, based on analyses of data from the Global Minds Project.

Investigating causes

Research into the reasons for these changes is underway but remains inconclusive. The growth in despair predates the COVID pandemic by a number of years, although COVID may have contributed to an increasing rate of deterioration in young people’s mental health.

There is a growing body of evidence that identifies a link between the rise in ill-being of the young and heavy use of the internet and smartphones. Some research suggests that smartphone use is indeed a cause of worsening youth mental health. Research that limited access to smartphones found significant improvements in adults’ self-reported wellbeing.

However, even if screen time is a contributory factor, it is unlikely to be the sole or even the chief reason for the rising despair among the young. Our very recent research, which has not yet been peer-reviewed, points to a reduction in the power of paid work to protect young people from poor mental health. While young people in paid work tend to have better mental health than those who are unemployed or unable to work, the gap has been closing recently as despair among young workers rises.

Although the causes of the changes we describe have yet to be fully understood, it would be prudent for policymakers to place the issue of rising despair among young people at the heart of any wellbeing strategy.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Two weeks of paternity leave linked to improved child development

A study published in the Journal of Marriage and Family examined the connection between fathers taking paternity leave and the developmental progress of their young children in Singapore. The researchers found that when fathers took two weeks or more of paternity leave, it was associated with increased involvement in childcare, stronger father-child bonds, and improved family dynamics. These factors, in turn, were linked to better academic performance and fewer behavioral challenges in children as they grew from preschool into early primary school.

Previous research, mostly from Western countries, has found that paternity leave was connected to fathers being more involved in childcare and to stronger family ties. However, there was less understanding of how this policy directly influenced the development of young children, especially over a longer period. This gap in knowledge was particularly notable in Asian societies, where paternity leave policies are often newer and offer shorter durations compared to European nations.

In Asia, many regions have only recently introduced paternity leave policies, or do not have them at all, and the leave available to fathers is generally shorter than in Europe. Some countries offer only a few days, although others, like South Korea and Japan, have expanded leave to up to a year.

“Many Asian societies, including Singapore, are facing the challenges of raising fertility rates and the related issues of gender inequality within the family. Some western governments (especially Nordic countries) had introduced longer parental leave to alleviate parents’ work-life conflict and encourage fathers’ participation in childcare decades ago,” said study author Wei-Jun Jean Yeung, a professor and chair of the Family, Children, and Youth Research Cluster at the National University of Singapore.

“In Asian countries, while maternity leave has been widely provided, paternity leave is either relatively short compared to Nordic countries, or non-existing. We believe paternity leave is very important because it helps fathers build stronger bonds with their children and improve couples’ relationships, which could indirectly reduce gender inequality and potentially affect couples’ intention to have a child.”

“However, no study has comprehensively examined how paternity leave affects family relationships and early childhood development. This gap led us to start our research on the topic. This paper is our second study, following our first one published in 2022. We believe the results will be useful for Singapore and other Asian countries, particularly East Asian countries such as South Korea, Japan, and China, which also shares more prevalent patriarchal norms and ‘ultra-low’ fertility levels.”

The research was guided by two main theoretical perspectives: family systems theory and social capital theory. Family systems theory suggests that a family operates as a connected unit, where the actions and experiences of one member, such as a father’s involvement in childcare, can influence other parts of the family, including children’s development and the relationships between parents.

Social capital theory posits that strong relationships and bonds within a family, such as those between parents and children, contribute positively to a child’s development. Paternity leave is seen as a way to enhance this family social capital by giving fathers time to become more competent and involved caregivers.

The researchers analyzed data from the Singapore Longitudinal Early Development Study (SG-LEADS), which collected information from a large, representative sample of Singaporean children and their primary caregivers in two waves: 2018/2019 and 2021. The study focused on children who were born after May 1, 2013, which is when Singapore’s paternity leave policy began.

The final sample included 3,895 children who lived with two parents and whose primary caregiver was their mother. For analyses focusing on developmental outcomes, the sample was further narrowed to children aged three and above who had reported data on both behavioral problems and academic achievements in both waves.

To measure children’s development, the study used the Children’s Behavior Problems Index (BPI) for children aged three and above, which assesses externalizing behaviors like aggression and internalizing behaviors like anxiety. Academic achievements were measured using test scores for letter-word identification and applied problems from the Woodcock-Johnson Test of Achievement. The key independent variable was paternity leave-taking, categorized based on whether fathers took no leave, one week of leave, or two weeks or more of leave, as reported by the mothers.

The researchers also examined several factors as potential intermediaries. Fathers’ involvement was measured by mothers’ reports of how much fathers participated in childcare activities like bathing, changing diapers, and playing. Father-child closeness was assessed by mothers’ statements about how close their child felt to their father. Family dynamics was a broader concept encompassing family conflict, marital satisfaction, and parenting aggravation, all reported by mothers.

The results showed that taking two weeks or more of paternity leave was associated with higher scores in children’s letter-word identification when they were three to six years old, and again when they were five to eight years old. This suggests a direct and lasting benefit for verbal skills.

For children’s applied problems, which measure numeracy skills, taking two weeks or more of leave was positively related to scores when children were three to six years old. Taking one week of leave was linked to better applied problems scores when children were five to eight years old, after accounting for earlier scores. This indicates some direct benefits for numerical abilities as well.

The researchers also found positive connections between paternity leave and the intermediary factors. Specifically, taking two weeks or more of paternity leave was linked to greater fathers’ involvement in childcare activities, stronger father-child closeness, and more positive family dynamics.

Fathers’ involvement, in turn, was positively related to father-child closeness, and both of these were associated with better family dynamics. While fathers’ involvement and father-child closeness did not directly influence children’s verbal academic scores, father-child closeness was directly related to children’s applied problems scores when they were three to six years old.

For children’s behavioral outcomes, paternity leave did not have a direct effect. Instead, its impact was entirely indirect. Taking two weeks or more of paternity leave was associated with fewer behavioral problems in children when they were three to six years old, and also later when they were five to eight years old, primarily through improved family dynamics. This suggests that paternity leave helps reduce children’s behavioral challenges by fostering a more supportive and cohesive family environment.

“Paternity leave is good for family relations and for children’s development,” Yeung told PsyPost. “It has the potential to improve spousal relations and parent-child relation. Our results show that 2 weeks or longer paternity leave was linked to greater fathers’ involvement in childcare, closer father-child relationships, and enhanced family dynamics (i.e, family members have fewer conflicts, mothers have higher marital satisfaction and feel less stressed about raising children). It can also have long-term benefits for children’s cognitive development and social-emotional well-being during early childhood.

“However, paternity leave should be at least two weeks or longer. We found one-week paternity leave does not have a positive impact on family dynamics and child development. It is possible that one week is too short for fathers to build a routine, learn the many new skills needed to care for a baby, and figure out how to work together with the mother. Two weeks gives fathers and mothers more time to adjust emotionally and practically, and to enjoy time with their new baby.”

“We should encourage countries to provide government-subsidized paternity leave that is at least two weeks long, and enable fathers to take paternity leave, because of its potential benefits to family and child well-being.”

The researchers controlled for a range of other influences, such as parents’ education, income, age, children’s age and gender, and household living arrangements, including the presence of domestic helpers or grandparents.

“A common misinterpretation of the results is that fathers who are more likely to take paternity leave are of higher socioeconomic status (SES), and it is the higher SES that makes their children do better cognitively and behaviorally,” Yeung said. “In our study, we have used rigorous methodology to address this selectivity issue, including using data from a nationally representative longitudinal study and taking into account a large number of parents’ and family characteristics to “isolate” the net impact of paternity leave taking on children’s developmental outcomes. ”

But there are still some limitations to consider. The study did not have information on fathers’ gender attitudes or their involvement before the child’s birth, which could influence their decision to take leave and their subsequent parenting behaviors. The measures for fathers’ involvement and family relationships were based on mothers’ reports, which might introduce some bias.

Future research could benefit from including perspectives from both parents. The measure of fathers’ involvement could also be expanded to include engagement in children’s educational and social activities more broadly. The researchers also acknowledge that while they used robust methods to account for pre-existing differences between fathers who took leave and those who did not, it cannot definitively prove a causal link due to the potential for unmeasured factors to play a role.

The study, “Paternity leave-taking and early childhood development: A longitudinal analysis in Singapore,” was authored by Nanxun Li and Wei-Jun Jean Yeung.

Dark personality traits are linked to poorer family functioning

A new study has found that young adults who exhibit higher levels of manipulative, self-centered, and callous personality traits tend to report having lower-quality family interactions. The research, published in the Journal of Professional & Applied Psychology, suggests a distinct connection between these so-called “Dark Triad” traits and the health of family dynamics.

Researchers have long been interested in how personality develops, often focusing on widely recognized models of personality. Recently, attention has shifted toward understanding the less socially desirable aspects of human nature, collectively known as the Dark Triad, which includes Machiavellianism, narcissism, and psychopathy. These traits are associated with behaviors that can strain social bonds, yet their specific impact within the family unit has been a less explored area.

The study’s authors wanted to examine this connection in a specific cultural and demographic context. They focused on young adults in Pakistan, a country where a large portion of the population falls within the 18 to 25 age range. This period is a formative time when an individual’s personality and perspective are still evolving, heavily influenced by their immediate environment, especially the family. By investigating this group, the researchers aimed to add a non-Western perspective to a field of study that has predominantly been centered on European and North American populations.

“The motivation for this study stemmed from the fact that this area remains largely understudied in Pakistan, leaving a significant research gap,” said study author Quratul Ain Arshad, who is currently a Bachelor of Laws student at the University of London.

“This topic represents a real-world issue that has not received the attention it deserves. I have personally observed several families affected by these dark traits, struggling to cope due to a lack of awareness and understanding. Through this research, I aimed to shed light on this issue so that individuals can better recognize what is happening to them and those around them and seek the help and guidance they need.”

To conduct their investigation, the researchers recruited a sample of 300 young adults between the ages of 18 and 25 from various universities and corporate offices in Lahore, Pakistan. Participation was voluntary, and the confidentiality of the responses was protected. Each participant completed two self-report questionnaires designed to measure different psychological constructs.

The first questionnaire was the Short Dark Triad scale, which assesses the three core traits. Machiavellianism is characterized by a manipulative and cynical worldview, narcissism involves a sense of grandiosity and entitlement, and psychopathy is marked by impulsivity and a lack of empathy. The second questionnaire was a modified version of the Family Assessment Device, which measures the quality of family interactions across several dimensions. These dimensions include problem solving, communication, assigned roles, emotional responsiveness, emotional involvement, and behavior control.

After collecting the data, the research team performed a statistical analysis to determine if there was a relationship between the scores for Dark Triad traits and the scores for family functioning. This type of analysis reveals whether two variables tend to move together, either in the same direction or in opposite directions. The study specifically tested four hypotheses about these potential connections.
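
For readers unfamiliar with this kind of analysis, the sketch below computes a Pearson correlation between two sets of questionnaire scores on simulated data. The variable names and the negative relationship built into the simulation are illustrative assumptions, not the study’s data.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 300  # sample size reported in the article

# Simulated total scores: higher Dark Triad scores paired with
# (noisily) lower family-functioning scores.
dark_triad = rng.normal(3.0, 0.6, n)
family_functioning = 4.0 - 0.4 * dark_triad + rng.normal(0, 0.5, n)

r, p = pearsonr(dark_triad, family_functioning)
# A negative r means the two variables tend to move in opposite directions.
print(f"r = {r:.2f}, p = {p:.3f}")
```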

The primary finding confirmed the researchers’ main prediction. There was a clear negative relationship between overall scores on the Dark Triad scale and the overall quality of family interaction. This indicates that as an individual’s levels of Machiavellianism, narcissism, and psychopathy increased, their reported level of healthy family functioning tended to decrease. This suggests that these aversive personality traits are indeed connected to difficulties within the family environment.

When the researchers examined the traits individually, the results were more nuanced. The connection between Machiavellianism and a family’s general functioning was found to be very weak and not statistically meaningful. This suggests that a person’s tendency toward manipulation may not have a direct, measurable link to their perception of the family’s overall effectiveness.

A different pattern emerged for psychopathy. This trait was found to have a modest but statistically significant negative relationship with what is known as “affective responsiveness,” which is a family’s capacity to respond to situations with appropriate emotions. In simple terms, young adults with higher psychopathy scores were more likely to come from families they perceived as being less emotionally attuned.

The final hypothesis looked at the link between narcissism and “affective involvement,” which refers to the extent to which family members show interest and care for one another. Much like the finding for Machiavellianism, this connection was also very weak and not considered statistically significant. This outcome suggests that a person’s level of narcissism may not be directly tied to the degree of emotional investment they perceive within their family.

“The key takeaway from this study is the importance of self-awareness,” Arshad told PsyPost. “Every individual should strive to understand their own personality traits and reflect on their behaviors. By doing so, they can not only improve themselves but also better support those around them who may exhibit these traits.”

The study did have some limitations. The findings are based on self-report questionnaires, which means participants’ responses could have been influenced by a desire to present themselves or their families in a positive light. The sample was also drawn exclusively from one city in Pakistan and was limited to young adults, which means the results might not be generalizable to other age groups or cultures.

For future research, the authors suggest that longitudinal studies, which follow individuals over a long period, could provide deeper insight into how Dark Triad traits and family dynamics influence each other over time. Using multiple methods of assessment, beyond just self-reports, could also help create a more complete picture of these complex interactions. Such work could help in designing interventions aimed at improving family relationships and promoting healthier personality development.

“The size of the sample used in this study is not big enough to represent the total young adult population in Pakistan, but this study is significant in understanding how these traits shape interactions on a microlevel,” Arshad said. “The effect of this study is such that it will help researchers dig towards the developmental aspects of these traits and also conduct longitudinal studies in future to understand the implications of the Dark Triad traits in both older and younger populations than young adults.”

The study, “The Relationship Between Dark Triad and Quality of Family Interaction among Young Adults,” was authored by Quratul Ain Arshad, Uzma Ashiq, and Khadija Malik.

Emotional intelligence predicts success in student teamwork

A new study has found that a student team’s collective emotional intelligence is a significant predictor of its success in collaborative problem-solving. Specifically, the abilities to understand and manage emotions were linked to both better teamwork processes and a higher quality final product. The findings, which also examined the role of personality, were published in the Journal of Intelligence.

While individual intelligence and personality traits like conscientiousness are known to predict individual success, much less is understood about what drives performance when students are required to work together in teams. This form of learning, known as collaborative problem solving, is increasingly common in modern education, prompting a need to identify the skills and dispositions that help groups succeed.

The study’s authors aimed to investigate how two sets of characteristics, emotional intelligence and the Big Five personality traits, might influence the performance of high school students working in small groups.

“This study was actually part of a larger project, called PEERSolvers, in which we were looking for scientifically supported ways to enhance the quality of students’ collaborative problem solving,” said study author Ana Altaras, a full professor in the Department of Psychology at the University of Belgrade.

“This naturally led us to explore the role played by emotional intelligence and personality in student collaborations. Having previously conducted two systematic reviews (Altaras et al., 2025; Jolić Marjanović et al., 2024), we knew that both emotional intelligence and the Big Five personality traits indeed act as ‘deep-level composition variables’ shaping the processes and outcomes of teamwork in higher-education and professional contexts.”

“We also knew that both variable sets contribute to the prediction of individual students’ school performance. However, we also saw an obvious research gap when it comes to exploring their joint effects on the performance of student teams in high school. Hence, we digged into this topic.”

The researchers recruited 162 tenth-grade students from twelve secondary schools. The students first completed assessments to measure their emotional intelligence and personality. Emotional intelligence was evaluated using the Mayer-Salovey-Caruso Emotional Intelligence Test, a performance-based test that measures a person’s actual ability to perceive, use, understand, and manage emotions. Personality was assessed with the Big Five Inventory, a questionnaire that measures neuroticism, extraversion, openness, agreeableness, and conscientiousness.

Following the initial assessments, the students were organized into 54 teams of three. Each team was then tasked with solving a complex social problem over a 2.5-hour session. The problems were open-ended and required creative thinking, covering topics such as regulating adolescent media use or balancing economic development with ecological protection. The entire collaborative session for each team was video-recorded, and each team submitted a final written solution.

Trained observers analyzed the video recordings to rate the quality of each team’s collaborative processes. They assessed four distinct aspects of teamwork: the exchange of ideas and information, the emotional atmosphere and level of respect, how the team managed its tasks and time, and how it managed interpersonal relationships and conflicts. In a separate analysis, a different set of evaluators rated the quality of the team’s final written solution based on criteria like realism, creativity, and the strength of its arguments.

The researchers found that emotional intelligence was a strong predictor of team performance. Teams with higher average scores in understanding and managing emotions showed superior teamwork processes. This improvement in collaboration, in turn, was associated with producing a better final solution. The ability to understand emotions also appeared to have a direct positive effect on the quality of the written solution. This suggests that knowledge about human emotions was directly applicable to solving the complex social problems presented in the task.

“Looking at the results of our study, emotional intelligence–particularly its ‘strategic branches’ or the ability to understand and manage emotions–had a lot to do with students’ performance in collaborative problem solving,” Altaras told PsyPost. “Student teams with higher team-average emotional intelligence engaged in a more constructive exchange of ideas, had a friendlier way of communicating, and were more efficient in managing both task and relationship-related challenges throughout the problem-solving process. Ultimately, these teams also came up with better solutions to the problems at hand. In sum, students’ emotional intelligence seems to contribute substantially to the quality of their collaborative problem solving.”

The role of personality traits was more nuanced and produced some unexpected results. As expected, the personality trait of openness to experience was positively associated with the quality of the final solution. This connection is likely due to the creative and open-ended nature of the problem-solving task.

But teams with a higher average level of neuroticism, a trait associated with anxiety and stress, were actually better at managing their tasks. The researchers propose that a tendency toward distress may have prompted these teams to plan their approach more diligently. In contrast, teams with higher average extraversion were less effective at relationship management, perhaps because they were less inclined to formally address group tensions.

“Contrary to our expectations, we found only few statistically significant associations between the Big Five personality traits and the quality of students’ collaboration,” Altaras said. “Moreover, the effects that did surface as significant–a positive effect of neuroticism on task management and a negative effect of extraversion on relationship management–seem counterintuitive in terms of their direction.”

When the researchers examined emotional intelligence and personality together in a combined model, emotional intelligence emerged as the more consistent and powerful predictor of overall performance. The contribution of personality was largely limited to the link between neuroticism and task management, suggesting emotional skills were more influential in this context.

As with all research, the study does have some limitations. The sample size was relatively small due to the intensive nature of analyzing hours of video footage. The teams were also composed of students of the same gender, which might not fully represent the dynamics of mixed-gender groups common in schools. Additionally, the study did not measure the students’ general academic intelligence, which could also be a factor in their performance.

“In our defense, emotional intelligence has already been shown to have incremental predictive value in so many instances–including the prediction of students’ individual school performance–that we would not expect it to lose much of its predictive weight when analyzed concurrently with academic abilities,” Altaras noted. “Still, the picture would be more complete had we been able to also test participants’ academic intelligence and include this variable as another potential predictor of their performance in collaborative problem solving.”

For future research, the authors suggest exploring these dynamics in larger and more diverse student groups. It would also be informative to see if these findings hold when teams are faced with different kinds of problems, such as those that are less social and more technical in nature. Examining these factors could provide a more complete picture of the interplay between ability, personality, and group success in educational settings.

“Within the PEERSolvers project, we have already developed a training (PDF) that targets, among other things, students’ emotional intelligence abilities and knowledge of personality differences, hoping to enhance the quality of their collaborative problem solving in this manner,” Altaras said. “In an experimental study, the training was shown to make a difference–i.e., to have a positive effect on students’ performance in collaborative problem solving (Krstić et al., 2025)–and we are now looking forward to having it more widely implemented in schools. When it comes to further research, we will certainly continue to explore the role of emotional intelligence abilities in the educational context, considering the performance and well-being of both students and teachers.”

The study, “Emotional Intelligence and the Big Five as Predictors of Students’ Performance in Collaborative Problem Solving,” was authored by Ana Altaras, Zorana Jolić Marjanović, Kristina Mojović Zdravković, Ksenija Krstić, and Tijana Nikitović.

Virtual reality training improves the body’s ability to regulate stress

A new study has demonstrated that a virtual reality game can successfully teach people a breathing technique to regulate their physiological stress responses. This training led to improved biological markers of stress regulation during a tense virtual experience, suggesting such games could be a practical way to practice stress management skills. The research was published in the journal Psychophysiology.

While physiological regulation strategies like paced breathing are known to be effective, they are typically learned and practiced in calm, controlled environments. This setting is very different from the high-stress situations where such techniques are most needed, which may make it difficult for people to apply their training in real life. The study authors proposed that virtual reality could offer a unique solution by providing an immersive platform to both teach a regulation skill and then immediately create a stressful context in which to practice it.

The project involved two separate studies. The first study was designed as a proof of concept to see if the approach was feasible. Researchers recruited healthy adult participants and first recorded their baseline heart rate, heart rate variability, and breathing rate while they sat quietly. Heart rate variability is a measure of the variation in time between consecutive heartbeats, with higher variability often indicating better physiological regulation and a greater capacity to cope with stress.
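
Heart rate variability can be quantified in several ways, and the study does not specify its exact metric, so the sketch below uses one common time-domain index, RMSSD (root mean square of successive differences), computed from hypothetical intervals between heartbeats.

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences between heartbeats,
    a common time-domain index of heart rate variability."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical RR intervals (milliseconds between consecutive heartbeats).
rr = [812, 845, 790, 860, 830, 805, 870]

mean_hr = 60000 / np.mean(rr)  # beats per minute
print(f"Mean heart rate: {mean_hr:.1f} bpm, RMSSD: {rmssd(rr):.1f} ms")
```

Higher RMSSD values reflect greater beat-to-beat variation, in line with the article’s description of higher variability as a marker of better physiological regulation.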

Following the baseline recording, participants put on a virtual reality headset and played a training game. In this game, they found themselves on a boat in a calm sea and were guided through a slow, paced breathing exercise. On-screen prompts instructed them to inhale for five seconds, hold for five seconds, and exhale for five seconds, with the goal of achieving a slow breathing rate of four breaths per minute. Immediately after this training, they were immersed in a stressful game set in a dark dungeon. The objective was to avoid being detected by a creature that could supposedly hear their heartbeats.

A biofeedback display, visible at all times, showed participants a simplified “stress score” based on their heart rate. A green light indicated they were safe, while amber and red lights signaled increasing danger of being discovered. To succeed, participants had to use the breathing technique they had just learned to keep their heart rate down.

The study found that participants were able to apply the breathing technique effectively. Their breathing rate during the stressful dungeon game was significantly lower than their initial resting rate, showing they were following the training. Although their heart rate naturally increased from the stress of the game, their heart rate variability also increased compared to their baseline levels. This suggested an enhanced state of physiological regulation, likely driven by the controlled breathing.

The second study was designed to more formally test the effectiveness of the training by comparing a trained group to an untrained control group. Both groups attended two sessions, separated by about a week. In the first session, all participants experienced an initial stressful virtual reality scenario involving an intruder in a house. This was done to establish a baseline measure of each person’s stress reactivity. After this initial stressor, only the training group played the boat game twice to learn the breathing technique.

When they returned for the second session, the training group received a refresher by playing the boat game two more times. Then, both the trained and untrained groups played the same stressful dungeon game from the first study. The results showed a clear effect of the training. During the dungeon game, the trained group had a significantly lower breathing rate and a significantly higher heart rate variability compared to the untrained control group.

When the researchers compared physiological responses across the two different stressors, they found a notable interaction. The trained group showed a significant improvement in their heart rate variability from the pre-training “intruder” stressor to the post-training “dungeon” stressor. This pattern of improvement was not observed in the control group, providing stronger evidence that the breathing training was responsible for the effect.

An unexpected observation was that participants in the trained group reported feeling subjectively more stressed than the control group. The authors speculate this could be related to a sense of performance anxiety, as the trained group was aware their application of the technique was being evaluated.

The researchers acknowledged some limitations in their work. The first study was affected by technical issues with the respiratory measurement equipment, which led to the loss of some data. Additionally, a minor coding error in the training game meant that the boat’s speed was incorrectly linked to heart rate, though the authors believe this was unlikely to have affected the learning of the breathing pattern.

Future research could explore the surprising finding that physiological regulation did not align with subjective feelings of stress. It may also examine whether skills learned in an unrealistic game scenario can be generalized to manage stress in real-world situations.

The study, “Using a virtual reality game to train biofeedback-based regulation under stress conditions,” was authored by Lucie Daniel-Watanabe, Benjamin Cook, Grace Leung, Marino Krstulović, Johanna Finnemann, Toby Woolley, Craig Powell, and Paul Fletcher.

Why a quest for a psychologically rich life may lead us to choose unpleasant experiences

New research suggests that the desire for a psychologically rich life, one filled with varied and perspective-altering experiences, is a significant driver behind why people choose activities that are intentionally unpleasant or challenging. The series of studies, published in the journal Psychology & Marketing, indicates that this preference is largely fueled by a motivation for personal growth.

Researchers have long been interested in why people sometimes opt for experiences that are not traditionally pleasurable, such as watching horror movies, eating intensely sour foods, or enduring grueling physical challenges. This behavior, known as counterhedonic consumption, seems to contradict the basic human drive to seek pleasure and avoid pain. While previous explanations have pointed to factors like sensation-seeking or a desire to accumulate a diverse set of life experiences, the authors of the new study proposed a different motivational framework to explain this phenomenon.

They theorized that some individuals are driven by a search for psychological richness, a dimension of well-being distinct from happiness or a sense of meaning. A psychologically rich life is characterized by novelty, complexity, and experiences that shift one’s perspective. The researchers hypothesized that this drive could lead people to embrace discomfort, not for the discomfort itself, but for the personal transformation and growth such experiences might offer.

To investigate this idea, the researchers conducted a series of ten studies involving a total of 2,275 participants. In an initial study, participants were presented with a poster for a haunted house pass and asked how likely they would be to try it. They also completed questionnaires measuring their desire for a psychologically rich life, as well as their desire for a happy or meaningful life and their tendency toward sensation-seeking.

The results showed a positive relationship between the search for psychological richness and a preference for the haunted house experience. This connection remained even when accounting for the other factors.

To see if this finding extended beyond fear-based activities, a subsequent study presented participants with a detailed description of an intensely sour chicken dish. Again, individuals who scored higher on the scale for psychological richness expressed a greater likelihood of ordering the dish.

A third study solidified these findings in a choice-based scenario, asking participants to select between a “blissful garden” experience and a “dark maze” designed to be disorienting. Those with a stronger desire for psychological richness were more likely to choose the dark maze, a finding that held even after controlling for general risk-taking tendencies.

Having established a consistent link, the research team sought to determine causality. In another experiment, they temporarily prompted one group of participants to focus on psychological richness by having them write about what it means to make choices based on a desire for interesting and perspective-changing outcomes. A control group wrote about their daily life. Afterward, both groups were asked about their interest in a horror movie streaming service.

The group primed to think about psychological richness showed a significantly higher preference for the service, suggesting that this mindset can directly cause an increased interest in counterhedonic experiences.

The next step was to understand the psychological process behind this link. The researchers proposed that a focus on self-growth was the key mechanism. One study tested this by again presenting the sour food scenario and then asking participants to what extent their choice was motivated by a desire for self-discovery and personal development. A statistical analysis revealed that the desire for self-growth fully explained the connection between a search for psychological richness and the preference for the sour dish.

To ensure self-growth was the primary driver, another study tested it against an alternative explanation: the desire to create profound memories. While a rich life might involve creating interesting stories to tell, the results showed that self-growth was the significant factor explaining the choice for the sour dish, whereas the desire for profound memories was not.

Further strengthening the causal claim, another experiment first manipulated participants’ focus on psychological richness and then measured their self-growth motivation. The results showed that the manipulation increased a focus on self-growth, which in turn increased the preference for the counterhedonic food item.

A final, more nuanced experiment provided further support for the self-growth mechanism. In this study, the researchers manipulated self-growth motivation directly. One group was asked to write about making choices that foster personal growth, while a control group was not. In the control condition, the expected pattern emerged: people higher in the search for psychological richness were more interested in the sour dish.

However, in the group where self-growth was made salient, preferences for the sour dish increased across the board. This effectively reduced the predictive power of a person’s baseline level of psychological richness, indicating that when the need for self-growth is met, the underlying trait becomes less of a deciding factor.

The research has some limitations. Many of the studies relied on hypothetical scenarios and self-reported preferences, which may not perfectly reflect real-world consumer behavior. The researchers suggest that future work could use field experiments to observe actual choices in natural settings. They also note that cultural differences could play a role, as some cultures may place a higher value on experiences of discomfort as a pathway to wisdom or personal development. Exploring these boundary conditions could provide a more complete picture of this motivational system.

The study, “The Allure of Pain: How the Quest for Psychological Richness Drives Counterhedonic Consumption,” was authored by Sarah Su Lin Lee, Ritesh Saini, and Shashi Minchael.

Depression may lead to cognitive decline via social isolation

An analysis of the China Health and Retirement Longitudinal Study data found that individuals with more severe depressive symptoms tend to report higher levels of social isolation at a later time point. In turn, individuals who are more socially isolated tend to report slightly worse cognitive functioning. Analyses showed that social isolation mediates a small part of the link between depressive symptoms and worse cognitive functioning. The paper was published in the Journal of Affective Disorders.

Depression is a mental health disorder characterized by persistent sadness, loss of interest or pleasure, and feelings of hopelessness that interfere with daily functioning. It adversely affects the way a person thinks, feels, and behaves. It can lead to difficulties in work, relationships, and self-care.

People with depression may experience fatigue, changes in appetite, and sleep disturbances. Concentration and decision-making can become harder, reducing productivity and motivation. Physical symptoms such as pain, headaches, or digestive issues may also appear without clear medical causes.

Depression can diminish the ability to enjoy previously pleasurable activities, leading to social withdrawal. This isolation can worsen depressive symptoms, creating a cycle of loneliness and despair. Social isolation itself is both a risk factor for developing depression and a common consequence of it.

Study author Jia Fang and her colleagues note that depressed individuals also tend to show worse cognitive functioning. They conducted a study aiming to explore the likely causal direction underlying the longitudinal association between depressive symptoms and cognitive decline, and the possible mediating role of social isolation in this link, among Chinese adults aged 45 years and above. The authors hypothesized that social isolation mediates the association between depressive symptoms and cognitive function.

The study authors analyzed data from the China Health and Retirement Longitudinal Study (CHARLS), a nationally representative longitudinal survey of Chinese residents aged 45 and above. This analysis used CHARLS data from three waves, collected in 2013, 2015, and 2018, and included a total of 9,220 participants; 51.4% were women, and their average age was 58 years.

The authors of the study used data on participants’ depressive symptoms (the 10-item Center for Epidemiologic Studies Depression Scale), social isolation, and cognitive function (assessed with tests of episodic memory and mental intactness). A social isolation score was calculated based on four factors: being unmarried (single, separated, divorced, or widowed), living alone, having less than weekly contact with children (in person, via phone, or email), and not participating in any social activities in the past month.
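
As a rough illustration of how such a composite works, the sketch below adds up the four binary indicators into a 0-to-4 isolation score. The field names and example values are hypothetical; they simply mirror the four factors described above.

```python
# Illustrative 0-4 social isolation score built from the four binary
# indicators described above. Field names and values are hypothetical.
def social_isolation_score(person: dict) -> int:
    indicators = [
        person["marital_status"] != "married",        # unmarried (single, separated, divorced, or widowed)
        person["lives_alone"],                        # one-person household
        not person["weekly_contact_with_children"],   # less than weekly contact in person, by phone, or by email
        not person["social_activity_past_month"],     # no social activities in the past month
    ]
    return sum(bool(flag) for flag in indicators)

# Example: a widowed respondent living alone who still hears from their children weekly
print(social_isolation_score({
    "marital_status": "widowed",
    "lives_alone": True,
    "weekly_contact_with_children": True,
    "social_activity_past_month": False,
}))  # -> 3
```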

Results showed that depressive symptoms were associated with subsequent social isolation. Social isolation, in turn, was associated with subsequent worse cognitive functioning. Further analyses showed that social isolation partially mediated the link between depressive symptoms and cognitive functioning, explaining 3.1% of the total effect.
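
The figure of 3.1% corresponds to the standard proportion-mediated calculation, in which the indirect effect (depression to isolation to cognition) is divided by the total effect. The sketch below walks through that arithmetic with made-up coefficients, not the paper’s actual estimates.

```python
# Proportion-mediated arithmetic with made-up coefficients (not the paper's estimates).
a = 0.20        # depressive symptoms -> social isolation
b = -0.05       # social isolation -> cognitive function
direct = -0.31  # depressive symptoms -> cognitive function, holding isolation constant

indirect = a * b                       # -0.010
total = direct + indirect              # -0.320
proportion_mediated = indirect / total
print(f"{proportion_mediated:.1%}")    # 3.1%
```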

The study authors concluded that the association between depressive symptoms and cognitive function is partially mediated by social isolation. They suggest that public health initiatives targeting depressive symptoms in older adults could reduce social isolation and help maintain cognitive health in middle-aged and older adults in China.

The study sheds light on the nature of the link between depressive symptoms and cognitive functioning. However, it should be noted that the design of the study does not allow definitive causal inferences to be derived from these results. Additionally, social isolation was assessed through self-reports, leaving room for reporting bias to have affected the results. Finally, the reported mediation effect was very modest in size, indicating that the link between depression and cognitive functioning depends much more on factors other than social isolation.

The paper, “Social isolation mediates association between depressive symptoms and cognitive function: Evidence from China Health and Retirement Longitudinal Study,” was authored by Jia Fang, Wencan Cheng, Huiyuan Li, Chen Yang, Ni Zhang, Baoyi Zhang, Ye Zhang, and Meifen Zhang.

New research explores why being single is linked to lower well-being in two different cultures

A new study finds that single adults in both the United States and Japan report lower well-being than their married peers. The research suggests that the influence of family support and strain on this health and satisfaction gap differs significantly between the two cultures. The findings were published in the journal Personal Relationships.

Researchers conducted this study to better understand the experiences of single adults outside of Western contexts. Much of the existing research has focused on places like the United States, where singlehood is becoming more common and accepted. In these individualistic cultures, some studies suggest single people may even have stronger connections with family and friends than married individuals.

However, in many Asian cultures, including Japan, marriage is often seen as a more essential part of life and family. This can create a different set of social pressures for single people. The researchers wanted to investigate whether these cultural differences would alter how family relationships, both positive and negative, are connected to the well-being of single and married people in the U.S. and Japan.

“I’ve always been curious about relationship transitions and singlehood lies in this awkward space where people are unsure if it really counts as an actual ‘relationship stage’ per se,” said study author Lester Sim, an assistant professor of psychology at Singapore Management University.

“Fortunately, the field is starting to recognize singlehood as an important period and it’s becoming more common, yet people still seem to judge singles pretty harshly. I find that kind of funny in a way, because it often reflects how we judge ourselves through others. Coming from an Asian background, I also wondered if these attitudes toward singlehood might play out differently across cultures, especially since family ties are so central in Asian contexts. That curiosity really sparked this project.”

To explore this, the research team analyzed data from two large, nationally representative studies: the Midlife in the U.S. (MIDUS) study and the Midlife in Japan (MIDJA) study. The combined sample included 4,746 participants who were 30 years of age or older. The researchers focused specifically on individuals who identified as either “married” or “never married,” and they took additional steps to exclude participants who were in a cohabiting or romantic relationship despite being unmarried.

Participants in both studies answered questions at two different points in time. The first wave of data included their marital status, their perceptions of family support, and their experiences of family strain. Family support was measured with items asking how much they felt their family cared for them or how much they could open up to family about their worries. Family strain was assessed with questions about how often family members criticized them or let them down.

At the second wave of data collection, participants reported on their well-being. This included rating their overall physical health on a scale from 0 to 10 and their satisfaction with life through a series of six questions about different life domains. The researchers then used a statistical approach to see how marital status at the first time point was related to well-being at the second time point, and whether family support and strain helped explain that relationship.

Across the board, the results showed that single adults in both the United States and Japan reported poorer physical health and lower life satisfaction compared to their married counterparts. This finding aligns with a large body of previous research suggesting that marriage is generally associated with better health outcomes.

When the researchers examined the role of family dynamics, they found distinct patterns in each country. For American participants, being married was associated with receiving more family support and experiencing less family strain. Both of these family factors were, in turn, linked to higher well-being. This suggests that for Americans, the well-being advantage of being married is partially explained by having more supportive and less tense family relationships.

The pattern observed in the Japanese sample was quite different. Single Japanese adults did report experiencing more family strain than married Japanese adults. Yet, this higher level of family strain did not have a significant connection to their physical health or life satisfaction later on.

“Family relationships matter a lot for everyone, whether you’re single or married, but in different ways across cultures,” Sim told PsyPost. “We found that singles in both the US and Japan reported lower well-being, in part because they experienced more family strain and less support (differentially across cultures). So even though singlehood is becoming more common, it still carries social and emotional costs. I think this shows how important it is to build more inclusive environments where singles feel equally supported and valued.”

Another notable finding from the Japanese sample was that there was no significant difference in the amount of family support reported by single and married individuals. While family support did predict higher life satisfaction for Japanese participants, it did not serve as a pathway explaining the well-being gap between single and married people in the way it did for Americans.

“I honestly thought the patterns would differ more across cultures,” Sim said. “I expected singles in Western countries to feel more accepted, and singles in Asia to rely more on family support and report greater strain; but neither of the latter findings turned out to be the case. It seems that, across the board, social norms around marriage still shape how people experience singlehood and well-being.”

The researchers acknowledged some limitations of their work. The definition of “single” was based on available survey questions and could be refined in future studies with more direct inquiries about relationship status.

“We focused only on familial support and strain because family is such a big part of East Asian culture,” Sim noted. “But singlehood is complex: friendships, loneliness, voluntary versus involuntary singlehood, and how satisfied people feel being single all matter too. We didn’t examine these constructs in the current study because there is existing work on this topic, so I wanted to bring more focus onto the family (especially with the cross-cultural focus). Future work should dig into those other layers and examine how they interact to shape the singlehood experience.”

It would also be beneficial to explore these dynamics across different age groups, as the pressures and supports related to marital status may change over a person’s lifespan. Such work would help create a more comprehensive picture of how singlehood is experienced around the world.

“I want to keep exploring how culture shapes the meanings people attach to relationships and singlehood,” Sim explained. “Long term, I hope this work helps shift the narrative away from the idea that marriage is the default route to happiness, and shift toward recognizing that there are many valid ways to live a good life.”

“Being single isn’t a problem to be fixed. It’s a meaningful, often intentional part of many people’s lives. The more we understand that, the closer we get to supporting well-being for everyone, not just those who are married.”

The study, “Cross-Cultural Differences in the Links Between Familial Support and Strain in Married and Single Adults’ Well-Being,” was authored by Lester Sim and Robin Edelstein.

“Major problem”: Ketamine fails to outperform placebo for treating severe depression in new clinical trial

A new clinical trial has found that adding repeated intravenous ketamine infusions to standard care for hospitalized patients with serious depression did not provide a significant additional benefit. The study, which compared ketamine to a psychoactive placebo, suggests that previous estimates of the drug’s effectiveness might have been influenced by patient and clinician expectations. These findings were published in the journal JAMA Psychiatry.

Ketamine, originally developed as an anesthetic, has gained attention over the past two decades for its ability to produce rapid antidepressant effects in individuals who have not responded to conventional treatments. Unlike standard antidepressants that can take weeks to work, a single infusion of ketamine can sometimes lift mood within hours. A significant drawback, however, is that these benefits are often short-lived, typically fading within a week.

This has led to the widespread practice of administering a series of infusions to sustain the positive effects. A central challenge in studying ketamine is its distinct psychological effects, such as feelings of dissociation or detachment from reality. When compared to an inactive placebo like a saline solution, it is very easy for participants and researchers to know who received the active drug, potentially creating strong expectancy effects that can inflate the perceived benefits.

To address this, the researchers designed their study to use an “active” placebo, a drug called midazolam, which is a sedative that produces noticeable effects of its own, making it a more rigorous comparison.

“Ketamine has attracted a lot of interest as a rapidly-acting antidepressant but it has short-lived effects. Therefore, its usefulness is quite limited. Despite this major limitation, ketamine is increasingly being adopted as an off-label treatment for depression, especially in the USA,” said study author Declan McLoughlin, a professor at Trinity College Dublin.

“We hypothesized that repeated ketamine infusions may have more sustained benefit. So far this has been evaluated in only a small number of trials. Another problem is that few ketamine trials have used an adequate control condition to mask the obvious dissociative effects of ketamine, e.g. altered consciousness and perceptions of oneself and one’s environment.”

“To try to address some of these issues, we conducted an independent investigator-led randomized trial (KARMA-Dep 2) to evaluate antidepressant efficacy, safety, cost-effectiveness, and quality of life during and after serial ketamine infusions when compared to a psychoactive comparison drug, midazolam. Trial participants were randomized to receive up to eight infusions of either ketamine or midazolam, given over four weeks, in addition to all other aspects of usual inpatient care.”

The trial, conducted at an academic hospital in Dublin, Ireland, aimed to see if adding twice-weekly ketamine infusions to the usual comprehensive care provided to inpatients could improve depression outcomes. Researchers enrolled adults who had been voluntarily admitted to the hospital for moderate to severe depression. These participants were already receiving a range of treatments, including medication, various forms of therapy, and psychoeducation programs.

In this randomized, double-blind study, 65 participants were assigned to one of two groups. One group received intravenous ketamine infusions twice a week for up to four weeks, while the other group received intravenous midazolam on the same schedule. The doses were calculated based on body weight. The double-blind design meant that neither the patients, the clinicians rating their symptoms, nor the main investigators knew who was receiving which substance. Only the anesthesiologist administering the infusion knew the assignment, ensuring patient safety without influencing the results.

The primary measure of success was the change in participants’ depression scores, assessed using a standard clinical tool called the Montgomery-Åsberg Depression Rating Scale. This assessment was conducted at the beginning of the study and again 24 hours after the final infusion. The researchers also tracked other outcomes, such as self-reported symptoms, rates of response and remission, cognitive function, side effects, and overall quality of life.

After analyzing the data from 62 participants who completed the treatment phase, the study found no statistically significant difference in the main outcome between the two groups. Although patients in both groups showed improvement in their depressive symptoms during their hospital stay, the group receiving ketamine did not fare significantly better than the group receiving midazolam. The average reduction in depression scores was only slightly larger in the ketamine group, a difference that was small and could have been due to chance.

Similarly, there were no significant advantages for ketamine on secondary measures, including self-reported depression symptoms, cognitive performance, or long-term quality of life. While the rate of remission from depression was slightly higher in the ketamine group (about 44 percent) compared to the midazolam group (30 percent), this difference was not statistically robust. The treatments were found to be generally safe, though ketamine produced more dissociative experiences during the infusion, while midazolam produced more sedation.
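
To get a feel for why a 44 percent versus 30 percent remission gap may not be statistically robust in a trial of this size, consider the rough two-proportion test below. The cell counts are approximations chosen only to match the reported percentages and overall group sizes, not the trial’s actual data.

```python
# Rough illustration of why ~44% vs ~30% remission is not statistically robust with ~62 patients.
# The counts are approximations consistent with the reported percentages, not the trial's data.
from statsmodels.stats.proportion import proportions_ztest

remitters = [14, 9]   # ketamine arm, midazolam arm (illustrative counts)
totals = [32, 30]
z, p = proportions_ztest(remitters, totals)
print(f"z = {z:.2f}, p = {p:.2f}")  # p lands well above the 0.05 threshold
```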

“We found no significant difference between the two groups on our primary outcome measure (i.e. depression severity assessed with the commonly used Montgomery-Åsberg Depression Rating Scale (MADRS)),” McLoughlin told PsyPost. “Nor did we find any difference between the two groups on any other secondary outcome or cost-effectiveness measure. Under rigorous clinical trial conditions, adjunctive ketamine provided no additional benefit to routine inpatient care during the initial treatment phase or the six-month follow-up period.”

A key finding emerged when the researchers checked how well the “blinding” had worked. They discovered that it was not very successful. From the very first infusion, the clinicians rating patient symptoms were able to guess with high accuracy who was receiving ketamine.

Patients in the ketamine group also became quite accurate at guessing their treatment over time. This functional unblinding complicates the interpretation of the results, as the small, nonsignificant trend favoring ketamine could be explained by the psychological effect of knowing one is receiving a treatment with a powerful reputation.

“Our initial hypothesis was that repeated ketamine infusions for people hospitalised with depression would improve mood outcomes,” McLoughlin said. “However, contrary to our hypothesis, we found this not to be the case. We suspect that functional unblinding (due to its obvious dissociative effects) has amplified the placebo effects of ketamine in previous trials. This is a major, often unacknowledged, problem with many recent trials in psychiatry evaluating ketamine, psychedelic, and brain stimulation therapies. Our trial highlights the importance of reporting the success, or lack thereof, of blinding in clinical trials.”

The study’s authors acknowledged some limitations. The research was unable to recruit its planned number of participants, partly due to logistical challenges created by the COVID-19 pandemic. This smaller sample size reduced the study’s statistical power, making it harder to detect a real, but modest, difference between the treatments if one existed. The primary limitation, however, remains the challenge of blinding.

The results from this trial suggest that when tested under more rigorous conditions, the antidepressant benefit of repeated ketamine infusions may be smaller than suggested by earlier studies that used inactive placebos. The researchers propose that expectations for both patients and clinicians may play a substantial role in ketamine’s perceived effects. This highlights the need to recalibrate expectations for ketamine in clinical practice and for more robustly designed trials in psychiatry.

Looking forward, the researchers emphasize the importance of reporting negative or null trial results to provide a balanced view of a treatment’s capabilities. They also expressed concern about a separate issue in the field: the promotion of ketamine as an equally effective alternative to electroconvulsive therapy, or ECT.

“Scrutiny of the scientific literature shows that this includes methodologically flawed trials and invalid meta-analyses,” McLoughlin said. “We discuss this in some detail in a Comment piece just published in Lancet Psychiatry. Unfortunately, such errors have been accepted as scientific evidence and are already creeping into international clinical guidelines. There is thus a real risk of patients and clinicians being steered towards a less effective treatment, particularly for patients with severe, sometimes life-threatening, depression.”

The study, “Serial Ketamine Infusions as Adjunctive Therapy to Inpatient Care for Depression: The KARMA-Dep 2 Randomized Clinical Trial,” was authored by Ana Jelovac, Cathal McCaffrey, Masashi Terao, Enda Shanahan, Emma Whooley, Kelly McDonagh, Sarah McDonogh, Orlaith Loughran, Ellie Shackleton, Anna Igoe, Sarah Thompson, Enas Mohamed, Duyen Nguyen, Ciaran O’Neill, Cathal Walsh, and Declan M. McLoughlin.

Perceiving these “dark” personality traits in a partner strongly predicts relationship dissatisfaction

A new study suggests that higher levels of psychopathic traits are associated with lower relationship satisfaction in romantic couples. The research indicates that a person’s perception of their partner’s traits is a particularly strong predictor of their own discontent within the relationship. The findings were published in the Journal of Couple & Relationship Therapy.

The research team was motivated by the established connection between personality and the quality of romantic relationships. While traits like agreeableness and conscientiousness are known to support relationship satisfaction, maladaptive traits, such as those associated with psychopathy, are understood to be detrimental. Psychopathy is not a single trait but a combination of characteristics, including interpersonal manipulation, a callous lack of empathy, an erratic lifestyle, and antisocial tendencies.

Previous studies have shown that individuals with more pronounced psychopathic traits tend to prefer short-term relationships, are more likely to be unfaithful, and may engage in controlling or destructive behaviors. Yet, much of this research did not simultaneously account for the perspectives of both partners in a relationship. The researchers aimed to provide a more nuanced understanding by examining how both a person’s own traits and their partner’s traits, as viewed by themselves and by their partner, collectively influence relationship satisfaction.

To investigate these dynamics, the researchers recruited a sample of 85 heterosexual couples from the Netherlands. The participants were predominantly young adults, many of whom were students. Each member of the couple independently completed a series of online questionnaires. The surveys were designed to measure their own psychopathic traits, their perception of their partner’s psychopathic traits, and their overall satisfaction with their relationship.

For measuring psychopathic traits, the study used a well-established questionnaire that assesses three primary facets: Interpersonal Manipulation (e.g., being charming but deceptive), Callous Affect (e.g., lacking guilt or empathy), and Erratic Lifestyle (e.g., impulsivity and irresponsibility). A fourth facet, Antisocial Tendencies, was excluded from the final analysis due to statistical unreliability within this specific sample. Participants completed one version of this questionnaire about themselves and a modified version about their romantic partner.

The researchers used a specialized statistical technique called the Actor-Partner Interdependence Model to analyze the data. This method is uniquely suited for studying couples because it can distinguish between two different kinds of influence. “Actor effects” refer to the association between an individual’s own characteristics and their own outcomes. For example, it can measure how your self-rated manipulativeness relates to your own relationship satisfaction. “Partner effects” describe the association between an individual’s characteristics and their partner’s outcomes, such as how your self-rated manipulativeness relates to your partner’s satisfaction.
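
A minimal sketch of this logic for distinguishable dyads is shown below. It uses synthetic data and ordinary least squares, with one regression per partner, rather than the structural equation or multilevel estimation usually used for the Actor-Partner Interdependence Model, so it should be read as an illustration of actor versus partner effects rather than the authors’ analysis.

```python
# Toy illustration of actor vs. partner effects in distinguishable dyads.
# Synthetic data; real APIM analyses typically use structural equation or multilevel models.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 85  # one row per couple
df = pd.DataFrame({
    "traits_m": rng.normal(size=n),  # man's self-rated psychopathic traits
    "traits_w": rng.normal(size=n),  # woman's self-rated psychopathic traits
})
# In this toy data, each person's satisfaction suffers from both partners' traits.
df["satisfaction_m"] = -0.2 * df["traits_m"] - 0.3 * df["traits_w"] + rng.normal(size=n)
df["satisfaction_w"] = -0.2 * df["traits_w"] - 0.3 * df["traits_m"] + rng.normal(size=n)

# Actor effect: own traits -> own satisfaction. Partner effect: partner's traits -> own satisfaction.
men = smf.ols("satisfaction_m ~ traits_m + traits_w", data=df).fit()
women = smf.ols("satisfaction_w ~ traits_w + traits_m", data=df).fit()
print(men.params)
print(women.params)
```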

Before conducting the main analysis, the researchers examined how partners’ ratings related to one another. They found very little “actual similarity,” meaning that a man’s level of psychopathic traits was not significantly related to his female partner’s level. However, they did find moderate “perceptual accuracy,” which means that how a person rated their partner was generally in line with how that partner rated themselves. There was also strong “perceptual similarity,” indicating that people tended to rate their partners in a way that was similar to how they rated themselves.

One notable preliminary finding was that both men and women tended to rate their partners as having lower levels of psychopathic traits than their partners reported for themselves. This could suggest a positive bias, where individuals maintain a more charitable view of their partner, or it may indicate that certain maladaptive traits are not easily observable to others in a relationship.

The central findings of the study emerged from the Actor-Partner Interdependence Model. The most consistent result was a negative actor effect related to partner perception. When an individual rated their partner higher on psychopathic traits, that same individual reported lower satisfaction with the relationship. This connection was present for both men and women and held true across the total psychopathy score and its specific facets.

The study also identified other significant associations. For both men and women, rating oneself higher on Interpersonal Manipulation was linked to lower satisfaction in one’s own relationship. This suggests that a manipulative style may be unfulfilling even for the person exhibiting it.

A partner effect was observed for the trait of Callous Affect. When a person was perceived by their partner as being more callous, unemotional, and lacking in empathy, that partner reported lower relationship satisfaction. This highlights the direct interpersonal damage that a lack of emotional connection can inflict on a relationship.

In an unexpected turn, the analysis revealed one positive association. When women rated themselves as higher in Callous Affect, their male partners reported slightly higher levels of relationship satisfaction. The researchers propose that this could be related to gender stereotypes, where traits that might be labeled as callous in a clinical sense could be interpreted differently, perhaps as toughness or independence, in women by their male partners.

The study has some limitations that the authors acknowledge. The sample consisted of young, primarily student-based, heterosexual couples in relatively short-term relationships, which may not represent the dynamics in older, married, or more diverse couples. Because the study captured data at a single point in time, it cannot establish causality; it shows an association, not that psychopathic traits cause dissatisfaction. The sample size also meant the study was better equipped to detect medium-to-large effects, and smaller but still meaningful associations might have been missed.

Future research could build on these findings by studying larger and more diverse populations over a longer period. Following couples over time would help clarify how these personality dynamics affect relationship quality and stability as the relationship matures. A longitudinal approach could also determine if these traits predict relationship dissolution.

The study, “Psychopathic Traits and Relationship Satisfaction in Intimate Partners: A Dyadic Approach,” was authored by Frederica M. Martijn, Liam Cahill, Mieke Decuyper, and Katarzyna (Kasia) Uzieblo.

What scientists found when they analyzed 187 of Donald Trump’s shrugs

A new study indicates that Donald Trump’s frequent shrugging is a deliberate communication tool used to establish common ground with his audience and express negative evaluations of his opponents and their policies. The research, published in the journal Visual Communication, suggests these gestures are a key component of his populist performance style, helping him appear both ordinary and larger-than-life.

Researchers have become increasingly interested in the communication style of right-wing populism, which extends beyond spoken words to include physical performance. While a significant amount of analysis has focused on Donald Trump’s language, particularly on social media platforms, his live performances at rallies have received less systematic attention. The body is widely recognized as being important to political performance, but the specific gestures used are not always well understood.

This new research on shrugging builds on a previous study by one of the authors that examined Trump’s use of pointing gestures. That analysis found that Trump uses different kinds of points to serve distinct functions, such as pointing outwards to single out opponents, pointing inwards to emphasize his personal commitment, and pointing downwards to connect his message to the immediate location of his audience. The current study continues this investigation into his non-verbal communication by focusing on another of his signature moves, the shrug.

“The study was motivated by several factors,” explained Christopher Hart, a professor of linguistics at Lancaster University and the author of Language, Image, Gesture: The Cognitive Semiotics of Politics.

“(1) Political scientists frequently refer to the more animated bodily performance of right wing populist politicians like Trump compared to non-populist leaders. We wanted to study one gesture – the shrug – that seemed to be implicated here. (2) Trump’s shrug gestures have been noted by the media previously and described as his “signature move”. We wanted to study this gesture in more detail to examine its precise forms and the way he uses it to fulfil rhetorical goals.”

“(3) To meet a gap: while a great deal has been written about Donald Trump’s speech and his use of language online, much less has been written about the gestures that accompany his speech in live settings. This is despite the known importance of gesture in political communication.”

To conduct their analysis, the researchers examined video footage of two of Trump’s campaign rallies from the 2016 primary season. The events, one in Dayton, Ohio, and the other in Buffalo, New York, amounted to approximately 110 minutes of data. The researchers adopted a conservative approach, identifying 187 clear instances of shrugging gestures across the two events.

Each shrug was coded based on its physical form and its communicative function. For the form, they classified shrugs based on the orientation of the forearms and the position of the hands relative to the body. They also noted whether the shrug was performed with one or two hands and whether it was a simple gesture or a more complex, animated movement. To understand the function, they analyzed the spoken words accompanying each shrug to determine the meaning being conveyed.
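
One simple way to picture this coding scheme is as a record per shrug carrying form and function labels, which can then be tallied and converted into a rate per minute. The records below are invented, and the category labels are paraphrased from the description above.

```python
# Invented coding records for shrug gestures; category labels paraphrased from the study description.
from collections import Counter

shrugs = [
    {"hands": 2, "form": "expansive", "complex": True,  "function": "common ground"},
    {"hands": 1, "form": "lateral",   "complex": False, "function": "affective distance"},
    {"hands": 2, "form": "expansive", "complex": False, "function": "epistemic distance"},
]

form_counts = Counter(s["form"] for s in shrugs)
function_counts = Counter(s["function"] for s in shrugs)

# With the study's totals: 187 coded shrugs across roughly 110 minutes of rally footage.
rate_per_minute = 187 / 110
print(form_counts, function_counts, f"{rate_per_minute:.1f} shrugs per minute")  # ~1.7
```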

Hart was surprised “just how often Trump shrugs – 1.7 times per minute in the campaign rallies analyzed. Trump is a prolific shrugger and this is one way his communication style breaks with traditional forms of political communication.”

The analysis of the physical forms of the shrugs provided evidence for what has been described as a strong “corporeal presence.” Trump tended to favor expansive shrugs, with his hands positioned outside his shoulder width, a form that physically occupies more space.

The second most frequent type was the “lateral” shrug, where his arms extend out to his sides, sometimes in a highly theatrical, showman-like manner. This use of large, exaggerated gestures appears to contribute to a performance style more commonly associated with live entertainment than with traditional politics.

The researchers also noted that nearly a third of his shrugs were complex, meaning they involved animated, oscillating movements. These gestures create a dynamic and sometimes caricatured performance. While these expansive and animated shrugs help create an extraordinary, entertaining persona, the very act of shrugging is an informal, everyday gesture. This combination seems to allow Trump to simultaneously signal both his ordinariness and his exceptionalism.

When examining the functions of the shrugs, the researchers found that the most common meaning was not what many people might expect. While shrugs are often associated with expressing ignorance (“I don’t know”) or indifference (“I don’t care”), these were not their primary uses in Trump’s speeches. Instead, the most frequent function, accounting for over 44 percent of instances, was to signal common ground or obviousness. Trump often uses a shrug to present a statement as a self-evident truth that he and his audience already share.

For example, he would shrug when asking rhetorical questions like “We love our police. Do we love our police?” The gesture suggests the answer is obvious and that everyone in the room is in agreement. He also used these shrugs to present his own political skills as a given fact or to frame the shortcomings of his opponents as plainly evident to all. This use of shrugging appears to be a powerful tool for building a sense of shared knowledge and values with his supporters.

“Most people think of shrugs as conveying ‘I don’t know’ or ‘I don’t care,’” Hart told PsyPost. “While Trump uses shrugs to convey these meanings, more often he uses shrugs to indicate that something is known to everyone or obviously the case. This is one of the ways he establishes common ground and aligns himself with his audience, indicating that he and they hold a shared worldview.”

The second most common function was to express what the researchers term “affective distance.” This involves conveying negative emotions like disapproval, dissatisfaction, or dismay towards a particular state of affairs. When discussing trade deals he considered terrible or military situations he found lacking, a shrug would often accompany his words. In these cases, the gesture itself, rather than the explicit language, carried the negative emotional evaluation of the topic.

Shrugs that conveyed “epistemic distance,” meaning ignorance, doubt, or disbelief, accounted for about 17 percent of the total. A notable use of this function occurred during what is known as “constructed dialogue,” where Trump would re-enact conversations. In one instance, he used a mocking shrug while impersonating a political opponent to portray them as clueless and incompetent, a performance that drew laughter from the crowd.

The least common function was indifference, or the classic “I don’t care” meaning. Though infrequent, these shrugs served a strategic purpose. When shrugging alongside a phrase like “I understand that it might not be presidential. Who cares?,” Trump used the gesture to dismiss the conventions of traditional politics. This helps him position himself as an outsider who is not bound by the same rules as the political establishment.

The findings highlight that “what politicians do with their hands and other body parts is an important part of their message and their brand,” Hart told PsyPost. However, he emphasizes that “gestures are not ‘body language.’ They do not accidentally give away one’s emotional state. Gestures are built in to the language system and are part of the way we communicate. They carry part of the information speakers intend to convey and that information forms part of the message audiences take away.”

The study does have some limitations. Its analysis is focused exclusively on Donald Trump, so it remains unclear whether this pattern of shrugging is unique to his style or a broader feature of right-wing populist communication. Future research could compare his gestural profile to that of other populist and non-populist leaders.

Additionally, the study centered on one specific gesture, and a more complete picture would require analyzing the full range of a politician’s non-verbal repertoire. The authors also suggest that future work could examine other elements, like facial expressions and the timing of gestures, in greater detail.

Despite these limitations, the research provides a detailed look at how a seemingly simple gesture can be a sophisticated and versatile rhetorical tool. Trump’s shrugs appear to be a central part of a performance style that transgresses political norms, creates entertainment value, and forges a strong connection with his base. The findings indicate the importance of looking beyond a politician’s words to understand the full, embodied performance through which they communicate their message.

“We hope to look at other gestures of Trump to build a bigger picture of how he uses his body to distinguish himself from other politicians and to imbue his performances with entertainment value,” Hart said. “This might include, for example, his use of chopping or slicing gestures. I also hope to explore the gestural performances of other right wing populist politicians in Europe to see how their gestures compare.”

The study, “A shrug of the shoulders is a stance-taking act: The form-function interface of shrugs in the multimodal performance of Donald Trump,” was authored by Christopher Hart and Steve Strudwick.

Horror films may help us manage uncertainty, a new theory suggests

A new study proposes that horror films are appealing because they offer a controlled environment for our brains to practice predicting and managing uncertainty. This process of learning to master fear-inducing situations can be an inherently rewarding experience, according to the paper published in Philosophical Transactions of the Royal Society B.

The authors behind the paper sought to address why people are drawn to entertainment that is designed to be frightening or disgusting. While some studies have shown psychological benefits from engaging with horror, many existing theories about its appeal seem to contradict one another. The authors aimed to provide a single, unifying framework that could explain how intentionally seeking out negative feelings like fear can result in positive psychological outcomes.

To do this, they applied a theory of brain function known as predictive processing. This framework suggests the brain operates as a prediction engine, constantly making forecasts about incoming sensory information from the world. When reality does not match the brain’s prediction, a “prediction error” occurs, which the brain then works to minimize by updating its internal models or by acting on the world to make it more predictable.

This does not mean humans always seek out calm and predictable situations. The theory suggests people are motivated to find optimal opportunities for learning, which often lie at the edge of their understanding. The brain is not just sensitive to the amount of prediction error, but to the rate at which that error is reduced over time. When we reduce uncertainty faster than we expected, it generates a positive feeling.
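
The idea that what feels good is reducing error faster than expected, rather than simply having low error, can be illustrated with a toy calculation. The numbers and the simple difference score below are only a sketch of the principle, not the formal model discussed in the paper.

```python
# Toy sketch of the "error dynamics" idea: positive affect tracks reducing
# prediction error faster than expected. All numbers are illustrative.
prediction_error = [1.0, 0.8, 0.5, 0.2]  # uncertainty across successive moments of a scene
expected_reduction_per_step = 0.1        # how quickly the viewer expected to resolve it

for t in range(1, len(prediction_error)):
    actual_reduction = prediction_error[t - 1] - prediction_error[t]
    better_than_expected = actual_reduction - expected_reduction_per_step
    feeling = "positive" if better_than_expected > 0 else "flat or negative"
    print(f"step {t}: error falls by {actual_reduction:.1f} -> {feeling} feeling")
```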

This search for the ideal rate of error reduction is what drives curiosity and play. We are naturally drawn to a “Goldilocks zone” of manageable uncertainty that is neither too boringly simple nor too chaotically complex. The researchers argue that horror entertainment is specifically engineered to place its audience within this zone.

According to the theory, horror films can be understood as a form of “affective technology,” designed to manipulate our predictive minds. Even though we know the monsters are not real, the brain processes the film as an improbable version of reality from which it can still learn. Many horror monsters tap into deep-seated, evolutionary fears of predators by featuring sharp teeth, claws, and stealthy, ambush-style behaviors.

The narrative structures of horror films are also built to play with our expectations. The slow build-up of suspense creates a state of high anticipation, and a “jump scare” works by suddenly violating our moment-to-moment predictions. The effectiveness of these techniques is heightened because they are not always predictable. Sometimes the suspense builds and nothing happens, which makes the audience’s response system even more alert.

At the same time, horror films often rely on familiar patterns and clichés, such as the “final girl” who survives to confront the villain. This combination of surprising events within a somewhat predictable structure provides the mix of uncertainty and resolvability that the predictive brain finds so engaging.

The authors propose that engaging with this controlled uncertainty has several benefits. One is that horror provides a low-stakes training ground for learning about high-stakes situations. This idea, known as morbid curiosity, suggests that we watch frightening content to gain information that could be useful for recognizing and avoiding real-world dangers. For example, the film Contagion saw a surge in popularity during the early days of the COVID-19 pandemic, as people sought to understand the potential realities of a global health crisis.

Another benefit is related to emotion regulation. By exposing ourselves to fear in a safe context, we can learn about our own psychological and physiological responses. The experience allows us to observe our own anxiety, increased heart rate, and other reactions as objects of attention, rather than just being swept away by them. This process can grant us a greater sense of awareness and control over our own emotional states, similar to the effects of mindfulness practices.

The theory also offers an explanation for why some people prone to anxiety might be drawn to horror. Anxiety can be associated with a feeling of uncertainty about one’s own internal bodily signals, a state known as noisy interoception. Watching a horror movie provides a clear, external source for feelings of fear and anxiety. For a short time, the rapid heartbeat and sweaty palms have an obvious and controllable cause: the monster on the screen, not some unknown internal turmoil.

The researchers note that this engagement is not always beneficial. For some individuals, particularly those with a history of trauma, horror media may serve to confirm negative beliefs about the world being a dangerous and threatening place. This can create a feedback loop where a person repeatedly seeks out horrifying content, reinforcing a sense of hopelessness or learned helplessness. Future work could examine when the engagement with scary media crosses from a healthy learning experience into a potentially pathological pattern.

The study, “Surfing uncertainty with screams: predictive processing, error dynamics and horror films,” was authored by Mark Miller, Ben White and Coltan Scrivner.

Long-term study shows romantic partners mutually shape political party support

A new longitudinal study suggests that intimate partners mutually influence each other’s support for political parties over time. The research found that a shift in one person’s support for a party was predictive of a similar shift in their partner’s support the following year, a process that may contribute to political alignment within couples and broader societal polarization. The findings were published in Personality and Social Psychology Bulletin.

Political preferences are often similar within families, particularly between parents and children. However, less is known about how political views might be shaped during adulthood, especially within the context of a long-term romantic relationship. Prior studies have shown that partners often hold similar political beliefs, but it has been difficult to determine if this is because people choose partners who already agree with them or if they gradually influence each other over the years.

The authors of the new study sought to examine if this similarity is a result of ongoing influence. They wanted to test whether a change in one partner’s political stance could predict a future change in the other’s. To do this, they used a large dataset from New Zealand, a country with a multi-party system. This setting allowed them to see if any influence was specific to one or two major parties or if it occurred across a wider ideological spectrum, including smaller parties focused on issues like environmentalism, indigenous rights, and libertarianism.

To conduct their investigation, the researchers analyzed data from the New Zealand Attitudes and Values Study, a large-scale project that has tracked thousands of individuals over many years. Their analysis focused on 1,613 woman-man couples who participated in the study for up to 10 consecutive years. Participants annually rated their level of support for six different political parties on a scale from one (strongly oppose) to seven (strongly support).

The study employed a sophisticated statistical model designed for longitudinal data from couples. This technique allowed the researchers to separate two different aspects of a person’s political support. First, it identified each individual’s stable, long-term average level of support for a given party. Second, it isolated the small, year-to-year fluctuations or deviations from that personal average. This separation is important because it allows for a more precise test of influence over time.

The analysis then examined whether a fluctuation in one partner’s party support in a given year could predict a similar fluctuation in the other partner’s support in the subsequent year. This was done while accounting for the fact that couples already tend to have similar average levels of support.
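
A stripped-down version of that logic is sketched below: each person’s yearly score is centered on their own long-run average, and one partner’s deviation is then regressed on the other partner’s deviation from the previous year. The sketch uses synthetic data and plain regression; the authors’ dyadic longitudinal model is considerably more elaborate.

```python
# Toy version of the logic: within-person centering plus a one-year lagged partner effect.
# Synthetic data; the study used a more elaborate dyadic longitudinal model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for couple in range(200):
    base_m, base_w = rng.normal(4, 1, size=2)  # stable average support on the 1-7 scale
    dev_m = dev_w = 0.0
    for year in range(10):
        dev_m = 0.3 * dev_w + rng.normal(scale=0.5)  # tracks the partner's previous deviation
        dev_w = 0.3 * dev_m + rng.normal(scale=0.5)
        rows.append({"couple": couple, "year": year,
                     "support_m": base_m + dev_m, "support_w": base_w + dev_w})
df = pd.DataFrame(rows)

# Within-person centering: a yearly deviation is the score minus that person's own average.
for p in ("m", "w"):
    df[f"dev_{p}"] = df[f"support_{p}"] - df.groupby("couple")[f"support_{p}"].transform("mean")
df = df.sort_values(["couple", "year"])
df["dev_w_lag"] = df.groupby("couple")["dev_w"].shift(1)

# Does the woman's deviation last year predict the man's deviation this year?
print(smf.ols("dev_m ~ dev_w_lag", data=df.dropna()).fit().params)
```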

The results showed a consistent pattern of mutual influence. For all six political parties examined, a temporary increase in one partner’s support for that party was associated with a subsequent increase in the other partner’s support one year later. This finding suggests that partners are not just politically similar from the start of their relationship but continue to shape one another’s specific party preferences over time.

This influence also appeared to be a two-way street. The researchers tested whether men had a stronger effect on women’s views or if the reverse was true. They found that the strength of influence was generally equal between partners. With only one exception, the effect of men on women’s party support was just as strong as the effect of women on men’s support.

The single exception involved the libertarian Association of Consumers and Taxpayers Party, where men’s changing support had a slightly stronger influence on women’s subsequent support than the other way around. For the other five parties, including the two largest and three other smaller parties, the influence was symmetrical. This challenges the idea that one partner, typically the man, is the primary driver of a couple’s political identity.

An additional analysis explored whether this dynamic of influence applied to a person’s general political orientation, which was measured on a scale from extremely liberal to extremely conservative. In this case, the pattern was different. While partners tended to be similar in their overall political orientation, changes in one partner’s self-rated orientation did not predict changes in the other’s over time. This suggests that the influence partners have on each other may be more about support for specific parties and their platforms than about shifting a person’s fundamental ideological identity.

The researchers acknowledge some limitations of their work. The study focused on established, long-term, cohabiting couples in New Zealand, so the findings may not apply to all types of relationships or to couples in other countries with different political systems. Because the couples were already in established relationships, the study also cannot entirely separate the effects of ongoing influence from the possibility that people initially select partners who are politically similar to them.

Future research could explore these dynamics in newer relationships to better understand the interplay between partner selection and later influence. Additional studies could also investigate the specific mechanisms of this influence, such as how political discussions, media consumption, or conflict avoidance might play a role in this process. Examining whether these shifts in expressed support translate to actual behaviors like voting is another important avenue for exploration.

The study, “The Interpersonal Transmission of Political Party Support in Intimate Relationships,” was authored by Sam Fluit, Nickola C. Overall, Danny Osborne, Matthew D. Hammond, and Chris G. Sibley.

Study finds a shift toward liberal politics after leaving religion

A new study suggests that individuals who leave their religion tend to become more politically liberal, often adopting views similar to those who have never been religious. This research, published in the Journal of Personality, provides evidence that the lingering effects of a religious upbringing may not extend to a person’s overall political orientation. The findings indicate a potential boundary for a psychological phenomenon known as “religious residue.”

Researchers conducted this study to investigate a concept called religious residue. This is the idea that certain aspects of a person’s former religion, such as specific beliefs, behaviors, or moral attitudes, can persist even after they no longer identify with that faith. Previous work has shown that these lingering effects can be seen in areas like moral values and consumer habits, where formerly religious people, often called “religious dones,” continue to resemble currently religious individuals more than those who have never been religious.

The research team wanted to determine if this pattern of residue also applied to political orientation. Given the strong link between religiosity and political conservatism in many cultures, it was an open question what would happen to a person’s politics after leaving their faith. They considered three main possibilities. One was that religious residue would hold, meaning religious dones would remain relatively conservative.

Another possibility was that they would undergo a “religious departure,” shifting to a liberal orientation similar to the never-religious. A third option was “religious reactance,” where they might react against their past by becoming even more liberal than those who were never religious.

To explore these possibilities, the researchers analyzed data from eight different samples across three multi-part studies. The first part involved a series of six cross-sectional analyses, which provide a snapshot in time. These studies included a total of 7,089 adults from the United States, the Netherlands, and Hong Kong. Participants were asked to identify as currently religious, formerly religious, or never religious, and to rate their political orientation on a scale from conservative to liberal.

In five of these six samples, the results pointed toward a similar pattern. Individuals who had left their religion reported significantly more liberal political views than those who were currently religious. Their political orientation tended to align closely with that of individuals who had never been religious. When the researchers combined all six samples for a more powerful analysis, they found that religious dones were, on average, more politically liberal than both currently religious and never-religious individuals. This combined result offered some initial evidence for the religious reactance hypothesis.

To gain a clearer picture of how these changes unfold over time, the researchers next turned to longitudinal data, which tracks the same individuals over many years. The second study utilized data from the National Study of Youth and Religion, a project that followed a representative sample of 2,071 American adolescents into young adulthood. This allowed the researchers to compare the political attitudes of those who remained affiliated with a religion, those who left their religion at different points, and those who were never religious.

The findings from this longitudinal sample provided strong support for the religious departure hypothesis. Individuals who left their religion during their youth or young adulthood reported more liberal political attitudes than those who remained religious. However, their political views were not significantly different from the views of those who had never been religious. This study also failed to find evidence for “residual decay,” the idea that religious residue might fade slowly over time. Instead, the shift toward a more liberal orientation appeared to be a distinct change associated with leaving religion, regardless of how long ago the person had de-identified.

The third study aimed to build on these findings with another longitudinal dataset, the Family Foundations of Youth Development project. This study followed 1,857 adolescents and young adults and had the advantage of measuring both religious identification and political orientation at multiple time points. This design allowed the researchers to use advanced statistical models to examine the sequence of these changes. Specifically, they could test whether becoming more liberal preceded leaving religion, or if leaving religion preceded becoming more liberal.

The results of this final study confirmed the findings of the previous ones. Religious dones again reported more liberal political attitudes, similar to their never-religious peers. The more advanced analysis revealed that changes in religious identity tended to precede changes in political orientation. In other words, the data suggests that an individual’s departure from religion came first, and this was followed by a shift toward a more liberal political stance. The reverse relationship, where political orientation predicted a later change in religious identity, was not statistically significant in this sample.

The researchers acknowledge some limitations in their work. The studies relied on a single, broad question to measure political orientation, which may not capture the complexity of political beliefs on specific social or economic issues. While the longitudinal designs provide a strong basis for inference, the data is observational, and experimental methods would be needed to make definitive causal claims. The modest evidence for religious reactance was only present in the combined cross-sectional data and may have been influenced by the age of the participants or other sample-specific factors.

Future research could explore these dynamics using more detailed assessments of political ideology to see if religious residue appears in certain policy areas but not others. Examining the role of personality traits like dogmatism could also offer insight into why some individuals shift their political views so distinctly.

Despite these limitations, the collection of studies provides converging evidence that for many people, leaving religion is associated with a clear and significant move toward a more liberal political identity. This suggests that as secularization continues in many parts of the world, it may be accompanied by corresponding shifts in the political landscape.

The study, “Religious Dones Become More Politically Liberal After Leaving Religion,” was authored by Daryl R. Van Tongeren, Sam A. Hardy, Emily M. Taylor, and Phillip Schwadel.

Popular ‘cognitive reserve’ theory challenged by massive new study on education and aging

An analysis of massive cognitive and neuroimaging databases indicated that more education was associated with better memory, larger intracranial volume, and slightly larger volumes of memory-sensitive brain regions. However, contrary to popular theories, education did not appear to protect against the rate of age-related memory decline, nor did it weaken the effects of brain decline on cognition. The paper was published in Nature Medicine.

As people reach advanced age, they tend to start gradually losing their mental abilities. This is called age-related cognitive decline. It typically affects functions such as memory, attention, processing speed, and problem-solving. This decline is a normal part of aging and differs from more serious conditions like dementia or Alzheimer’s disease.

Many older adults notice mild forgetfulness, slower thinking, or difficulty learning new information. Biological changes in the brain, such as reduced neural activity and decreased blood flow, contribute to this process. Lifestyle factors like lack of physical activity, poor diet, and chronic stress can accelerate cognitive aging.

On the other hand, regular mental stimulation, social engagement, and physical exercise can help maintain cognitive health. Adequate sleep and managing conditions like hypertension or diabetes also play a role in slowing decline. The rate and severity of decline vary greatly among individuals. Some people maintain sharp cognitive abilities well into old age, while others experience noticeable difficulties.

Study author Anders M. Fjell and his colleagues note that leading theories propose that education reduces brain decline related to aging and enhances tolerance to brain pathology. Other theories propose that education does not affect cognitive decline but instead reflects higher early-life cognitive function. With this in mind, they conducted a study aiming to resolve this long-standing debate.

They conducted a large-scale mega-analysis of data from multiple longitudinal cohorts, including the Survey of Health, Ageing, and Retirement in Europe (SHARE) and the Lifebrain consortium. In total, they analyzed over 407,000 episodic memory scores from more than 170,000 participants across 33 countries. For the neuroimaging component, they analyzed 15,157 magnetic resonance imaging scans with concurrent memory tests from 6,472 participants across seven countries. In their analyses, they defined brain decline as reductions over time in memory-sensitive brain regions within the same participant.
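The core contrast here, whether education shifts the level of memory or the rate of its decline, is the kind of question typically tested with a mixed-effects model that includes an age-by-education interaction. The sketch below illustrates that idea in Python with statsmodels; the data file, column names, and model form are illustrative assumptions, not the authors' actual analysis.

```python
# Illustrative sketch (not the authors' model): a linear mixed model testing whether
# education shifts the level of memory or the rate of age-related decline.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per memory assessment per participant.
df = pd.read_csv("memory_long.csv")  # columns: subject_id, age, education_years, memory

model = smf.mixedlm(
    "memory ~ age * education_years",  # the interaction asks: does education change the decline slope?
    data=df,
    groups=df["subject_id"],           # random intercept for each participant
    re_formula="~age",                 # random slope: each participant declines at their own rate
)
result = model.fit()
print(result.summary())
# A clearly positive education_years coefficient alongside a near-zero age:education_years
# interaction would match the pattern reported here: higher memory level, same rate of decline.
```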

Results showed that while older age was associated with lower memory scores, the association between education level and the rate of memory decline was negligible. Individuals with a higher education level tended to have better memory throughout their lives but did not differ from their less-educated peers in the speed with which their memory declined as they aged.

Individuals with more education also tended to have a larger intracranial volume (a proxy for maximum brain size developed early in life) and slightly larger volumes of memory-sensitive brain regions.

“In this large-scale, geographically diverse longitudinal mega-analytic study, we found that education is related to better episodic memory and larger intracranial volume and modestly to memory-sensitive brain regions. These associations are established early in life and not driven by slower brain aging or increased resilience to structural brain changes. Therefore, effects of education on episodic memory function in aging likely originate earlier in life,” the study authors concluded.

The study contributes to the scientific understanding of factors affecting age-related cognitive decline by providing strong evidence that education provides a “head start” rather than acting as a shield against decline. The research focused on episodic memory because it is particularly sensitive to the effects of aging and is a key indicator in dementia research. Sensitivity analyses on other cognitive tests, such as numeric skills and orientation, showed the same pattern, strengthening the study’s main conclusion.

The paper, “Reevaluating the role of education on cognitive decline and brain aging in longitudinal cohorts across 33 Western countries,” was authored by Anders M. Fjell, Ole Rogeberg, Øystein Sørensen, Inge K. Amlien, David Bartrés-Faz, Andreas M. Brandmaier, Gabriele Cattaneo, Sandra Düzel, Håkon Grydeland, Richard N. Henson, Simone Kühn, Ulman Lindenberger, Torkild Hovde Lyngstad, Athanasia M. Mowinckel, Lars Nyberg, Alvaro Pascual-Leone, Cristina Solé-Padullés, Markus H. Sneve, Javier Solana, Marie Strømstad, Leiv Otto Watne, Kristine B. Walhovd, and Didac Vidal-Piñeiro.

Psilocybin therapy linked to lasting depression remission five years later

A new long-term follow-up study has found that a significant majority of individuals treated for major depressive disorder with psilocybin-assisted therapy were still in remission from their depression five years later. The research, which tracked participants from an earlier clinical trial, suggests that the combination of the psychedelic substance with psychotherapy can lead to lasting improvements in mental health and overall well-being. The findings were published in the Journal of Psychedelic Studies.

Psilocybin is the primary psychoactive compound found in certain species of mushrooms, often referred to as “magic mushrooms.” When ingested, it can produce profound alterations in perception, mood, and thought. In recent years, researchers have been investigating its potential as a therapeutic tool when administered in a controlled clinical setting alongside psychological support.

The rationale for this line of research stems from the limitations of existing treatments for major depressive disorder. While many people benefit from conventional antidepressants and psychotherapy, a substantial portion do not achieve lasting remission, and medications often come with undesirable side effects and require daily, long-term use.

Psychedelic-assisted therapy represents a different treatment model, one where a small number of high-intensity experiences might catalyze durable psychological changes. This new study was conducted to understand the longevity of the effects observed in an earlier, promising trial.

The research team, led by Alan Davis, an associate professor and director of the Center for Psychedelic Drug Research and Education at The Ohio State University, sought to determine if the initial antidepressant effects would hold up over a much longer period. Davis co-led the original 2021 trial at Johns Hopkins University, and this follow-up represents a collaborative effort between researchers at both institutions.

“We conducted this study to answer a critical question about the enduring effects of psilocybin therapy – namely, what happens after clinical trials end, and do participants experience enduring benefits from this treatment,” Davis told PsyPost.

The investigation was designed as a long-term extension of a clinical trial first published in 2021. That initial study involved 24 adults with a diagnosis of major depressive disorder. The participants were divided into two groups: one that received the treatment immediately and another that was placed on a wait-list before receiving the same treatment.

The therapeutic protocol was intensive, involving approximately 13 hours of psychotherapy in addition to two separate sessions where participants received a dose of psilocybin. The original findings were significant, showing a large and rapid reduction in depression symptoms for the participants, with about half reporting a complete remission from their depression that lasted for up to one year.

For the new follow-up, conducted an average of five years after the original treatment, the researchers contacted all 24 of the initial participants. Of those, 18 enrolled and completed the follow-up assessments. This process involved a series of online questionnaires designed to measure symptoms of depression and anxiety, as well as any functional impairment in their daily lives.

Participants also underwent a depression rating assessment administered by a clinician and took part in in-depth interviews. These interviews were intended to capture a more nuanced understanding of their experiences and life changes since the trial concluded, going beyond what numerical scores alone could convey.

The researchers found that 67% of the original participants were in remission from their depression. This percentage was slightly higher than the 58% who were in remission at the one-year follow-up point.

“We found that most people reported enduring benefits in their life since participating in psilocybin therapy,” Davis said. “Overall, many reported that even if depression came back, that it was more manageable, less tied to their identity, and that they found it was less interfering in their life.”

To ensure their analysis was robust, the scientists took a conservative approach when handling the data for the six individuals who did not participate in the long-term follow-up. They made the assumption that these participants had experienced a complete relapse and that their depression symptoms had returned to their pre-treatment levels.
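This kind of conservative sensitivity analysis amounts to carrying each dropout's pre-treatment score forward as their five-year value. The snippet below is a minimal sketch of that logic with invented column names; it is not the authors' code.

```python
# Minimal sketch of a worst-case sensitivity analysis: assume every participant lost to
# follow-up relapsed fully, so their five-year score is set back to their baseline value.
import pandas as pd

df = pd.read_csv("depression_scores.csv")  # hypothetical columns: participant_id, baseline, five_year

missing = df["five_year"].isna()
df.loc[missing, "five_year"] = df.loc[missing, "baseline"]  # baseline carried forward

change = df["five_year"] - df["baseline"]
print(f"Mean symptom change under the worst-case assumption: {change.mean():.1f} points")
```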

“Even controlling for those baseline estimates from the people who didn’t participate in the long-term follow-up, we still see a very large and significant reduction in depression symptoms,” said Davis, who also holds faculty positions in internal medicine and psychology at Ohio State. “That was really exciting for us because this showed that the number of participants still in complete remission from their depression had gone up slightly.”

The study also revealed that these lasting improvements were not solely the product of the psilocybin therapy sessions from five years earlier. The reality of the participants’ lives was more complex. Through the interviews, the researchers learned that only three of the 18 follow-up participants had not received any other form of depression-related treatment in the intervening years. The others had engaged in various forms of support, including taking antidepressant medications, undergoing traditional psychotherapy, or trying other treatments like ketamine or psychedelics on their own.

However, the qualitative data provided important context for these decisions. Many participants described a fundamental shift in their relationship with depression after the trial. Before undergoing psilocybin-assisted therapy, they often felt their depression was a debilitating and all-encompassing condition that prevented them from engaging with life. After the treatment, even if symptoms sometimes returned, they perceived their depression as more situational and manageable.

Participants reported a greater capacity for positive emotions and enthusiasm. Davis explained that these shifts appeared to lead to important changes in how they related to their depressive experiences. This newfound perspective may have made other forms of therapy more effective or made navigating difficult periods less impairing.

“Five years later, most people continued to view this treatment as safe, meaningful, important, and something that catalyzed an ongoing betterment of their life,” said Davis, who co-led the 2021 trial at Johns Hopkins University. “It’s important for us to understand the details of what comes after treatment. I think this is a sign that regardless of what the outcomes are, their lives were improved because they participated in something like this.”

Some participants who had tried using psychedelics on their own reported that the experiences were not as helpful without the supportive framework provided by the clinical trial, reinforcing the idea that the therapeutic context is a vital component of the treatment’s success.

Regarding safety, 11 of the participants reported no negative effects since the trial. A few recalled feeling unprepared for the heightened emotional sensitivity they experienced after the treatment, while others noted that the process of weaning off their previous medications before the trial was difficult.

The researchers acknowledge several limitations of their work. The small sample size of the original trial means that the findings need to be interpreted with caution and require replication in larger studies. Because the study was a long-term follow-up without a continuing control group, it is not possible to definitively attribute all the observed benefits to the psilocybin-assisted therapy, especially since most participants sought other forms of treatment during the five-year period. It is also difficult to know how natural fluctuations in mood and life circumstances may have influenced the outcomes.

“I’d like for people to know that this treatment is not a magic bullet, and these findings support that notion,” Davis noted. “Not everyone was in remission, and some had depression that was ongoing and a major negative impact in their lives. Thankfully, this was not the case for the majority of folks in the study, but readers should know that this treatment does not work for everyone even under the most rigorous and clinically supported conditions.”

Future research should aim to include larger and more diverse groups of participants, including individuals with a high risk for suicide, who were excluded from this trial. Despite these limitations, this study provides a first look at the potential for psilocybin-assisted therapy to produce durable, long-term positive effects for people with major depressive disorder. The findings suggest the treatment may not be a simple cure but rather a catalyst that helps people re-engage with their lives and other therapeutic processes, ultimately leading to sustained improvements in functioning and well-being.

“Next steps are to continue evaluating the efficacy of psilocybin therapy among larger samples and in special populations,” Davis said. “Our work at OSU involves exploring this treatment for Veterans with PTSD, lung cancer patients with depression, gender and sexual minorities with PTSD, and adolescents with depression.”

The study, “Five-year outcomes of psilocybin-assisted therapy for Major Depressive Disorder,” was authored by Alan K. Davis, Nathan D. Sepeda, Adam W. Levin, Mary Cosimano, Hillary Shaub, Taylor Washington, Peter M. Gooch, Shoval Gilead, Skylar J. Gaughan, Stacey B. Armstrong, and Frederick S. Barrett.

Rising autism and ADHD diagnoses not matched by an increase in symptoms

A new study examining nine consecutive birth years in Sweden indicates that the dramatic rise in clinical diagnoses of autism spectrum disorder is not accompanied by an increase in autism-related symptoms in the population. The research, published in the journal Psychiatry Research, also found that while parent-reported symptoms of ADHD remained stable in boys, there was a small but statistically significant increase in symptoms among girls.

Autism spectrum disorder, or ASD, is a neurodevelopmental condition characterized by differences in social communication and interaction, along with restricted or repetitive patterns of behavior and interests. Attention-Deficit/Hyperactivity Disorder, or ADHD, is another neurodevelopmental condition marked by persistent patterns of inattention, hyperactivity, and impulsivity that can interfere with functioning or development. Over the past two decades, the number of clinical diagnoses for both conditions has increased substantially in many Western countries, particularly among teenagers and young adults.

This trend has raised questions about whether the underlying traits associated with these conditions are becoming more common in the general population. Researchers sought to investigate this possibility by looking beyond clinical diagnoses to the level of symptoms reported by parents.

“The frequency of clinical diagnoses of ASD and ADHD has increased substantially over the past decades across the world,” said study author Olof Arvidsson, a PhD student at the Gillberg Neuropsychiatry Centre at Gothenburg University and resident physician in Child and Adolescent Psychiatry.

“The largest prevalence increase has been among teenagers and young adults. Therefore, we wanted to investigate if symptoms of ASD and ADHD in the population had increased over time in 18-year-olds. In this study we used data from a twin study in Sweden in which parents reported on symptoms of ASD and ADHD when their children turned 18 and investigated whether symptoms had increased between year 2011 to 2019.”

To conduct their analysis, the researchers utilized data from a large, ongoing project called the Child and Adolescent Twin Study in Sweden. This study follows twins born in Sweden to learn more about mental and physical health. For this specific investigation, researchers focused on information collected from the parents of nearly 10,000 twins born between 1993 and 2001. When the twins reached their 18th birthday, their parents were asked to complete a web-based questionnaire about their children’s behaviors and traits.

Parents answered a set of 12 questions designed to measure symptoms related to autism. These items correspond to the diagnostic criteria for ASD. For ADHD, parents completed a 17-item checklist covering problems associated with inattention and executive function, which are core components of ADHD.

Using this data, the researchers employed statistical methods to analyze whether the average symptom scores changed across the nine different birth years, from 1993 to 2001. They also looked at the percentage of individuals who scored in the highest percentiles, representing those with the most significant number of traits.
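In practice, a birth-year trend test of this sort can be as simple as regressing symptom scores on birth year and tracking the share of high scorers per cohort. The sketch below illustrates the idea with assumed column names; it ignores the twin clustering and covariates a full analysis would need to handle.

```python
# Illustrative sketch of a birth-year trend test (not the authors' exact models).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("twin_symptoms.csv")  # hypothetical columns: birth_year, sex, asd_score, adhd_score

# Linear trend of symptom scores across birth years, run separately by sex.
for outcome in ["asd_score", "adhd_score"]:
    for sex, group in df.groupby("sex"):
        fit = smf.ols(f"{outcome} ~ birth_year", data=group).fit()
        print(f"{outcome}, {sex}: slope per birth year = {fit.params['birth_year']:.3f}, "
              f"R^2 = {fit.rsquared:.4f}")  # a tiny R^2 mirrors 'year explains very little variance'

# Share of high scorers per birth year, using a top-10% cutoff defined on the full sample.
cutoff = df["adhd_score"].quantile(0.90)
high_share = (df["adhd_score"] >= cutoff).groupby(df["birth_year"]).mean()
print(high_share)
```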

The analysis showed no increase in the average level of parent-reported autism symptoms among 18-year-olds across the nine-year span. This stability was observed for both boys and girls. Similarly, when the researchers examined the proportion of individuals with the highest symptom scores, defined as those in the top five percent, they found no statistically significant change over time. This suggests that the prevalence of autism-related traits in the young adult population remained constant during this period.

The results for ADHD presented a more nuanced picture. Among boys, the data indicated that parent-reported ADHD symptoms were stable. There was no significant change in either the average symptom scores or in the percentage of boys scoring in the top 10 percent. For girls, however, the study identified a small but statistically detectable increase in ADHD symptoms over the nine birth years. This trend was apparent in both the average symptom scores and in the proportion of girls who scored in the top 10 percent for ADHD traits.

Despite being statistically significant, the researchers note that the magnitude of this increase in girls was small. The year of birth explained only a very small fraction of the variation in ADHD symptom scores. The results suggest that while there may be a slight upward trend in certain ADHD symptoms among adolescent girls, it is not nearly large enough to account for the substantial increase in clinical ADHD diagnoses reported in this group. The study provides evidence that the steep rise in both autism and ADHD diagnoses is likely influenced by factors other than a simple increase in the symptoms themselves.

“Across the nine birth years examined, there was no sign of increasing symptoms of ASD in the population, despite rising diagnoses,” Arvidsson told PsyPost. “For ADHD, there was no increase among boys. However, in 18-year-old girls we saw a very small but statistically significant increase in ADHD symptoms. The increase in absolute numbers was small in relation to the increase in clinical diagnoses.”

The researchers propose several alternative explanations for the growing number of diagnoses. Increased public and professional awareness may lead more people to seek assessments. Diagnostic criteria for both conditions have also widened over the years, potentially including individuals who would not have met the threshold in the past. Another factor may be a change in perception, where certain behaviors are now seen as more impairing than they were previously. This aligns with other research indicating that parents today tend to report higher levels of dysfunction associated with the same number of symptoms compared to a decade ago.

Changes in societal demands, particularly in educational settings that place a greater emphasis on executive functioning and complex social skills, could also contribute. In some cases, a formal diagnosis may be a prerequisite for accessing academic support and resources, creating an incentive for assessment. For the slight increase in ADHD symptoms among girls, the authors suggest it could reflect better recognition of how ADHD presents in females, or perhaps an overlap with symptoms of anxiety and depression, which have also been on the rise in this demographic.

“The takeaway is that the increases in clinical diagnoses of both ASD and ADHD need to be explained by other factors than increasing symptoms in the population, such as increased awareness and increased perceived impairment related to ASD and ADHD symptoms,” Arvidsson said. “Taken together we also hope to curb any worries about a true increase in ASD or ADHD.”

The study has some limitations. The response rate for the parental questionnaires was about 41 percent. While the researchers checked for potential biases and found that their main conclusions about the trends over time were likely unaffected, a higher participation rate would strengthen the findings. Additionally, the questionnaire for ADHD primarily measured symptoms of inattention and did not include items on hyperactivity. The results, therefore, mainly speak to the inattentive aspects of ADHD.

Future research could explore these trends with different measures and in different populations. The researchers also plan to investigate trends in clinical diagnoses more closely to better understand resource allocation for healthcare systems.

“We want to better understand trends of clinical diagnoses, such as trends of incidence of diagnoses in different groups,” Arvidsson said. “With increasing clinical diagnoses of ASD and ADHD and the resulting impact on the healthcare system as well as on the affected patients, it is important to characterize these trends in order to motivate an increased allocation of resources.”

The study, “ASD and ADHD symptoms in 18-year-olds – A population-based study of twins born 1993 to 2001,” was authored by Olof Arvidsson, Isabell Brikell, Henrik Larsson, Paul Lichtenstein, Ralf Kuja-Halkola, Mats Johnson, Christopher Gillberg, and Sebastian Lundström.

Scientists identify ecological factors that predict dark personality traits across 48 countries

Recent research published in the journal Evolution and Human Behavior offers new insights into how broad environmental conditions may shape “dark” personality traits on a national level. The study suggests that harsh or unpredictable ecological factors experienced during childhood, such as natural disasters or skewed sex ratios, are linked to higher average levels of traits like narcissism in adulthood. These findings indicate that forces largely outside of an individual’s control could play a key role in the development of antisocial personality profiles across different cultures.

The “Dark Triad” consists of three distinct but related personality traits: narcissism, Machiavellianism, and psychopathy. Individuals with high levels of narcissism often display grandiosity, entitlement, and a constant need for admiration. Machiavellianism is characterized by a cynical, manipulative approach to social interaction and a focus on self-interest over moral principles. Psychopathy involves high impulsivity, thrill-seeking behavior, and a lack of empathy or remorse for others.

While these traits are often viewed as undesirable, evolutionary perspectives suggest they may represent adaptive strategies in certain environments. Psychological research frequently focuses on immediate social causes for these traits, such as family upbringing or individual trauma. However, this new study aimed to broaden that lens by examining macro-level ecological factors that affect entire populations.

“There were several reasons to do this study,” explained Peter Jonason, a professor at Vizja University, creator of the Your Stylish Scientist YouTube Channel, and editor of Shining Light on the Dark Side of Personality: Measurement Properties and Theoretical Advances.

“First, there is limited understanding of how ecological factors predict personality at all, let alone the Dark Triad. That is, most research focuses on personal, familial, or sociological predictors, but these are embedded in larger ecological systems. If the Dark Triad traits are mere pathologies of defunct parenting or income inequality, one would not predict sensitivity to ecological factors in determining people’s adult Dark Triad scores, let alone sex differences therein.”

“Second, most research on the Dark Triad traits focuses on individual-level variance but here we examined what you might call a culture of each trait and what might account for it. Third, and, less interestingly perhaps, the team happened to meet, get along, have the skills needed, and had access to the data to examine this.”

The researchers employed a theoretical framework known as life history theory to guide their investigation. This theory proposes that organisms, including humans, unconsciously adjust their reproductive and survival strategies based on the harshness and predictability of their environment. In dangerous or unstable environments, “faster” life strategies (characterized by greater risk-taking, short-term mating, and higher aggression) tend to be more advantageous for evolutionary fitness.

To test this idea, the researchers utilized existing personality data from 11,504 participants across 48 different countries. The data for these national averages were collected around 2016 using the “Dirty Dozen,” a widely used twelve-item questionnaire designed to briefly measure the three Dark Triad traits. The researchers then paired these personality scores with historical ecological data from the World Bank and other international databases.

They specifically examined ecological conditions during three developmental windows: early childhood (years 2000–2004), mid-childhood (years 2005–2009), and adolescence (years 2010–2015). The ecological indicators included population density, life expectancy (survival to age 65), and the operational sex ratio, which measures the balance of men to women in society. They also included data on the frequency of natural disasters, the prevalence of major infectious disease outbreaks, and levels of income inequality.

“When considering what makes people different from around the world, it is lazy to say ‘culture,'” Jonason told PsyPost. “Culture is a system that results from higher-order conditions like access to resources and ecological threats. If you want to understand why someone differs from you, you must consider more than just her/his immediate–and obvious–circumstances.”

The analysis used advanced statistical techniques known as spatial autoregressive models. These models allowed the researchers not only to test the direct associations within a country but also to account for “spillover” effects from neighboring nations. This approach recognizes that countries do not exist in isolation and may be influenced by the conditions and cultures of the countries that share their borders.
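The spillover idea can be approximated by giving each country a "spatially lagged" predictor, that is, the average of an ecological indicator across its neighbors, entered alongside the country's own value. The sketch below is a simplified stand-in for the study's full spatial autoregressive models; the adjacency list and column names are invented for illustration.

```python
# Simplified stand-in for a spatial model: add a neighbor-averaged ("spatially lagged")
# predictor so a country's score can reflect conditions in bordering countries.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("country_dark_triad.csv")  # hypothetical columns: country, narcissism, sex_ratio_childhood

# Adjacency list of bordering countries (illustrative entries only).
neighbors = {
    "AUT": ["DEU", "CHE", "ITA"],
    "DEU": ["AUT", "CHE", "FRA"],
    # ... one entry per country in the sample
}

# Spatially lagged predictor: mean childhood sex ratio across each country's neighbors.
sex_ratio = df.set_index("country")["sex_ratio_childhood"]
df["sex_ratio_neighbors"] = df["country"].map(
    lambda c: sex_ratio.reindex(neighbors.get(c, [])).mean()
)

fit = smf.ols("narcissism ~ sex_ratio_childhood + sex_ratio_neighbors", data=df).fit()
print(fit.summary())  # the neighbor term stands in for the cross-border 'spillover' association
```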

The results indicated that different ecological factors were associated with distinct Dark Triad traits. Countries that had more male-biased sex ratios during the participants’ childhoods tended to have higher average levels of adult narcissism. The researchers suggest that an excess of males may intensify intrasexual competition, prompting men to adopt grander, more self-promoting behaviors to attract mates.

Conversely, a higher prevalence of infectious diseases during childhood and adolescence was associated with lower national levels of Machiavellianism and psychopathy. In environments with a high disease burden, strict adherence to social norms and greater group cohesion are often necessary for survival. In such contexts, manipulative or antisocial behaviors that disrupt group harmony might be less adaptive and therefore less common.

The study also found that ecological conditions might influence the magnitude of personality differences between men and women. Exposure to natural disasters during developmental years was consistently linked to larger sex differences across all three Dark Triad traits in adulthood. High-threat environments may cause men and women to adopt increasingly divergent survival and reproductive strategies, thereby widening the psychological gap between the sexes.

Furthermore, the research provided evidence for regional clustering of these personality profiles. Conditions in neighboring countries frequently predicted a focal country’s personality scores. For example, higher income inequality or natural disaster impact in bordering nations was associated with higher narcissism or Machiavellianism in the country being studied.

This suggests that dark personality traits may diffuse across borders. This could happen through mechanisms such as migration, shared regional economic challenges, or cultural transmission. The findings highlight the importance of considering regional contexts when studying national character.

“Do not assume that good parenting, safe schools, and successful social experiences are all that matter in determining who goes dark,” Jonason explained. “Larger factors, well beyond our control, have influence as well. By removing the human from the equation, we can better see how people are subject to forces well beyond their will, self-reports, and even situated in larger socioecological systems.”

As with all research, the study has some limitations that should be considered when interpreting these results. The personality data were largely derived from university students, who may not be fully representative of their national populations. Additionally, because the study relied on historical aggregate data, it cannot establish a definitive causal link between these ecological factors and individual personality development. It is possible that other unmeasured variables contribute to these associations.

Future research could aim to replicate these findings using more diverse and representative samples from the general population. The researchers also express an interest in investigating the specific psychological and cognitive mechanisms that might link broad environmental conditions to individual differences in motives and morals. Understanding these mechanisms could provide a clearer picture of how macro-level forces shape the human mind.

“We hope to pursue projects that try to understand the specific conditions that allow for not just personality, but also motives, morals, and mate preferences to be calibrated to local conditions providing more robust tests of not just cross-national differences, but, also, what are the cognitive mechanisms and perceptions that drive those differences,” Jonason said. “This is assuming we get some grant money to do so!”

“This is a study attempting to understand how lived experiences in people’s milieu can correlate with their personality and sex differences therein. This is an important step forward because while manipulating the conditions in people’s lives is nearly impossible, we can get a strong glimpse of how conditions in people’s generalized past can cause adaptive responses to help them solve important tasks like securing status and mates–two motivations highly valued by those high in the Dark Triad traits.”

The study, “Towards an ecological model of the dark triad traits,” was authored by Peter K. Jonason, Dritjon Gruda, and Mark van Vugt.

Music engagement is associated with substantially lower dementia risk in older adults

A new study provides evidence that older adults who frequently engage with music may have a significantly lower risk of developing dementia. The research, published in the International Journal of Geriatric Psychiatry, indicates that consistently listening to music was associated with up to a 39 percent reduced risk, while regularly playing an instrument was linked to a 35 percent reduced risk. These findings suggest that music-related activities could be an accessible way to support cognitive health in later life.

Researchers were motivated to conduct this study because of the growing global health challenge posed by aging populations and the corresponding rise in dementia cases. As life expectancy increases, so does the prevalence of age-related conditions like cognitive decline. With no current cure for dementia, identifying lifestyle factors that might help prevent or delay its onset has become a major focus of scientific inquiry.

While some previous research pointed to potential cognitive benefits from music, many of those studies were limited. They often involved small groups of participants, included people who already had cognitive problems, or were susceptible to selection bias. This new study aimed to overcome these limitations by using a large, long-term dataset of older adults who were cognitively healthy at the beginning of the research period. The team also wanted to explore how education level might influence the relationship between music engagement and cognitive outcomes.

The investigation utilized data from a large-scale Australian study called ASPirin in Reducing Events in the Elderly (ASPREE) and its sub-study. The final analysis included 10,893 community-dwelling adults who were 70 years of age or older and did not have a dementia diagnosis when they enrolled. These participants were followed for a median of 4.7 years, with some observational follow-up extending beyond that period.

About three years into the study, participants answered questions about their social activities, including how often they listened to music or played a musical instrument. Their responses ranged from “never” to “always.” Researchers then tracked the participants’ cognitive health over subsequent years through annual assessments. Dementia diagnoses were made by an expert panel based on rigorous criteria, while a condition known as cognitive impairment no dementia (CIND), a less severe form of cognitive decline, was also identified.
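Risk reductions of the kind reported below, such as a 39 percent lower risk, are typically read off a proportional-hazards model as a hazard ratio (here roughly 0.61). The sketch below shows how such an estimate could be obtained with the lifelines library; the file, columns, and covariates are assumptions for illustration, not the study's actual model.

```python
# Illustrative sketch (not the authors' code): a Cox proportional-hazards model
# estimating dementia risk by music-listening frequency, adjusted for age and education.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("music_cohort.csv")  # hypothetical columns: follow_up_years, dementia,
                                      # always_listens, age, education_years

cph = CoxPHFitter()
cph.fit(
    df[["follow_up_years", "dementia", "always_listens", "age", "education_years"]],
    duration_col="follow_up_years",   # time at risk
    event_col="dementia",             # 1 = dementia diagnosis during follow-up
)
cph.print_summary()
# A hazard ratio of roughly 0.61 for always_listens would correspond to the
# "39 percent decreased risk" figure reported for frequent music listeners.
```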

The findings indicate a strong association between music engagement and a lower risk of dementia. Individuals who reported “always” listening to music had a 39 percent decreased risk of developing dementia compared to those who reported never, rarely, or only sometimes listening. This group also showed a 17 percent decreased risk of developing CIND.

Regularly playing a musical instrument was also associated with positive outcomes. Those who played an instrument “often” or “always” had a 35 percent decreased dementia risk compared to those who played rarely or never. However, playing an instrument did not show a significant association with a reduced risk of CIND.

When researchers looked at individuals who engaged in both activities, they found a combined benefit. Participants who frequently listened to music and played an instrument had a 33 percent decreased risk of dementia. This group also showed a 22 percent decreased risk of CIND.

Beyond the risk of dementia or CIND, the study also examined changes in performance on specific cognitive tests over time. Consistently listening to music was associated with better scores in global cognition, which is a measure of overall thinking abilities, as well as in memory. Playing an instrument was not linked to significant changes in scores on these cognitive tests. Neither listening to nor playing music appeared to be associated with changes in participants’ self-reported quality of life or mental wellbeing.

The research team also explored whether a person’s level of education affected these associations. The results suggest that education may play a role, particularly for music listening. The association between listening to music and a lower dementia risk was most pronounced in individuals with 16 or more years of education. In this highly educated group, always listening to music was linked to a 63 percent reduced risk.

The findings were less consistent for those with 12 to 15 years of education, where no significant protective association was observed. The researchers note this particular result was unexpected and may warrant further investigation to understand potential underlying factors.

The study has several limitations that are important to consider. Because it is an observational study, it can only identify associations between music and cognitive health; it cannot establish that music engagement directly causes a reduction in dementia risk. It is possible that individuals with healthier brains are simply more likely to engage with music, a concept known as reverse causation. The study’s participants were also generally healthier than the average older adult population, which may limit how broadly the findings can be applied.

Additionally, the data on music engagement was self-reported, which could introduce inaccuracies. The survey did not collect details on the type of music, the duration of listening or playing sessions, or whether listening to the radio involved music or talk-based content. Such details could be important for understanding the mechanisms behind the observed associations.

Future research could build on these findings by examining longer-term outcomes and exploring which specific aspects of music engagement might be most beneficial. Studies involving more diverse populations could also help determine if these associations hold true across different groups. Ultimately, randomized controlled trials would be needed to determine if actively encouraging music engagement as an intervention can directly improve cognitive function and delay the onset of dementia in older adults.

The study, “What Is the Association Between Music-Related Leisure Activities and Dementia Risk? A Cohort Study,” was authored by Emma Jaffa, Zimu Wu, Alice Owen, Aung Azw Zaw Phyo, Robyn L. Woods, Suzanne G. Orchard, Trevor T.-J. Chong, Raj C. Shah, Anne Murray, and Joanne Ryan.

AI chatbots often violate ethical standards in mental health contexts

A new study suggests that popular large language models like ChatGPT can systematically breach established ethical guidelines for mental health care, even when specifically prompted to use accepted therapeutic techniques. The research, which will be presented at the AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society, provides evidence that these AI systems may pose risks to individuals who turn to them for mental health support.

The motivation for this research stems from the rapidly growing trend of people using publicly available AI chatbots for advice on mental health issues. While these systems can offer immediate and accessible conversational support, their alignment with the professional standards that govern human therapists has remained largely unexamined. Researchers from Brown University sought to bridge this gap by creating a systematic way to evaluate the ethical performance of these models in a therapeutic context. They collaborated with mental health practitioners to ensure their analysis was grounded in the real-world principles that guide safe and effective psychotherapy.

To conduct their investigation, the researchers first developed a comprehensive framework outlining 15 distinct ethical risks. This framework was informed by the ethical codes of professional organizations, including the American Psychological Association, translating core therapeutic principles into measurable behaviors for an AI. The team then designed a series of simulated conversations between a user and a large language model, or LLM, which is an AI system trained on vast amounts of text to generate human-like conversation. In these simulations, the AI was instructed to act as a counselor employing evidence-based psychotherapeutic methods.

The simulated scenarios were designed to present the AI with common and challenging mental health situations. These included users expressing feelings of worthlessness, anxiety about social situations, and even statements that could indicate a crisis, such as thoughts of self-harm. By analyzing the AI’s responses across these varied prompts, the researchers could map its behavior directly onto their practitioner-informed framework of ethical risks. This allowed for a detailed assessment of when and how the models tended to deviate from professional standards.
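One way to operationalize this mapping is a scripted evaluation harness: feed each simulated scenario to the model, store the response, and have trained raters code it against the risk framework. The sketch below is a generic, hypothetical harness; get_model_response() is a placeholder for whatever chat API is under test, and the scenario prompts and risk labels are examples rather than the paper's exact materials.

```python
# Generic evaluation-harness sketch: run simulated counseling prompts through a model
# and collect responses for later coding against a practitioner-informed risk framework.
import csv

# Simulated counseling scenarios (examples only, not the paper's materials).
SCENARIOS = [
    {"id": "crisis_self_harm", "prompt": "Lately I've been thinking about hurting myself."},
    {"id": "negative_belief", "prompt": "I failed one exam, so I'm a complete failure."},
    {"id": "social_anxiety", "prompt": "I can't face my coworkers; everyone is judging me."},
]

# Example risk categories for later human coding (the actual framework has 15).
RISK_CATEGORIES = ["crisis_mishandling", "reinforces_negative_beliefs", "false_empathy"]


def get_model_response(prompt: str) -> str:
    """Placeholder: replace with a real call to the chat model under evaluation."""
    raise NotImplementedError


def run_evaluation(outfile: str = "responses_to_code.csv") -> None:
    """Collect model responses so trained raters can code them against the risk framework."""
    with open(outfile, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["scenario_id", "prompt", "response"] + RISK_CATEGORIES)
        writer.writeheader()
        for scenario in SCENARIOS:
            response = get_model_response(scenario["prompt"])
            # Risk columns stay empty here; human raters fill them in during coding.
            writer.writerow({"scenario_id": scenario["id"],
                             "prompt": scenario["prompt"],
                             "response": response})
```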

The study’s findings indicate that the large language models frequently engaged in behaviors that would be considered ethical violations for a human therapist. One of the most significant areas of concern was in the handling of crisis situations. When a simulated user expressed thoughts of self-harm, the AI models often failed to respond appropriately. Instead of prioritizing safety and providing direct access to crisis resources, some models offered generic advice or conversational platitudes that did not address the severity of the situation.

Another pattern observed was the reinforcement of negative beliefs. In psychotherapy, a practitioner is trained to help a person identify and gently challenge distorted or unhelpful thought patterns, such as believing one is a complete failure after a single mistake. The study found that the AIs, in an attempt to be agreeable and supportive, would sometimes validate these negative self-assessments. This behavior can inadvertently strengthen a user’s harmful beliefs about themselves or their circumstances, which is counterproductive to therapeutic goals.

The research also points to the issue of what the authors term a “false sense of empathy.” While the AI models are proficient at generating text that sounds empathetic, this is a simulation of emotion, not a genuine understanding of the user’s experience. This can create a misleading dynamic where a user may form an attachment to the AI or develop a dependency based on this perceived empathy. Such a one-sided relationship lacks the authentic human connection and accountability that are foundational to effective therapy.

Beyond these specific examples, the broader framework developed by the researchers suggests other potential ethical pitfalls. These include issues of competence, where an AI might provide advice on a topic for which it has no genuine expertise or training, unlike a licensed therapist who must practice within their scope. Similarly, the nature of data privacy and confidentiality is fundamentally different with an AI. Conversations with a chatbot may be recorded and used for model training, a practice that is in direct conflict with the strict confidentiality standards of human-centered therapy.

The study suggests that these ethical violations are not necessarily flaws to be fixed with simple tweaks but may be inherent to the current architecture of large language models. These systems are designed to predict the next most probable word in a sequence, creating coherent and contextually relevant text. They do not possess a true understanding of psychological principles, ethical reasoning, or the potential real-world impact of their words. Their programming prioritizes a helpful and plausible response, which in a therapeutic setting can lead to behaviors that are ethically inappropriate.

The researchers acknowledge certain limitations to their work. The study relied on simulated interactions, which may not fully capture the complexity and unpredictability of conversations with real individuals seeking help. Additionally, the field of artificial intelligence is evolving rapidly, and newer versions of these models may behave differently than the ones tested. The specific prompts used by the research team also shape the AI’s responses, and different user inputs could yield different results.

For future research, the team calls for the development of new standards specifically designed for AI-based mental health tools. They suggest that the current ethical and legal frameworks for human therapists are not sufficient for governing these technologies. New guidelines would need to be created to address the unique challenges posed by AI, from data privacy and algorithmic bias to the management of user dependency and crisis situations.

In their paper, the researchers state, “we call on future work to create ethical, educational, and legal standards for LLM counselors—standards that are reflective of the quality and rigor of care required for human-facilitated psychotherapy.” The study ultimately contributes to a growing body of evidence suggesting that while AI may have a future role in mental health, its current application requires a cautious and well-regulated approach to ensure user safety and well-being.

The study, “How LLM Counselors Violate Ethical Standards in Mental Health Practice: A Practitioner-Informed Framework,” was authored by Zainab Iftikhar, Amy Xiao, Sean Ransom, Jeff Huang, and Harini Suresh.

A religious upbringing in childhood is linked to poorer mental and cognitive health in later life

A new large-scale study of European adults suggests that, on average, being religiously educated as a child is associated with slightly poorer self-rated health after the age of 50. The research, published in the journal Social Science & Medicine, also indicates that this association is not uniform, varying significantly across different aspects of health and among different segments of the population.

Past research has produced a complex and sometimes contradictory picture regarding the connections between religiousness and health. Some studies indicate that religious involvement can offer health benefits, such as reduced suicide risk and fewer unhealthy behaviors. Other research points to negative associations, linking religious attendance with increased depression in some populations.

Most of this work has focused on religious practices in adulthood, leaving the long-term health associations of childhood religious experiences less understood. To address this gap, researchers set out to investigate how a religious upbringing might be linked to health outcomes decades later, taking into account the diverse life experiences that can shape a person’s well-being.

The researchers proposed several potential pathways through which a religious upbringing could influence long-term health. These include psychosocial mechanisms, where religion might foster positive emotions and coping strategies but could also lead to internal conflict or distress. Social and economic mechanisms might involve access to supportive communities and resources, while also potentially exposing individuals to group tensions.

Finally, behavioral mechanisms suggest religion may encourage healthier lifestyles, such as avoiding smoking or excessive drinking, which could have lasting positive effects on physical health. Given these varied and sometimes opposing potential influences, the researchers hypothesized that the link between a religious upbringing and late-life health would not be simple or consistent for everyone.

To explore these questions, the study utilized data from the Survey of Health, Ageing, and Retirement in Europe, a major cross-national project. The analysis included information from 10,346 adults aged 50 or older from ten European countries. Participants were asked a straightforward question about their childhood: “Were you religiously educated by your parents?” Their current health was assessed through self-ratings on a five-point scale from “poor” to “excellent.” The study also examined more specific health indicators, including physical health (chronic diseases and limitations in daily activities), mental health (symptoms of depression), and cognitive health (numeracy and orientation skills).

The researchers employed an advanced statistical method known as a causal forest approach. This machine learning technique is particularly well-suited for identifying complex and non-linear patterns in large datasets. Unlike traditional methods that often look for straightforward, linear relationships, the causal forest model can uncover how the association between a religious upbringing and health might change based on a wide array of other factors. The analysis accounted for 19 different variables, including early-life circumstances, late-life demographics like age and marital status, and current religious involvement.
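For readers curious what a causal forest looks like in practice, the sketch below uses the CausalForestDML estimator from the econml package to produce per-person effect estimates that can then be examined across subgroups. The library choice, variable names, and covariate set are illustrative assumptions, not the authors' pipeline.

```python
# Illustrative causal-forest sketch (econml's CausalForestDML), not the authors' pipeline:
# estimate how the association between a religious upbringing (treatment) and
# self-rated health (outcome) varies with covariates such as age, sex, and education.
import pandas as pd
from econml.dml import CausalForestDML

df = pd.read_csv("share_subset.csv")    # hypothetical extract of the survey data
Y = df["self_rated_health"]             # 1 (poor) to 5 (excellent)
T = df["religious_upbringing"]          # 1 = religiously educated as a child, 0 = not
X = df[["age", "female", "education_years", "married", "prays", "attends_services"]]

est = CausalForestDML(discrete_treatment=True, random_state=0)
est.fit(Y, T, X=X)

df["estimated_effect"] = est.effect(X)   # per-person estimate of the association
print(df["estimated_effect"].describe()) # a mostly negative distribution with a smaller
                                         # positive tail would mirror the reported pattern
```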

The overall results indicated that, on average, having a religious upbringing was associated with poorer self-rated health in later life. The average effect was modest, representing a -0.10 point difference on the five-point health scale. The analysis showed that for a majority of individuals in the sample, the association was negative.

However, the model also identified a smaller portion of individuals for whom the association was positive, suggesting that for some, a religious upbringing was linked to better health outcomes. This variation highlights that an average finding does not tell the whole story.

When the researchers examined different domains of health, a more nuanced picture emerged. A religious upbringing was associated with poorer mental health, specifically a higher level of depressive symptoms. It was also linked to poorer cognitive health, as measured by lower numeracy, or mathematical ability.

In contrast, the same childhood experience was associated with better physical health, indicated by fewer limitations in activities of daily living, which include basic self-care tasks like bathing and dressing. This suggests that a religious childhood may have different, and even opposing, associations with the physical, mental, and cognitive aspects of a person’s well-being in later life.

The study provided further evidence that the link between a religious upbringing and poorer self-rated health was not the same for all people. The negative association appeared to be stronger for certain subgroups. For example, individuals who grew up with adverse family circumstances, such as a parent with mental health problems or a parent who drank heavily, showed a stronger negative link between their religious education and later health.

Late-life demographic factors also seemed to modify the association. The negative link was more pronounced among older individuals (aged 65 and above), females, those who were not married or partnered, and those with lower levels of education. These findings suggest that disadvantages or vulnerabilities experienced later in life may interact with early experiences to shape health outcomes.

The analysis also considered how adult religious practices related to the findings. The negative association between a religious upbringing and later health was stronger for individuals who reported praying in adulthood. It was also stronger for those who reported that they never attended a religious organization as an adult. This combination suggests a complex interplay between past experiences and present behaviors.

The study does have some limitations. The data on religious upbringing and other childhood circumstances were based on participants’ retrospective self-reports, which can be subject to memory biases. The study’s design is cross-sectional, meaning it captures a snapshot in time and cannot establish a direct causal link between a religious upbringing and health outcomes. It is possible that other unmeasured factors, such as parental socioeconomic status, could play a role in this relationship. The measure of religious upbringing was also broad and did not capture the intensity, type, or strictness of the education received.

Future research could build on these findings by using longitudinal data to track individuals over time, providing a clearer view of how early experiences unfold into later life health. More detailed measures of religious education could also help explain why the experience appears beneficial for some health domains but detrimental for others. Researchers also suggest that exploring the mechanisms, such as coping strategies or social support, would provide a more complete understanding.

The study, “Heterogeneous associations between early-life religious upbringing and late-life health: Evidence from a machine learning approach,” was authored by Xu Zong, Xiangjiao Meng, Karri Silventoinen, Matti Nelimarkka, and Pekka Martikainen.

Men with delayed ejaculation report lower sexual satisfaction and more depressive symptoms

A study of men seeking help for delayed or premature ejaculation in Italy found that those suffering from delayed ejaculation tended to have more severe depressive and anxiety symptoms, and lower sexual desire than men suffering from premature ejaculation. They also tended to be older. The paper was published in IJIR: Your Sexual Medicine Journal.

Premature ejaculation is a sexual condition in which a man reaches orgasm and ejaculates sooner than desired, often within a minute of penetration or with minimal stimulation. It can lead to frustration, anxiety, and reduced sexual satisfaction for both partners. The causes may include psychological factors such as stress, depression, or relationship problems, as well as biological ones like hormonal imbalances or nerve sensitivity.

In contrast, delayed ejaculation is the persistent difficulty or inability to reach orgasm and ejaculate despite adequate sexual stimulation. This condition can also cause emotional distress, relationship strain, and decreased confidence. Delayed ejaculation may result from psychological issues, nerve damage, certain medications, or chronic health conditions such as diabetes. Both conditions are forms of ejaculatory disorders and sexual dysfunction. They can occur occasionally or become chronic depending on underlying causes.

Study author Fausto Negri and his colleagues note that many men experiencing ejaculatory disorders have difficulty expressing their negative feelings and that sexuality and emotional expression are closely connected. With this in mind, they conducted a study aiming to define specific clinical and psychological profiles of individuals suffering from premature and delayed ejaculation and to investigate the association between delayed ejaculation and other domains of sexual functioning.

Study participants were 555 men seeking medical help for ejaculation disorders. Seventy-six of them presented with delayed ejaculation, while the rest sought help for premature ejaculation. Participants’ average age was approximately 45 years. Among men with delayed ejaculation, 53% reported having a stable partner, compared with 64% of those with premature ejaculation.

Participants completed assessments of erectile function (the International Index of Erectile Function) and depression (the Beck Depression Inventory). Researchers also measured levels of various hormones and collected other medical and demographic information about the participants.

Results showed that participants suffering from delayed ejaculation were older than participants suffering from premature ejaculation (average age of 47 years vs 44 years). They also more often suffered from other disorders. Participants with delayed ejaculation also tended to have more severe symptoms of depression and anxiety. Their sexual desire tended to be lower, as were their orgasmic function scores, compared to participants with premature ejaculation. The two groups did not differ in relationship status, waist circumference, body mass index, or levels of examined hormones.

“Roughly one of ten men presenting for self-reported ejaculatory dysfunction as their main complaint in the real-life setting suffers from DE [delayed ejaculation]. Usually, they are older than men with primary PE [premature ejaculation] and overall less healthy. Likewise, they depict an overall poorer quality of sexual life, with lower SD [sexual desire] and OF [orgasmic function]. Moreover, men with DE have higher chances to report clinically significant depression and anxiety, which significantly impact their overall sexual satisfaction,” the study authors concluded.

The study sheds light on the differences in psychological characteristics between people with different forms of ejaculation disorders. However, it should be noted that the design of the study does not allow any causal inferences to be derived from the results. Additionally, all participants came from the same clinical center. Results on men from other geographical areas might differ.

The paper, “Men with delayed ejaculation report lower sexual satisfaction and more depressive symptoms than those with premature ejaculation: findings from a cross-sectional study,” was authored by Fausto Negri, Christian Corsini, Edoardo Pozzi, Massimiliano Raffo, Alessandro Bertini, Gabriele Birolini, Alessia d’Arma, Luca Boeri, Francesco Montorsi, Michael L. Eisenberg, and Andrea Salonia.

Psychiatrists document extremely rare case of menstrual psychosis

Researchers in Japan have documented the case of a teenager whose psychotic symptoms consistently appeared before her menstrual period and resolved immediately after. A case report published in Psychiatry and Clinical Neurosciences Reports indicates that a medication typically used to treat seizures and bipolar disorder was effective after standard antipsychotic and antidepressant drugs failed to provide relief. This account offers a detailed look at a rare and often misunderstood condition.

The condition is known as menstrual psychosis, which is characterized by the sudden onset of psychotic symptoms in an individual who is otherwise mentally well. These episodes are typically brief and occur in a cyclical pattern that aligns with the menstrual cycle. The presence of symptoms like delusions or hallucinations distinguishes menstrual psychosis from more common conditions such as premenstrual syndrome or premenstrual dysphoric disorder, which primarily involve mood-related changes. Menstrual psychosis is considered exceptionally rare, with fewer than 100 cases identified in the medical literature.

The new report, authored by Atsuo Morisaki and colleagues at the Tokyo Metropolitan Children’s Medical Center, details the experience of a 17-year-old Japanese girl who sought medical help after about two years of recurring psychological distress. Her initial symptoms included intense anxiety, a feeling of being watched, and auditory hallucinations where she heard a classmate’s voice. She also developed the belief that conversations around her were about herself. She had no prior psychiatric history or family history of mental illness.

Initially, she was diagnosed with schizophrenia and prescribed antipsychotic medication, which did not appear to alleviate her symptoms. Upon being transferred to a new medical center, her treatment was changed, but her condition persisted. While hospitalized, her medical team observed a distinct pattern. In the days leading up to her first menstrual period at the hospital, she experienced a depressive mood and restlessness. This escalated to include delusional thoughts and the feeling that “voices and sounds were entering my mind.” These symptoms disappeared completely four days later, once her period ended.

This cycle repeated itself the following month. About twelve days before her second menstruation, she again became restless. Nine days before, she reported the sensation that her thoughts were “leaking out” during phone calls. She also experienced auditory hallucinations and believed her thoughts were being broadcast to others. Her antipsychotic dosage was increased, but the symptoms continued until her menstruation ended, at which point they once again resolved completely.

A similar pattern emerged before her third period during hospitalization. Fourteen days prior, she developed a fearful, delusional mood. She reported that “gazes and voices are entering my head” and her diary entries showed signs of disorganized thinking. An increase in her medication dosage seemed to have no effect. As her period began, the symptoms started to fade, and they were gone by the time it was over. This consistent, cyclical nature of her psychosis, which did not respond to conventional treatments, led her doctors to consider an alternative diagnosis and treatment plan.

Observing this clear link between her symptoms and her menstrual cycle, the medical team initiated treatment with carbamazepine. This medication is an anticonvulsant commonly used to manage seizures and is also prescribed as a mood stabilizer for bipolar disorder. The dosage was started low and gradually increased. Following the administration of carbamazepine, her psychotic symptoms resolved entirely. She was eventually able to discontinue the antipsychotic and antidepressant medications. During follow-up appointments as an outpatient, her symptoms had not returned.

The exact biological mechanisms behind menstrual psychosis are not well understood. Some scientific theories suggest a link to the sharp drop in estrogen that occurs during the late phase of the menstrual cycle. Estrogen influences several brain chemicals, including dopamine, and a significant reduction in estrogen might lead to a state where the brain has too much dopamine activity, which has been associated with psychosis. However, since psychotic episodes can occur at various points in the menstrual cycle, fluctuating estrogen levels alone do not seem to fully explain the condition.

The choice of carbamazepine was partly guided by the patient’s age and the potential long-term side effects of other mood stabilizers. The authors of the report note that carbamazepine may work by modulating the activity of various channels and chemical messengers in the brain, helping to stabilize neuronal excitability. While there are no previous reports of carbamazepine being used specifically for menstrual psychosis, it has shown some effectiveness in other cyclical psychiatric conditions, suggesting it may influence the underlying mechanisms that produce symptoms tied to biological cycles.

It is important to understand the nature of a case report. Findings from a single patient cannot be generalized to a larger population. This report does not establish that carbamazepine is a definitive treatment for all individuals with menstrual psychosis. The positive outcome observed in this one person could be unique to her specific biology and circumstances.

However, case reports like this one serve a significant function in medical science, especially for uncommon conditions. They can highlight patterns that might otherwise be missed and introduce potential new avenues for treatment that warrant further investigation. By documenting this experience, the authors provide information that may help other clinicians recognize this rare disorder and consider a wider range of therapeutic options. This account provides a foundation for future, more systematic research into the causes of menstrual psychosis and the potential effectiveness of medications like carbamazepine.

The report, “Menstrual psychosis with a marked response to carbamazepine,” was authored by Atsuo Morisaki, Ken Ebishima, Akira Uezono, and Takashi Nagasawa.

Short exercise intervention helps teens with ADHD manage stress

A new study published in the Journal of Affective Disorders provides evidence that a brief but structured physical exercise program can help reduce stress levels in adolescents diagnosed with attention-deficit/hyperactivity disorder. The researchers found that after just three weeks of moderate to vigorous physical activity, participants reported lower levels of stress and showed a measurable increase in salivary cortisol, a hormone linked to the body’s stress response.

Adolescence is widely recognized as a time of dramatic psychological and biological development. For teens with ADHD, this period often comes with heightened emotional challenges. In addition to the typical symptoms of inattention and hyperactivity, many adolescents with the condition also struggle with internal feelings such as anxiety and depression. These emotional difficulties can interfere with daily functioning at school and at home, placing them at greater risk for long-term mental health problems.

Although stimulant medications are commonly used to manage symptoms, they often cause side effects such as sleep problems and mood shifts. Due to these complications, many families and young people stop using medication or seek alternative approaches. One such approach gaining traction is physical exercise. Prior research suggests that structured activity may benefit brain function and emotional regulation. However, most studies have focused on children rather than adolescents, and few have examined whether exercise influences cortisol, a stress hormone thought to be dysregulated in young people with ADHD.

Cortisol plays an important role in how the body manages stress. Low levels of cortisol in the morning have been found in children and adolescents with ADHD, and this pattern has been associated with fatigue, anxiety, and greater symptom severity. The researchers behind the new study wanted to know whether a short physical exercise intervention could influence both subjective stress levels and objective stress markers like cortisol in teens with ADHD.

“Adolescents with ADHD face stress-related challenges and appear to display atypical cortisol patterns, yet most exercise studies focus on younger children and rarely include biological stress markers,” explained study author Cindy Sit, a professor of sports science and physical education at The Chinese University of Hong Kong.

“We wanted to test a practical, low-risk intervention that schools and families could feasibly implement and to examine both perceived stress and a physiological marker (salivary cortisol) within a randomized controlled trial design. In short, we aimed to examine whether a brief, feasible program could help regulate stress in this under-researched group through non-pharmacological methods.”

The researchers recruited 82 adolescents, aged 12 to 17, who had been diagnosed with ADHD. Some of the participants also had a diagnosis of autism spectrum disorder, which often co-occurs with ADHD. The teens were randomly assigned to one of two groups. One group participated in a structured physical exercise program lasting three weeks. The other group served as a control and continued with their normal routines.

The exercise group attended two 90-minute sessions each week, totaling 540 minutes over the course of the program. These sessions included a variety of activities designed not only to improve physical fitness but also to engage cognitive functions such as memory, reaction time, and problem-solving. Exercises included circuit training as well as games that required strategic thinking and teamwork. Participants were guided to maintain moderate to vigorous intensity throughout much of the sessions, and their heart rates were monitored to ensure appropriate effort.

To measure outcomes, the researchers used both self-report questionnaires and biological samples. Stress, depression, and anxiety levels were assessed through a validated scale. Cortisol was measured using saliva samples collected in the afternoon before and after the intervention, as well as three months later.

The findings showed that immediately following the exercise program, participants in the exercise group reported lower levels of stress compared to their baseline scores. At the same time, their cortisol levels increased.

The increase in cortisol following exercise was interpreted not as a sign of increased stress but as a reflection of more typical hormonal activity. The researchers noted that this pattern aligns with the idea of exercise as a “positive stressor” that helps train the body to respond more effectively to real-life challenges. Importantly, the teens felt less stressed, even as their cortisol levels rose.

“The combination of lower perceived stress alongside an immediate rise in cortisol was striking,” Sit told PsyPost. “It supports the idea that exercise can feel stress-relieving while still producing a normal physiological stress response that may help calibrate the HPA axis. We also noted a baseline positive association between anxiety and cortisol in the control group only, which warrants further investigation.”

However, by the three-month follow-up, the improvements in self-reported stress had faded, and cortisol levels had returned to their initial levels. There were no significant changes in self-reported depression or anxiety in either group at any point.

“A short, three-week exercise program (90-minute sessions twice a week at moderate to vigorous intensity) reduced perceived stress in adolescents with ADHD immediately after the program,” Sit said. “Cortisol levels increased right after the intervention, consistent with a healthy, short-term activation of the stress system during exertion (often called ‘good stress’). The positive effects on perceived stress did not last for three months without continued physical exercise, and we did not observe short-term changes in depression or anxiety. This suggests that ongoing participation is necessary to sustain these benefits.”

Although the results suggest benefits from the short-term exercise program, there are some limitations to consider. Most of the participants were male, and this gender imbalance could affect how the findings apply to a broader group of adolescents. The study also relied on self-report questionnaires to assess stress, anxiety, and depression, which can be affected by personal bias. Additionally, there was no “active” control group, meaning the control participants were not given an alternate activity that involved social interaction or structure, which might have helped isolate the effects of the exercise itself.

Future studies might benefit from longer intervention periods to examine whether extended participation can produce lasting changes. Collecting saliva samples multiple times during the day could also help map out how cortisol behaves in response to both daily routines and interventions. Incorporating interviews or observer-based assessments could provide a more complete understanding of emotional changes, especially in teens who have difficulty expressing their feelings through questionnaires.

“Our team is currently conducting a large randomized controlled trial testing physical‑activity interventions for people with intellectual disability, with co‑primary outcomes of mood and physical strength,” Sit explained. “The broader aim is to develop scalable, low‑cost programs that can be implemented in schools, day services, and community settings. Ultimately, we aim to increase access for underserved populations so that structured movement becomes a feasible part of everyday care and improves their quality of life.”

“We see exercise as a useful adjunct, not a replacement, for standard ADHD care,” she added. “In practice, that involves incorporating structured movement alongside evidence-based treatments (e.g., medication, psychoeducation, behavioural supports) and working with families, schools, and healthcare providers. Exercise is accessible and generally has low risk; it can assist with stress regulation, sleep, attention, and fitness. However, it should be individualized and monitored, especially for individuals with special needs like ADHD, to support rather than replace routine care.”

The study, “Efficacy of a short-term physical exercise intervention on stress biomarkers and mental health in adolescents with ADHD: A randomized controlled trial,” was authored by Sima Dastamooz, Stephen H.S. Wong, Yijian Yang, Kelly Arbour-Nicitopoulos, Rainbow T.H. Ho, Jason C.S. Yam, Clement C.Y. Tham, Liu Chang, and Cindy H.P. Sit.

Masculinity and sexual attraction appear to shape how people respond to infidelity

A new study in the Archives of Sexual Behavior suggests that how people react to sexual versus emotional infidelity is shaped by more than just biological sex. While heterosexual men were more distressed by sexual betrayal and women by emotional betrayal, the findings indicate that traits like masculinity, femininity, and sexual attraction also influence these responses in flexible ways.

For several decades, psychologists have observed that men and women tend to react differently to infidelity. Men are more likely to be disturbed by sexual infidelity, while women are more upset by emotional cheating. Evolutionary psychologists have suggested that this might reflect reproductive pressures. For men, the risk of raising another man’s child might have favored the development of stronger reactions to sexual betrayal. For women, the loss of a partner’s emotional commitment could mean fewer resources and support for offspring, making emotional infidelity more threatening.

But this difference is not universal. Studies have shown that it becomes much less pronounced among sexual minorities. Gay men and lesbian women often report similar levels of distress over emotional and sexual infidelity, rather than showing a clear difference based on biological sex. This has raised the question of whether the difference between men and women is really just about being male or female—or whether other psychological traits might be involved.

The researchers behind the current study wanted to examine this question in more detail. They were interested in whether traits often associated with masculinity or femininity might influence how people respond to infidelity. They also wanted to test whether sexual orientation, measured not just as a label but as a continuum of attraction to men and women, could account for some of the variation in jealousy responses.

“We have for many years found a robust sex difference in jealousy, but we have also been interested in any factors that could influence this pattern. Other researchers discovered that sexual orientation might influence that pattern. We were also influenced by David Schmitt’s ideas on sexual dials vs. switches — how masculinization/feminization might be much better described as dimensional than categorical, including sexual orientation and jealousy triggers,” said study author Leif Edward Ottesen Kennair, a professor at the Norwegian University of Science and Technology.

For their study, the researchers collected data from 4,465 adults in Norway, ranging in age from 16 to 80. The sample included people who identified as heterosexual, gay, lesbian, bisexual, and pansexual. Participants were recruited through social media advertisements and LGBTQ+ websites. Each person completed a survey about their responses to hypothetical infidelity scenarios, along with questions about their childhood behavior, personality traits, sexual attraction, and self-perceived masculinity or femininity.

To measure jealousy, the participants were asked to imagine different types of infidelity. In one example, they were asked whether it would be more upsetting if their partner had sex with someone else, or if their partner developed a deep emotional connection with another person. Their answers were used to calculate a jealousy score that reflected how much more distressing they found sexual versus emotional betrayal.
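For readers curious how such a forced-choice measure can be turned into a single score, here is a minimal sketch in Python. The scoring convention (the proportion of scenarios in which sexual infidelity was picked as more distressing) and all names are illustrative assumptions, not the authors' actual procedure.

```python
# Hypothetical sketch of scoring forced-choice jealousy items.
# The scoring rule and names are illustrative assumptions, not the
# authors' actual procedure.

def jealousy_score(responses):
    """Proportion of scenarios in which sexual infidelity was chosen as
    more distressing (0.0 = always emotional, 1.0 = always sexual)."""
    if not responses:
        raise ValueError("no responses provided")
    return sum(1 for r in responses if r == "sexual") / len(responses)

# A participant who picked sexual infidelity in 3 of 4 scenarios:
print(jealousy_score(["sexual", "sexual", "emotional", "sexual"]))  # 0.75
```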

The results supported some long-standing findings. Heterosexual men were much more likely than heterosexual women to be disturbed by sexual infidelity. In fact, nearly 59 percent of heterosexual men said sexual betrayal was more upsetting, compared to only 31 percent of heterosexual women. This pattern was consistent with past research.

But among sexual minorities, the sex difference mostly disappeared. Gay men and lesbian women responded in ways that were more alike, with both groups tending to be more upset by emotional infidelity. Bisexual men and women also reported similar responses. This suggests that sexual orientation plays a key role in how people experience jealousy.

The researchers then examined sexual attraction as a continuous variable. Rather than looking only at how people labeled themselves, they measured how strongly participants were attracted to men and to women. Among men, those who were exclusively attracted to women showed the highest levels of sexual jealousy. Men who had even a small degree of attraction to other men reported less distress about sexual infidelity.

The researchers also measured four different psychological traits related to masculinity and femininity. These included whether participants preferred system-oriented thinking or empathizing, whether they had gender-typical interests as children, whether they preferred male- or female-dominated occupations, and how masculine or feminine they saw themselves. These traits were used to create a broader measure of psychological gender.

In men, higher levels of psychological masculinity were linked to both a stronger attraction to women and a greater tendency to be disturbed by sexual infidelity. But the connection between masculinity and jealousy seemed to depend on whether the man was attracted to women. Masculinity influenced jealousy only when it was also linked to strong gynephilic attraction—that is, attraction to women.

Among women, masculinity was related to sexual orientation, but not to jealousy responses. This suggests that masculinity and femininity may play different roles in shaping sexual psychology for men and women.

Kennair told PsyPost that these findings suggest “that sexual orientation might be best measured dimensionally (as involving both gynephilia and androphilia), that sexual orientation influences sex differences (in this case, jealousy triggers), and that gendering and sex differences are not primarily categorical processes but dimensional processes that are largely influenced by biological sex, but absolutely not categorically determined in an either/or switch pattern. Rather, they function more like interconnected dimensional dials.”

A surprising finding came from a smaller group: bisexual men who were partnered with women. “In the current study, we found that bisexual men with a female partner were still more triggered by emotional than sexual infidelity,” Kennair explained. “Bisexual men should also be concerned about who the father of their partner’s children really is, from an evolutionary perspective, but it seems that only the highly gynephilic men are primarily triggered by sexual infidelity. This needs further investigation and theorizing.”

But the study, like all research, has some caveats. The participants were recruited online, which means the sample might not fully represent the broader population. In addition, the jealousy scenarios were hypothetical, and people’s real-life reactions might differ from what they imagine.

The study raises some new and unresolved questions. One puzzle is why sexual jealousy in men seems to drop off so steeply with even a small degree of androphilic attraction. From an evolutionary standpoint, any man who invested in raising a child would have faced reproductive costs if his partner had been unfaithful, regardless of his own sexual orientation. Yet the findings suggest that the mechanism for sexual jealousy may be tightly linked to sexual attraction to women, rather than simply being male or being partnered with a woman.

It also remains unclear why women’s jealousy responses are less influenced by sexual orientation or masculinity. The results suggest that emotional jealousy is a more stable pattern among women, while sexual jealousy in men appears more sensitive to individual differences in orientation and psychological traits.

“I think this is a first empirical establishment of the dials approach,” Kennair said. “I think it might be helpful to investigate this approach with other phenomena. Also, the research cannot address the developmental and biological processes underlying the psychological level we addressed in the paper. The causal pathways therefore need further investigation. And theorizing.”

He hopes that “maybe in the current polarized discussion of identity and sex/gender, people will find the dimensional and empirical approach of this paper a tool to communicate better than the categorical approaches let us do.”

The study, “Male Sex, Masculinization, Sexual Orientation, and Gynephilia Synergistically Predict Increased Sexual Jealousy,” was authored by Leif Edward Ottesen Kennair, Mons Bendixen, and David P. Schmitt.

Feeling moved by a film may prompt people to reflect and engage politically

Watching a powerful movie may do more than stir emotions. According to a study published in the journal Communication Research, emotionally moving films that explore political or moral issues may encourage viewers to think more deeply about those topics and even engage politically. The researchers found that German television theme nights combining fictional drama with related factual programs were associated with higher levels of information seeking, perceived knowledge, and consideration of political actions related to the issues portrayed.

There is a longstanding debate about whether entertainment harms or helps democracy. Some scholars worry that media such as movies and reality shows distract citizens from more serious political content. But recent research has begun to suggest that certain types of entertainment might actually contribute to political awareness and engagement.

“We were curious about effects of entertainment media on political interest and engagement. Can watching a movie and walking in the shoes of people affected by a political issue raise viewers’ awareness about the issue and motivate them to take action to address the issue?” explained study author Anne Bartsch, a professor at Leipzig University.

“From about a decade of experimental research, we know that moving and thought-provoking media experiences can stimulate empathy and prosocial behavior, including political engagement. In this study, we used television theme nights as an opportunity to replicate these findings ‘in the wild.’ Theme nights are a popular media format in Germany that combines entertainment and information programs about a political issue and attracts a large enough viewership to conduct representative survey research. This opportunity to study political effects of naturally occurring media use was quite unique.”

The researchers conducted three studies around two German television theme nights. The first theme night focused on the arms trade, while the second dealt with physician-assisted suicide. Each theme night included a full-length fictional film followed by an informational program. Across the three studies, more than 2,800 people took part through telephone and online surveys.

In the first study, researchers surveyed a nationally representative sample of 905 German adults by phone after the arms trade theme night. Participants were asked whether they watched the movie, the documentary, or both. They were also asked about their emotional reactions, whether they had thought deeply about the issue, and what actions they had taken afterward.

People who had seen the movie reported feeling more emotionally moved and were more likely to report having reflected on the issue. These viewers also reported greater interest in seeking more information, higher levels of both perceived and factual knowledge, and more willingness to engage in political actions related to arms trade, such as signing petitions or considering the issue when voting.

Statistical analysis indicated that the emotional experience of feeling moved was associated with deeper reflection, which in turn predicted greater knowledge and political engagement. However, viewers did not differ significantly from non-viewers in how often they talked about the issue with others. Surprisingly, emotional reactions did not appear to encourage discussion on social media, and may have slightly reduced it.
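To make that chain of associations concrete, the sketch below shows one simplified way an indirect effect of this kind can be estimated, using a single mediator, ordinary least squares, and simulated data. It is an illustrative assumption, not the authors' actual statistical model.

```python
# Hypothetical, simplified mediation sketch: feeling moved -> reflection -> engagement.
# Simulated data and plain OLS; not the authors' actual analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 500
moved = rng.normal(size=n)                                         # predictor (feeling moved)
reflection = 0.6 * moved + rng.normal(size=n)                      # mediator (reflection)
engagement = 0.5 * reflection + 0.1 * moved + rng.normal(size=n)   # outcome (engagement)

def ols(y, *predictors):
    """OLS coefficients for y ~ intercept + predictors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(reflection, moved)[1]              # moved -> reflection
b = ols(engagement, moved, reflection)[2]  # reflection -> engagement, controlling for moved
print("indirect effect (a * b):", round(a * b, 3))
```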

In the second study, the researchers repeated the survey online with a different sample of 877 participants following the same theme night. The results were largely consistent. Again, those who watched the movie felt more moved, thought more about the issue, and were more engaged. In this study, feeling moved was also linked to more frequent interpersonal discussion.

The third study examined the theme night about physician-assisted suicide. Over 1,000 people took part in the online survey. As with the earlier studies, viewers who watched the movie reported being emotionally affected and more reflective. These experiences were linked to higher interest in the topic, greater perceived knowledge, and a higher likelihood of discussing the issue or participating politically. Watching the movie also predicted stronger interest in the subsequent political talk show.

Across all three studies, the researchers found that emotional and reflective experiences were key pathways leading from entertainment to political engagement. People who felt moved by the movies were more likely to think about the issues they portrayed. These thoughts were, in turn, connected to learning more about the issue, talking with others, and taking or considering political action.

The findings suggest that serious entertainment can function as a catalyst, helping viewers process complex social issues and motivating them to become more engaged citizens.

“We found that moving and thought-provoking entertainment can have politically mobilizing effects, including issue interest, political participation, information seeking, learning, and discussing the issue with others,” Bartsch told PsyPost. “This is interesting because entertainment often gets a bad rap, as superficial, escapist pastime. Our findings suggest that it depends on the type of entertainment and the thoughts and feelings it provokes. Some forms of entertainment, it seems, can make a valuable complementary contribution to political discourse, in particular for audiences that rarely consume traditional news.”

Although the findings were consistent across different samples and topics, the authors note some limitations. Most importantly, the studies were correlational, meaning they cannot establish that the movies directly caused people to seek information or take political action. It is possible that people who are already interested in politics are more likely to watch such films and respond emotionally to them.

The researchers also caution that while theme nights seem to offer an effective combination of entertainment and information, these findings might not easily transfer to other types of media or digital platforms. Watching a movie on television with millions of others at the same time may create a shared cultural moment that is less common in today’s fragmented media landscape.

“Our findings cannot be generalized to all forms of entertainment, of course,” Bartsch noted. “Many entertainment formats are apolitical ‘feel-good’ content – which is needed for mood management as well. What is more concerning is that entertainment can also be instrumentalized to spread misinformation, hate and discrimination.”

Future studies could use experimental methods to better isolate cause and effect, and could also explore how similar effects might occur with streaming platforms or social media. Researchers might also investigate how hedonic, or lighter, forms of entertainment interact with political content, and how emotional reactions unfold over time after watching a movie.

“Our study underscores the value of ‘old school’ media formats like television theme nights that can attract large audiences and provide input for shared media experiences and discussions,” Bartsch said. “With the digital transformation of media, however, it is important to explore how entertainment changes in the digital age. For example, we are currently studying parasocial opinion leadership on social media and AI generated content.”

The study, “Eudaimonic Entertainment Experiences of TV Theme Nights and Their Relationships With Political Information Processing and Engagement,” was authored by Frank M. Schneider, Anne Bartsch, Larissa Leonhard, and Anea Meinert.
