

Linking personal identity to political issues predicts a preference for extreme candidates

26 December 2025 at 23:00

A recent study published in the Journal of Experimental Social Psychology suggests that the rising popularity of extreme political candidates may be driven by how voters link their personal identities to their political opinions. The research provides evidence that when people feel an issue defines who they are as individuals, they tend to adopt more radical positions and favor politicians who do the same.

The researchers conducted this series of investigations to explore the psychological reasons why voters might prefer extreme candidates over moderate ones from their own party. Previous explanations have focused on structural factors like the way primary elections are organized or changes in the pool of people running for office.

But the authors behind the new research sought to better understand whether a voter’s internal connection to an issue is a significant factor. They focused on a concept called identity relevance, which is the degree to which an attitude signals to others and to oneself the kind of person someone is or aspires to be.

“Elected officials in the United States are increasingly extreme. The ideological extremity of members of Congress from both parties has steadily grown since the 1970s, reaching a 50-year high in 2022,” said study author Mohamed Hussein of Columbia University.

“State legislatures show similar trends. A recent analysis of more than 84,000 candidates running for state office revealed that extreme candidates are winning at higher rates than at any time in the last 30 years. We were interested in understanding why extreme candidates are increasingly elected.”

“So far, research in this area has focused on structural factors (e.g., the structure of primary elections),” Hussein explained. “In our work, we wanted to pivot the conversation to more psychological factors. Specifically, we tested if the identity relevance of people’s attitudes causes them to be drawn to extreme candidates.”

The researchers conducted a series of studies to test their hypothesis. In the first study, 399 participants who identified as Democrats read about a fictional candidate named Sam Becker who was running for a seat in the House of Representatives. Some participants read that Becker held moderate views on climate change, while others read that he held extreme views. The researchers measured how much the participants felt their own attitudes on climate change were relevant to their identity.

The results suggest that as identity relevance increased, the participants reported having more extreme personal views on the issue. Those with high identity relevance showed a preference for the extreme version of Sam Becker and a dislike for the moderate version. This study provides initial evidence that the more someone sees an issue as a reflection of their character, the more they favor radical politicians.

The second study involved 349 participants and used a more complex choice task to see if these patterns held across different topics. Participants were shown pairs of candidates with varying ages, genders, and professional backgrounds. One candidate in each pair held a moderate position on a social issue, while the other held an extreme position.

The researchers tested five separate issues: abortion, gun control, immigration, climate change, and transgender rights. The data suggests that across all these topics, higher identity relevance predicted a greater likelihood of choosing the extreme candidate. Additionally, participants with high identity relevance reported being more receptive to hearing the views of the extreme candidate.
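The article does not spell out the statistical model behind this choice task, but a binary candidate choice of this kind is typically analyzed with logistic regression. The sketch below is a minimal, hypothetical version of such an analysis; the column names (chose_extreme, identity_relevance, issue) and the data file are illustrative stand-ins, not the study's actual variables.

```python
# Minimal sketch of how a binary candidate-choice analysis is typically run.
# Column names and the data file are hypothetical, not from the published study.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("candidate_choices.csv")  # hypothetical data file

# chose_extreme: 1 if the participant picked the extreme candidate, else 0.
# C(issue) adds fixed effects for the five topics (abortion, gun control, ...).
model = smf.logit("chose_extreme ~ identity_relevance + C(issue)", data=df).fit()
print(model.summary())

# A positive coefficient on identity_relevance would indicate that higher
# identity relevance predicts a greater probability of choosing the extreme
# candidate, consistent with the pattern the authors report.
```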

In the third study, the researchers aimed to see if they could change a person’s identity relevance by shifting their perception of what their political party valued. They recruited 584 Democrats and asked them to read a news article about the priorities of the Democratic National Committee. One group read that the party was prioritizing corn subsidies, a topic that is generally not a core identity issue for most voters.

The results suggest that when participants believed their party viewed corn subsidies as a priority, they began to see the issue as more relevant to their own identity. This shift in identity relevance led them to adopt more extreme personal views on the topic. Consequently, these participants showed a higher preference for candidates who supported radical changes to agricultural subsidies.

This experiment also allowed the researchers to rule out other factors that might influence candidate choice. They measured whether participants felt more certain, more moral, or more knowledgeable about the issue. The analysis provides evidence that identity relevance influences candidate choice primarily through its effect on attitude extremity rather than through these other psychological states.
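The mediation logic here can be made concrete with a small sketch. Assuming a dataset with hypothetical columns identity_relevance (predictor), extremity (mediator), and preference (outcome), a bootstrapped indirect effect can be estimated roughly as follows; the published analysis may use different software and specifications.

```python
# Minimal sketch of a mediation test (identity relevance -> attitude
# extremity -> candidate preference). All names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3.csv")  # hypothetical data file
rng = np.random.default_rng(0)

def indirect_effect(data):
    # Path a: predictor -> mediator; path b: mediator -> outcome,
    # controlling for the predictor. Indirect effect = a * b.
    a = smf.ols("extremity ~ identity_relevance", data=data).fit().params["identity_relevance"]
    b = smf.ols("preference ~ extremity + identity_relevance", data=data).fit().params["extremity"]
    return a * b

# Bootstrap a confidence interval for the indirect effect.
boots = [indirect_effect(df.sample(frac=1, replace=True, random_state=int(s)))
         for s in rng.integers(0, 2**31, size=2000)]
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```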

The fourth study sought to test whether this effect can occur even when people have no factual information about a topic. The researchers presented 752 participants with a fictitious ballot initiative called Prop DW. The participants were told nothing about what the proposal would actually do.

Some participants were told their political party had taken a position on Prop DW, while others were told the party had no stance. Even without knowing the details of the policy, those who believed their party had a stance reported that Prop DW felt more identity-relevant. These individuals developed more extreme attitudes and favored candidates who took extreme positions on the made-up issue.

This finding suggests that the psychological pull toward extremity is not necessarily based on a deep understanding of policy. Instead, it seems to be a reaction to the social and personal significance assigned to the topic. It also suggests that people can form strong, radical opinions on matters they do not fully understand if they feel those matters define their social group.

Studies five and six moved away from group dynamics to see if individual reflection could trigger the same results. The researchers used a digital tool that allowed 514 participants to have a live conversation with a large language model. In one condition, the computer program was instructed to help participants reflect on how their views on corn subsidies related to their core values and sense of self.
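The article does not identify the tool or model the researchers used, so the following is a purely hypothetical illustration of how a reflection condition like the one described might be configured with a generic chat-model API; the model name and prompt wording are assumptions.

```python
# Hypothetical illustration only: the article does not identify the tool or
# model the researchers used. This sketch shows how a reflection condition
# like the one described could be set up with a generic chat-model API.
from openai import OpenAI

client = OpenAI()  # assumes an API key in the environment

REFLECT_PROMPT = (
    "You are guiding a short reflection exercise. Help the participant "
    "explore how their views on corn subsidies relate to their core values "
    "and sense of self. Ask open-ended follow-up questions; do not argue "
    "for any position."
)

def reflection_turn(history: list[dict], user_message: str) -> str:
    """Send one turn of the live conversation and return the model's reply."""
    messages = [{"role": "system", "content": REFLECT_PROMPT}, *history,
                {"role": "user", "content": user_message}]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```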

This reflection process led to a measurable increase in identity relevance. Participants who reflected on their identity reported a higher desire for clarity, which means they wanted their opinions to be certain and distinct. This desire for clarity pushed them toward more extreme views and a higher probability of choosing an extreme candidate.

The final study, involving 807 participants, replicated this effect with a more rigorous comparison group. In this version, the control group also discussed corn subsidies with the language model but was not prompted to think about their personal identity. The results provide evidence that only the participants who specifically linked the issue to their identity showed a significant shift toward extremity.

The researchers note that this effect was symmetric across political parties. Both Democrats and Republicans showed the same pattern of moving toward extreme candidates when an issue felt relevant to their identity. This suggests that the psychological mechanism is a general feature of human behavior rather than a trait specific to one side of the political aisle.

“Across six studies with over 3,000 participants, we found that the more people see their political attitudes as tied to identity, the more likely they are to choose extreme, versus moderate, candidates,” Hussein told PsyPost. “The more central fighting climate change felt to the identity of participants, the more they liked the extreme Sam and the more they disliked the moderate Sam. Put simply, identity relevance increased liking of extreme candidates but decreased liking of moderate ones.”

“These results were remarkably robust. Across studies we tested a range of issues including climate change, abortion, immigration, transgender rights, gun control, and corn subsidies. We even created a fictitious issue (“Prop DW”) that participants had no information about. Across issues, we found that when we framed the issue as central to their identity, people formed more extreme views on it and then preferred extreme candidates who promised bolder action. Even on a made-up issue, identity relevance pushed people toward extremes.”

“These results were also robust regardless of how we talked about candidate extremity,” Hussein continued. “In addition to having candidates describe themselves as extreme, we also signaled extremity in different ways. In some studies, the candidates endorsed different policies, some that were moderate and others that were extreme.”

“In other studies, we held the policy constant but changed the level of action that candidates supported (e.g., increasing a subsidy by a small amount compared to a large amount). Lastly, in some studies, we explicitly labeled candidates as ‘moderate’ or ‘extreme’ on an issue. Regardless of how candidate extremity was described to participants, the results held.”

But there are some limitations and potential misinterpretations to consider. One limitation is that the studies were conducted within the specific political context of the United States. The American two-party system might encourage a greater need for distinct, polarized identities compared to countries with multiple competing parties.

Future research could explore whether these findings apply to people in other nations with different electoral structures. It would also be useful to investigate whether certain personality types are more prone to linking their identity to political issues. Some individuals may naturally seek more self-definition through their opinions than others.

Another direction for future study involves finding ways to decrease political tension. If identity relevance is a primary driver of the preference for extreme candidates, it suggests that finding ways to de-emphasize the personal significance of political stances might lead to more moderate dialogue. Interventions that help people feel secure in their identity without needing to hold radical opinions could potentially reduce social polarization.

“Politics has always been personal, but it’s becoming more identity-defining than ever,” Hussein said. “And when politics becomes identity-relevant, our research suggests that extremity gains in appeal. Illuminating this psychological process helps us understand today’s political landscape and provides a roadmap for how to change it. Our results suggest that if we can loosen the grip of identity on politics, the appeal of extreme candidates might start to wane.”

The study, “Why do people choose extreme candidates? The role of identity relevance,” was authored by Mohamed A. Hussein, Zakary L. Tormala, and S. Christian Wheeler.


Musical expertise is associated with specific cognitive and personality traits beyond memory performance

26 December 2025 at 21:00

Experienced musicians tend to possess an advantage in short-term memory for musical patterns and a small advantage for visual information, according to a large-scale international study. The research provides evidence that the memory benefit for verbal information is much smaller than previously thought, suggesting that some earlier findings may have overrepresented this link. These results, which stem from a massive collaborative effort involving 33 laboratories, were published in the journal Advances in Methods and Practices in Psychological Science.

The study was led by Massimo Grassi and a broad team of researchers who sought to address inconsistencies in past scientific literature. For many years, scientists have used musicians as a model for understanding how intense, long-term practice changes the brain and behavior. While many smaller studies suggested that musical training boosts various types of memory, these individual projects often lacked the statistical power to provide a reliable estimate of the effect.

The researchers aimed to establish a community-driven standard for future studies by recruiting a much larger group of participants than typical experiments in this field. They also wanted to explore whether other factors, such as general intelligence or personality traits, might explain why musicians often perform better on cognitive tests. By using a shared protocol across dozens of locations, the team intended to provide a more definitive answer regarding the scope of the musical memory advantage.

To achieve this goal, the research team recruited 1,200 participants across 15 different countries. This group consisted of 600 experienced musicians and 600 nonmusicians who were matched based on their age, gender, and level of general education. The musicians in the study were required to have at least 10 years of formal training and be currently active in their practice.

The nonmusicians had no more than two years of training and had been musically inactive for at least five years. This strict selection process ensured that the two groups represented clear ends of the musical expertise spectrum. Each participant completed the same set of tasks in a laboratory setting to maintain consistency across the 33 different research units.

The primary measures included three distinct short-term memory tasks involving musical, verbal, and visuospatial stimuli. In the musical task, participants listened to a melody and then judged whether a second melody was identical or different. The verbal task required participants to view a sequence of digits on a screen and recall them in the correct order.

For the visuospatial task, participants watched dots appear in a grid and then had to click on those positions in the sequence they were shown. Additionally, the researchers measured fluid intelligence using the Raven Advanced Progressive Matrices and crystallized intelligence through a vocabulary test. They also assessed executive functions with a letter-matching task and collected data on personality and socioeconomic status.

The researchers found that musicians performed significantly better than nonmusicians on the music-related memory task. This difference was large, which suggests that musical expertise provides a substantial benefit when dealing with information within a person’s specific domain of skill. This finding aligns with the idea that long-term training makes individuals much more efficient at processing familiar types of data.

In contrast, the advantage for verbal memory was very small. This suggests that the benefits of music training do not easily transfer to the memorization of words or numbers. The researchers noted that some previous studies showing a larger verbal benefit may have used auditory tasks, where musicians could use their superior hearing skills to gain an edge.

For visuospatial memory, the study found a small but statistically significant advantage for the musicians. This provides evidence that musical training might have a slight positive association with memory for locations and patterns. While this effect was not as large as the music-specific memory gain, it suggests a broader cognitive difference between the two groups.

The statistical models used by the researchers revealed that general intelligence and executive functions were consistent predictors of memory performance across all tasks. When these factors were taken into account, the group difference for verbal memory largely disappeared. This suggests that the minor verbal advantage seen in musicians may simply reflect their slightly higher average scores on general intelligence tests.
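That kind of covariate adjustment can be illustrated with a minimal sketch. Assuming hypothetical columns verbal_memory, musician (a 0/1 group indicator), fluid_iq, and exec_fn, the comparison of raw and adjusted group differences might look like this; it is not the authors' actual code.

```python
# Minimal sketch of the covariate adjustment described above.
# Column names and the data file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("multilab_data.csv")  # hypothetical data file

# Raw group difference in verbal short-term memory (musician is 0/1).
raw = smf.ols("verbal_memory ~ musician", data=df).fit()

# Same contrast after adjusting for fluid intelligence and executive function.
adjusted = smf.ols("verbal_memory ~ musician + fluid_iq + exec_fn", data=df).fit()

# If the musician coefficient shrinks toward zero in the adjusted model, the
# verbal advantage is largely accounted for by the covariates, as reported.
print(raw.params["musician"], adjusted.params["musician"])
```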

Musicians also tended to score higher on the personality trait of open-mindedness. This trait describes a person’s curiosity and willingness to engage with new experiences or complex ideas. The study suggests that personality and family background are important variables that often distinguish those who pursue long-term musical training from those who do not.

Data from the study also indicated that musicians often come from families with a higher socioeconomic status. This factor provides evidence that access to resources and a stimulating environment may play a role in both musical achievement and cognitive development. These background variables complicate the question of whether music training directly causes better memory or if high-performing individuals are simply more likely to become musicians.

As with all research, there are some limitations. Because the study was correlational, it cannot confirm that musical training is the direct cause of the memory advantages. It remains possible that children with naturally better memory or higher intelligence are more likely to enjoy music lessons and stick with them for over a decade.

Additionally, the study focused on young adults within Western musical cultures. The results might not apply to children, elderly individuals, or musicians trained in different cultural traditions. Future research could expand on these findings by tracking individuals over many years to see how memory changes as they begin and continue their training.

The team also noted that the study only measured short-term memory. Other systems, such as long-term memory or the ability to manipulate information in the mind, were not the primary focus of this specific experiment. Future collaborative projects could use similar large-scale methods to investigate these other areas of cognition.

The multilab approach utilized here helps correct for the publication bias that often favors small studies with unusually large effects. By pooling data from many locations, the researchers provided a more realistic and nuanced view of how expertise relates to general mental abilities. This work sets a new benchmark for transparency and reliability in the field of music psychology.

Ultimately, the study suggests that while musicians do have better memory, the advantage is most prominent when they are dealing with music itself. The idea that learning an instrument provides a major boost to all types of memory appears to be an oversimplification. Instead, the relationship between music and the mind is a complex interaction of training, personality, and general cognitive traits.

The study, “Do Musicians Have Better Short-Term Memory Than Nonmusicians? A Multilab Study,” was authored by Massimo Grassi, Francesca Talamini, Gianmarco Altoè, Elvira Brattico, Anne Caclin, Barbara Carretti, Véronique Drai-Zerbib, Laura Ferreri, Filippo Gambarota, Jessica Grahn, Lucrezia Guiotto Nai Fovino, Marco Roccato, Antoni Rodriguez-Fornells, Swathi Swaminathan, Barbara Tillmann, Peter Vuust, Jonathan Wilbiks, Marcel Zentner, Karla Aguilar, Christ B. Aryanto, Frederico C. Assis Leite, Aíssa M. Baldé, Deniz Başkent, Laura Bishop, Graziela Kalsi, Fleur L. Bouwer, Axelle Calcus, Giulio Carraturo, Victor Cepero-Escribano, Antonia Čerič, Antonio Criscuolo, Léo Dairain, Simone Dalla Bella, Oscar Daniel, Anne Danielsen, Anne-Isabelle de Parcevaux, Delphine Dellacherie, Tor Endestad, Juliana L. d. B. Fialho, Caitlin Fitzpatrick, Anna Fiveash, Juliette Fortier, Noah R. Fram, Eleonora Fullone, Stefanie Gloggengießer, Lucia Gonzalez Sanchez, Reyna L. Gordon, Mathilde Groussard, Assal Habibi, Heidi M. U. Hansen, Eleanor E. Harding, Kirsty Hawkins, Steffen A. Herff, Veikka P. Holma, Kelly Jakubowski, Maria G. Jol, Aarushi Kalsi, Veronica Kandro, Rosaliina Kelo, Sonja A. Kotz, Gangothri S. Ladegam, Bruno Laeng, André Lee, Miriam Lense, César F. Lima, Simon P. Limmer, Chengran K. Liu, Paulina d. C. Martín Sánchez, Langley McEntyre, Jessica P. Michael, Daniel Mirman, Daniel Müllensiefen, Niloufar Najafi, Jaakko Nokkala, Ndassi Nzonlang, Maria Gabriela M. Oliveira, Katie Overy, Andrew J. Oxenham, Edoardo Passarotto, Marie-Elisabeth Plasse, Herve Platel, Alice Poissonnier, Neha Rajappa, Michaela Ritchie, Italo Ramon Rodrigues Menezes, Rafael Román-Caballero, Paula Roncaglia, Farrah Y. Sa’adullah, Suvi Saarikallio, Daniela Sammler, Séverine Samson, E. G. Schellenberg, Nora R. Serres, L. R. Slevc, Ragnya-Norasoa Souffiane, Florian J. Strauch, Hannah Strauss, Nicholas Tantengco, Mari Tervaniemi, Rachel Thompson, Renee Timmers, Petri Toiviainen, Laurel J. Trainor, Clara Tuske, Jed Villanueva, Claudia C. von Bastian, Kelly L. Whiteford, Emily A. Wood, Florian Worschech, and Ana Zappa.


Some men may downplay climate change risks to avoid appearing feminine

25 December 2025 at 21:00

New research provides evidence that men who are concerned about maintaining a traditional masculine image may be less likely to express concern about climate change. The findings suggest that acknowledging environmental problems is psychologically linked to traits such as warmth and compassion. These traits are stereotypically associated with femininity in many cultures. Consequently, men who feel pressure to prove their manhood may avoid environmentalist attitudes to protect their gender identity. The study was published in the Journal of Environmental Psychology.

Scientific consensus indicates that climate change is occurring and poses significant risks to global stability. Despite this evidence, public opinion remains divided. Surveys consistently reveal a gender gap regarding environmental attitudes. Men typically express less concern about climate change than women do. Michael P. Haselhuhn, a researcher at the University of California, Riverside, sought to understand the psychological drivers behind this disparity.

Haselhuhn conducted this research to investigate why within-gender differences exist regarding climate views. Past studies have often focused on political ideology or a lack of scientific knowledge as primary explanations. Haselhuhn proposed that the motivation to adhere to gender norms plays a significant but overlooked role. He based his hypothesis on the theory of precarious manhood.

Precarious manhood theory posits that manhood is viewed socially as a status that is difficult to earn and easy to lose. Unlike womanhood, which is often treated as a biological inevitability, manhood must be proven through action. This psychological framework suggests that men experience anxiety about failing to meet societal standards of masculinity. They must constantly reinforce their status and avoid behaviors that appear feminine.

Socialization often expects women to be communal, caring, and warm. In contrast, men are often expected to be agentic, tough, and emotionally reserved. Haselhuhn theorized that because caring for the environment involves communal concern, it signals warmth. Men who are anxious about their social status might perceive this signal as a threat. They may reject climate science not because they misunderstand the data, but because they wish to avoid seeming “soft.”

The researcher began with a preliminary test to establish whether environmental concern is indeed viewed as a feminine trait. He recruited 450 participants from the United States through an online platform. These participants read a short scenario about a male university student named Adam. Adam was described as an undergraduate majoring in Economics who enjoyed running.

In the control condition, Adam was described as active in general student issues. In the experimental condition, Adam was described as concerned about climate change and active in a “Save the Planet” group. After reading the scenario, participants rated Adam on various personality traits. Haselhuhn specifically looked at ratings for warmth, caring, and compassion.

The results showed that when Adam was described as concerned about climate change, he was perceived as significantly warmer than when he was interested in general student issues. Participants viewed the environmentalist version of Adam as possessing more traditionally feminine character traits. This initial test confirmed that expressing environmental concern can alter how a man’s gender presentation is perceived by others.

Following this pretest, Haselhuhn analyzed data from the European Social Survey to test the hypothesis on a large scale. This survey included responses from 40,156 individuals across multiple European nations. The survey provided a diverse sample that allowed the researcher to look for broad patterns in the general population.

The survey asked participants to rate how important “being a man” was to their self-concept if they were male. It asked women the same regarding “being a woman.” It also measured three specific climate attitudes. These included belief in human causation, feelings of personal responsibility, and overall worry about climate change.

Haselhuhn found a negative relationship between masculinity concerns and climate engagement. Men who placed a high importance on being a man were less likely to believe that climate change is caused by human activity. They also reported feeling less personal responsibility to reduce climate change. Furthermore, these men expressed lower levels of worry about the issue.

A similar pattern appeared for women regarding the importance of being a woman. However, statistical analysis confirmed that the effect of gender role concern on climate attitudes was significantly stronger for men. This aligns with the theory that the pressure to maintain one’s gender status is more acute for men due to the precarious nature of manhood.

To validate these findings with more precise psychological tools, Haselhuhn conducted a second study with 401 adults in the United States. The measure used in the European survey was a single question, which might have lacked nuance. In this second study, men completed the Masculine Gender Role Stress scale.

This scale assesses how much anxiety men feel in situations that challenge traditional masculinity. Items include situations such as losing in a sports competition or admitting fear. Women completed a parallel scale regarding feminine gender stress. This scale includes items about trying to excel at work while being a good parent. Climate attitudes were measured using a standard scale assessing conviction that climate change is real and concern about its impact.

The results from the second study replicated the findings from the large-scale survey. Men who scored higher on masculinity stress expressed significantly less concern about climate change. This relationship held true regardless of the participants’ political orientation. Haselhuhn found no relationship between gender role stress and climate attitudes among women in this sample. This suggests that the pressure to adhere to gender norms specifically discourages men from engaging with environmental issues.

A third study was conducted to pinpoint the underlying mechanism. Haselhuhn recruited 482 men from the United States for this final experiment. He sought to confirm that the fear of appearing “warm” or feminine was the specific driver of the effect. Participants completed the same masculinity stress scale and climate attitude measures used in the previous study.

They also completed a task where they categorized various personality traits. Participants rated whether traits such as “warm,” “tolerant,” and “sincere” were expected to be more characteristic of men or women. This allowed the researcher to see how strongly each participant associated warmth with femininity.

Haselhuhn found that men with high masculinity concerns were generally less concerned about climate change. However, this effect depended on their beliefs about warmth. The negative relationship between masculinity concerns and climate attitudes was strongest among men who viewed warmth as a distinctly feminine characteristic.

For men who did not strongly associate warmth with women, the pressure to be masculine did not strongly predict their views on climate change. This provides evidence that the avoidance of feminine stereotypes is a key reason why insecure men distance themselves from environmentalism. They appear to regulate their attitudes to avoid signaling traits that society assigns to women.
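A moderation pattern like this is usually tested with an interaction term in a regression. The following minimal sketch assumes hypothetical columns climate_concern, masc_stress, and warmth_feminine; it illustrates the form of the test rather than the study's exact analysis.

```python
# Minimal sketch of a moderation test matching the pattern described above.
# Column names and the data file are hypothetical stand-ins.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study3_men.csv")  # hypothetical data file

# Interaction term: does the masculinity-stress effect on climate concern
# depend on how strongly a man codes "warmth" as feminine?
model = smf.ols("climate_concern ~ masc_stress * warmth_feminine", data=df).fit()
print(model.summary())

# A negative masc_stress:warmth_feminine coefficient would mean the
# masculinity-stress penalty on climate concern grows as warmth is seen as
# more distinctly feminine, matching the reported result.
```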

These findings have implications for how climate change communication is framed. If environmentalism is perceived as an act of caring and compassion, it may continue to alienate men who are anxious about their gender status. Haselhuhn notes that the effect sizes in the study were small but consistent. This means that while gender concerns are not the only factor driving climate denial, they are a measurable contributor.

The study has some limitations. It relied on self-reported attitudes rather than observable behaviors. It is possible that the pressure to conform to masculine norms would be even higher in public settings where men are watched by peers. Men might be willing to express concern in an anonymous survey but reject those views in a group setting to maintain status.

Future research could examine whether reframing environmental action affects these attitudes. Describing climate action in terms of protection, courage, or duty might make the issue more palatable to men with high masculinity concerns. Additionally, future work could investigate whether affirming a man’s masculinity in other ways reduces his need to reject environmental concern. The current data indicates that for many men, the desire to be seen as a “real man” conflicts with the desire to save the planet.

The study, “Man enough to save the planet? Masculinity concerns predict attitudes toward climate change,” was authored by Michael P. Haselhuhn.

Perceived spiritual strength of a group drives extreme self-sacrifice through collective narcissism

25 December 2025 at 19:00

New research indicates that perceiving one’s social group as possessing inner spiritual strength can drive members to extreme acts of self-sacrifice. This willingness to suffer for the group appears to be fueled by collective narcissism, a belief that the group is exceptional but underappreciated by others. The findings suggest that narratives of spiritual power may inadvertently foster dangerous forms of group entitlement. The study was published in the Personality and Social Psychology Bulletin.

History is replete with examples of smaller groups overcoming larger adversaries through sheer willpower. Social psychologists have termed this perceived inner strength “spiritual formidability.” This concept refers to the conviction in a cause and the resolve to pursue it regardless of material disadvantages. Previous observations of combatants in conflict zones have shown that spiritual formidability is often a better predictor of the willingness to fight than physical strength or weaponry.

The authors of the current study sought to understand the psychological mechanisms behind this phenomenon. They aimed to determine why a perception of spiritual strength translates into a readiness to die or suffer for a group. They hypothesized that this process is not merely a result of loyalty or love for the group. Instead, they proposed that it stems from a demand for symbolic recognition.

The researchers suspected that viewing one’s group as spiritually powerful feeds into collective narcissism. Collective narcissism differs from simple group pride or satisfaction. It involves a defensive form of attachment where members believe their group possesses an undervalued greatness that requires external validation. The study tested whether this specific type of narcissistic belief acts as the bridge between spiritual formidability and self-sacrifice.

“Previous research has shown that perceiving one’s group as spiritually strong—deeply committed to its values—predicts a willingness to fight and self-sacrifice, but the psychological mechanisms behind this link were still unclear,” said study author Juana Chinchilla, an assistant professor of social psychology at the Universidad Nacional de Educación a Distancia (UNED) in Spain.

“We were particularly interested in understanding why narratives of moral or spiritual strength can motivate extreme sacrifices, especially in real-world contexts marked by conflict and behavioral radicalization. This study addresses that gap by identifying collective narcissism as a key mechanism connecting spiritual formidability to extreme self-sacrificial intentions.”

The research team conducted a series of five investigations to test their hypothesis. They began with a preliminary online survey of 420 individuals from the general population in Spain. Participants completed measures assessing their satisfaction with their nation and their levels of national collective narcissism. They also rated their willingness to engage in extreme actions to defend the country, such as going to jail or dying.

A central component of this preliminary study was the inclusion of ingroup satisfaction as a control variable. Ingroup satisfaction represents a secure sense of pride and happiness with one’s membership in a group. It is distinct from the defensive and resentful nature of collective narcissism. By statistically controlling for this variable, the researchers aimed to isolate the specific effects of narcissism.

The data from this initial survey provided a baseline for the researchers’ theory. The results showed that collective narcissism predicted a willingness to sacrifice for the country even after accounting for the influence of ingroup satisfaction.

“One striking finding was how reliably collective narcissism explained self-sacrificial intentions even when controlling for more secure forms of group attachment, such as ingroup satisfaction,” Chinchilla told PsyPost. “This suggests that extreme sacrifice is not always driven by genuine concern for the group’s well-being, but sometimes by defensive beliefs about the group’s greatness and lack of recognition. We were also surprised by how easily these processes could be activated through shared narratives about spiritual strength.”

Following this preliminary work, the researchers gained access to high-security penitentiary centers across Spain for two field studies. Study 1a involved 70 male inmates convicted of crimes related to membership in violent street gangs. Study 1b focused on 47 male inmates imprisoned for organized property crimes and membership in delinquent bands. These populations were selected because they are known for engaging in costly actions to protect their groups.

In these prison studies, participants used a dynamic visual measure to rate their group’s spiritual formidability. They were shown an image of a human body and adjusted a slider to change its size and muscularity. This visual metaphor represented the inner strength and conviction of their specific gang or band. They also completed questionnaires measuring collective narcissism and their willingness to make sacrifices, such as enduring longer prison sentences or cutting off family contact.

The findings from the prison samples were consistent with the initial hypothesis. Inmates who perceived their gang or band as spiritually formidable reported higher levels of collective narcissism. This sense of underappreciated greatness was statistically associated with a higher willingness to make severe personal sacrifices. Mediation analysis indicated that collective narcissism explains why spiritual formidability leads to self-sacrifice.

The researchers then extended their investigation to a sample of 88 inmates convicted of jihadist terrorism or proselytizing in prison. This sample included individuals involved in major attacks and thwarted plots. The procedure mirrored the previous studies but focused on the broader ideological group of Muslims rather than a specific criminal band. Participants rated the spiritual formidability of Muslims and their willingness to sacrifice for their religious ideology.

The researchers conducted additional statistical analyses to ensure the robustness of these findings. These models explicitly controlled for the gender of the participants. This step ensured that the observed effects were not simply due to differences in how men and women might approach sacrifice or group perception.

The results from the jihadist sample aligned with those from the street gangs. Perceptions of spiritual strength within the religious community were associated with higher collective narcissism regarding the faith. This defensive pride predicted a greater readiness to suffer for the ideology. The relationship remained significant even when controlling for gender. The study demonstrated that the psychological mechanism operates for large-scale ideological values just as it does for small, cohesive gangs.

Finally, the researchers conducted an experimental study with 457 Spanish citizens to establish causality. This study took place during the early stages of the COVID-19 pandemic, a time of heightened threat and social uncertainty. The researchers provided false feedback to a portion of the participants. This feedback stated that most Spaniards viewed their country as possessing high spiritual formidability.

Participants in the control group received no information regarding how other citizens viewed the nation. All participants then completed measures of collective narcissism and willingness to sacrifice to defend the country against the pandemic. The manipulation was designed to test if simply hearing about the group’s spiritual strength would trigger the proposed psychological chain reaction.

The experiment confirmed the causal role of spiritual formidability. Participants led to believe their country was spiritually formidable scored higher on measures of collective narcissism. They also expressed a greater willingness to endure extreme hardships to fight the pandemic. Statistical analysis confirmed that the manipulation influenced self-sacrifice specifically by boosting collective narcissism.

The study provides evidence that narratives of spiritual strength can have a double-edged nature. While such beliefs can foster cohesion, they can also trigger a sense of entitlement and resentment toward those who do not recognize the group’s greatness. This defensive mindset appears to be a key driver of extreme pro-group behavior.

“Our findings suggest that believing one’s group is spiritually formidable can motivate extreme self-sacrifice not only through loyalty or love, but also through a sense that the group is undervalued and deserves greater recognition,” Chinchilla explained. “This illustrates that people may engage in risky or extreme progroup actions to achieve symbolic recognition. Importantly, it also highlights how seemingly positive narratives about spiritual strength can have unintended and potentially dangerous consequences.”

However, “it would be a mistake to interpret spiritual formidability as inherently dangerous or as a direct cause of violence. On its own, perceiving the ingroup as morally committed and spiritually strong can promote loyalty, trust, and cohesion. The problematic consequences may arise only under severe threat or when perceptions of spiritual formidability become intertwined with collective narcissism.”

Future research is needed to determine when exactly these beliefs turn into narcissistic entitlement. The authors note that a key challenge is clarifying the boundary conditions under which spiritual formidability gives rise to collective narcissism. This distinction might depend on whether individuals see violence as morally acceptable.

“We plan to examine whether similar mechanisms operate in non-violent movements, such as environmental or human rights activism, where strong moral commitment is critical,” Chinchilla said. “Another important next step is identifying interventions that can decouple spiritual formidability from collective narcissism, for example by promoting narratives that frame cooperation and peace as markers of true moral strength.”

“One of the strengths of this research is the diversity of the samples, including populations that are rarely accessible in psychological research. Studying these processes in real-world, high-stakes contexts helps bridge the gap between laboratory findings and the dynamics underlying radicalization, intergroup conflict, and extreme collective behavior.”

The study, “Spiritual Formidability Predicts the Will to Self-Sacrifice Through Collective Narcissism,” was authored by Juana Chinchilla and Angel Gomez.

New research frames psychopathy as a potential survival adaptation to severe early adversity

25 December 2025 at 15:00

New research suggests that specific personality traits may amplify the way childhood adversity shapes an individual’s approach to life. A study published in the journal Personality and Individual Differences provides evidence that subclinical psychopathy strengthens the link between childhood trauma and “fast” life history strategies. The findings indicate that for those who have experienced severe early difficulties, certain dark personality traits may function as adaptive mechanisms for survival.

Psychologists use a framework called Life History Theory to explain how people allocate their energy. This theory proposes that all living organisms must make trade-offs between investing in their own growth and investing in reproduction. These trade-offs create a spectrum of strategies that range from “fast” to “slow.”

A fast life history strategy typically emerges in environments that are harsh or unpredictable. Individuals with this orientation tend to prioritize immediate rewards and reproduction over long-term planning. They often engage in riskier behaviors and invest less effort in long-term relationships. This approach makes evolutionary sense when the future is uncertain.

Conversely, a slow life history strategy is favored in stable and safe environments. This approach involves delaying gratification and investing heavily in personal development and long-term goals. It also involves a focus on building deep, enduring social and family bonds.

The researchers also examined the “Dark Triad” of personality. This cluster includes three distinct traits: narcissism, Machiavellianism, and psychopathy. Narcissism involves grandiosity and a need for admiration. Machiavellianism is characterized by manipulation and strategic calculation. Psychopathy involves high impulsivity and a lack of empathy.

The research team, led by Vlad Burtaverde from the University of Bucharest, sought to understand how these dark traits interact with early life experiences. They hypothesized that these traits might help individuals adapt to traumatic environments by accelerating their life strategies. The study aimed to determine if the Dark Triad traits or childhood socioeconomic status moderate the relationship between trauma and life outcomes.

To investigate this, the researchers recruited 270 undergraduate students. The participants had an average age of approximately 20 years. The majority of the sample was female. The participants completed a series of online questionnaires designed to measure their childhood experiences and current personality traits.

The Childhood Trauma Questionnaire assessed exposure to emotional, physical, and sexual abuse, as well as neglect. The Short Dark Triad measure evaluated levels of narcissism, Machiavellianism, and psychopathy. The High-K Strategy Scale assessed life history strategies by asking about health, social capital, and future planning. Participants also answered questions regarding their family’s financial situation during their childhood.

The results showed that participants who reported higher levels of childhood trauma were more likely to exhibit fast life history strategies. These individuals also tended to report lower childhood socioeconomic status. This aligns with the expectation that adverse environments encourage a focus on the present rather than the future.

Among the Dark Triad traits, subclinical narcissism showed a unique pattern. It was the only trait that had a statistically significant direct relationship with life history strategies. Specifically, higher narcissism was associated with slower life history strategies. This suggests that narcissism may function differently than the other dark traits.

The most significant finding involved subclinical psychopathy. The analysis revealed that psychopathy moderated the relationship between childhood trauma and fast life history strategies. For individuals with low levels of psychopathy, the link between trauma and a fast strategy was weaker. However, for those with high levels of psychopathy, the link was much stronger.

This means that psychopathy may act as a catalyst. It appears to amplify the effect of trauma, pushing the individual more aggressively toward a fast life strategy. The authors suggest this frames psychopathy as a “survival” trait. It helps the individual pursue immediate resources in a world they perceive as dangerous.
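One common way to probe such an interaction is to compute simple slopes, re-estimating the trauma effect at low and high levels of the moderator. The sketch below assumes hypothetical columns fast_lh, trauma, and psychopathy, and is illustrative only.

```python
# Minimal sketch of probing a trauma x psychopathy interaction with simple
# slopes at +/-1 SD of the moderator. All names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("life_history.csv")  # hypothetical data file

# Recenter the moderator at -1 SD and +1 SD, refit, and read the trauma
# coefficient: it equals the simple slope at that moderator level.
for label, shift in [("low psychopathy", -1), ("high psychopathy", +1)]:
    df["psych_c"] = df["psychopathy"] - (df["psychopathy"].mean()
                                         + shift * df["psychopathy"].std())
    fit = smf.ols("fast_lh ~ trauma * psych_c", data=df).fit()
    print(label, "slope of trauma:", round(fit.params["trauma"], 3))

# The reported moderation corresponds to a steeper trauma slope in the
# high-psychopathy refit than in the low-psychopathy one.
```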

In contrast, the researchers found that childhood socioeconomic status did not moderate this relationship. While growing up poor was linked to faster life strategies, it did not change how trauma impacted those strategies. This suggests that the psychological impact of trauma operates somewhat independently of financial resources.

These findings build upon a growing body of research linking environmental conditions to personality development. A global study by Peter Jonason and colleagues analyzed data from over 11,000 participants across 48 countries. They found that macro-level ecological factors, such as natural disasters and skewed sex ratios, predict national averages of Dark Triad traits. For instance, countries with more men than women tended to have higher levels of narcissism.

That global study suggested that these traits are not merely pathologies. They may be functional responses to broad ecological pressures. The current study by Burtaverde and colleagues zooms in from the national level to the individual level. It shows how personal history interacts with these traits to shape behavior.

Research by Lisa Bohon and colleagues provides further context regarding gender and environment. Their study of female college students found that a disordered home life predicted fast life history traits. They found that father absence and childhood trauma were strong predictors of psychopathy in women. These traits then mediated the relationship between childhood environment and mating effort.

The Bohon study highlighted that immediate family dynamics, or the “microsystem,” are powerful predictors of adult personality. This aligns with the Burtaverde study’s focus on childhood trauma. Both studies suggest that the “dark” traits serve a function in regulating reproductive effort and survival strategies.

Another study by Junwei Pu and Xiong Gan examined the social roots of these traits in adolescents. They found that social ostracism led to increased loneliness. This loneliness subsequently promoted the development of Dark Triad traits over time. Their work suggests that social isolation acts as a signal to the individual that the environment is hostile.

This hostility prompts the development of defensive personality traits. Psychopathy, in particular, was strongly connected to feelings of loneliness in their sample. This complements the Burtaverde finding that psychopathy strengthens the reaction to trauma. A person who feels rejected and traumatized may develop callousness as a protective shell.

David Pineda and his team investigated the specific role of parental discipline. They found that psychological aggression from parents was a unique predictor of psychopathy and sadism in adulthood. Severe physical assault was linked to Machiavellianism and narcissism. Their work emphasizes that specific types of mistreatment yield specific personality outcomes.

This nuance helps explain why the Burtaverde study found a link between general trauma and life strategies. The specific type of trauma likely matters. Pineda’s research suggests that psychological aggression may be particularly potent in fostering the traits that Burtaverde identified as moderators.

Finally, research by Jacob Dye and colleagues looked at the buffering effect of positive experiences. They found that positive childhood experiences could reduce psychopathic traits, but only up to a point. If a child faced severe adversity, positive experiences were no longer enough to prevent the development of dark traits.

This limitation noted by Dye supports the Burtaverde finding regarding the strength of the trauma-psychopathy link. In cases of high trauma, the “survival” mechanism of psychopathy appears to override other developmental pathways. The protective factors become less effective when the threat level is perceived as extreme.

Nevertheless, the authors of the new study note some limitations to their work. The reliance on self-reported data introduces potential bias. Participants may not accurately remember or report their childhood experiences. The sample consisted largely of female undergraduate students. This limits the ability to generalize the findings to the broader population or to men specifically.

Future research is needed to track these relationships over time. Longitudinal studies could help determine the direction of causality. It is possible that children with certain temperaments elicit different reactions from their environment. Understanding the precise timeline of these developments would require observing participants from childhood through adulthood.

The study, “Childhood trauma and life history strategies – the moderating role of childhood socio-economic status and the dark triad traits,” was authored by Vlad Burtaverde, Peter K. Jonason, Anca Minulescu, Bogdan Oprea, Șerban A. Zanfirescu, Ștefan-C. Ionescu, and Andreea M. Gheorghe.

Study finds little evidence of the Dunning-Kruger effect in political knowledge

24 December 2025 at 21:00

A new study suggests that the average person may be far more aware of their own lack of political knowledge than previously thought. Contrary to the popular idea that people consistently overestimate their competence, this research indicates that individuals with low political information generally admit they do not know much. These findings were published in Political Research Quarterly.

Political scientists have spent years investigating the gap between what citizens know and what they think they know. This gap is often attributed to the Dunning-Kruger effect. This psychological phenomenon occurs when people with low ability in a specific area overestimate their competence.

In their new study, Alexander G. Hall and Kevin B. Smith of the University of Nebraska sought to answer several unresolved questions regarding this phenomenon. They wanted to determine if receiving objective feedback could reduce overconfidence. The researchers also intended to see if the Dunning-Kruger effect remains stable over time or changes due to major events. The study utilized a natural experiment to test these ideas in a real-world educational setting.

“Kevin and I have had an ongoing interest in this question: if you make someone’s substantive knowledge salient, will they do a more accurate job of reporting it?” explained Hall, who is now a staff statistician for Creighton University’s School of Medicine and adjunct instructor for the University of Nebraska-Omaha.

“I noticed that in his intro political science course he had been consistently collecting information that could speak to this, and that we had the makings of a neat natural experiment where participants had either taken this knowledge assessment before (presumably increasing that salience) or after being asked about their self-rated political knowledge.”

This data collection spanned eleven consecutive semesters between the fall of 2018 and the fall of 2023. The total sample included 1,985 students. The mean sample size per semester was approximately 180 participants.

The course required students to complete two specific assignments during the first week of the semester. One assignment was a forty-two-question assessment test designed to measure objective knowledge of American government and politics. The questions included items from textbook test banks and the United States citizenship test. The second assignment was a class survey that asked students to rate their own knowledge.

The researchers measured confidence using a specific question on the survey. Students rated their knowledge of American politics on a scale from zero to ten. A score of zero represented no knowledge, while a score of ten indicated the student felt capable of running a presidential campaign.

The study design took advantage of the order in which students completed these assignments. The course did not require students to finish the tasks in a specific sequence. Approximately one-third of the students chose to take the objective assessment test before completing the survey. The remaining two-thirds completed the survey before taking the test.

This natural variation allowed the researchers to treat the situation as a quasi-experiment. The students who took the test first effectively received feedback on their knowledge levels before rating their confidence. This group served as the experimental group. The students who rated their confidence before taking the test served as the control group.

The results provided a consistent pattern across the five-year period. The researchers found that students objectively knew very little about American politics. The average score on the assessment test was roughly 60 percent. This grade corresponds to a D-minus or F in academic terms.

Despite these low scores, the students did not demonstrate the expected overconfidence. When asked to rate their general political knowledge, the students gave answers that aligned with their low performance. The average response on the zero-to-ten confidence scale was modest.

The researchers compared the confidence levels of the group that took the test first against the group that took the survey first. They hypothesized that taking the test would provide a “reality check” and lower confidence scores. The analysis showed no statistically significant difference between the two groups. Providing objective feedback did not reduce confidence because the students’ self-assessments were already low.
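The core comparison is a straightforward two-group test. As a minimal sketch, assuming hypothetical columns took_test_first and confidence, it could be run as follows; this is not the authors' actual code.

```python
# Minimal sketch of the group comparison described above.
# Column names and the data file are hypothetical stand-ins.
import pandas as pd
from scipy import stats

df = pd.read_csv("course_data.csv")  # hypothetical data file

treated = df.loc[df["took_test_first"] == 1, "confidence"]
control = df.loc[df["took_test_first"] == 0, "confidence"]

# Welch's t-test: did taking the assessment first (a "reality check") lower
# self-rated knowledge? A non-significant difference mirrors the null result.
print(stats.ttest_ind(treated, control, equal_var=False))
```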

The study also examined the stability of these findings over time. The data collection period covered significant events, including the COVID-19 pandemic and the 2020 presidential election. The researchers looked for any shifts in knowledge or confidence that might have resulted from these environmental shocks.

The analysis revealed that levels of political knowledge and confidence remained remarkably stable. The pandemic and the election cycle did not lead to meaningful changes in how much students knew or how much they thought they knew. The gap between actual knowledge and perceived knowledge remained substantively close to zero throughout the study.

“More than anything, I thought we’d see an impact of the natural experiment,” Hall told PsyPost. “I was also somewhat surprised by how flat the results appeared around 2020, when external factors like COVID-19 and the presidential election may have been impacting actual and perceived student knowledge.”

The authors utilized distinct statistical methods to verify their findings regarding overconfidence. They calculated overconfidence using quintiles, which divide the sample into five equal groups based on performance. They also used Z-scores, which measure how far a data point is from the average. Both methods yielded similar conclusions.

Using the quintile method, the researchers subtracted the quintile of the student’s actual score from the quintile of their self-assessment. The resulting overconfidence estimates were not statistically different from zero across all eleven semesters. This finding persisted regardless of whether the students took the assessment before or after the survey.

The Z-score analysis showed minor fluctuations but supported the main conclusion. There was a slight decrease in overconfidence in the control group between 2020 and 2023. However, the magnitude of this change was so small that it had little practical meaning. The overarching trend showed that students consistently recognized their own lack of expertise.
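Both overconfidence measures are easy to reproduce in outline. The sketch below assumes hypothetical columns self_rating and test_score; the quintile difference and the Z-score difference are computed as described above.

```python
# Minimal sketch of the two overconfidence measures described above.
# Column names and the data file are hypothetical.
import pandas as pd
from scipy import stats

df = pd.read_csv("semester_data.csv")  # hypothetical data file

# Rank first to break ties so the five quintile bins are well defined
# (1 = lowest fifth, 5 = highest fifth).
df["self_q"] = pd.qcut(df["self_rating"].rank(method="first"), 5, labels=False) + 1
df["score_q"] = pd.qcut(df["test_score"].rank(method="first"), 5, labels=False) + 1

# Quintile measure: positive values mean students rated themselves in a
# higher quintile than their performance placed them.
df["overconf"] = df["self_q"] - df["score_q"]

# Z-score variant: standardized self-rating minus standardized score.
df["overconf_z"] = stats.zscore(df["self_rating"]) - stats.zscore(df["test_score"])

# One-sample t-test against zero; a non-significant result mirrors the
# authors' finding that mean overconfidence was indistinguishable from zero.
print(df["overconf"].mean(), stats.ttest_1samp(df["overconf"], 0))
```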

These results challenge the prevailing narrative in political science regarding the Dunning-Kruger effect. Hall and Smith suggest that the difference in findings may stem from how confidence is measured. Many previous studies ask participants to estimate their performance on a specific test they just took. This prompt often triggers a psychological bias where people assume they performed better than average.

In contrast, this study asked students to rate their general knowledge of a broad domain. When faced with a general question about how much they know about politics, individuals appear to be more humble. They do not default to assuming they are above average. Instead, they provide a rating that accurately reflects their limited understanding.

“The gap between what people know and what they think they know (over- or under-confidence) may be less of a problem than we think, at least in the realm of political knowledge,” Hall said. “What we found is that if you ask someone what they know about politics, they are likely to respond with ‘not much.’ You don’t have to provide them with evidence of that lack of information to get that response; they seem to be well aware of the limitations of their knowledge regardless.”

“The short version here is that we did not find the Dunning-Kruger effect we expected to find. People with low information about politics did not overestimate their political knowledge, they seemed well-aware of its limitations.”

The authors argue that the Dunning-Kruger effect in politics might be an artifact of measurement choices. If researchers ask people how they did on a test, they find overconfidence. If researchers ask people how much they generally know, the overconfidence disappears. This distinction implies that the gap between actual and perceived knowledge may be less problematic than previously feared.

The study does have limitations that the authors acknowledge. The sample consisted entirely of undergraduate students. While the sample was diverse in terms of gender and political orientation, students are not perfectly representative of the general voting population. It is possible that being in an educational setting influences how students rate their own knowledge.

Another limitation involves the nature of the questions. The assessment relied on factual knowledge about civics and government structure. It is possible that overconfidence manifests differently when discussing controversial policy issues or specific political events. Future research could investigate whether different types of political knowledge elicit different levels of confidence.

The study also relied on a natural experiment rather than a randomized controlled trial. While the researchers found no significant differences between the groups initially, they did not control who took the test first. However, the large sample size and repeated data collection add weight to the findings.

“We should certainly be mindful of the principle that ‘absence of evidence isn’t evidence of absence,’ given the frequentist nature of null hypothesis significance testing,” Hall noted. “It’s also critical to understand the limitations of a natural experiment. There’s a lot of work on the Dunning-Kruger effect, and this is just one study, but I think it challenges us to think closely about the construct and how it generalizes.”

Future research could explore these measurement discrepancies further. The authors suggest that scholars should investigate how different ways of asking about confidence affect the results. Understanding whether overconfidence is a stable trait or a response to specific questions is vital for political psychology.

“Whether or not the Dunning-Kruger effect applies to broad domain knowledge is an important question for addressing political engagement – continuing down this line to broaden the domain coverage (something like civic reasoning, or real-world policy scenarios), and trying to move from a knowledge-based test scenario towards some closer indicator of manifest political behavior may give us a better sense of what’s likely to succeed in addressing political informedness,” Hall said.

The study, “They Know What They Know and It Ain’t Much: Revisiting the Dunning–Kruger Effect and Overconfidence in Political Knowledge,” was authored by Alexander G. Hall and Kevin B. Smith.

New research reveals a subtle and dark side effect of belief in free will

24 December 2025 at 17:00

A new study published in Applied Psychology provides evidence that the belief in free will may carry unintended negative consequences for how individuals view gay men. The findings suggest that while believing in free will often promotes moral responsibility, it is also associated with less favorable attitudes toward gay men and preferential treatment for heterosexual men. This effect appears to be driven by the perception that sexual orientation is a personal choice.

Psychological research has historically investigated the concept of free will as a positive force in social behavior. Scholars have frequently observed that when people believe they have control over their actions, they tend to act more responsibly and helpfully. The general assumption has been that a sense of agency leads to adherence to moral standards. However, the authors of the current study argued that this sense of agency might have a “dark side” when applied to social groups that are often stigmatized.

The researchers reasoned that if people believe strongly in human agency, they may incorrectly attribute complex traits like sexual orientation to personal decision-making. This attribution could lead to the conclusion that gay men are responsible for their sexual orientation.

“I’m broadly interested in how beliefs that are typically seen as morally virtuous—like believing in free will—can, in some cases, have unintended negative consequences. Free-will beliefs are generally associated with personal agency, accountability, and moral responsibility,” said study author Shahin Sharifi, a senior lecturer in Marketing at La Trobe Business School.

“But from reviewing the literature, I began to wonder whether these beliefs might also create a sense of moral licensing—where people feel they’ve met their moral obligations simply by believing in responsibility, and therefore let their guard down in other ways. In this paper, we explored one potential manifestation of that: the subtle prejudice that can emerge when people assume sexual orientation is a matter of personal choice and hold others accountable for it.”

The researchers conducted five separate studies using different methodologies. The first study involved 201 adults recruited from the United States. Participants read a workplace scenario about an employee named Jimmy who was nominated for an “employee of the month” award. The researchers manipulated Jimmy’s sexual orientation by altering a single detail in the text. In one version, Jimmy mentioned his girlfriend, while in the other, he mentioned his boyfriend.

Participants in this first study also completed a survey measuring their chronic belief in free will. They rated their agreement with statements such as “People always have the ability to do otherwise.” The researchers then measured the participants’ attitudes toward Jimmy and their willingness to support his nomination. The results showed that participants with stronger free-will beliefs reported more favorable attitudes toward the heterosexual version of Jimmy. This positive association did not exist for the gay version of Jimmy.

The second study sought to establish a causal link by manipulating the belief in free will rather than just measuring it. The researchers recruited 200 participants and assigned them to one of two conditions. One group completed a writing task designed to promote a belief in free will by recalling experiences where they had high control over their lives. The other group wrote about experiences where they lacked control, effectively promoting disbelief in free will.

Following this manipulation, participants evaluated the same “Jimmy” scenario used in the first study. The data revealed that inducing a belief in free will led to divergent outcomes depending on the target’s sexual orientation. Participants primed with free-will beliefs expressed greater intentions to help the heterosexual employee. However, this same prime resulted in reduced intentions to help the gay employee. This finding suggests that free-will beliefs can simultaneously fuel favoritism toward the cultural majority and bias against a minority group.

The third study examined these dynamics in a more formal personnel selection context. The researchers recruited 310 participants who worked in healthcare and social assistance sectors. These industries were chosen because they typically have strong policies regarding workplace discrimination. Participants reviewed a resume for a psychologist position. The qualifications were identical across conditions, but the applicant’s personal interests differed.

In one condition, the applicant was listed as an active member of an LGBTQ+ support group. In the other, he was involved in a general community support group. Participants rated how much they liked the applicant, their expectations of his performance, and his likely organizational citizenship behavior.

The results mirrored the previous studies. Stronger endorsement of free will predicted higher likability ratings for the heterosexual applicant. This “liking” then mediated higher ratings for performance and citizenship. This positive chain of evaluation was significantly weaker or absent when the applicant was identified as gay.

“What surprised us most was how consistent the pattern was,” Sharifi told PsyPost. “We didn’t just find that free-will beliefs were linked to harsher views of gay men; we also found more favorable views of straight individuals. This suggests it’s not just about negativity toward a minority group, it’s also about a kind of favoritism toward the majority, which can be just as impactful.”

The fourth and fifth studies focused on identifying the specific psychological mechanism behind these biases. Study 4a surveyed 297 individuals to assess the relationship between free-will beliefs and perceptions of controllability. Participants rated the extent to which they believed people can freely control or shape their sexual orientation.

The analysis confirmed that belief in free will is strongly correlated with the belief that sexual orientation is controllable. This perception of control was, in turn, associated with more negative attitudes toward homosexuality.

Study 4b utilized an experimental design to verify this mechanism. The researchers recruited 241 participants and divided them into two groups. One group read a scientific passage explaining that sexual orientation is biologically determined and largely unchangeable. The other group read a neutral passage about the effects of classical music. Participants then completed measures of free-will beliefs and attitudes toward gay men.

The findings from this final experiment provided evidence for the researchers’ proposed mechanism. When participants were exposed to information that described sexual orientation as biological and uncontrollable, the link between free-will beliefs and anti-gay attitudes was significantly weakened. This suggests that the negative impact of free-will beliefs relies heavily on the assumption that being gay is a choice. When that assumption is challenged, the bias appears to diminish.

“The main takeaway is that even well-intentioned beliefs—like the idea that everyone has free will—can lead to biased or unfair attitudes, especially when applied to aspects of identity that people don’t actually choose, like sexual orientation,” Sharifi explained.

“Our findings suggest that when people strongly believe in free will, they may assume that being gay is a choice, and as a result, judge gay individuals more harshly. This isn’t always obvious or intentional—it can show up in subtle ways, like hiring preferences or gut-level reactions. The broader message is that we need to be thoughtful about how we apply our moral beliefs and recognize that not everything in life is under personal control.”

“The effects we found were small to moderate—but they matter, especially in real-world settings like job interviews or healthcare. Even subtle biases can add up and shape decisions that affect people’s lives. Our results suggest that moral beliefs like free will can quietly influence how we judge others, without us even realizing it.”

There are limitations to this research that provide directions for future inquiry. The studies focused exclusively on attitudes toward gay men. It remains unclear if similar patterns would emerge regarding lesbian women, bisexual individuals, or transgender people. The underlying mechanism of “controllability” might function differently for other identities within the LGBTQ+ community. Additionally, the samples were drawn entirely from the United States. Conceptions of free will and attitudes toward sexual orientation vary significantly across cultures.

“A key point is that we’re not saying belief in free will is bad,” Sharifi noted. “It can promote responsibility and good behavior in many contexts. But when it’s applied to parts of people’s identity they don’t control—like sexual orientation—it can backfire. Also, most people in our studies didn’t show strong anti-gay attitudes overall. The effects we found were about subtle shifts, not overt prejudice.”

Regarding direction for future research, Sharifi said that “we want to explore how other beliefs that are seen as positive might also contribute to hidden biases. We’re especially interested in workplace settings and how to design training or policies that help reduce these effects without making people feel blamed or defensive.”

“This study reminds us how complex human judgment can be,” he added. “Even our most cherished values, like fairness or responsibility, can have unintended effects. Being aware of these blind spots is the first step toward creating more inclusive and equitable environments, for everyone.”

The study, “The dark side of free will: How belief in agency fuels anti-gay attitudes,” was authored by Shahin Sharifi and Raymond Nam Cam Trau.

Misophonia is linked to broader sensory processing sensitivities beyond sounds

24 December 2025 at 15:00

A new study published in the Journal of Psychiatric Research suggests that individuals with misophonia experience sensory sensitivities that extend beyond sound. The findings indicate that the condition may involve a broader pattern of sensory processing differences, particularly regarding touch and smell, though these additional sensitivities rarely cause the same level of impairment as auditory triggers.

The motivation for this research emerged from clinical observations during trials for misophonia treatments. Lead researcher Mercedes Woolley noted that participants frequently described irritations with sensory inputs other than sound. Patients often mentioned discomfort with the feeling of clothing on their skin or specific odors.

“The idea for this study grew out of my work conducting interviews with adults enrolled in our clinical trial on the efficacy of acceptance and commitment therapy for misophonia. Our lab at Utah State University specializes in this form of cognitive‑behavioral therapy, and because misophonia is still relatively underexplored, our team wanted to gather as much information as possible about the lived experiences of people with misophonia,” explained Woolley.

“During the interviews, I asked participants about sensory sensitivities beyond sound, and I began noticing a pattern: many of them described additional sensitivities, especially tactile ones. One participant explained that it felt like being constantly aware of the sensation of wearing clothes, something that becomes irritating when your mind can’t shift attention away from it, especially when you need to focus on something else.”

“That comment resonated with me,” Woolley said. “I’ve always been sensitive to smells; certain odors can be overwhelming or frustrating. As a child, I strongly disliked particular smells, especially the smell of fruit, and would go out of my way to avoid it and become irritated when my family disregarded my requests to avoid eating it in front of me. I made significant efforts to avoid anyone eating it, and sometimes I still do.”

“Hearing participants describe their reactions to specific trigger sounds reminded me of my own experiences, just in a different sensory domain. These observations made me wonder whether misophonia might be connected to broader sensory processing challenges or sensory overstimulation.”

“When I reviewed the existing literature, I found that a few researchers had already suggested that heightened sensory sensitivity could be correlated with, or even contribute to, misophonia. That gave me enough grounding to justify developing a study focused on this idea. We still don’t fully understand the underlying mechanisms of misophonia, but sensory processing clearly plays a role. Having data that allowed us to explore this connection was exciting, and publishing this paper felt like a meaningful step toward clarifying potential mechanisms and clinical correlates.”

To explore this, the researchers recruited 60 adults who met the clinical criteria for misophonia and 60 control participants who showed no measurable traits of the condition. The groups were matched on age and gender to ensure comparability.

Participants in the clinical group underwent a detailed interview using the Duke Misophonia Interview to assess symptom severity and impairment. Both groups completed the Misophonia Questionnaire and the Adolescent/Adult Sensory Profile. This standardized measure evaluates how individuals respond to sensory experiences across categories like taste, smell, visual input, and touch.

The Adolescent/Adult Sensory Profile assesses four distinct patterns of sensory processing. These patterns are based on a person’s neurological threshold for noticing a stimulus and their behavioral response to it. The four quadrants include low registration, sensation seeking, sensory sensitivity, and sensation avoidance.

Low registration refers to a tendency to miss sensory cues that others notice. Sensation seeking involves actively looking for sensory stimulation. Sensory sensitivity involves noticing stimuli more acutely than others. Finally, sensation avoidance involves actively trying to escape or reduce sensory input.

The researchers found distinct differences in how the two groups processed sensory information. Individuals with misophonia reported significantly higher levels of sensory sensitivity and sensation avoidance compared to the control group. They also reported lower levels of sensation seeking.

There was no statistical difference between the groups regarding low sensory registration. This indicates that people with misophonia do not lack awareness of sensory input. Instead, their systems appear to be highly reactive to the input they receive.

Within the misophonia group, 80 percent of participants endorsed sensitivity in at least one non-auditory domain. Sensitivity to touch was the most frequently reported non-auditory issue, affecting nearly 57 percent of the clinical group. Of those reporting tactile issues, close to half described their symptoms as moderate to severe. Olfactory sensitivities followed, while visual and taste sensitivities were less common.

Despite the high prevalence of these additional sensitivities, the participants reported that they caused relatively low impairment in their daily lives. This stands in contrast to the significant life disruption caused by their auditory triggers.

For example, 75 percent of participants reported no functional impairment related to their tactile sensitivities. The distress associated with misophonia appears to be tied specifically to the emotional nature of auditory triggers rather than general sensory over-responsivity.

The data indicated a positive association between the severity of misophonia and the intensity of other sensory issues. As misophonia symptoms became more severe, participants were more likely to report higher levels of sensory avoidance and sensitivity. This pattern was also observed in the control group among individuals with subthreshold symptoms. This suggests that sensory vulnerabilities may represent a general risk factor for the development of misophonia-like experiences.

“People with misophonia are most bothered by specific sounds, but many also have sensitivities in other senses, such as touch or smell,” Woolley told PsyPost. “This doesn’t mean they’re overwhelmed by everything; rather, their sensory processing system seems more reactive overall.”

“While many people with misophonia notice certain textures or smells more intensely, these sensitivities typically do not cause major life challenges in the same way misophonic sounds do. We also found that the more severe someone’s misophonia is, the more likely they are to have other sensory sensitivities as well. This doesn’t mean that sensitivities in other senses cause misophonia, but they may reflect a broader sensory processing vulnerability.”

These findings regarding sensory processing align with other recent investigations into the psychological profile of misophonia. A study published in the British Journal of Psychology indicates that the condition may reflect broader cognitive traits rather than being limited to annoyance at noises.

Researchers found that individuals with misophonia struggle with switching attention in emotionally charged situations. This suggests a pattern of mental rigidity that extends beyond the auditory system. Individuals with the condition often hyperfocus on specific sounds and find it difficult to shift their attention elsewhere.

Further evidence regarding attentional processing comes from research published in the Journal of Affective Disorders. This study examined young people and found that those with misophonia exhibit heightened attentional processing compared to those with anxiety disorders.

The data supports the hypothesis that misophonia is linked to a state of increased vigilance. The affected individuals appear to be more aware of environmental stimuli in general. They performed better on tasks requiring the detection of subtle differences in stimuli, indicating a nervous system that is highly tuned to the environment.

The heightened state of arousal observed in misophonia patients also has associations with stress levels. Research published in PLOS One examined the relationship between misophonia severity and various forms of stress. The authors found that higher symptom severity was associated with greater levels of perceived stress and hyperarousal.

This suggests that the condition involves transdiagnostic processes related to how the body manages stress and alertness. While the study did not find a direct causal link to traumatic history, the presence of hyperarousal suggests a physiological state similar to that seen in post-traumatic stress disorders.

The biological underpinnings of these traits have been explored through genetic analysis as well. A large-scale study published in Frontiers in Neuroscience utilized a Genome-Wide Association Study to identify genetic factors. The researchers found that misophonia shares significant genetic overlap with psychiatric disorders such as anxiety and post-traumatic stress disorder. The study identified a specific genetic locus associated with the rage response to chewing sounds.

Understanding misophonia as a condition involving multisensory and cognitive differences helps explain why treatments solely focused on sound often fall short. The combination of sensory avoidance, cognitive rigidity, and physiological hyperarousal points to a complex disorder. The new findings from Woolley and colleagues reinforce the idea that while sound is the primary trigger, the underlying mechanism involves a broader sensory processing vulnerability.

As with all research, the current study by Woolley and colleagues has certain limitations. The researchers did not screen participants for autism spectrum disorder, so it is possible that some reported sensory traits reflect undiagnosed autism. The study relied on a single clinician for interviews, and interrater reliability was not assessed. Additionally, the researchers were unable to compare specific sensory domains between the clinical and control groups due to data limitations in the control set.

Future research should aim to clarify the relationship between misophonia and broader sensory processing patterns using larger samples. Longitudinal designs could help determine how these sensory sensitivities develop over time. It remains to be seen whether these non-auditory sensitivities precede the onset of misophonia or develop concurrently. Further investigation into the mechanisms of sensory over-responsivity could lead to more effective, holistic treatment strategies for those suffering from this challenging condition.

The study, “Sensory processing differences in misophonia: Assessing sensory sensitivities beyond auditory triggers,” was authored by Mercedes G. Woolley, Hailey E. Johnson, Samuel J.E. Knight, Emily M. Bowers, Julie M. Petersen, Karen Muñoz, and Michael P. Twohig.

Researchers identify distinct visual cues for judging female attractiveness and personality traits

24 December 2025 at 03:00

A new study published in BMC Psychology provides evidence that the way people judge a woman’s physical attractiveness differs fundamentally from how they judge her personality traits. The findings suggest that physical attractiveness is primarily evaluated based on static body features, such as body mass index, while traits like warmth and understanding are inferred largely through body motion and gestures. This research highlights the distinct roles that fixed physical attributes and dynamic movements play in social perception.

Previous psychological research has established that physical appearance substantially influences first impressions. People often attribute positive personality characteristics to individuals who are physically attractive, a phenomenon known as the halo effect. Despite this, there is limited understanding of how specific visual cues contribute to these different types of judgments. While static features like body shape are known to be important, the role of body motion is less clear.

A team from Shanghai International Studies University and McGill University conducted this research to disentangle these factors. They aimed to determine the relative contributions of unchanging body features versus dynamic movements when observers evaluate a woman’s attractiveness and her expressive character traits. They hypothesized that judgments of physical beauty would rely more on stable physical traits. On the other hand, they proposed that judgments of personality would depend more on transient movements.

To test this hypothesis, the researchers recruited fifteen female participants to serve as models, or posers. These women were photographed and filmed to create the visual stimuli for the study. The researchers took detailed physical measurements of each poser. These measurements included height, weight, waist-to-hip ratio, and limb circumference. This allowed the team to calculate body mass index and other anthropometric data points.

The researchers created two types of visual stimuli. For the static images, the posers stood in neutral positions and also adopted specific poses. Some poses were instructed, meaning the models mimicked attractive stances shown to them by the researchers. Other poses were spontaneous: the models posed in ways they personally considered attractive or unattractive, without specific guidance.

For the dynamic stimuli, the researchers recorded the models delivering a short speech introducing their hometown under two conditions. In the first, the models spoke in a neutral and emotionless manner; in the second, they were asked to speak with passion, as if convincing an audience to visit their hometown. The researchers then edited these videos, isolating the first five seconds and the last five seconds of each clip to examine how impressions might change over time.

The study recruited fifty-four adults to act as perceivers, evenly split between twenty-seven men and twenty-seven women. None of the raters knew the models. The perceivers viewed the images and silent video clips and rated the physical attractiveness of the women on a seven-point scale.

The participants also evaluated the models on feminine expressive traits. These traits included characteristics such as being understanding, sympathetic, compassionate, warm, and tender. The researchers coded specific body movements in the videos. They tracked variables such as the number of hand gestures used and whether the hands were kept close to the body or moved freely.

The results indicated a clear distinction in how different judgments are formed. When rating physical attractiveness, the statistical analysis showed that static body features were the strongest predictors. This held true for both the static photographs and the video clips. The Lasso regression analysis revealed that body features accounted for a large portion of the variance in attractiveness ratings.
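
As a rough illustration of the kind of analysis described here, the following is a minimal Lasso sketch on simulated data. The feature set, scaling, and cross-validation settings are assumptions for illustration, not the study's actual specification.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

# Hypothetical predictors: static body features (BMI, waist-to-hip
# ratio, skin lightness) and motion features (gesture count, openness).
rng = np.random.default_rng(1)
n = 180
X = rng.normal(size=(n, 5))
feature_names = ["bmi", "whr", "skin_lightness", "gesture_count", "openness"]

# Simulated attractiveness ratings driven mostly by BMI, mirroring the
# reported pattern (lower BMI associated with higher ratings).
y = -0.8 * X[:, 0] + 0.2 * X[:, 3] + rng.normal(scale=0.5, size=n)

# Lasso shrinks the coefficients of weak predictors to exactly zero,
# which is why it is useful for isolating the dominant cues.
X_std = StandardScaler().fit_transform(X)
model = LassoCV(cv=5).fit(X_std, y)

for name, coef in zip(feature_names, model.coef_):
    print(f"{name}: {coef:.3f}")
```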

Among the various body measurements, body mass index emerged as the most significant predictor of attractiveness ratings. Models with lower body mass index scores generally received higher attractiveness ratings. Other features like skin color and shoulder-to-hip ratio also played a role. However, body mass index was the most consistent and robust factor.

In contrast, body motion had a much smaller impact on judgments of physical attractiveness. The statistical models showed that while movement played a role, it was secondary to fixed physical attributes. For instance, in the video condition, body motions explained only a small fraction of the variance in attractiveness compared to body features.

However, the researchers did find that posture style mattered in photographs. Spontaneous attractive poses were rated higher than instructed attractive poses. This suggests that the women had an intuitive understanding of how to present themselves to appear appealing. They were more effective when allowed to pose naturally than when mimicking a standard attractive pose.

A different pattern emerged for the evaluation of feminine expressive traits. In the video condition, body motion was a much stronger predictor of traits like warmth and compassion than static body features. The frequency of hand gestures and the use of open body language were positively associated with these traits. Body features alone were poor predictors of these personality characteristics.

The study found that neither body features nor body motions effectively predicted feminine traits in static images. This suggests that perceiving these personality attributes requires the observation of movement over time. A static image does not convey enough information for an observer to reliably infer warmth or sympathy.

The researchers also compared the neutral and passionate video conditions. The passionate presentations received higher ratings for both attractiveness and feminine traits. This effect was particularly strong in the final five seconds of the passionate videos. This finding suggests that positive body language accumulates to influence perception. As the observers watched the passionate clips for longer, they perceived greater levels of feminine expressive traits.

The results support the idea that humans use different visual channels for different types of social judgments. Physical attractiveness appears to be assessed rapidly based on stable biological signals. These signals may be associated with health and reproductive potential. In contrast, traits like warmth and understanding are social signals. These are inferred from behavioral cues that unfold during an interaction.

The study has certain limitations that affect the generalizability of the results. The sample size of fifteen posers is relatively small. This restricts the range of body types and movement styles represented in the stimuli. The distribution of body mass index among the posers was not perfectly balanced. There were fewer individuals in the overweight category compared to the healthy weight category.

Future research would benefit from a larger and more diverse group of models. This would allow for a more comprehensive analysis of how different body types interact with movement. The current study focused exclusively on female targets. Cultural norms regarding body language and ideal body types vary significantly. The participants in this study were from a specific cultural background. Future studies should investigate these dynamics across different cultures to see if the patterns hold true.

Another direction for future inquiry involves the interaction of other factors. The current study focused on silent videos to isolate body motion. However, voice and facial expressions are also potent social cues. Future research could examine how body motion interacts with vocal tone and facial expressions to form a holistic impression. It would also be useful to investigate how personality traits of the observer influence these ratings.

This research contributes to the understanding of nonverbal communication. It provides evidence that while we may judge beauty largely by what we see in a snapshot, we judge character by watching how a person moves. The distinction emphasizes that social perception is a complex process integrating multiple streams of visual information.

The study, “Perceiving female physical attractiveness and expressive traits from body features and body motion,” was authored by Lin Gao, Marc D. Pell, Zhikang Peng, and Xiaoming Jiang.

New research uncovers a seemingly universal preference for lower-quality news on social media

23 December 2025 at 23:00

A new analysis of millions of social media posts across seven different platforms reveals that the relationship between political content and user engagement is highly dependent on the specific digital environment. The findings suggest that while users tend to engage more with news that aligns with the dominant political orientation of a specific platform, there appears to be a consistent pattern regarding the quality of information.

Across all examined sites, users tended to engage more with lower-quality news sources compared to high-quality sources shared by the same individual. The study, which highlights the fragmented nature of the modern online landscape, was published in the Proceedings of the National Academy of Sciences.

The motivation for this research stems from a need to update the scientific understanding of social media dynamics. For many years, academic inquiry into online behavior relied heavily on data derived from a single platform, most notably Twitter (now X).

This concentration occurred largely because Twitter provided an application programming interface that made data collection relatively accessible for scholars. As a result, many assumptions about how misinformation spreads or how political biases function were based on a potentially unrepresentative sample of the internet. The research team sought to correct this by broadening the scope of analysis to include a diverse array of newer and alternative platforms.

The study was conducted by a collaborative group of researchers from several institutions. The team included Mohsen Mosleh from the University of Oxford and the Massachusetts Institute of Technology, Jennifer Allen from New York University, and David G. Rand from the Massachusetts Institute of Technology and Cornell University.

Their goal was to determine if phenomena such as the “right-wing advantage” in engagement or the rapid spread of falsehoods were universal truths or artifacts of specific platform architectures. They also aimed to understand whether the rise of alternative social media sites has led to the creation of “echo platforms,” where entire user bases segregate themselves by political ideology.

To achieve this, the researchers collected data during January 2024. They focused on seven platforms that allow for the public sharing of news links: X, BlueSky, Mastodon, LinkedIn, TruthSocial, Gab, and GETTR. This selection represents a mix of mainstream networks, professional networking sites, decentralized platforms, and sites that explicitly cater to specific political demographics.

The final dataset included nearly 11 million posts that contained links to external news domains. This large sample provided a comprehensive cross-section of online sharing behaviors.

The researchers employed a rigorous set of measures to evaluate the content within these posts. To assess the quality of the news being shared, they did not rely on their own subjective judgments. Instead, they utilized a set of reliability ratings for 11,520 news domains. These ratings were generated through a “wisdom of crowds” methodology that aggregated evaluations from professional fact-checkers, journalists, and academics. This system allowed the team to assign a quality score to the publisher of each link, serving as a proxy for the likely accuracy of the content.
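
A minimal sketch of that aggregation step might look like the following; the domains, rater categories, and 0-to-1 scale are all made up for illustration, and the published ratings may weight rater groups differently.

```python
import pandas as pd

# Hypothetical ratings: each row is one rater's reliability judgment
# (0-1) of one news domain.
ratings = pd.DataFrame({
    "domain": ["example-news.com", "example-news.com", "other-site.net"],
    "rater_type": ["fact_checker", "journalist", "academic"],
    "reliability": [0.85, 0.90, 0.30],
})

# "Wisdom of crowds": average the individual judgments per domain to
# produce a single quality score for each publisher.
quality = ratings.groupby("domain")["reliability"].mean()
print(quality)
```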

In addition to quality, the team needed to quantify the political leaning of the news sources. They utilized a sophisticated large language model to estimate the political alignment of each domain. The model was asked to rate domains on a scale ranging from strongly liberal to strongly conservative.

To ensure the validity of these AI-generated estimates, the researchers cross-referenced them with established political benchmarks and found a high degree of correlation. This allowed them to categorize content as left-leaning, right-leaning, or neutral with a high degree of confidence.

The primary statistical method used in the study was a linear regression analysis that incorporated user fixed effects. This is a statistical technique designed to control for variables that remain constant for each individual. By comparing a user’s posts only against other posts by the same user, the researchers effectively removed the influence of popularity. It did not matter if a user had ten followers or ten million. The study measured whether a specific user received more engagement than usual when they shared a specific type of content.
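
The logic of that within-user comparison can be sketched as follows, on hypothetical post-level data. Demeaning each variable inside each user (the "within transformation") is one standard way to estimate user fixed effects; the authors' actual specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: engagement per post, the quality score of the
# linked domain, and a user id. Variable names are illustrative.
rng = np.random.default_rng(2)
n_users, posts_per_user = 200, 30
df = pd.DataFrame({
    "user": np.repeat(np.arange(n_users), posts_per_user),
    "quality": rng.uniform(0, 1, n_users * posts_per_user),
})
user_popularity = rng.normal(size=n_users)  # constant within each user
df["log_engagement"] = (user_popularity[df["user"]]
                        - 0.07 * df["quality"]
                        + rng.normal(scale=0.3, size=len(df)))

# User fixed effects via demeaning: each post is compared only against
# the same user's other posts, removing stable traits such as follower
# count or baseline popularity.
for col in ["log_engagement", "quality"]:
    df[col + "_dm"] = df[col] - df.groupby("user")[col].transform("mean")

fe_model = smf.ols("log_engagement_dm ~ quality_dm", data=df).fit()
print(fe_model.params)  # negative slope: lower quality, more engagement
```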

The results regarding political polarization challenged the idea of a universal advantage for conservative content. The data indicated that the political lean of the most engaging content generally matched the political lean of the platform’s user base.

On platforms known for attracting conservative users, such as TruthSocial, Gab, and GETTR, right-leaning news sources garnered significantly more engagement. On platforms with more liberal or neutral populations, such as BlueSky, Mastodon, and LinkedIn, left-leaning news attracted more likes and shares.

This finding supports the hypothesis of “echo platforms.” In the past, researchers worried about echo chambers forming within a single site like Facebook. The current landscape suggests a migration where users choose entire platforms that align with their views.

The researchers found a strong correlation between the average political lean of a platform and the type of content that gets rewarded with engagement. This implies that the “right-wing advantage” observed in earlier studies of Twitter and Facebook may have been a product of those specific user bases rather than an inherent property of social media.

While political engagement varied by platform, the findings regarding news quality were remarkably consistent. The researchers discovered that on all seven platforms, posts containing links to lower-quality news domains received more engagement than posts linking to high-quality domains. This pattern held true regardless of whether the platform was considered left-leaning, right-leaning, or neutral. It was observed on sites with complex algorithmic feeds as well as on Mastodon, which displays posts in chronological order.

The magnitude of this effect was notable. The analysis showed that a user’s posts linking to the lowest-quality sites received approximately seven percent more engagement than their posts linking to high-quality sites. This effect was robust even when controlling for the political slant of the article. This suggests that the engaging nature of low-quality news is not solely driven by partisanship. The authors propose that factors such as novelty, negative emotional valence, and sensationalism likely contribute to this phenomenon.

The study also clarified the relationship between the volume of content and engagement rates. In terms of absolute numbers, users shared links to high-quality news sources much more frequently than they shared low-quality sources. High-quality news dominates the ecosystem in terms of prevalence. However, the engagement data indicates a discrepancy. While reputable news is shared more often, it generates less excitement or interaction per post compared to low-quality alternatives.

The inclusion of Mastodon in the dataset provided a significant control condition for the study. Because Mastodon does not use an engagement-based ranking algorithm to sort user feeds, the results from that platform suggest that algorithms are not the sole driver of the misinformation advantage. The fact that low-quality news still outperformed high-quality news on a chronological feed points to human psychology as a primary factor. Users appear to naturally prefer or react more strongly to the type of content found in lower-quality outlets.

But as with all research, there are some caveats. The data collection was restricted to a single month, which may not capture seasonal variations or behavior during major political events. The researchers were also unable to include data from Meta platforms like Facebook and Instagram, or video platforms like TikTok, due to data access restrictions. This means the findings apply primarily to text-heavy, link-sharing platforms and may not perfectly translate to video-centric environments.

Additionally, the study is observational, meaning it identifies associations but cannot definitively prove causation beyond the controls applied in the statistical models.

Future research directions could involve expanding the scope of platforms analyzed as data becomes available. Investigating the specific psychological triggers that make low-quality news more engaging remains a priority. The researchers also suggest that further work is needed to understand how the migration of users between platforms affects the spread of information. As the social media landscape continues to fracture, understanding these cross-platform dynamics will become increasingly important.

The study, “Divergent patterns of engagement with partisan and low-quality news across seven social media platforms,” was authored by Mohsen Mosleh, Jennifer Allen, and David G. Rand.

Competitive athletes exhibit lower off-field aggression and enhanced brain connectivity

23 December 2025 at 15:00

A recent study published in Psychology of Sport & Exercise has found that long-term engagement in competitive athletics is linked to reduced aggression in daily life and specific patterns of brain organization. The findings challenge the common stereotype that contact sports foster violent behavior outside of the game. By combining behavioral assessments with advanced brain imaging, the researchers identified a biological basis for the observed differences in aggression between athletes and non-athletes.

Aggression is a complex trait influenced by both biological and environmental factors. A persistent debate in psychology concerns the impact of competitive sports on an individual’s tendency toward aggressive behavior. One perspective, known as social learning theory, suggests that the aggression often required and rewarded in sports like football or rugby can spill over into non-sport contexts. This theory posits that athletes learn to solve problems with physical dominance, which might make them more prone to aggression in social situations.

An opposing perspective argues that the structured environment of competitive sports promotes discipline and emotional regulation. This view suggests that the intense physical and mental demands of high-level competition require athletes to develop superior self-control to succeed.

According to this framework, the ability to inhibit impulsive reactions during a game translates into better behavioral regulation in everyday life. Previous research attempting to settle this debate has yielded mixed results, largely relying on self-reported questionnaires without examining the underlying biological mechanisms.

“This study was motivated by inconsistent findings in previous research regarding the relationship between long-term engagement in competitive sports and aggression,” explained study author Mengkai Luan, associate professor of psychology at the Shanghai University of Sport.

“While some studies suggest that competitive sports, particularly those involving intense physical and emotional demands, may increase off-field aggression through a ‘spillover’ effect, other research indicates that athletes, due to the emotional regulation and discipline developed through long-term training, often exhibit lower levels of aggression in everyday situations compared to non-athletes. This study aims to examine how long-term engagement in competitive athletics is associated with off-field aggression, while also exploring the neural mechanisms underlying these behavioral differences using resting-state functional connectivity analysis.”

The research team recruited a total of 190 participants from a university community in China. The sample consisted of 84 competitive athletes drawn from university football and rugby teams. These athletes had an average of nearly seven years of competitive experience and engaged in rigorous weekly training. The comparison group included 106 non-athlete controls who did not participate in regular organized sports.

All participants completed the Chinese version of the Buss–Perry Aggression Questionnaire. This widely used psychological tool measures an individual’s general aggression levels as well as four specific subtypes. These subtypes include physical aggression, verbal aggression, anger, and hostility. Participants also rated their tendency toward self-directed aggression. The researchers compared the scores of the athlete group against those of the non-athlete control group to identify behavioral differences.

Following the behavioral assessment, participants underwent functional magnetic resonance imaging (fMRI) scans. The researchers utilized a resting-state fMRI protocol. This method involves scanning the brain while the participant is awake but not performing any specific cognitive task. It allows scientists to map the brain’s intrinsic functional architecture by observing spontaneous fluctuations in brain activity. This approach is particularly useful for identifying stable, trait-like characteristics of brain organization.

The behavioral data revealed clear differences between the two groups. Athletes reported significantly lower scores on total aggression than the non-athlete controls. When the researchers analyzed the specific subscales, they found that athletes scored lower on physical aggression, anger, hostility, and self-directed aggression.

The only dimension where no significant difference appeared was verbal aggression. These results provide behavioral evidence supporting the idea that competitive sport participation functions as a protective factor against maladaptive aggression.

The brain imaging analysis offered insights into the potential neural mechanisms behind these behavioral findings. The researchers used a method called Network-Based Statistics to compare the whole-brain connectivity matrices of athletes and non-athletes. They identified a large subnetwork where athletes exhibited significantly stronger connectivity than controls. This enhanced network comprised 105 connections linking 70 distinct brain regions.
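
For readers unfamiliar with the method, here is a compact sketch of the Network-Based Statistics logic on simulated connectivity matrices. The threshold, permutation count, and component-size definition (nodes rather than edges) are illustrative choices, not the study's parameters.

```python
import numpy as np
import networkx as nx
from scipy.stats import ttest_ind

# Simulated connectivity matrices of shape (subjects, nodes, nodes).
rng = np.random.default_rng(3)
n_nodes = 20
athletes = rng.normal(0.1, 1, (30, n_nodes, n_nodes))
controls = rng.normal(0.0, 1, (30, n_nodes, n_nodes))

def max_component_size(group_a, group_b, t_thresh=3.0):
    """Edge-wise t-test, threshold, then size (in nodes) of the largest
    connected component among suprathreshold edges."""
    t, _ = ttest_ind(group_a, group_b, axis=0)
    graph = nx.Graph()
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if t[i, j] > t_thresh:  # one-sided: group_a > group_b
                graph.add_edge(i, j)
    if graph.number_of_edges() == 0:
        return 0
    return max(len(c) for c in nx.connected_components(graph))

observed = max_component_size(athletes, controls)

# Permutation test: shuffle group labels to build a null distribution
# of maximal component sizes, which controls family-wise error.
pooled = np.concatenate([athletes, controls])
null = []
for _ in range(500):
    idx = rng.permutation(len(pooled))
    null.append(max_component_size(pooled[idx[:30]], pooled[idx[30:]]))
p_value = np.mean([n >= observed for n in null])
print(observed, p_value)
```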

The strengthened connections in athletes were not random but were concentrated within specific systems. The analysis showed increased integration between the salience network and sensorimotor networks. The salience network is responsible for detecting important stimuli and coordinating the brain’s response, while sensorimotor networks manage movement and sensory processing. This pattern suggests that the athletic brain is more efficiently wired to integrate sensory information with motor control and attentional resources.

To further understand the link between brain function and behavior, the authors employed a machine-learning technique called Connectome-Based Predictive Modeling. This analysis aimed to determine if patterns of brain connectivity could accurately predict an individual’s aggression scores, regardless of their group membership. The model successfully predicted levels of total aggression and physical aggression based on the fMRI data.
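
The following sketch shows the core steps of Connectome-Based Predictive Modeling with leave-one-out cross-validation on simulated data. Classic CPM builds separate positive and negative edge networks; this simplified version pools them, and all thresholds and data shapes are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

# Simulated vectorized connectomes and an aggression score per subject.
rng = np.random.default_rng(4)
n_sub, n_edges = 60, 400
edges = rng.normal(size=(n_sub, n_edges))
aggression = edges[:, :5].sum(axis=1) + rng.normal(size=n_sub)

predictions = np.zeros(n_sub)
for test in range(n_sub):
    train = np.delete(np.arange(n_sub), test)
    # Step 1: correlate every edge with the behavioral score.
    r_p = [pearsonr(edges[train, e], aggression[train]) for e in range(n_edges)]
    # Step 2: keep edges reliably related to behavior (p < 0.01 here).
    selected = [e for e, (r, p) in enumerate(r_p) if p < 0.01]
    # Step 3: collapse each subject's selected edges into a summary score.
    train_sum = edges[train][:, selected].sum(axis=1)
    # Step 4: fit a linear model and predict the held-out subject.
    slope, intercept = np.polyfit(train_sum, aggression[train], 1)
    predictions[test] = slope * edges[test, selected].sum() + intercept

r, p = pearsonr(predictions, aggression)
print(f"predicted vs. observed: r = {r:.2f}, p = {p:.3g}")
```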

The predictive modeling revealed that lower levels of aggression were associated with specific connectivity patterns involving the prefrontal cortex. The prefrontal cortex is the brain region primarily responsible for executive functions, such as decision-making, impulse control, and planning.

The analysis showed that stronger negative connections between the prefrontal cortex and subcortical regions were predictive of reduced aggression. This implies that a well-regulated brain utilizes top-down control mechanisms to inhibit impulsive drives originating in deeper brain structures.

The researchers also found a significant overlap between the group-level differences and the individual prediction models. Four specific neural connections were identified both as distinguishing features of the athlete group and as strong predictors of lower aggression. These connections involved the orbitofrontal cortex and the cerebellum. The orbitofrontal cortex is key for emotion regulation, while the cerebellum is traditionally associated with balance and motor coordination but is increasingly recognized for its role in emotional processing.

The convergence of these findings suggests that the demands of competitive sports may induce neuroplastic changes that support better behavioral regulation. The need to execute complex motor skills while managing high levels of physiological arousal and adhering to game rules likely strengthens the neural pathways that integrate motor and emotional control. This enhanced neural efficiency appears to extend beyond the field, helping athletes manage frustration and suppress aggressive impulses in their daily lives.

“The study challenges the common stereotype that individuals who participate in competitive, contact sports are more aggressive or dangerous in everyday life,” Luan told PsyPost. “In fact, the research suggests that long-term participation in these sports may help individuals manage aggression better. Through their training, they develop emotional regulation and self-discipline, which may be linked to brain changes that help them control aggression and behavior off the field.”

There are some limitations. The research utilized a cross-sectional design, which captures data at a single point in time. This means the study cannot definitively prove that sports training caused the brain changes or the reduced aggression. It is possible that individuals with better emotional regulation and specific brain connectivity patterns are naturally drawn to and successful in competitive sports.

The sample was also limited to university-level athletes in team-based contact sports within a specific cultural setting. Cultural values regarding emotion and social harmony may influence how aggression is expressed and regulated.

“One of our long-term goals is to expand the sample to include athletes from a wider range of sports, including individual and non-contact sports, as well as participants from different cultural backgrounds,” Luan said. “This would help increase the generalizability of our findings.”

“Additionally, since our current study is cross-sectional, it cannot establish causal relationships. In future research, we plan to adopt longitudinal and intervention-based designs to better understand the causal mechanisms behind the observed effects, and to separate pre-existing individual traits from the neural adaptations resulting from sustained athletic training.”

The study, “Competitive sport experience is associated with reduced off-field aggression and distinct functional brain connectivity,” was authored by Yujing Huang, Zhuofei Lin, Chenglin Zhou, Yingying Wang, and Mengkai Luan.

Wrinkles around the eyes are the primary driver of age perception across five ethnic groups

23 December 2025 at 05:00

Recent research published in the International Journal of Cosmetic Science provides evidence that wrinkles around the eyes are the primary physical feature driving perceptions of age and attractiveness across diverse ethnic groups. While factors such as skin color and gloss contribute to how healthy a woman appears, the depth and density of lines in the periorbital region consistently predict age assessments in women from Asia, Europe, and Africa.

The rationale behind this study stems from the fact that the skin around the eyes is structurally unique. It is significantly thinner than facial skin in other areas and contains fewer oil glands. This biological reality makes the eye area particularly susceptible to the effects of aging and environmental damage.

In addition to its delicate structure, the skin around the eyes is subjected to constant mechanical stress. Humans blink approximately 15,000 times per day, and these repeated muscle contractions eventually lead to permanent lines. Previous surveys have indicated that women worldwide consider under-eye bags, dark circles, and “crow’s feet” to be among their top aesthetic concerns.

However, most prior research on this topic has focused on specific populations or general facial aging. It has remained unclear whether specific changes in the eye region influence social perceptions in the same way across different cultures. The authors of the current study aimed to determine if the visual impact of periorbital skin features is consistent globally or if it varies significantly by ethnicity.

To investigate this, the researchers utilized a multi-center approach involving participants and assessors from five distinct locations. Data collection took place in Guangzhou, China; Tokyo, Japan; Lyon, France; New Delhi, India; and Cape Town, South Africa. The team initially recruited 526 women across these five locations to serve as the pool for the study.

From this larger group, the researchers selected a standardized subset of 180 women to serve as the subjects of the analysis. This final sample included exactly 36 women from each of the five ethnic groups. The participants ranged in age from 20 to 65 years, allowing for a comprehensive view of the aging process.

The researchers recorded high-resolution digital portraits of these women using a specialized system known as ColorFace. This equipment allowed for the standardization of lighting and angles, which is essential for accurate computer analysis. The team then defined two specific regions of interest on each face for detailed measurement.

The first region analyzed was the area directly under the eyes, which included the lower eyelid and the infraorbital hollow. The second region was the area at the outer corners of the eyes where lateral canthal lines, commonly known as crow’s feet, typically develop. The researchers used digital image analysis software to objectively quantify skin characteristics in these zones.

For the region under the eyes, the software measured skin color, gloss, skin tone evenness, and wrinkles. Skin color was broken down into specific components, including lightness, redness, and yellowness. Gloss was measured in terms of its intensity and contrast, while tone evenness was calculated based on the similarity of adjacent pixels.

For the crow’s feet region, the analysis focused exclusively on the measurement of wrinkles. The software identified wrinkles by detecting lines in the image that met specific criteria. The researchers quantified these features by calculating the total length of the wrinkles, their density within the region, and their volume.
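
The article does not specify the detection algorithm, but a standard way to quantify thin dark lines in a skin image is a morphological black-hat filter. The sketch below is an assumption-laden illustration: the filename, kernel size, and threshold are hypothetical, and "volume" is proxied by the filter response rather than true depth.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

# Hypothetical grayscale crop of the crow's-feet region.
region = cv2.imread("crows_feet_crop.png", cv2.IMREAD_GRAYSCALE)

# Black-hat filtering highlights thin dark structures (wrinkle lines)
# against the brighter surrounding skin.
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
blackhat = cv2.morphologyEx(region, cv2.MORPH_BLACKHAT, kernel)
_, mask = cv2.threshold(blackhat, 10, 255, cv2.THRESH_BINARY)

# Density: fraction of the region covered by detected wrinkle pixels.
density = mask.mean() / 255.0

# Length proxy: skeletonize the mask so each wrinkle is one pixel wide,
# then count skeleton pixels.
length_px = skeletonize(mask > 0).sum()

# Volume proxy: sum the black-hat response over wrinkle pixels,
# treating darker lines as deeper.
volume_proxy = blackhat[mask > 0].sum()

print(density, length_px, volume_proxy)
```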

To determine how these objective features translated into social perceptions, the study employed a large panel of human assessors. The researchers recruited 120 assessors in each of the five study locations, resulting in a total of 600 raters. These assessors were “naïve,” meaning they were not experts in dermatology or cosmetics.

The assessors were matched to the participants by ethnicity. For example, Chinese assessors rated the images of Chinese women, and French assessors rated the images of French women. Each assessor viewed the digital portraits on color-calibrated monitors.

They were asked to rate each face for perceived age, health, and attractiveness. These ratings were given on a continuous scale ranging from 0 to 100, where 0 represented a low attribute score and 100 represented a high attribute score. The researchers then used statistical methods to identify relationships between the objective skin measurements and the subjective ratings.

The results revealed distinct biological differences in how skin ages across the different groups. For instance, Indian and South African women tended to have lower skin lightness scores under the eyes compared to Chinese, Japanese, and French women. South African women also exhibited the highest density of wrinkles in the under-eye region among all groups.

Regarding the crow’s feet region, the analysis showed that South African, Chinese, and French women had similar levels of wrinkling. These levels were notably higher than those observed in Indian and Japanese women. This finding aligns with some previous research suggesting that wrinkle onset and progression can vary significantly based on ethnic background.

Despite these physical differences, the study found strong consistencies in how these features influenced perception. When looking at the full sample, wrinkles in both the under-eye and crow’s feet regions showed a strong positive correlation with perceived age. This means that as wrinkle density and volume increased, assessors consistently rated the faces as looking older.

On the other hand, wrinkles were negatively correlated with ratings of health and attractiveness. Faces with more pronounced lines around the eyes were perceived as less healthy and less attractive. This pattern held true regardless of the ethnic group of the woman or the assessor.

The study also highlighted the role of skin gloss, or radiance. Higher levels of specular gloss, which corresponds to the shine or glow of the skin, were associated with perceptions of better health and higher attractiveness. This suggests that skin radiance is a universal cue for vitality.

In contrast, skin tone evenness showed a more complex relationship. While generally associated with youth and health, it appeared to be a stronger cue for health judgments than for age. Uneven pigmentation and lower skin lightness were linked to lower health ratings, particularly in populations with darker skin tones.

Regression analyses allowed the researchers to determine which features were the strongest predictors of the ratings. For perceived age, wrinkles in the crow’s feet region emerged as a significant predictor for all five ethnic groups. This confirms that lines at the corners of the eyes are a primary marker used by people to estimate a woman’s age.
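
The article does not reproduce these models, but per-group analyses of this kind are commonly fit as ordinary least-squares regressions. A minimal sketch of that approach, with hypothetical file and column names rather than the study's actual variables:

    # Minimal sketch of a per-group multiple regression; file and column
    # names are hypothetical, not taken from the study's dataset.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("periorbital_measures.csv")  # hypothetical file

    for group, sub in df.groupby("ethnic_group"):
        model = smf.ols(
            "perceived_age ~ crows_feet_density + undereye_wrinkle_density + lightness + gloss_intensity",
            data=sub,
        ).fit()
        print(group)
        print(model.params.round(2))  # which features predict rated age in this group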

For Japanese and French women, wrinkles specifically under the eyes provided additional information for age judgments. This suggests that in these groups, the under-eye area may contribute more distinct visual information regarding aging than in other groups.

When predicting perceived health, the results were more varied. While wrinkles remained a negative predictor, skin color variables played a more prominent role. For Indian women, lighter skin in the under-eye region was a significant positive predictor of rated health.

Similarly, for South African women, skin yellowness was a positive predictor of both health and attractiveness ratings. This indicates that while wrinkles drive age perception, color cues are vital for judgments of well-being in these populations. The researchers posit that pigmentary issues, such as dark circles, may weigh more heavily on health perception in darker skin types.

An exception to these specific predictive patterns was observed in the French group regarding health ratings. While the overall statistical models were effective, no single skin feature stood out as a solitary predictor for health judgments in French women. This implies that French assessors might use a more holistic approach, combining multiple features rather than relying on a single cue like wrinkles or color.

The study has certain limitations that warrant mention. The sample size for the specific sub-group analyses was relatively small, with only 36 women per ethnicity. This reduces the statistical power to detect very subtle differences within each group.

Additionally, the study relied on static digital images. In real-world interactions, facial dynamics and expressions play a major role in the visibility of crow’s feet and other lines. Future research could investigate how movement influences the perception of these features.

The study, “Effects of under-eye skin and crow’s feet on perceived facial appearance in women of five ethnic groups,” was authored by Bernhard Fink, Remo Campiche, Todd K. Shackelford, and Rainer Voegeli.

Adolescents with high emotional intelligence are less likely to trust AI

22 December 2025 at 19:00

A new study published in the journal Behavioral Sciences highlights generational differences in how adolescents and their parents interact with artificial intelligence. The research suggests that teens with higher emotional intelligence and supportive, authoritative parents tend to use AI less frequently and with greater skepticism. Conversely, adolescents raised in authoritarian environments appear more likely to rely on AI for advice and trust it implicitly regarding data security and accuracy.

Artificial intelligence has rapidly integrated into daily life, reshaping how information is accessed and processed. This technological shift is particularly consequential for adolescents, who are at a developmental stage where they are refining their social identities and learning to navigate complex information ecosystems.

While AI offers educational support, it also presents risks related to privacy and the potential for emotional over-reliance. Previous investigations have examined digital literacy or parenting styles in isolation. However, few have examined how these factors interact with emotional traits to shape trust in AI systems.

The authors of this study sought to bridge this gap by exploring the concept of a “digital secure base.” This theoretical framework proposes that strong, supportive family relationships provide a safety net that helps young people explore the digital world responsibly.

The researchers aimed to understand if emotional skills and specific family dynamics might predict whether a teen uses AI as a helpful tool or as a substitute for human connection. They hypothesized that the quality of the parent-child relationship could influence whether an adolescent develops a critical or dependent attitude toward these emerging technologies.

To investigate these dynamics, the research team recruited 345 participants from southern Italy. The sample consisted of 170 adolescents between the ages of 13 and 17. It also included 175 parents, with an average age of roughly 49. Within this group, the researchers were able to match 47 specific parent-adolescent pairs for a more detailed analysis. The data was collected using online structured questionnaires.

Participants completed several standardized assessments. They answered questions regarding parenting styles, specifically looking for authoritative or authoritarian behaviors. They also rated their own trait emotional intelligence, which measures how people perceive and manage their own emotions. Additional surveys evaluated perceived social support from family and friends.

To measure AI engagement, the researchers developed specific questions about the frequency of use and trust. These items asked about sharing personal data, seeking behavioral advice, and using AI for schoolwork. Trust was measured by how much participants believed AI data was secure and whether AI gave better advice than humans.

The data revealed a clear generational divide regarding usage habits. Adolescents reported using AI more often than their parents for school or work-related tasks. Approximately 32 percent of teens used AI for these purposes frequently, compared to only 17 percent of parents. Adolescents were also more likely to ask AI for advice on how to behave in certain situations.

In terms of trust, the younger generation appeared much more optimistic than the adult respondents. Teens expressed higher confidence in the security of the data they provided to AI systems. They were also more likely to believe that AI could provide better advice than their family members or friends. This suggests that adolescents may perceive these systems as more competent or benevolent than their parents do.

The researchers then analyzed how personality and family environment related to these behaviors. They found that adolescents with higher levels of trait emotional intelligence tended to use AI less frequently. These teens also expressed lower levels of trust in the technology. This negative association suggests that emotionally intelligent youth may be more cautious and critical. They may rely on their own internal resources or human networks rather than turning to algorithms for guidance.

A similar pattern emerged regarding parenting styles. Adolescents who described their parents as authoritative—characterized by warmth, open dialogue, and clear boundaries—were less likely to rely heavily on AI. This parenting style was associated with what the researchers called “balanced” use. These teens engaged with the technology but maintained a level of skepticism.

A different trend appeared for those with authoritarian parents. This parenting style involves rigid control and limited communication. Adolescents in these households were more likely to share personal data with AI systems. They also tended to seek behavioral advice from AI more often. This suggests a potential link between a lack of emotional support at home and a reliance on digital alternatives.

Using the matched parent-child pairs, the study identified two distinct profiles among the adolescents. The researchers labeled the first group “Balanced Users.” This group made up about 62 percent of the matched sample. These teens had higher emotional intelligence and reported strong family support. They used AI cautiously and did not view it as superior to human advice.

The second group was labeled “At-Risk Users.” These adolescents comprised roughly 38 percent of the matched pairs. They reported lower emotional intelligence and described their parents as more authoritarian. This group engaged with AI more intensively. They were more likely to share personal data and trust the advice given by AI over that of their parents or peers. They also reported feeling less support from their families.
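
The summary does not state how these two profiles were derived; groupings like this are often produced with a clustering algorithm such as k-means. A hypothetical sketch under that assumption, with invented file and column names:

    # Hypothetical sketch: the article does not specify the profiling method;
    # k-means clustering on standardized scores is one common approach.
    import pandas as pd
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    pairs = pd.read_csv("matched_pairs.csv")  # hypothetical file
    features = pairs[["emotional_intelligence", "family_support", "ai_trust", "ai_use"]]

    X = StandardScaler().fit_transform(features)  # put all scales on equal footing
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

    pairs["profile"] = labels
    print(pairs.groupby("profile")[features.columns].mean())  # inspect cluster means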

These findings imply that emotional intelligence acts as a buffer against uncritical technology adoption. Adolescents who can regulate their own emotions may feel less need to turn to technology for comfort or guidance. They appear to approach AI as a tool rather than a companion. This aligns with the idea that emotionally competent individuals are better at critical evaluation.

The connection between parenting style and AI use highlights the importance of the family environment. Authoritative parenting seems to foster independent thinking and digital caution. When parents provide a secure emotional foundation, teens may not feel the need to seek validation from artificial agents. In contrast, authoritarian environments might leave teens seeking support elsewhere. If they cannot get emotional regulation from their parents, they may turn to AI systems that appear competent and non-judgmental.

The study provides evidence that AI systems cannot replace the emotional containment provided by human relationships. The results suggest that rather than simply restricting access to technology, interventions should focus on strengthening family bonds.

Enhancing emotional intelligence and encouraging open communication between parents and children could serve as protective factors. This approach creates a foundation that allows teens to navigate the digital world without becoming overly dependent on it.

The study has several limitations that affect how the results should be interpreted. The design was cross-sectional, meaning it captured data at a single point in time. This prevents researchers from proving that parenting styles cause specific AI behaviors. It is possible that the relationship works in the other direction or involves other factors. The sample size for the matched parent-child pairs was relatively small. This limits the ability to generalize the specific user profiles to broader populations.

Additionally, the study relied on self-reported data. Participants may have answered in ways they felt were socially acceptable rather than entirely accurate. There is also the potential for common-method bias since the same individuals provided data on both their personality and their technology use. The research focused primarily on psychological and relational factors. It did not account for socioeconomic status or cultural differences that might also influence access to and trust in AI.

Future research should look at these dynamics over time. Longitudinal studies could track how changes in emotional intelligence influence AI trust as teens grow older. Researchers could also include objective measures of AI use, such as usage logs, rather than relying solely on surveys.

Exploring these patterns in different cultural contexts would also be beneficial to see if the findings hold true globally. Further investigation is needed to understand how specific features of AI, such as human-like conversation styles, specifically impact adolescents with lower emotional support.

The study, “Emotional Intelligence and Adolescents’ Use of Artificial Intelligence: A Parent–Adolescent Study,” was authored by Marco Andrea Piombo, Sabina La Grutta, Maria Stella Epifanio, Gaetano Di Napoli, and Cinzia Novara.

Not all psychopathic traits are equal when it comes to sexual aggression

22 December 2025 at 17:00

Recent research published in the Journal of Personality provides a comprehensive look at the relationship between psychopathy and sexual aggression. By aggregating data from over one hundred separate samples, the researchers determined that while psychopathy is generally associated with sexually aggressive behavior, the connection varies depending on the specific type of aggression and the specific personality traits involved. These findings help clarify which aspects of the psychopathic personality are most dangerous regarding sexual violence.

The rationale for this large-scale analysis stems from the serious societal impact of sexual aggression. This term covers a wide range of non-consensual sexual activities, including the use of physical force, coercion, and verbal manipulation. Previous scientific literature has established a link between psychopathy and antisocial behavior.

However, prior summaries of the data primarily focused on whether sexual offenders would re-offend after being released from prison. There was a gap in understanding the fundamental relationship between psychopathy and sexual aggression across different populations, such as community members or college students, rather than just convicted criminals.

Additionally, the researchers sought to understand psychopathy not as a single block of negative traits but as a nuanced personality structure. They employed the triarchic model of psychopathy to do this. This model breaks psychopathy down into three distinct components: boldness, meanness, and disinhibition.

Boldness involves social dominance, emotional resilience, and venturesomeness. Meanness encompasses a lack of empathy and cruelty. Disinhibition refers to impulsivity and a lack of restraint. The researchers wanted to see how these specific dimensions related to different forms of sexual violence, such as rape, child molestation, and sexual harassment.

To conduct this investigation, the research team performed a meta-analysis. This is a statistical method that combines the results of multiple independent studies to identify broader patterns that a single study might miss. They performed a systematic search of databases for studies published between 1980 and early 2023.

To be included, a study had to involve adult participants and measure both psychopathy and sexual aggression. The final analysis included 117 independent samples from 95 studies, representing a total of 41,009 participants. The samples were diverse, including forensic groups like prisoners, as well as college students and community members.

A major challenge the researchers faced was that not every study used the same questionnaire to measure psychopathy. Some used the well-known Psychopathy Checklist, while others used self-report surveys. To solve this, the team used a statistical technique called relative weights analysis. This allowed them to estimate the levels of boldness, meanness, and disinhibition present in various psychopathy measures.

By calculating these weights, they could infer how the three traits influenced sexual aggression even in studies that did not explicitly use the triarchic model. They then ran statistical models to see how strong the associations were and tested for potential influencing factors, such as the gender of the participants or the type of measurement tool used.
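
The pooling step at the heart of any meta-analysis can be illustrated compactly. The sketch below shows a basic fixed-effect pool of correlations via Fisher's z transform with inverse-variance weights; the study's actual pipeline (random-effects models plus relative weights analysis) adds machinery beyond this, and the numbers here are invented:

    # Minimal sketch: inverse-variance pooling of correlations via Fisher's z.
    # Hypothetical per-study (r, n) pairs; the real analysis used random-effects
    # models plus relative weights analysis, which this sketch does not implement.
    import math

    studies = [(0.25, 310), (0.18, 120), (0.33, 540)]  # (correlation, sample size)

    num = den = 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))  # Fisher's z transform
        w = n - 3                              # inverse of var(z) = 1 / (n - 3)
        num += w * z
        den += w

    z_pooled = num / den
    r_pooled = (math.exp(2 * z_pooled) - 1) / (math.exp(2 * z_pooled) + 1)  # back-transform
    print(f"pooled r = {r_pooled:.3f}")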

The results of the meta-analysis showed a moderate, positive relationship between general psychopathy and general sexual aggression. This means that as psychopathic traits increase, the likelihood of committing sexually aggressive acts tends to increase as well. This pattern held true for several specific types of offending. The study found positive associations between psychopathy and sexual homicide, sexual sadism, voyeurism, exhibitionism, and online sexual harassment. The connection was particularly strong for sexual cyberbullying and harassment.

However, the findings revealed important exceptions. When the researchers looked specifically at rape and child molestation, the results were different. The analysis did not find a significant statistical link between global psychopathy scores and rape or child molestation in the aggregate data. This suggests that while psychopathy is a risk factor for many types of antisocial sexual behavior, it may not be the primary driver for these specific offenses in every case, or the relationship is more complex than a simple direct correlation.

When the researchers broke down psychopathy into its triarchic components, the picture became clearer. They found that meanness and disinhibition were positively related to sexual aggression. Individuals who scored high on traits involving cruelty, lack of empathy, and poor impulse control were more likely to engage in sexually aggressive behavior. This aligns with theories that sexual aggression often involves a failure to inhibit sexual impulses and a disregard for the suffering of others.

In contrast, the trait of boldness showed a different pattern. The researchers found that boldness was negatively related to sexual aggression. This implies that the socially dominant and emotionally resilient aspects of psychopathy might actually reduce the risk of committing sexual aggression, or at least are not the traits driving it. Boldness is often associated with adaptive social functioning, which might explain why it does not track with these maladaptive behaviors in the same way meanness and disinhibition do.

The study also identified several factors that influenced the strength of these relationships. The type of sample mattered. The link between psychopathy and sexual aggression was stronger in samples of sexual offenders compared to samples of students or the general community. This difference suggests that in forensic populations, where psychopathy scores might be higher or more severe, the trait plays a larger role in aggressive behavior.

Measurement methods also played a role. The relationship appeared stronger when sexual aggression was measured using risk assessment tools rather than self-report surveys. Risk assessment tools often include items related to criminal history and antisocial behavior, which naturally overlap with psychopathy. This could artificially inflate the apparent connection. Conversely, studies that relied on medical records or clinician ratings tended to show weaker associations than those using self-reports.

The findings regarding child molestation were particularly distinct. When child molestation was removed from the general category of sexual aggression, the overall link with psychopathy became stronger. This indicates that child molestation may be etiologically distinct from other forms of sexual violence. The researchers noted that child molesters often score lower on psychopathy measures compared to other types of sexual offenders. This group might be driven by different psychological mechanisms than the callousness and impulsivity that characterize psychopathy.

There are some limitations. The studies included in the meta-analysis varied widely in their methods, populations, and definitions. This high level of heterogeneity means that the average results might not apply perfectly to every specific situation or individual.

Additionally, the relative weights analysis relies on estimating trait levels rather than measuring them directly, which introduces a layer of abstraction. Some specific forms of aggression, like sexual homicide, had very few studies available, which makes those specific findings less robust than the general ones.

Future research could benefit from more direct measurements of the triarchic traits in relation to sexual violence. The researchers suggest that simply looking at a total psychopathy score obscures important details. Understanding that meanness and disinhibition are the primary danger signals, while boldness is not, allows for more precise risk assessment.

In terms of practical implications, these results suggest that prevention and treatment programs should focus heavily on the specific deficits associated with meanness and disinhibition. Interventions that target empathy deficits and impulse control may be more effective than broad approaches. Furthermore, the lack of a strong link with child molestation indicates that this population requires a different conceptual framework and treatment approach than other sexual offenders.

The study, “Psychopathy and sexual aggression: A meta-analysis,” was authored by Inti Brazil, Larisa McLoughlin, and colleagues.

New psychology research identifies a simple trait that has a huge impact on attractiveness

22 December 2025 at 15:00

New research suggests that a potential partner’s willingness to protect you from physical danger is a primary driver of attraction, often outweighing their actual physical strength. The findings indicate that these preferences likely stem from evolutionary adaptations to dangerous ancestral environments, persisting even in modern, relatively safe societies. This study was published in the journal Evolution and Human Behavior.

Throughout human evolutionary history, physical violence from other humans posed a significant and recurrent threat to survival. In these ancestral settings, individuals did not have access to modern institutions like police forces or judicial systems. Instead, they relied heavily on social alliances, including romantic partners and friends, for defense against aggression. Consequently, evolutionary psychology posits that humans may have evolved specific preferences for partners who demonstrate both the capacity and the motivation to provide physical protection.

Previous scientific inquiries into partner choice have frequently focused on physical strength or formidability. These studies often operated under the assumption that strength serves as a direct cue for protective capability. But physical strength and the willingness to use it are distinct traits. A physically powerful individual might not be inclined to intervene in a dangerous situation, whereas a less formidable individual might be ready to defend an ally regardless of the personal risk.

Past investigations rarely separated these two factors, making it difficult to determine whether people value the ability to fight or the commitment to do so. The authors of the current study aimed to disentangle the capacity for violence from the motivation to employ it in defense of a partner. They sought to understand if the mere willingness to face a threat is sufficient to increase a person’s desirability as a friend or mate.

“Nowadays, many of us live in societies where violence is exceedingly rare, and protection from violence is considered the responsibility of police and courts. As such, you wouldn’t really predict that people should care if their romantic partner or friends are or are not willing to step up to protect them during an altercation,” said study author Michael Barlev, a research assistant professor at Arizona State University.

“However, for almost the entire history of our species, for hundreds of thousands of years, we lived in a social world scarred by violence, multiple orders of magnitude higher than it is today, and where protection was the responsibility of romantic partners, family, friends, and coalitional allies. Our psychology, including what we look for in romantic partners and friends, evolved to survive in such a world.”

To investigate this, the research team conducted a series of seven experiments involving a total of 4,508 adults from the United States. Participants were recruited through Amazon Mechanical Turk. The study utilized a vignette-based methodology where participants read detailed scenarios asking them to imagine they were with a partner, either a date or a friend.

In the primary scenario used across the experiments, the participant and their partner are described leaving a restaurant. They are then approached by an intoxicated aggressor who attempts to strike the participant. The researchers systematically manipulated the partner’s reaction to this immediate threat.

In the “willing” condition, the partner notices the danger and physically intervenes to shield the participant. In the “unwilling” condition, the partner sees the threat but steps away, leaving the participant exposed. A control condition was also included where the partner simply does not see the threat in time to react. In addition to these behavioral variations, the researchers modified the descriptions of the partner’s physical strength, labeling them as weaker than average, average, or stronger than average.

The data revealed that discovering a person is willing to protect significantly increased their attractiveness rating as a romantic partner or friend. This effect appeared consistent regardless of the partner’s described physical strength. The findings suggest that the intent to defend an ally is a highly valued trait in itself. In contrast, partners who stepped away from the threat saw a sharp decline in their desirability ratings compared to the control condition.

“We present evidence that our partner choice preferences—what we look for in romantic partners and friends—are adapted to ancestral environments,” Barlev told PsyPost. “I think that is a very important—and generally unappreciated—fact about partner choice preferences, and psychology more generally.”

The researchers also uncovered distinct patterns based on gender, particularly regarding the penalty for unwillingness. When women evaluated male dates, a refusal to protect acted as a severe penalty to attractiveness. The ratings for unwilling men dropped precipitously, suggesting that for women seeking male partners, a lack of protective instinct is effectively a dealbreaker.

Men also valued willingness in female partners, but they were more lenient toward unwillingness. When men evaluated female dates who stepped away from the threat, the decline in attractiveness was less severe than what women reported for unwilling men. This asymmetry aligns with evolutionary theories regarding sexual dimorphism and the historical division of risk in physical conflicts.

“We found that willingness was hugely important, for raters of both sexes, and when rating both male and female friends and dates,” Barlev said. “In particular, when women rated male dates, willingness to protect was very attractive, whereas failure to do so—stepping away—was a deal-breaker (the attractiveness of unwilling-to-protect men plummeted compared to when no information about willingness or unwillingness to protect was given).”

The researchers also explored the role of physical strength. While women did express a preference for stronger men, a mediation analysis clarified the underlying psychological mechanism. The analysis suggested that women tended to infer that stronger men would be more willing to protect them.

Once this inference of willingness was statistically controlled, physical strength itself had a much smaller independent effect on attraction. This indicates that strength is attractive largely because it signals a higher probability of protective behavior.
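
A mediation analysis of this shape can be sketched with two regressions: one predicting the mediator (inferred willingness) from strength, and one predicting attraction from both. The sketch below uses hypothetical variable names; published analyses of this kind typically also bootstrap a confidence interval around the indirect effect, which is omitted here:

    # Minimal mediation sketch (strength -> inferred willingness -> attraction).
    # Variable names are hypothetical; real analyses usually bootstrap the
    # indirect effect rather than just multiplying coefficients.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("vignette_ratings.csv")  # hypothetical file

    a = smf.ols("inferred_willingness ~ strength", data=df).fit()
    b = smf.ols("attraction ~ inferred_willingness + strength", data=df).fit()

    indirect = a.params["strength"] * b.params["inferred_willingness"]
    direct = b.params["strength"]
    print(f"indirect (via willingness) = {indirect:.3f}, direct = {direct:.3f}")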

Subsequent experiments tested the limits of this preference by manipulating the outcome of the confrontation. The researchers introduced scenarios where the partner attempts to intervene but is overpowered and pushed to the ground. Surprisingly, the data showed that a partner who tries to help but fails is still viewed as highly attractive. The attempt itself appeared to be the primary driver of the positive rating, rather than the successful neutralization of the threat.

A final experiment examined the most extreme scenario where the partner fails to stop the attack and the participant is physically harmed. In this condition, the aggressor strikes the participant after the partner’s failed intervention.

Even in cases where the participant suffered physical harm because the partner failed, the partner remained significantly more attractive than one who was unwilling to act. This suggests that the signal of commitment inherent in the act of defense carries more weight in partner evaluation than the immediate physical outcome.

The study also compared preferences for friends versus romantic partners. While willingness to protect was valued in both categories, the standards for friends were generally more relaxed. The penalty for being unwilling to protect was nearly three times more severe for romantic partners than for friends. This difference implies that while protection is a valued attribute in all close alliances, it is considered a more critical requirement for long-term mates.

“Strength—or more generally, ability to protect—mattered only little, much less than we thought it would,” Barlev explained. “In our earlier experiments, women showed a weak preference for strength in male dates, but most of this had to do with the underlying inference that stronger men would be more willing—rather than more able—to protect them. In fact, in our later experiments, women found dates attractive even if they tried to protect but failed, and such dates were not less attractive than dates who tried to protect and succeeded.”

“That’s surprising, because whether you protect someone is a function of both your willingness and ability to do so. But here’s one way to think about this: If the aggressor is a rational decision-maker, his decision of whether to fight or retreat depends not only on his strength relative to yours but also on how much each side is willing to risk. So, he should not attack you even if you are weaker if you show that you are willing to risk a lot. Meaning, potentially even more important than how strong you are is your readiness to step up and fight when it’s needed.”

As with all research, there are some limitations to keep in mind. The study relied on hypothetical vignettes rather than real-world behavioral observations. While imagined scenarios allow for precise control over variables, they may not perfectly capture how individuals react during actual violent encounters. Participants might overestimate or underestimate their emotional reactions to such visceral events when reading about them on a screen.

Additionally, the sample consisted entirely of participants from the United States. This geographic focus means the results reflect preferences in a modern Western society where rates of interpersonal violence are historically low compared to ancestral environments. It remains to be seen whether these preferences would differ in cultures with higher rates of daily violence. Preferences for physical strength might be more pronounced in environments where physical safety is less assured by external institutions.

“One big next step is to ask how preferences for physical strength and willingness to protect vary across societies,” Barlev told PsyPost. “Both preferences are likely tuned to some extent to the social and physical environment in which people live, such as how dangerous it is. Strength in particular can be an asset or a liability—strong individuals, especially men, would be better able to protect themselves and others from violence, but such men might also be more violent toward their romantic partners and friends.”

“Because most of our American participants live in relatively safe environments, their weaker preference for strength may partially reflect this down-regulation. If that’s right, we’d predict that people in more dangerous environments will value both strength and willingness to protect somewhat more.”

The study, “Willingness to protect from violence, independent of strength, guides partner choice,” was authored by Michael Barlev, Sakura Arai, John Tooby, and Leda Cosmides.

Social media surveillance of ex-partners linked to worse breakup recovery

22 December 2025 at 01:00

A new series of studies published in Computers in Human Behavior has found that keeping tabs on a former romantic partner through social media hinders emotional recovery. The findings indicate that both intentional surveillance and accidental exposure to an ex-partner’s content are associated with increased distress, jealousy, and negative mood.

Tara C. Marshall, an associate professor at McMaster University, conducted this research to understand the psychological aftermath of maintaining digital connections with former partners. While social media platforms allow users to maintain contact with friends and family, they also create an archive of information about past relationships. Marshall sought to clarify whether observing an ex-partner actively or passively leads to worse recovery outcomes over time.

Previous research on this topic often relied on data collected at a single point in time, which makes it difficult to determine if social media use causes emotional distress or if distressed individuals simply use social media more often. By examining the timing of these behaviors, the study intends to determine if observing an ex-partner precedes a decline in well-being. The research also explores whether personality traits like attachment anxiety, characterized by a fear of rejection and a desire for extreme closeness, worsen these effects.

Marshall conducted four separate studies to address these questions using different methodologies. The first study employed a longitudinal design to assess changes over time. Marshall recruited 194 adults through Amazon Mechanical Turk who had experienced a romantic breakup within the previous three months.

To be included, participants had to be registered Facebook users who had viewed their ex-partner’s profile at least once. Participants completed an initial survey measuring their attachment style, active Facebook surveillance, and current levels of distress. Six months later, they completed the same measures.

The results from the first study showed that frequent monitoring of an ex-partner’s Facebook page was associated with higher levels of distress and jealousy at both the beginning of the study and six months later. While feelings of distress generally declined over time for most participants, active observation moderated the change in negative affect.

Specifically, individuals who engaged in high levels of surveillance saw their negative mood increase over the six-month period. The data also revealed that the link between active observation and breakup distress was stronger for people with high attachment anxiety. This suggests that for individuals who already fear abandonment, seeing reminders of an ex-partner online is particularly painful.

To better understand the immediate emotional impact of social media exposure, Marshall conducted a second study using an experimental design. This study involved 407 adults recruited from the United States who had experienced a breakup within the last year.

Participants were randomly assigned to one of three conditions. One group was instructed to imagine looking at their ex-partner’s Facebook profile, including photos and relationship status. A second group imagined looking at an acquaintance’s Facebook profile. The third group imagined their ex-partner in a school or workplace setting, without any social media context.

The experiment revealed that participants who visualized their ex-partner’s Facebook page reported significantly higher levels of jealousy compared to those who imagined an acquaintance or the ex-partner in a real-world setting.

This increased jealousy was statistically linked to higher levels of negative affect and breakup distress. The findings indicate that there is something uniquely triggering about social media observation. It is not simply thinking about the ex-partner that causes jealousy, but rather the specific context of social media, which often displays personal information and interactions with potential new romantic rivals.

The third study utilized a daily diary method to capture real-time fluctuations in mood and behavior. Marshall recruited 77 undergraduate students in the United Kingdom who had gone through a breakup in the last two years. For seven consecutive days, participants completed a survey every night before bed. They reported whether they had engaged in active observation, defined as deliberately searching for their ex-partner’s profile, or passive observation, defined as the ex-partner’s posts appearing in their feed without a search. They also rated their daily negative emotions and specific distress regarding the breakup.

This daily tracking provided evidence for the timing of these emotional shifts. On days when participants passively observed their ex-partner on platforms like Facebook, Instagram, or Snapchat, they reported higher negative affect for that same day. This suggests that even unintentional exposure can dampen one’s mood. When participants engaged in active observation, the consequences appeared more severe.

Active searching was associated with higher breakup distress on the same day and predicted higher distress on the following day. This finding supports the idea that surveillance does not just reflect current pain but contributes to future pain.
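
A lagged, same-person finding like this is typically tested with a multilevel model in which today's surveillance predicts tomorrow's distress while controlling for today's distress, with diary days nested within participants. A minimal sketch under those assumptions, with hypothetical column names (the paper's exact model specification is not given in this summary):

    # Minimal sketch of a lagged daily-diary analysis; column names hypothetical.
    # Days are nested within participants, so a mixed model with a random
    # intercept per participant is the standard approach.
    import pandas as pd
    import statsmodels.formula.api as smf

    diary = pd.read_csv("diary.csv").sort_values(["participant", "day"])
    diary["next_day_distress"] = diary.groupby("participant")["distress"].shift(-1)
    diary = diary.dropna(subset=["next_day_distress"])

    model = smf.mixedlm(
        "next_day_distress ~ active_observation + distress",
        data=diary,
        groups=diary["participant"],
    ).fit()
    print(model.summary())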

To replicate and expand upon these findings, Marshall conducted a fourth study with a sample of 84 undergraduate students from a Canadian university. The procedure mirrored the third study but extended the diary period to ten days and included newer platforms such as TikTok and VSCO. This study also included daily measures of jealousy to see how they fluctuated with social media use.

The results of the fourth study aligned with the previous findings. On days when participants engaged in active observation, they reported greater negative affect, breakup distress, and jealousy. Similar to the third study, active observation predicted greater breakup distress on the next day.

The study also found that attachment anxiety played a significant role in daily reactions. For participants with high attachment anxiety, both active and passive observation were significantly associated with feelings of jealousy. This reinforces the conclusion that anxious individuals are more vulnerable to the negative effects of digital exposure to an ex-partner.

The collective findings across all four studies present a consistent pattern. Observing an ex-partner on social media tends to be associated with poorer recovery from a breakup. This relationship holds true across different countries and platforms.

The research highlights that passive observation is not harmless. Simply remaining friends with an ex-partner or following them allows their content to infiltrate one’s feed, which is linked to daily spikes in negative emotion. Active surveillance appears to be more detrimental, as it predicts lingering distress that carries over into subsequent days.

There are limitations to this research that should be noted. The samples were drawn primarily from Western nations, and the latter two studies relied exclusively on university students. This demographic profile may not represent the experiences of older adults or individuals from different cultural backgrounds. Additionally, the measure for passive observation in the diary studies relied on self-reporting, which is subject to memory errors. Participants may not always recall every instance of passive exposure throughout the day.

Future research could address these gaps by recruiting more diverse samples. It would also be beneficial to investigate whether these patterns hold true for other types of relationship dissolution, such as the end of a close friendship or family estrangement. Another potential avenue for investigation would be an intervention study. Researchers could randomly assign participants to block or unfollow an ex-partner and measure whether this disconnection leads to faster emotional recovery compared to those who maintain digital ties.

The study, “Social Media Observation of Ex-Partners is Associated with Greater Breakup Distress, Negative Affect, and Jealousy,” was authored by Tara C. Marshall.

Community gardens function as essential social infrastructure, analysis suggests

21 December 2025 at 23:00

A new review published in Current Opinion in Psychology suggests that community gardens function as vital social infrastructure that contributes significantly to individual and collective health. The analysis indicates that these shared green spaces foster psychological well-being, strengthen social connections, and promote civic engagement by cultivating specific forms of social capital.

Many modern societies are currently experiencing a period of transformation defined by profound challenges. These challenges include widespread social isolation, political polarization, and a decline in public trust. While community gardens are frequently established to improve neighborhood aesthetics or provide fresh food, the authors of this study argue that these goals often obscure a more profound impact.

The researchers sought to bridge the gap between the practical experience of garden-based community building and the theoretical understanding of how these bonds are formed. They aimed to provide a comprehensive explanation for how shared gardening activities transform into community resilience.

“Community gardens are often praised for producing food or beautifying neighborhoods, but those explanations felt incomplete. In my real-world experience with the Community Ecology Institute and beyond, gardens consistently function as places where trust, cooperation, and a sense of shared responsibility emerge—often among people who might not otherwise connect,” said first author Chiara D’Amore, the executive director of the Community Ecology Institute.

“At the same time, much of the research treated these outcomes as secondary or incidental. This study was motivated by a gap between practice and theory: we lacked a clear psychological explanation for how community gardens build social capital and why those relationships matter for individual and community well-being. The article brings together psychological theory and on-the-ground evidence to make those mechanisms more visible and legible.”

The authors synthesize findings from 50 studies published over the past decade to examine the social benefits of community gardens. They frame their analysis using social capital theory, specifically the framework established by Aldrich and Meyer. This framework identifies three distinct types of social capital that enable communities to engage in cooperative behavior. These are bonding social capital, bridging social capital, and linking social capital.

Bonding social capital refers to the strong ties that develop within a specific group. Bridging social capital describes the connections formed between diverse groups who might not otherwise interact. Linking social capital involves relationships between individuals and larger institutions or those in positions of power. The review suggests that community gardens are uniquely positioned to foster all three types because they are intentionally designed for people to gather and share resources.

Community gardening is consistently associated with enhanced psychological well-being across diverse populations. These benefits often stem from what the authors term the “gardening triad.” This triad consists of caretaking, a sense of accomplishment, and a connection to nature.

For children, the garden environment appears to stimulate curiosity and joy. These experiences tend to foster emotional development and learning. Adults frequently report reduced feelings of loneliness and an increased sense of purpose. Participants also describe elevated levels of happiness and self-esteem.

Community gardens often serve as places of refuge and restoration. Participants frequently describe these spaces as locations where they can experience safety and mental clarity. The act of being in the garden allows for a break from the stressors of daily life.

For immigrant, refugee, and Indigenous communities, these gardens can function as sites of cultural refuge. They allow for the healing affirmation of identity and the preservation of traditions. During collective crises, such as the COVID-19 pandemic, gardens offered a sense of continuity. They provided emotional grounding when other social structures were disrupted.

The process of gardening also promotes a sense of agency and pride. This occurs through the rhythms of plant care and participation in the food system. These experiences tend to increase self-esteem and motivation. This is particularly true in underserved contexts where individuals may face systemic barriers.

In the Global South, the review notes that community gardens have enabled marginalized groups to reclaim land. This process fosters a sense of control over personal health. Participants describe heightened belonging and self-worth as they see the tangible results of their labor.

The review also highlights evidence that community gardens significantly enhance social connectedness. The shared nature of the work cultivates repeated and cooperative interactions. These interactions nurture trust and reciprocity among neighbors.

One of the primary ways this occurs is through social learning. Gardens enable mentorship and the transmission of knowledge between generations. Older adults are able to pass down cultural and ecological wisdom to younger participants.

For youth, gardening often leads to stronger relationships with peers. It also fosters informal mentorships with adults in the community. School and campus gardens facilitate bridging social capital by linking students with families and educators.

The inclusive nature of these spaces helps to reduce social isolation. This is particularly relevant for urban residents and the elderly. Gardens create environments where individuals from diverse backgrounds can interact.

These interactions foster intercultural trust and dialogue. By working toward a common goal, participants bridge demographic differences. This helps to reduce prejudice and strengthens the overall social fabric of the neighborhood.

The review also highlights the role of community gardens in fostering civic engagement. The authors argue that these spaces act as sites for empathic growth and civic formation. This is especially observed among students and marginalized populations.

Engaging in local food systems tends to promote a grounded sense of social responsibility. It exposes participants to issues regarding sustainability and environmental justice. Students involved in these programs frequently report increased empathy toward underserved communities.

Gardens can also operate as spaces for grassroots leadership. Participants often assume roles in governance or advocacy. This generates linking social capital by connecting residents to policy networks and civic institutions.

Gardening might also deepen the connection participants feel to ecological systems, leading to a stronger environmental identity. Individuals who develop this identity are more likely to engage in pro-environmental behaviors outside of the garden context.

During the COVID-19 pandemic, community gardens demonstrated their capacity as resilient civic infrastructure. They provided food and sustained mutual aid networks. This highlighted their role in both immediate relief and long-term systemic resilience.

“Community gardens don’t just grow food—they grow connection,” D’Amore told PsyPost. “When people work side by side caring for shared land, they build trust, belonging, and mutual support in ways that are difficult to replicate through other programs or policies. These relationships help communities become healthier, more resilient, and better able to face challenges together. The takeaway is simple but powerful: investing in shared, place-based activities like community gardening is an effective way to rebuild social ties at a time when many people feel increasingly isolated.”

Despite the positive findings, the authors acknowledge several limitations in the current body of research. They note that more rigorous data collection is needed to fully understand the scope of these benefits. Future research would benefit from a combination of pre- and post-surveys alongside direct observation.

There is a need to examine how intersecting identities influence access to these spaces. Factors such as race, class, gender, and immigration status likely shape the gardening experience. Comparative studies across different geographic contexts could reveal important variations in outcomes.

The specific mechanisms that cultivate different forms of social capital also require further clarification. It is not yet fully understood which specific activities or leadership styles are most effective at building trust. Understanding these nuances is necessary for optimizing the design of future programs.

The authors also point out the need to explore barriers to garden establishment. Issues such as access to space and funding present significant challenges. Identifying strategies to overcome these obstacles is necessary for creating equitable opportunities for all communities.

The authors conclude that community gardens are a vital form of social infrastructure. They argue that the value of these spaces lies not only in the produce they grow but in the networks they nourish. They encourage continued investment in community gardens as a strategy to address both individual well-being and community resilience.

“As the Founder and Director of the Community Ecology Institute, it is our goal to continue to cultivate community garden spaces in our community in Howard County, Maryland and to create tools and resources that help other communities do the same in ways that are connected to research-based best practices,” D’Amore added.

The study, “Community Gardens and the Cultivation of Social Capital,” was authored by Chiara D’Amore, Loni Cohen, Justin Chen, Paige Owen, and Calvin Ball.

Single moderate dose of psilocybin linked to temporary reduction in OCD symptoms

21 December 2025 at 17:00

A new study suggests that a moderate dose of psilocybin can effectively reduce symptoms of obsessive-compulsive disorder for a short period. The findings indicate that the improvement is most pronounced in compulsive behaviors rather than obsessive thoughts. These results were published in the journal Comprehensive Psychiatry.

Obsessive-compulsive disorder, commonly known as OCD, is a chronic mental health condition. It is characterized by uncontrollable, recurring thoughts and repetitive behaviors. People with the disorder often feel the urge to repeat these behaviors to alleviate anxiety.

Standard treatments for the condition usually involve selective serotonin reuptake inhibitors or cognitive behavioral therapy. These treatments are not effective for everyone. A significant portion of patients do not find relief through traditional means. This has led scientists to explore alternative therapeutic options.

Psilocybin is the active psychoactive compound found in “magic mushrooms.” It has gained attention in recent years as a potential treatment for various psychiatric conditions. These conditions include depression, anxiety, and addiction.

Most research into psilocybin has focused on high doses that induce a profound psychedelic experience. However, there are concerns about using high doses for patients with OCD. Individuals with this disorder often struggle with a fear of losing control. The intense psychological effects of a high dose could theoretically be distressing for them.

Luca Pellegrini and his colleagues designed a study to test a different approach. Pellegrini is a researcher associated with the University of Hertfordshire and Imperial College London. The research team wanted to see if a moderate dose of psilocybin could offer therapeutic benefits without a potentially overwhelming psychedelic experience.

The researchers also aimed to determine if the biological effects of the drug could reduce symptoms independently of a “mystical” experience. This is a departure from many depression studies, which often link the therapeutic outcome to the intensity of the psychedelic journey.

The study involved 19 adult participants. All participants had a primary diagnosis of obsessive-compulsive disorder. The severity of their condition ranged from moderate to severe. They had been living with the disorder for at least one year.

The researchers employed a fixed-order, within-subject design. This means that every participant received the same treatments in the same order. There was no randomization of the dosage sequence.

Participants attended two dosing sessions separated by at least four weeks. In the first session, they received a very low dose of 1 mg of psilocybin. This served as a control or active placebo. It was expected to have minimal physiological or psychological effects.

In the second session, participants received a moderate dose of 10 mg of psilocybin. This dose was chosen to be high enough to potentially have a biological effect but low enough to minimize the risk of a challenging psychological experience.

The researchers engaged in extensive preparation with the participants. They provided psychological support before, during, and after the dosing sessions. However, this support was non-interventional. The therapists did not provide specific cognitive behavioral therapy or exposure therapy during the sessions.

During the dosing days, participants stayed in a calm and comfortable room. They were encouraged to lie down and listen to music. Therapists were present to ensure their safety and provide reassurance if needed.

The primary measure of success was the Yale-Brown Obsessive Compulsive Scale (Y-BOCS). This is a standardized clinical tool used to rate the severity of OCD symptoms. The scale measures both obsessions and compulsions separately.

The researchers assessed the participants at several time points. These included the day before dosing, the day of dosing, and then one week, two weeks, and four weeks after each dose.

The results showed a clear difference between the two doses. The 10 mg dose led to a significant reduction in OCD symptoms one week after administration. The magnitude of this improvement was considered large in statistical terms.

In contrast, the 1 mg dose resulted in much smaller changes. The difference between the effects of the 10 mg dose and the 1 mg dose was statistically significant at the one-week mark.
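
In a within-subject design like this, the key test reduces to comparing each participant's symptom change under the two doses. A minimal sketch of that comparison with invented Y-BOCS change scores (not the study's data):

    # Minimal sketch of the within-subject dose comparison; numbers are made up.
    # Each participant contributes a Y-BOCS change score (baseline minus week 1)
    # under both the 1 mg and the 10 mg condition.
    import numpy as np
    from scipy import stats

    change_1mg = np.array([2, 1, 0, 3, 1, 2, 0, 1])    # hypothetical
    change_10mg = np.array([6, 5, 8, 4, 7, 5, 6, 9])   # hypothetical

    t, p = stats.ttest_rel(change_10mg, change_1mg)    # paired t-test
    diff = change_10mg - change_1mg
    cohens_d = diff.mean() / diff.std(ddof=1)          # within-subject effect size
    print(f"t = {t:.2f}, p = {p:.4f}, d = {cohens_d:.2f}")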

The researchers observed that the beneficial effects began to fade after the first week. By the two-week mark, the difference between the two doses was no longer statistically significant. By four weeks, symptom levels had largely returned to baseline.

One finding about the nature of the symptom relief stands out: the reduction in total scores was driven primarily by a decrease in compulsions. The scores for obsessions did show some improvement, but the change was not statistically significant.

This suggests that psilocybin might have a specific effect on the mechanisms that drive repetitive behaviors, making it easier for patients to resist the urge to perform rituals while doing less to stop the intrusive thoughts themselves.

The study also measured symptoms of depression using the Montgomery-Åsberg Depression Rating Scale. Many patients with OCD also suffer from depression. However, the researchers found no significant change in depression scores following the 10 mg dose.

This lack of effect on depression contrasts with other psilocybin studies, which typically use higher doses, such as 25 mg. The participants in this study also generally had low levels of depression to begin with, which may explain why no significant improvement was observed.

The safety profile of the 10 mg dose was favorable: participants tolerated the drug well, and few adverse events were reported.

No serious adverse events occurred during the study. Some participants experienced mild anxiety or headaches, and one participant had a brief anxiety attack after the 1 mg dose that resolved quickly.

The absence of distressing perceptual abnormalities was a key outcome. The 10 mg dose did not induce hallucinations or the intense “trip” associated with higher doses. This supports the idea that a moderate dose is a feasible option for patients who might be afraid of losing control.

The study builds on a growing body of evidence regarding psychedelics and OCD, as previous research has hinted at the potential of these compounds.

A seminal study conducted by Moreno and colleagues in 2006 investigated psilocybin in nine patients and found that symptoms decreased markedly after administration. That study tested various doses and found that even lower doses offered some relief. The current study by Pellegrini and his team validates those earlier findings with a larger sample and a focused 10 mg dose.

Other lines of research also support the idea that the psychedelic experience itself may not be necessary for symptom relief in OCD. A preclinical study on mice published in Translational Psychiatry explored this concept.

In that animal study, researchers used a “marble burying” test, in which mice’s natural tendency to bury objects serves as a common model of obsessive-compulsive traits. Mice treated with psilocybin buried significantly fewer marbles.

The researchers in the mouse study then administered a drug called buspirone alongside the psilocybin. Buspirone blocks the specific serotonin receptors responsible for the hallucinogenic effects. The mice still showed a reduction in marble burying. This suggests that the anti-compulsive effects of psilocybin might work through a different biological pathway than the psychedelic effects.

Case reports have also appeared in medical literature documenting similar effects. A report in the Journal of Psychoactive Drugs detailed the case of a 30-year-old man with severe, treatment-resistant OCD.

This patient consumed psilocybin mushrooms on his own. He reported that his symptoms completely disappeared during the experience. He also noted a lasting reduction in symptom severity for months afterward. His scores on the Y-BOCS dropped from the “extreme” range to the “mild” range.

Survey data provides additional context. A study published in Scientific Reports surveyed 174 people who had used psychedelics. Over 30% of participants with OCD symptoms reported positive effects lasting more than three months.

These participants reported reduced anxiety and a decreased need to engage in rituals. This retrospective data supports the clinical findings that psilocybin targets the behavioral aspects of the disorder.

Despite the promising results, the current study by Pellegrini has several limitations. The sample size was small, with only 19 participants. This limits the statistical power of the analysis.

The study did not use a randomized design. All participants received the 1 mg dose first and the 10 mg dose second. This was done to prevent any potential long-term effects of the higher dose from influencing the results of the lower dose.

However, this fixed order introduces potential bias. Participants may have expected the second dose to be more effective. The researchers attempted to blind the participants to the dose, but most participants correctly guessed when they received the higher dose.

The duration of the effect was relatively short, with significant improvement lasting only one week. This suggests that a single dose may not be a long-term cure.

The short duration implies that repeated dosing might be necessary to maintain the benefits. Future research will need to investigate the safety and efficacy of administering psilocybin on a recurring basis.

The distinction between obsessions and compulsions also requires further study. The finding that compulsions improved more than obsessions is largely preliminary. Larger studies are needed to confirm if this is a consistent pattern.

The researchers suggest that combining psilocybin with psychotherapy could enhance the results. While this study used non-interventional support, active therapy might help patients integrate the experience. Therapies like Exposure and Response Prevention could be more effective during the window of reduced symptoms.

Future clinical trials should use larger samples and randomized designs. They should also explore the potential of multiple doses. Comparing the 10 mg dose directly against the standard 25 mg dose would also be valuable.

The study, “Single-dose (10 mg) psilocybin reduces symptoms in adults with obsessive-compulsive disorder: A pharmacological challenge study,” was authored by Luca Pellegrini, Naomi A. Fineberg, Sorcha O’Connor, Ana Maria Frota Lisboa Pereira De Souza, Kate Godfrey, Sara Reed, Joseph Peill, Mairead Healy, Cyrus Rohani-Shukla, Hakjun Lee, Robin Carhart-Harris, Trevor W. Robbins, David Nutt, and David Erritzoe.

Listening to music immediately after learning improves memory in older adults and Alzheimer’s patients

21 December 2025 at 15:00

Listening to music immediately after learning new information may help improve memory retention in older adults and individuals with mild Alzheimer’s disease. A new study published in the journal Memory provides evidence that emotionally stimulating music can enhance specific types of memory recall, while relaxing music might help fade negative memories. These findings suggest that low-cost, music-based interventions could play a supportive role in managing cognitive decline.

Alzheimer’s disease is a progressive condition that damages neurological structures essential for processing information. This damage typically begins in the hippocampus and entorhinal cortex. These areas are vital for forming new episodic memories. As the disease advances, individuals often struggle to recall specific events or details from their recent past.

A common symptom in the early stages of Alzheimer’s is false recognition. This occurs when a person incorrectly identifies a new object or event as something they have seen before. Memory scientists explain this through dual-process theories. These theories distinguish between recollection and familiarity. Recollection involves retrieving specific details about an event. Familiarity is a general sense that one has encountered something previously.

In Alzheimer’s disease, the capacity for detailed recollection often declines before the sense of familiarity does. Patients may rely on that vague sense of familiarity when trying to recognize information. This reliance can lead them to believe they have seen a new image or heard a new story when they have not. Reducing these false recognition errors is a key goal for cognitive interventions.

While specific memory systems degrade, the brain’s ability to process emotions often remains relatively intact for longer. Research indicates that emotional events are generally easier to remember than neutral ones. This emotional memory enhancement relies on the amygdala. This small, almond-shaped structure in the brain processes emotional arousal.

The amygdala interacts with the hippocampus to strengthen the storage of emotional memories. Activity in the amygdala can trigger the release of adrenal hormones and neurotransmitters like dopamine and norepinephrine. These chemicals help solidify neural connections. This process suggests that stimulating the amygdala might help strengthen associated memories.

Researchers have explored whether music can serve as that stimulus. Music is known to induce strong emotional responses and activate the brain’s reward systems. Previous studies with young adults found that listening to music after learning can improve memory retention. The research team behind the current study aimed to see if this effect extended to older adults and those with Alzheimer’s.

“Our team, led by Dr. Wanda Rubinstein, began researching music-based interventions to improve memory around ten years ago, with a focus on emotional memory. The results regarding the effect of music on younger adults’ memory were promising. When presented after the learning phase, music improved visual and verbal memory,” said study author Julieta Moltrasio, a postdoctoral researcher affiliated with the National Council for Scientific and Technical Research, the University of Palermo, and the University of Buenos Aires.

“Additionally, several studies have shown that people with dementia can remember familiar songs even when they forget important events from their past. My supervisor and I work with people with dementia, so we wanted to further explore the use of music as an intervention for this population. Specifically, we wanted to explore whether music could help them learn new emotional material, such as emotional pictures.”

The study included 186 participants living in Argentina. The sample consisted of 93 individuals diagnosed with mild Alzheimer’s disease and 93 healthy older adults. A notable aspect of this group was their educational background. Many participants had lower levels of formal education than is typical in neuroscience research. This inclusion helps broaden the applicability of the scientific findings to a more diverse population.

The researchers engaged the participants in two sessions separated by one week. In the first session, participants viewed a series of 36 pictures. These images were drawn from a standardized database used in psychological research. The pictures varied in emotional content. Some were positive, some were negative, and others were neutral.

After viewing the images, the researchers divided the participants into three groups. Each group experienced a different auditory condition for three minutes. One group listened to emotionally arousing music. The researchers selected the third movement of Haydn’s Symphony No. 70 for this condition. This piece features unexpected changes in volume and rhythm intended to create high energy.

A second group listened to relaxing music. The researchers used Pachelbel’s Canon in D Major for this condition. This piece is characterized by a slow tempo and repetitive, predictable patterns. The third group served as a control and listened to white noise. White noise provides a constant background sound without musical structure.

Immediately after this listening phase, participants performed memory tasks. They were asked to describe as many pictures as they could remember. They also completed a recognition task. The researchers showed them the original pictures mixed with new ones. Participants had to identify which images they had seen before.
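
To illustrate how a recognition task like this can separate real memory from a bias to simply say “seen before,” here is a minimal scoring sketch using invented responses; the paper’s exact scoring procedure may differ.

```python
# A minimal sketch of recognition-task scoring, with invented responses.
# "Old" items were shown during learning; "new" items are unseen lures.
old_responses = [True, True, False, True, True, True]    # said "seen before" to old items
new_responses = [False, True, False, False, True, False] # said "seen before" to new items

hit_rate = sum(old_responses) / len(old_responses)          # correct recognitions
false_alarm_rate = sum(new_responses) / len(new_responses)  # false recognitions

# Corrected recognition: a participant relying on vague familiarity
# inflates both rates, so subtracting false alarms isolates discrimination.
corrected = hit_rate - false_alarm_rate
print(f"hits = {hit_rate:.2f}, false alarms = {false_alarm_rate:.2f}, corrected = {corrected:.2f}")
```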

One week later, the participants returned for the second session. They repeated the recall and recognition tasks to test their long-term memory of the images. They did not listen to the music again during this second session. This design allowed the researchers to test whether the music played immediately after learning helped consolidate the memories over time.

The results showed that emotional memory was largely preserved in both groups. Both the healthy older adults and the patients with Alzheimer’s remembered emotional pictures better than neutral ones. This confirms that the ability to prioritize emotional information remains functional even when other cognitive processes decline.

The type of music played after the learning phase had distinct effects on memory performance one week later. For healthy older adults, listening to the emotionally arousing music led to better delayed recall. They were able to describe more of the positive and neutral pictures compared to those who listened to white noise. This suggests that the physiological arousal caused by the music helped lock in the memories formed just moments before.

For the participants with Alzheimer’s disease, the benefit manifested differently. The arousing music did not increase the total number of items they could recall. It did, however, improve their accuracy in the recognition task. Patients who listened to the stimulating music made fewer false recognition errors one week later. They were less likely to incorrectly confuse a new picture for an old one.

This reduction in false recognition implies that the music may have strengthened the specific details of the memory. By boosting the recollection process, the intervention helped patients distinguish between what they had actually seen and what merely felt familiar. This specific improvement in discrimination is significant for a condition defined by memory blurring.

The researchers also found a distinct effect for the relaxing music condition. Participants who listened to Pachelbel’s Canon showed a decrease in their ability to recognize negative pictures one week later. This finding was consistent across both the healthy older adults and those with Alzheimer’s.

“Our findings showed that emotionally arousing music improves memory in older adults and patients with dementia, while relaxing music decreases negative memories,” Moltrasio told PsyPost. “Based on previous research, we already knew that relaxing music could decrease memory, but we did not expect to find that it could specifically reduce negative memories in the populations we studied. If relaxing music can reduce the memory of negative images, these findings could be useful in developing treatments for people with negative memories, such as those with PTSD.”

“I also want to highlight that although the effects of highly familiar music on emotion and memory have been well-studied, our research proved that non-familiar music can also have a significant impact. This is important because it shows that music can have a powerful effect even if you don’t have a special connection to it.”

These observed effects align with the synaptic tagging hypothesis. This biological theory suggests that creating a memory involves a temporary tag at the synapse, the connection between neurons. If a strong stimulus follows the initial event, it triggers protein synthesis that stabilizes that tag. In this study, the music likely provided the necessary stimulation to solidify the preceding visual memories.

The research indicates that “even low-cost, easily replicable interventions, such as listening to music, can positively impact the memory of individuals experiencing memory loss,” Moltrasio explained. “These findings may help other researchers and developers create targeted treatments. Furthermore, certain brain regions (e.g., those related to music listening) can remain intact even when memory is impaired. We hope these findings offer researchers, caregivers, health professionals, and relatives of people with Alzheimer’s disease a glimmer of hope.”

“Although the results were promising, the size of the effects was not large. This means that the difference between the group that received the musical treatment and the group that did not is not very big. However, it is worth noting that we did find differences between the groups. This is the first study to prove that a music intervention after learning improves memory in Alzheimer’s disease.”

Additionally, the control condition used white noise. While standard in research, white noise can sometimes be aversive to listeners. Future studies might compare music to silence to ensure the effects are driven by the music itself and not a reaction to the noise. The researchers also note that they did not directly measure physiological arousal, such as heart rate, to confirm the participants’ physical reactions to the music.

Future research aims to explore these mechanisms further. The research team is interested in how familiar music might affect memory and whether active engagement, such as singing or playing instruments, might offer more potent benefits. They are also investigating how the ability to recognize emotions in music changes with dementia. Understanding these nuances could lead to more targeted, non-pharmacological therapies for memory loss.

“We are currently investigating how music is processed and the effects of musical training on cognition,” Moltrasio told PsyPost. “One line of research focuses on how young and older adults, as well as people with dementia, process emotions in music. Among younger adults, we have examined differences in music emotion recognition and other cognitive domains, such as short-term memory and verbal and nonverbal reasoning, between musicians and non-musicians. We have also examined how personality traits may affect the recognition of emotions in music.”

“Regarding Alzheimer’s disease, we are investigating whether the ability to detect emotions in music is impaired. Even when their ability to process other emotional cues, such as those expressed through facial expressions, is impaired, they may still be able to distinguish certain emotions in music. This could be useful for developing music-based interventions that build on these participants’ abilities.”

“Another line of research that I would like to pursue is the effect of familiar music on memory,” Moltrasio continued. “Based on this research, we could develop specific interventions for people with dementia using familiar music. I am not currently working on this line of research, but it could be the next step for me and my team.”

“Our study emphasizes the importance of researching simple, low-cost interventions for dementia. This is particularly relevant when considering the demographics of individuals living with dementia in countries like Argentina. Most neuroscience research does not include individuals with low educational levels, despite the fact that they represent the majority of older adults in our country. Therefore, it is crucial to encourage and support research incorporating more diverse populations.”

The study, “The soundtrack of memory: the effect of music on emotional memory in Alzheimer’s disease and older adults,” was authored by Julieta Moltrasio and Wanda Rubinstein.

Researchers find reverse sexual double standard in sextech use

20 December 2025 at 19:00

A new study published in The Journal of Sex Research has found that men who use sexual technology are viewed with more disgust than women who engage in the same behaviors. The findings indicate a “reverse sexual double standard” in which men face harsher social penalties for using devices like sex toys, chatbots, and robots, particularly as the technology becomes more humanlike. This research suggests that deep-seated gender norms continue to influence how society perceives sexual expression and the integration of technology into intimate lives.

The intersection of technology and human sexuality is expanding rapidly. Sex technology, or “sextech,” encompasses a wide range of devices designed to enhance sexual experiences. These range from traditional vibrators and dildos to advanced artificial intelligence chatbots and lifelike sex robots. Although the use of such devices is becoming increasingly common in solitary and partnered sexual activities, a social stigma remains attached to their use. Many users keep their habits discreet to avoid judgment.

Previous observations suggest that this stigma is not applied equally across genders. While the use of vibrators by women has been largely normalized and framed as a tool for empowerment or sexual wellness, men’s use of similar devices often lacks the same social acceptance. Media depictions frequently portray men who use sex robots or dolls as socially isolated or unable to form human connections.

“Anecdotally, but also in research, discussions around using sextech tend to highlight vibrator use as a positive and empowering addition to female sexuality, while the use of devices designed for male anatomy (like sex dolls or Fleshlights) is more often viewed negatively or as unnecessary,” said study author Madison E. Williams, a PhD student at the University of New Brunswick and member of the Sex Meets Relationships Lab.

“In the same vein, sex toys tend to be more socially accepted than more advanced forms of sextech (which are also typically marketed toward male users). Our study aimed to examine whether this apparent sexual double standard could be demonstrated empirically, and if women and men held different opinions.”

The researchers focused specifically on disgust, an emotion deeply linked to the avoidance of pathogens and the policing of social norms. Disgust operates as part of a psychological “behavioral immune system,” but it also reinforces moral boundaries. They proposed that sextech might trigger disgust by violating traditional sexual norms or by evoking the “uncanny valley” effect associated with humanlike robots.

A key rationale for the study was to understand how traditional gender scripts influence these perceptions. Conventional heterosexual scripts often position men as sexual experts who should always be ready for sex and capable of pursuing women. In this context, a man’s use of a sex toy might be interpreted as a failure to secure a human partner or a lack of sexual prowess.

To investigate these questions, the researchers recruited a sample of 371 adults through the crowdsourcing platform Prolific. The participants ranged in age from 18 to 81 years, with an average age of approximately 45. The sample was relatively balanced in terms of gender, consisting of 190 women and 181 men. The majority of participants identified as heterosexual and White.

The study employed a survey design to measure disgust sensitivity in response to specific scenarios. Participants were presented with six different items describing a person using sextech. These scenarios varied based on the gender of the user and the type of technology involved. The three types of technology assessed were sex toys, the least humanlike option; erotic chatbots, which offer some conversational interaction; and sex robots, the most humanlike.

For each scenario, participants rated how disgusting they found the behavior on a scale from 1 to 7. A rating of 1 indicated “not at all disgusting,” while a rating of 7 indicated “extremely disgusting.” This measurement approach was adapted from established scales used to assess disgust sensitivity in other psychological research. The researchers compared these ratings to determine if the gender of the user or the type of device significantly influenced the emotional reaction of the observer.
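
As a rough illustration of how a 2 (user gender) × 3 (device type) ratings design can be summarized, the sketch below computes condition means from invented ratings. It is not the authors’ analysis code; the numbers are made up to mirror the qualitative pattern the study reports.

```python
# A minimal sketch of summarizing a 2 x 3 ratings design, with invented data.
import pandas as pd

ratings = pd.DataFrame({
    "user_gender": ["man", "woman"] * 6,
    "device": ["toy", "toy", "chatbot", "chatbot", "robot", "robot"] * 2,
    "disgust": [3.1, 2.0, 3.8, 3.2, 4.6, 4.1,
                2.9, 1.8, 4.0, 3.3, 4.7, 4.2],  # 1-7 scale
})

# Mean disgust per condition: rows = device type, columns = target gender.
table = ratings.pivot_table(values="disgust", index="device", columns="user_gender")
print(table)

# The gender gap within each device type (the interaction pattern).
print(table["man"] - table["woman"])
```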

The results provided clear evidence of a double standard. Across the board, participants rated men who used sextech as more disgusting than women who used the same devices. This effect was consistent regardless of the participant’s own gender. Both men and women viewed male sextech users more negatively. This confirms the hypothesis that men are penalized more heavily for incorporating technology into their sexual lives.

“Our findings suggest that men who use sex toys, exchange sexual messages with AI companions, or have sex with robots are perceived as more disgusting than women who engage in equivalent acts,” Williams told PsyPost. “This highlights a troubling double standard that penalizes men for using sexual devices, even though research has found they can offer both women and men similar sexual benefits.”

The study also found a clear hierarchy of disgust related to the type of device. Participants rated the use of simple sex toys as the least disgusting behavior. Engaging with an erotic chatbot elicited higher disgust ratings. The use of sex robots generated the strongest feelings of disgust. This supports the idea that as sexual technology becomes more humanlike, it triggers stronger negative emotional responses. This may be due to the eerie nature of artificial humans or concerns about technology replacing genuine human intimacy.

An interaction between the target’s gender and the type of device added further nuance to the findings. The gap in disgust ratings between male and female users was widest for sex toys. While men were judged more harshly in all categories, the double standard was most pronounced for the simplest technology. As the technology became more advanced and stigmatized—such as with sex robots—judgments became harsher for everyone, narrowing the gender gap slightly. However, men were still consistently rated as more disgusting than women even in the robot condition.

“Interestingly, although men were perceived to be more disgusting than women for their use of all forms of sextech, the gap was especially large for sex toys,” Williams said. “In other words, while overall reactions were generally more negative for more advanced technology like erotic chatbots and robots, the strongest gender difference appeared in the sex toy condition.”

The researchers also analyzed differences based on the gender of the participant. Consistent with previous psychological research on disgust sensitivity, women participants reported higher levels of disgust overall than men did. Women expressed stronger negative reactions to the depictions of sextech use across the scenarios. Despite this higher baseline of disgust among women, the pattern of judging men more harshly than women remained the same.

The researchers noted that while the double standard is statistically significant, the average disgust ratings were generally near the midpoint of the scale. The ratings indicate a moderate aversion that varies significantly based on context.

“It is important to note that for all items, disgust ratings remained around or below the midpoint of our scale – this indicates that while men were judged more harshly for their sextech use, these behaviours weren’t rated as extremely disgusting, overall,” Williams explained. “Additionally, people also considered women’s use of sextech to be somewhat disgusting, but on average men were judged more negatively.”

As with all research, there are some limitations to consider. The research relied on self-reported data, which can be influenced by social desirability bias. Participants might have modulated their answers to appear more open-minded or consistent with perceived norms. Additionally, the sample was predominantly heterosexual and Western. Perceptions of sextech and gender roles likely vary across different cultures and sexual orientations.

The study also measured disgust as a general concept without distinguishing between different types. Disgust can be driven by concerns about hygiene, violations of moral codes, or aversion to specific sexual acts. It is unclear which of these specific domains was the primary driver of the negative ratings. Future research could investigate whether the disgust comes from a perceived lack of cleanliness, a sense of unnaturalness, or a moral judgment against the user’s character.

The researchers suggest that future studies should explore how these perceptions change over time. As artificial intelligence and robotics become more integrated into daily life, the stigma surrounding their use in sexual contexts may shift. Longitudinal research could track whether familiarity with these technologies reduces the disgust response. It would also be beneficial to examine whether the context of use matters. For example, using a device alone versus using it with a partner might elicit different social judgments.

“We hope this work encourages more open, evidence-based conversations about men’s use of sextech, with the ultimate goal of reducing the stigma surrounding it,” Williams said. “Understanding that this double standard exists is the first step to normalizing and accepting all forms of sextech use, by all genders.”

The study, “Gross Double Standard! Men Using Sextech Elicit Stronger Disgust Ratings Than Do Women,” was authored by Madison E. Williams, Gabriella Petruzzello, and Lucia F. O’Sullivan.

Prenatal THC exposure linked to lasting brain changes and behavioral issues

20 December 2025 at 17:00

A recent study published in Molecular Psychiatry provides evidence that exposure to cannabis during pregnancy may alter the trajectory of brain development in offspring from the fetal stage through adulthood. The findings indicate that high concentrations of the drug can lead to sustained reductions in brain volume and anxiety-like behaviors, particularly in females. This research utilizes advanced imaging techniques in mice to track these developmental changes over time.

Cannabis contains delta-9-tetrahydrocannabinol, commonly referred to as THC. This compound is the primary psychoactive ingredient responsible for the effects associated with the drug. It works by interacting with the endocannabinoid system, a biological network that plays a role in regulating various physiological processes. This system helps guide how the brain grows and organizes itself before birth. It influences essential mechanisms such as the creation of new neurons and the formation of connections between them.

Public perception regarding the safety of cannabis has shifted alongside legal changes in many regions. As the drug becomes more accessible, usage rates among pregnant individuals have increased. Some use it to manage symptoms such as morning sickness, anxiety, or pain. However, modern cannabis products often contain significantly higher concentrations of THC than those available in previous decades.

Medical professionals need to understand how these potent formulations might influence a developing fetus over the long term. Existing data has been limited, often relying on observational studies in humans that cannot fully isolate the effects of the drug from other environmental factors. Most previous research has also looked at the brain at a single point in time rather than following its growth continuously.

“As cannabis is legalized in more countries around the world and U.S. States, it is also increasingly being viewed as natural and safe. More people, including pregnant people, are using cannabis, and the concentration of delta-9-tetrahydrocannabinol (THC), the main psychoactive component in cannabis, is increasing too,” said study author Lani Cupo, a postdoctoral researcher at McGill University and member of the Computational Brain Anatomy Laboratory.

“Pregnant people may use cannabis for a variety of reasons, either because they don’t know they are pregnant, to help manage mood changes, or to help treat symptoms associated with early pregnancy, such as nausea and vomiting accompanying morning sickness. People should be able to make their own informed decisions about what they do during pregnancy, but there is still a major gap in the scientific understanding of some of the long-term effects of cannabis exposure during pregnancy on brain development.”

The research team employed a mouse model to simulate prenatal exposure. Pregnant mice received daily injections of THC at a dose of 5 milligrams per kilogram from gestational day 3 to 10. This period corresponds roughly to the first trimester in human pregnancy. The dosage was intended to model moderate-to-heavy use, comparable to consuming high-potency cannabis products daily. A control group of pregnant mice received saline injections to provide a baseline for comparison.

To observe brain development, the scientists used magnetic resonance imaging, or MRI. They scanned the offspring at multiple time points to create a longitudinal dataset. The first set of images came from embryos extracted on gestational day 17. A second cohort of pups underwent scanning on alternate days from postnatal day 3 to 10. A third group was imaged during adolescence and adulthood, specifically on postnatal days 25, 35, 60, and 90. This approach allowed the team to track the growth curves of individual subjects throughout their lives.

Analysis of the embryonic images revealed that exposure to the drug affected physical development in the womb. Embryos exposed to THC had smaller overall body volumes compared to the control group. Despite the smaller body size, their brains showed enlargement in specific areas. The lateral ventricles, which are fluid-filled cavities within the brain, were significantly larger in the THC-exposed group. The corpus callosum, a bundle of nerve fibers connecting the brain’s hemispheres, also appeared larger at this stage.

As the mice entered the neonatal period, the pattern of growth shifted. The THC-exposed pups experienced a period of “catch-up” growth regarding their body weight. However, their brain development followed a different path. The rate of brain growth decelerated compared to the unexposed mice. This slowing of growth affected multiple regions, including the hippocampus, amygdala, and striatum.

By the time the animals reached adulthood, the structural differences remained evident. The reduction in brain volume persisted in regions such as the hippocampus and the hypothalamus. The data indicated a sex-dependent effect in the long-term outcomes. Female mice exposed to THC tended to show more pronounced volume reductions in adulthood compared to males. While male mice did exhibit some volume loss, they showed less severe reductions in specific areas like the cerebellum and olfactory bulbs compared to females.

“I was surprised by the apparent vulnerability in female mice compared to male mice when it came to effects in adulthood,” Cupo told PsyPost. “It is very clear from previous studies that sex as a biological variable is important in considering the impact of prenatal cannabis exposure, but the literature shows mixed results depending on the domain being investigated and the timing of outcomes and exposures.”

“Sometimes males are more impacted, sometimes females are more impacted. I think this highlights how critical it is to consider both biological sex and, in humans, gender, when studying prenatal exposures like cannabis. Unfortunately, some research still ignores this important consideration.”

The researchers also assessed behavior to see if these structural changes corresponded to functional differences. In the neonatal phase, researchers recorded ultrasonic vocalizations when pups were separated from their mothers. These high-frequency sounds serve as a form of communication for the young mice. Female pups exposed to THC produced fewer calls, which the authors suggest could indicate deficits in social communication. Conversely, male pups exposed to THC made more calls, potentially signaling increased anxiety or distress.

Later in adolescence, the mice underwent an open-field test to measure anxiety-like behavior. This test involves placing a mouse in a large box and observing its movement patterns. Animals that are anxious tend to stay near the walls and avoid the open center of the arena. The offspring exposed to THC moved less overall and spent significantly less time in the center of the box. This behavior is interpreted as an anxiety-like phenotype. The results provide evidence that the structural brain changes were accompanied by lasting behavioral alterations.
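
For readers curious what an “anxiety-like” readout from an open-field test looks like computationally, here is a sketch that scores a position trace for time spent in the center zone and total distance traveled. The arena size, zone definition, and data are invented for illustration and may not match the study’s setup.

```python
# A minimal sketch of scoring an open-field position trace, invented data.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical (x, y) positions sampled over a session in a 40 x 40 cm arena.
xy = rng.uniform(0, 40, size=(1000, 2))

# Define the "center" as the inner square (10-30 cm on each axis), one
# common convention; the study's exact zone definition may differ.
in_center = np.all((xy > 10) & (xy < 30), axis=1)

time_in_center = in_center.mean()  # fraction of samples spent in the center
distance = np.linalg.norm(np.diff(xy, axis=0), axis=1).sum()  # total path length

# Anxious animals typically show a lower center fraction and less movement.
print(f"fraction of time in center: {time_in_center:.2f}")
print(f"total distance traveled: {distance:.1f} cm")
```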

To investigate the cellular mechanisms behind these changes, the researchers used scanning electron microscopy. They examined brain tissue from the hippocampus at a very high resolution. In the embryonic stage, the THC group showed an increased number of dividing cells. This suggests that the drug might trigger premature cell proliferation. However, in the neonatal stage, they did not find a significant difference in the number of dying cells. This implies that the reduced brain volume observed later was likely not caused by mass cell death but perhaps by altered developmental timing.

“In short, we found that exposure to a high concentration of THC early in pregnancy can affect the brain until adulthood,” Cupo explained. “Specifically, we found larger volume of the ventricles, or fluid-filled cavities within the brain, before birth. Then, as the baby mice aged over the first two weeks of life, the brain of THC-exposed pups showed a decreased growth rate compared to the unexposed controls. This smaller volume was sustained until adulthood, especially in female mice.”

“Further, during adolescence the mice showed anxiety-like behavior. Notably, these results are fairly subtle, but they suggest that the trajectory of brain development itself can be impacted by exposure to cannabis early in pregnancy.”

While this study offers detailed insights into brain development, it relies on a rodent model. Mice and humans share many biological similarities, particularly in the endocannabinoid system, which makes them useful for studying basic developmental processes. However, the complexity of the human brain and environmental influences cannot be fully replicated in animal studies. For instance, the study used injections to deliver the drug, whereas humans typically inhale or ingest cannabis. The metabolism and concentration of the drug in the blood can differ based on the method of administration.

Despite these differences, animal models allow scientists to control variables that are impossible to manage in human research. They permit the isolation of a specific chemical’s effect without the confounding variables of diet, socioeconomic status, or other drug use that often complicate human studies. This specific study provided a level of anatomical detail through longitudinal imaging and microscopy that would be unethical or impossible to perform in living humans. The findings serve as a biological proof of principle that prenatal exposure can alter neurodevelopmental trajectories.

The study also utilized a relatively high dose of THC. While this was intended to mimic heavy usage, it may not reflect the effects of occasional or lower-dose use. Additionally, the study focused on THC in isolation. Commercial cannabis products contain a complex mixture of compounds, including cannabidiol (CBD) and terpenes, which might interact with THC to produce different effects.

“It can be easy to put a lot of pressure or even blame on people who use cannabis during their pregnancies, but the reality of the human experience is complex, especially during what can be such a transitional and tumultuous time,” Cupo said. “Although our results do show long-term impacts of cannabis exposure on brain outcomes, the reality of a human choosing to use cannabis or not is much more nuanced than we can recapitulate in a laboratory setting with rodents as a model.”

“In no way do I think these results should be used to shame or blame pregnant people. Instead I hope they can be seen as part of a bigger picture emerging to help supply pregnant people and their care providers with some useful information.”

Future research aims to address some of the current study’s limitations. The authors suggest investigating different methods of administration, such as vaporized cannabis, to better mimic human usage patterns. They also plan to examine the effects of other cannabinoids, such as CBD.

“We would also like to explore the timing of exposure, for example if it begins before conception, or if the father mouse consumes cannabis before conception,” Cupo added. “We would also like to explore more complex models, such as whether early life environmental enrichment can prevent some of the long-term impacts of cannabis exposure.”

“I would just like to re-emphasize that our study is a small piece of a much larger picture that researchers have been approaching from many different angles.”

The study, “Impact of prenatal delta-9-tetrahydrocannabinol exposure on mouse brain development: a fetal-to-adulthood magnetic resonance imaging study,” was authored by Lani Cupo, Haley A. Vecchiarelli, Daniel Gallino, Jared VanderZwaag, Katerina Bradshaw, Annie Phan, Mohammadparsa Khakpour, Benneth Ben-Azu, Elisa Guma, Jérémie P. Fouquet, Shoshana Spring, Brian J. Nieman, Gabriel A. Devenyi, Marie-Eve Tremblay, and M. Mallar Chakravarty.

Harvard scientist reveals a surprising split in psychological well-being between the sexes

20 December 2025 at 15:00

A new analysis of global data reveals that while men score higher on a majority of specific wellbeing metrics, women tend to report higher overall life satisfaction. The findings suggest that females often fare better on social relationship indicators, which appear to carry significant weight in subjective assessments of a good life. These results were published in The Journal of Positive Psychology.

Societal debates regarding how men and women fare relative to one another are common. However, existing scientific literature on this topic often suffers from specific limitations. Many studies rely on narrow definitions of wellbeing that focus heavily on mental or physical health diagnoses rather than a holistic view of human flourishing.

Additionally, much of the psychological research is conducted on Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations. This geographic bias limits the ability of scientists to make universal claims about human experience across different cultures.

Tim Lomas, a psychology research scientist at the Human Flourishing Program at Harvard University, aimed to address these gaps by applying a broad conceptual framework to a truly international dataset.

“For wellbeing researchers, any sociodemographic differences—such as between males and females in the present paper—are inherently interesting and valuable in terms of furthering our understanding of the topic,” Lomas explained. “More importantly, though, one would ideally hope that such research can actually help improve people’s lives in the world. So, if we have a better sense of the ways in which males and females might respectively be particularly struggling, then that ideally helps people (e.g., policy makers) address these issues more effectively.”

Lomas utilized data collected by the Gallup World Poll, which relies on nationally representative, probability-based samples of adults aged 15 and older. The methodology typically involves surveying approximately 1,000 individuals per country to ensure the data accurately reflects the broader population.

The analysis spanned three years of data collection from 2020 through 2022, a period that necessitated a mix of telephone and face-to-face interviews depending on local pandemic restrictions. The final aggregated sample included 391,656 individual participants across 142 countries.

Lomas selected 31 specific items from the poll to assess wellbeing comprehensively. These items were categorized into three main areas: life evaluation, daily emotions and experiences, and quality of life factors. Life evaluation was measured using Cantril’s Ladder, a tool where participants rate their current and future lives on a scale from zero to ten.

Daily experiences were assessed by asking if participants felt specific emotions or had specific experiences “yesterday.” These included positive states like feeling well-rested, being treated with respect, smiling or laughing, and learning something interesting. They also included negative states such as physical pain, worry, sadness, stress, and anger.

Quality of life measures examined broader factors beyond immediate emotional states. These included satisfaction with standard of living, feelings of safety while walking alone, and satisfaction with the freedom to choose what work to do. The survey also asked about objective hardships, such as not having enough money for food or shelter.

The statistical analysis revealed that males scored more favorably than females on 21 of the 31 variables. Men were more likely to report feeling well-rested, learning something new, and experiencing enjoyment. They also reported lower levels of negative emotions like pain, worry, sadness, stress, and anger compared to women.
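
As an illustration of the item-by-item tally behind a result like “21 of 31,” the sketch below compares invented male and female means, flipping the comparison for negatively valenced items where a lower score is better. It is not the paper’s code or data.

```python
# A minimal sketch of tallying which sex fares better per item, invented data.
items = {
    # item: (male_mean, female_mean, higher_is_better)
    "well_rested":          (0.71, 0.68, True),
    "enjoyment":            (0.74, 0.72, True),
    "worry":                (0.38, 0.44, False),  # negative emotion: lower is better
    "sadness":              (0.22, 0.27, False),
    "treated_with_respect": (0.85, 0.88, True),
    "social_support":       (0.80, 0.84, True),
}

males_favored = 0
for name, (m, f, higher_better) in items.items():
    male_better = (m > f) if higher_better else (m < f)
    males_favored += male_better  # True counts as 1

print(f"males fare better on {males_favored} of {len(items)} items")
```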

Men also scored higher on measures of personal safety and autonomy. For instance, men were more likely to feel safe walking alone at night. They were also more likely to report being satisfied with their freedom to make life choices.

Despite scoring lower on a greater number of individual metrics, females reported higher scores on overall life evaluation. This finding presents a paradox where men appear to have more advantages in daily experiences and safety, yet women rate their lives more positively overall.

“Curiously and significantly…females have higher life evaluation (both present, future, and combined) on Cantril’s (1965) ‘ladder’ item. The ‘curiosity’ aspect of that sentence is that life evaluation is often regarded and used as the single best summary measure of a person’s subjective wellbeing,” Lomas wrote in the study. “…while females would seem to have greater wellbeing if just based on the life evaluation metrics alone, when structuring wellbeing into different components, males appear to do better, at least numerically. It is possible however that even though males place higher on more items, the third of items on which females excel may be more important for wellbeing.”

The data indicates that women tended to fare better on outcomes related to social connection. Females were more likely to report being treated with respect and having friends or relatives they could count on in times of trouble. They also scored higher on measures of “outer harmony,” which relates to getting along with others. Lomas suggests that because social relationships are often the strongest predictors of subjective wellbeing, strength in this area might outweigh deficits in other domains for women.

“Overall, the differences between males and females on most outcomes are not especially large, and on the whole their levels of wellbeing are fairly comparable,” he told PsyPost. “But the differences, such as they are, are still interesting and moreover actionable (e.g., with policy implications).”

These patterns were not uniform across the globe. Cultural context appeared to play a role in how sex differences manifested. South Asia was the region where males fared best relative to females.

In contrast, East Asia was the region where females fared best relative to males. This geographic variation provides evidence that sex differences in wellbeing are not purely biological but are heavily influenced by societal structures. Lomas also compared Iceland and Afghanistan to illustrate the impact of societal gender equality.

In Afghanistan, males scored higher than females on every single wellbeing metric measured. This reflects the severe restrictions and hardships faced by women in that nation. In Iceland, which is ranked highly for gender equality, females often outperformed males even on metrics where men typically lead globally.

Demographic factors such as age and education also influenced the results. Aging tended to favor males over females: as age increased, the gap between men and women often widened in men’s favor on various metrics.

However, higher levels of education and income appeared to benefit females slightly more than males. When comparing the most educated participants to the least educated, the relative position of women improved on 16 variables. A similar pattern emerged when comparing the richest quintile of participants to the poorest.

“Wellbeing is multifaceted, and people—from the individual up to whole societies—can be doing well in some ways and less well in others,” Lomas said. “This applies to comparisons between males and females, where overall both groups seem to experience advantages and disadvantages in relation to wellbeing.”

The study has some limitations that provide context for the findings. Lomas notes that the analysis relies on a specific set of 31 items available in the Gallup World Poll. It is possible that a different selection of questions could yield different results.

For example, if the survey included more nuanced questions about relationship quality, women might have outperformed men on even more metrics. The study is also cross-sectional, meaning it captures a snapshot in time rather than tracking individuals over years. This design makes it difficult to determine causal directions for the observed differences.

“Although it’s obvious to most people, I’d emphasize that the results in the paper involve averages, and there will always be exceptions and counterexamples,” Lomas noted. “This applies both at an individual level (e.g., even if males generally tend to struggle on a particular outcome, a minority will excel on it), but also at a societal level (i.e., the findings in the paper are averaged across all the countries in the World Poll, but one can usually find exceptions where countries go against the general trend).”

For future research, Lomas intends to expand this line of inquiry by conducting longitudinal analyses. “Firstly, it would be good to explore trends over time using the Gallup World Poll, which goes back to 2006,” he explained. “Additionally, we plan to use panel data from the Global Flourishing Study (for which I’m the project manager) for the same purpose, and although it has fewer years of data (its first wave was in 2023), it is a genuine panel study (unlike the World Poll, which is cross sectional), so we may get some better insights into causal dynamics.”

The study, “Global sex-based wellbeing differences in the Gallup World Poll: males do better on more metrics, but females generally do better on those that may matter most,” was authored by Tim Lomas.
