
Younger adults show higher levels of Machiavellianism and psychopathy

Personality traits associated with manipulation, callousness, and grandiosity appear to decrease with age, according to a new study published in the journal Deviant Behavior. The research suggests that Machiavellianism and psychopathy, two of the three so-called Dark Triad traits, are more pronounced in younger adults and tend to decline as people get older. Narcissism, by contrast, remained relatively stable across age groups.

Understanding how socially aversive traits change throughout life has been a challenge for psychologists. While much is known about the development of more socially desirable traits, such as agreeableness or conscientiousness, the trajectory of traits often seen as antagonistic or maladaptive has received less attention. These traits—Machiavellianism, psychopathy, and narcissism—are collectively referred to as the Dark Triad.

Machiavellianism involves a tendency to manipulate others for personal gain, often through calculated and strategic behavior. Psychopathy is marked by impulsivity, lack of empathy, and emotional coldness, while narcissism reflects an inflated sense of self-importance and a strong need for admiration.

Previous research has hinted at a decrease in some of these traits over time, possibly due to increased social responsibility or life experience. However, many studies relied on small sample sizes or did not capture the full range of adult life. In the new study, researchers affiliated with the Interdisciplinary Research Team on Internet and Society aimed to provide a more detailed picture by analyzing patterns across a wide age range using a large sample.

“Our group has always been interested in how our personalities shift throughout life, especially the so-called dark traits. Most people believe that personality is fixed, either you’re kind or manipulative, but psychology tells a more dynamic story,” explained study author Bruno Bonfá-Araujo, an assistant professor at the Universidade Tuiuti do Paraná and postdoctoral researcher at Masaryk University.

“The Dark Triad traits (Machiavellianism, narcissism, and psychopathy) represent some of the most socially aversive aspects of human behavior, like manipulation, lack of empathy, and impulsiveness. And we know little about how these traits change as people grow older. So, we wanted to explore whether younger individuals really are ‘darker’ and if maturity and life experience soften these traits. The idea came from observing how young adults prioritize competition and self-promotion, while older adults tend to value stability and empathy.”

The researchers collected data from 1,079 Brazilian adults between the ages of 18 and 81. Participants completed the Short Dark Triad questionnaire, which measures Machiavellianism, psychopathy, and narcissism through self-reported agreement with statements like “I like to use clever manipulation to get my way” or “I know that I am special because everyone keeps telling me so.” The study used both traditional group comparisons and a person-centered approach to analyze the data.

Participants were divided into five age groups: 18–25, 26–35, 36–40, 41–59, and 60 and older. The researchers first compared average scores for each trait across age groups using analysis of variance. They found that Machiavellianism was highest among the youngest adults and declined steadily with age. The difference in Machiavellianism scores between the youngest and oldest groups was large. Psychopathy followed a similar trend, although the differences between age groups were somewhat smaller. Narcissism showed no significant differences across the age groups.
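
In code, a comparison of this kind amounts to a one-way analysis of variance on trait scores grouped by age band. The sketch below illustrates the general technique with simulated Machiavellianism scores; the group sizes, numbers, and variable names are invented and are not the study's data.

```python
# Minimal sketch of a one-way ANOVA across age groups (simulated data,
# not the study's dataset). Requires numpy and scipy.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated Machiavellianism scores (1-5 scale) for five hypothetical age bands.
groups = {
    "18-25": rng.normal(3.4, 0.6, 200),
    "26-35": rng.normal(3.2, 0.6, 200),
    "36-40": rng.normal(3.0, 0.6, 120),
    "41-59": rng.normal(2.8, 0.6, 200),
    "60+":   rng.normal(2.6, 0.6, 80),
}

# Test whether mean scores differ across the five groups.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```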

To examine how combinations of traits varied by age, the researchers also used a statistical method called latent profile analysis. This technique helps identify subgroups within each age range based on patterns of responses rather than looking only at average scores. Most age groups showed two distinct profiles: a high-dark-trait profile and a low-dark-trait profile. The group aged 36–40 showed three profiles, including a moderate-dark-trait group.
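
Latent profile analysis is closely related to Gaussian mixture modeling, so the idea can be illustrated with a standard mixture model that selects the number of profiles by a fit criterion such as BIC. The sketch below uses scikit-learn's GaussianMixture on simulated trait scores as an approximate stand-in; it is not the software or the data the authors used.

```python
# Rough stand-in for latent profile analysis using a Gaussian mixture model
# (illustrative only; simulated scores, not the study's data or LPA software).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Columns: Machiavellianism, psychopathy, narcissism scores for one age group.
scores = np.vstack([
    rng.normal([3.5, 2.8, 3.1], 0.4, size=(150, 3)),  # simulated "high dark trait" cluster
    rng.normal([2.2, 1.6, 2.7], 0.4, size=(250, 3)),  # simulated "low dark trait" cluster
])

# Fit one- to four-profile solutions and keep the one with the lowest BIC.
models = [GaussianMixture(n_components=k, random_state=0).fit(scores) for k in range(1, 5)]
best = min(models, key=lambda m: m.bic(scores))
print("profiles retained:", best.n_components)
print("profile means:\n", best.means_.round(2))
```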

When comparing people with high dark trait profiles across age groups, the researchers observed that older adults in this category still scored lower in Machiavellianism and psychopathy than younger adults in the same category. For example, the level of psychopathy among older adults identified as high in dark traits was comparable to the level seen in younger adults identified as low in these traits. This finding supports the idea that dark traits tend to be less intense with age, even among those who show a generally high tendency toward them.

Among participants with low dark trait profiles, the oldest group again had the lowest scores in Machiavellianism and psychopathy. Interestingly, narcissism did not show a consistent trend. In some comparisons, middle-aged participants in low trait profiles showed slightly lower narcissism than younger or older adults.

The researchers also found that Machiavellianism was the most prevalent dark trait across all age groups, while psychopathy was the least common. Narcissism typically fell between the two.

This pattern may reflect how these traits are expressed and perceived in everyday life. Machiavellian tendencies, such as being strategic or secretive, might be more socially acceptable or even rewarded in certain contexts, especially in competitive environments like the workplace. Psychopathic behaviors, which often involve impulsivity or emotional coldness, may be less tolerated and decrease more noticeably with age.

The researchers noted that some dark traits might be sustained or reinforced by environmental factors. For example, younger adults facing pressure in competitive job markets may rely more on manipulative or self-serving strategies. Over time, as people gain experience or take on responsibilities such as parenting or long-term employment, these behaviors may become less adaptive or socially acceptable, leading to their decline.

“The most important takeaway is that personality isn’t set in stone, it evolves,” Bonfá-Araujo told PsyPost. “We found that two of the dark traits, Machiavellianism (manipulation and strategic thinking) and psychopathy (impulsivity and lack of remorse), tend to vary across age groups, with a tendency to decrease with age. In contrast, narcissism (feeling special or craving admiration) remains relatively stable across age groups.”

“These patterns suggest that as people move through life, they often report being less impulsive and more considerate or strategic in their behavior. Life experiences such as work responsibilities, relationships, and broader social roles may contribute to these differences. Our findings also remind us that having some ‘dark’ traits doesn’t automatically make someone bad. A little ambition or confidence can be useful, it’s when these traits dominate that they become harmful.”

“We were surprised by how stable narcissism appeared to be,” Bonfá-Araujo said. “While we expected all three dark traits to decline with age, narcissism didn’t follow that pattern. This suggests that the desire for recognition or admiration may be a deeper part of humans. It doesn’t vanish, but it may change in how it’s expressed. Younger people might show it through social comparison or status-seeking, while older adults might express it through self-confidence, leadership, or a sense of pride in accomplishments.”

Although the findings provide insight into how dark personality traits change across adulthood, the study was cross-sectional. As a result, it cannot establish how any one person’s traits change over time. Longitudinal research would be necessary to understand individual development and confirm the observed patterns.

“We compared people of different ages at one point in time rather than following the same individuals over several years,” Bonfá-Araujo noted. “So, while we can observe differences across age groups, we can’t say for sure that these changes happen within each person as they age.”

Another limitation is the composition of the sample. The majority of participants were women, which may have influenced the findings. While this gender imbalance is common in psychological research, it limits the ability to generalize the results to the broader population. Future studies should aim for more balanced samples to determine whether men and women show similar patterns of change in these traits.

There is also the possibility that current tools for measuring dark traits may not fully capture how these traits are expressed in older adults. For example, physical aging may limit the expression of impulsive behavior, which could affect how psychopathy is assessed. Future work may need to adapt measurement tools to better reflect the life contexts of older populations.

“As a group, we are interested in understanding how dark personality traits interact with culture, environment, and life experiences,” Bonfá-Araujo explained. “For example, do certain social or professional settings encourage manipulative or narcissistic behavior? How does the pressure to succeed or the influence of social media shape these traits in younger generations?”

“Long-term, we’d like to conduct longitudinal studies that follow individuals over time to see how their dark traits evolve with age, relationships, and life transitions. Ultimately, the goal is not to label people as ‘dark’ or ‘good’ but to understand how personality develops and how self-awareness can lead to healthier choices in work, love, and life.”

The authors suggest that understanding how dark traits evolve with age can help inform how people relate to one another in different life stages. They argue that traits often considered negative can serve adaptive purposes in certain environments.

“Our aversive traits are not something to fear, but to understand,” Bonfá-Araujo said. “Everyone has moments of selfishness, ambition, or even manipulation, it’s part of being human. What matters is how we use those tendencies. Some traits that look ‘dark’ in one context can be useful in another.”

The study, “Could the Younger Be Darker? A Cross-Sectional Study on the Dark Triad Levels Across the Lifespan,” was authored by Bruno Bonfá-Araujo, Gisele Magarotto Machado, Nathália Bonugli Caurin, and Ariela Raissa Lima-Costa.

A new psychological framework helps explain why people choose to end romantic relationships

People often think of breakups as impulsive or emotionally driven events. But new research suggests that ending a romantic relationship is typically a deliberate decision shaped by a range of social, emotional, and cognitive influences.

The study, published in The Journal of General Psychology, proposes an integrative framework to explain why people choose to leave their romantic partners. Rather than focusing on isolated personality traits or relationship problems, the authors argue that breakups are better understood as intentional behaviors that reflect a person’s beliefs, emotions, social pressures, and motivations.

Romantic relationships can bring many benefits, including emotional support, companionship, and improved well-being. At the same time, breakups are common and can cause distress for both partners. Despite the frequency and impact of relationship dissolution, much of the psychological literature has focused on why people stay in relationships rather than why they end them.

The researchers, Anna M. Semanko and Verlin B. Hinsz, sought to address this gap by drawing from two well-known theories that explain intentional behavior: the reasoned action approach and the theory of interpersonal behavior. Both models have been used to study a wide range of decisions, such as using birth control or changing jobs.

“Romantic relationship dissolution is a complex topic. This research expands upon prior work by integrating cognitive, emotional, social, and attitudinal factors that influence how people decide and potentially follow through with the decision to end a romantic relationship. The goal was to highlight how key contributors, such as those from the Theory of Interpersonal Behavior (Triandis, 1977) and Reasoned Action Approach (Fishbein & Ajzen, 2011), lead to breakup intentions and behavior,” explained Semanko, an assistant professor of psychology at The College of St. Scholastica.

The researchers developed a theoretical model based on existing literature. They reviewed studies and conceptual work on relationship dissolution, behavioral intentions, and psychological factors related to decision-making. Their goal was to integrate insights from the two behavioral intention theories into a single framework that could explain the many influences on breakup decisions.

The reasoned action approach emphasizes that people’s intentions to act are shaped by their attitudes, perceived social norms, and sense of control over the behavior. For example, someone who believes breaking up is the right thing to do, perceives support from friends, and feels capable of doing it is more likely to follow through.

The theory of interpersonal behavior adds other factors to this equation. It highlights the roles of emotions, habits, social roles, and self-concept. According to this model, people don’t just weigh pros and cons cognitively. They also consider how the behavior makes them feel, how it aligns with their past patterns, and whether it fits with their identity.

The integrative framework proposed by the researchers combines both models and includes additional influences that may be particularly relevant in the context of romantic breakups. These include anticipated emotions (how someone expects to feel after the breakup), moral beliefs, and the individual’s attitude toward the breakup process itself.

The framework distinguishes between different types of influences. Affective influences refer to emotions and anticipated feelings. Social influences include perceived norms, roles, and self-concept. Cognitive influences involve beliefs about the consequences of breaking up and whether the person feels in control. Motivational factors like the formation of specific plans (called implementation intentions) are also included.

The authors explain that all these factors shape a person’s intention to break up. That intention, in turn, predicts whether the breakup will actually happen. However, they also acknowledge that this intention-behavior link is not always perfect. Strong emotions, unexpected obstacles, or changes in circumstances can interfere with someone’s original plan to end the relationship.

“Breaking up with a romantic partner is often a reasoned action – it involves a thoughtful decision-making process,” Semanko told PsyPost. “This work highlights the many factors that may facilitate (or constrain) the act of ending a romantic relationship. By highlighting these factors, individuals can better understand the underlying motivations and reasons behind this important decision.”

Importantly, the paper provides examples of how these factors operate in real life. For instance, someone might believe that ending their relationship would give them more independence, which leads to a favorable attitude toward breaking up.

But if they also expect to feel intense guilt or sadness, their emotional hesitation might reduce the strength of their intention. Or, they might believe that their friends would disapprove, weakening their motivation further. Conversely, someone who has broken up with past partners and found it empowering may have stronger habits and higher confidence that support their decision.

The framework also highlights how background characteristics like age, religion, or personality traits shape breakup behavior indirectly. Rather than having a direct effect, these traits influence beliefs and emotions that feed into the decision-making process.

“Much psychological research investigates individual differences (e.g., attachment styles),” Semanko said. “Although individual differences are important, broader factors – like social factors – substantially contribute too.”

Social norms and self-concept emerged as particularly important influences. If a person identifies strongly with being a committed partner or sees their relationship as central to their identity, they may feel greater reluctance to end it. On the other hand, someone whose social circle views breakups as common and acceptable may be more inclined to see dissolution as a viable option.

The researchers also discussed how implementation intentions—detailed plans for when and how to act—can increase the likelihood of following through. For example, deciding in advance to have a conversation with one’s partner during a quiet evening at home can make the breakup more likely to occur.

This work is conceptual rather than empirical. That means the authors did not test their framework with data. Instead, they built a model based on previous studies and theoretical reasoning. As a result, more research is needed to determine which factors in the framework are the most predictive of actual breakup behavior.

Another limitation is that the review does not account for cultural differences in relationships. Norms around dating and commitment can vary across cultures, which might affect the weight or relevance of certain beliefs or social influences.

“This framework will be tested to investigate which theoretical antecedents contribute the most to intentions to break up with a partner,” Semanko said.

By identifying the factors that make breakups more or less likely, the framework could help individuals better understand their own motivations and choices. It may also inform therapists or counselors who support people navigating the difficult decision to end a relationship.

“The proposed framework can be applied to other important interpersonal decisions too, such as the decision to end a friendship, leave an occupation, or even an intention to be married or have children,” Semanko added.

The study, “Intending to Break Up: Exploring Romantic Relationship Dissolution from an Integrated Behavioral Intention Framework,” was authored by Anna M. Semanko and Verlin B. Hinsz.

Real-world social ties outweigh online networks in predicting voting patterns

A new study published in PNAS Nexus challenges prevailing views about the influence of online echo chambers on political behavior. The research provides evidence that Americans’ political environments in physical spaces—such as where they live, work, and spend time—are more predictive of voting patterns than their online social networks.

Political polarization in the U.S. has increasingly become a topic of national concern. Much of the public discussion has centered on the role of online spaces, particularly social media platforms, in dividing people along partisan lines. These digital environments have been blamed for fostering echo chambers, amplifying misinformation, and reducing contact with opposing viewpoints.

But people also live in neighborhoods, commute to work, and interact in public spaces. These physical environments may shape political views through casual or repeated encounters with others. While earlier research has examined polarization through residential data or online behavior, there has been limited large-scale analysis comparing these different types of exposure. The authors of this study aimed to fill that gap by examining how online, offline, and residential social networks relate to political segregation and vote choice in the U.S.

To compare different types of partisan exposure, the researchers relied on four major data sources. One set measured offline interactions using Facebook’s co-location data, which captures how often people from different counties are physically present in the same place for at least five minutes. This metric reflects passive encounters in public or shared spaces, such as on public transit, in stores, or at events.

A second set measured online social connections through Facebook’s Social Connectedness Index, which calculates the probability that two people from different counties are Facebook friends. These relationships are typically active, selective, and sustained over time.

The third data source involved voter registration records to estimate residential partisan exposure—that is, the likelihood that someone lives near others who are affiliated with the same or opposing political parties. Finally, the researchers used individual survey responses from the American National Election Studies to understand how people’s offline and online networks related to their actual votes in the 2020 presidential election.

The researchers first assessed how strongly each type of exposure—offline, online, and residential—was associated with voting patterns at the county level. They found that co-location data, which reflects in-person physical proximity, was a stronger predictor of how counties voted than either Facebook friendships or residential partisan makeup. Counties where people frequently encountered others who supported a particular party were more likely to vote in alignment with those patterns.

This pattern held up even after accounting for demographic and socioeconomic factors, such as education levels, race, and urban versus rural composition. Offline social networks appeared to explain more of the variation in voting outcomes than either online ties or residential clustering.
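
Analyses of this kind typically take the form of a county-level regression in which vote share is modeled as a function of each exposure measure plus demographic controls, so the relative weight of the predictors can be compared. The sketch below illustrates that setup with simulated data; all variable names and effect sizes are invented and do not come from the study.

```python
# Illustrative county-level regression comparing three exposure measures as
# predictors of vote share, with demographic controls. Simulated data only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
df = pd.DataFrame({
    "offline_dem_exposure":     rng.uniform(0, 1, n),
    "online_dem_exposure":      rng.uniform(0, 1, n),
    "residential_dem_exposure": rng.uniform(0, 1, n),
    "pct_college":              rng.uniform(0.1, 0.6, n),
    "pct_urban":                rng.uniform(0, 1, n),
})
# Simulate an outcome in which offline exposure carries the most weight.
df["dem_vote_share"] = (
    0.50 * df["offline_dem_exposure"]
    + 0.15 * df["online_dem_exposure"]
    + 0.10 * df["residential_dem_exposure"]
    + 0.10 * df["pct_college"]
    + rng.normal(0, 0.05, n)
)

fit = smf.ols(
    "dem_vote_share ~ offline_dem_exposure + online_dem_exposure"
    " + residential_dem_exposure + pct_college + pct_urban",
    data=df,
).fit()
print(fit.params.round(3))  # compare the estimated weight of each exposure type
```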

Next, the researchers looked at partisan segregation—the extent to which individuals are exposed mostly to co-partisans versus a mix of political affiliations. They found that physical spaces, including both co-location patterns and residential arrangements, were more politically segregated than online networks. Offline segregation tended to be more pronounced in metropolitan areas with higher educational attainment and larger African American populations, both of which were linked to greater exposure to Democratic voters.

In contrast, counties with lower education levels or predominantly rural populations tended to show stronger exposure to Republican partisans. Online networks, while still showing some degree of partisan sorting, were more diverse than offline environments and displayed greater “extroversion,” meaning people were more likely to be connected to others outside their local region.

The final part of the analysis focused on individual behavior. Survey participants were asked about the political leanings of their family, friends, and Facebook connections. These self-reported networks were then compared to their stated vote in the 2020 presidential election.

Again, offline social exposure had a stronger relationship with vote choice than online connections. People who reported that their friends and family were mostly Democrats were more likely to vote for Joe Biden. Those who were mostly surrounded by Republican-leaning offline contacts were more likely to vote for Donald Trump. While online exposure also mattered, its influence on vote choice was noticeably smaller. This pattern remained stable across two waves of data collection, before and after the 2020 election.

The researchers also conducted several robustness checks. For example, when they removed local exposure (connections within the same county) from the analysis, the predictive power of offline networks declined significantly. In these cases, online ties sometimes became more predictive of voting patterns. This finding suggests that local, everyday interactions are a key component of political influence in offline environments.

The study draws on large-scale datasets that offer unique insights, but there are still some limitations. The offline and online network data were derived from Facebook, which may not fully capture the behavior of groups less active on the platform, such as older adults or people in rural areas. While the researchers applied adjustments to improve representativeness, there may still be biases in who is included in the co-location and friendship data.

The analysis also compares different types of networks—casual physical proximity, sustained online friendships, and residential proximity—each of which may involve different levels of interaction and influence. For instance, co-location does not necessarily mean that two people talked or knew each other, and online friendships may vary widely in strength.

Future research could build on these findings by attempting to measure an individual’s complete social network, both online and offline, using a single comprehensive data source. Despite these limitations, the study suggests that while online platforms are an important part of modern social life, the nature of our real-world interactions and the physical spaces we share appear to be more fundamentally linked to our political behaviors.

The study, “Physical partisan proximity outweighs online ties in predicting US voting outcomes,” was authored by Marco Tonin, Bruno Lepri, and Michele Tizzoni.

New study sheds light on women’s attraction to aggression in pornography

A new study reports that many pornography viewers, especially women, find depictions of aggression arousing, particularly when scenes combine expressions of both pleasure and pain. The research provides evidence that for a sizable group of viewers, pleasure and pain are not seen as opposites but are often experienced as deeply connected. The work was published in the Archives of Sexual Behavior.

The study was designed to investigate how viewers interpret aggression, pain, and pleasure in pornographic content, with a particular focus on scenarios involving female performers on the receiving end of aggression. The researcher, Eran Shor, was interested in understanding whether viewers find these scenes arousing, how they reconcile feelings of discomfort or guilt, and what social or psychological meanings they attach to the experience.

The debate over the relationship between pleasure and pain has persisted for centuries. While early Western philosophers tended to view them as opposites, later perspectives from neuroscience, psychology, and sociology have pointed to more complicated interactions. Some evidence suggests that painful sensations can be transformed into pleasurable experiences depending on how they are perceived, the presence of emotional intimacy, and cultural context.

This research builds on work showing that consensual pain during sexual activity—particularly within BDSM practices—is often experienced positively. In these settings, pain is not simply tolerated but sometimes desired, interpreted by participants as contributing to arousal or emotional release.

The study draws on 302 in-depth interviews with adults who regularly watch pornography. Participants were recruited through online advertisements and social media platforms targeting students and general audiences in North America and abroad. The sample was diverse in terms of gender, ethnicity, sexual orientation, and geographic background, although younger, educated individuals were overrepresented.

Participants were interviewed in English or French by trained research assistants, with interviews conducted anonymously via audio calls to encourage openness. They were asked about their pornography preferences, specifically in relation to aggressive content, and how they perceived expressions of pain and pleasure during those scenes. Responses were analyzed through a mix of quantitative coding and qualitative thematic analysis.

Roughly half of the participants identified as women, and their responses were compared to those of men and gender-diverse individuals. Questions covered a range of topics, including the types of pornographic scenes they watched, whether they found aggression arousing, and under what conditions pain could be part of a sexually stimulating experience.

More than half of all participants said they found at least some level of aggression in pornography to be arousing. This preference was especially common among women. About 69 percent of women in the study said they enjoyed at least some aggressive content, compared to 40 percent of men. Women were also more likely than men to report arousal from “harder” forms of aggression, such as choking or gagging, and were more likely to actively seek out pornographic videos that featured aggression.

Notably, nearly 70 percent of all participants reported that they found it arousing when performers expressed pleasure in response to aggression. This was true regardless of whether the viewer typically enjoyed aggressive pornographic content overall. Some participants said the display of pleasure made the scene feel consensual or affirming. In their view, expressions of enjoyment from the recipient helped justify the aggression as mutual rather than abusive.

A smaller but still significant portion of participants—about one in four—said they found it arousing when performers displayed pain in response to aggression. Again, this was more commonly reported by women than men. However, many of these viewers emphasized that they only enjoyed pain in specific contexts, such as when it was paired with pleasure or presented as brief and voluntary.

The interviews revealed that many viewers conceptualized pain and pleasure not as opposites but as deeply intertwined. One 22-year-old woman from India stated it simply: “There’s no pleasure without pain.” Another male viewer explained his perspective, noting, “Pain is part of pleasure. So, if it’s obviously just pain, I’ll close the video. But if she’s uncomfortable and likes it, I like it.”

This idea of “good pain” was a recurring theme. A 24-year-old woman from Canada said she found pain arousing “if it looks like it’s good pain, enjoyable.” For many, the key was the performer’s perceived desire for the experience. One participant noted that she found a woman’s moans of pain arousing only under certain conditions: “If she’s saying ‘yeah, hit me’ or ‘do it!’”

The BDSM genre emerged as a key area where viewers found this mix of sensations appealing. Many respondents mentioned that BDSM scenarios helped them feel more comfortable with aggressive content because the genre often includes cues about consent. As one 21-year-old male student explained, “I need to know at end of day that woman wants it and isn’t being forced. She needs to enjoy and ask for it… I need them to enjoy it for me to enjoy it too.”

For some, the appeal of dominance and submission in these videos was about a form of release. A 41-year-old university administrator from Canada described her enjoyment of submission, stating, “I find it liberating… It’s a way to let go.”

A few interviewees described their enjoyment of pain and aggression in terms of emotional processing or trauma. One participant shared that her early experiences with sexual assault shaped her perception of sex and led her to seek out pornography with aggressive themes. For her, watching this content felt like a way to reframe her history. “Pain is also pleasure, so it empowers my past. It’s a way to cope,” she explained.

Although rare, a handful of participants reported enjoying depictions of pain without any accompanying pleasure. These individuals often expressed feelings of guilt about their preferences. One man explained, “I guess I’m a sadist; can’t explain it otherwise… It’s about dominance, almost like the degree of suffering it’s causing.”

Importantly, many participants viewed their pornographic interests as distinct from their real-life behaviors and desires. A woman who enjoyed aggressive fantasy content clarified this boundary, saying, “In the context of my work, how people talk to me and men being superior to me, I don’t believe that. I have no tolerance for men who treat women like that. It should stay in the fantasy world.”

Several respondents also expressed feelings of shame or moral conflict. These emotions appeared tied to broader cultural values around gender equality and consent. One woman described her feelings after watching, saying, “I reflect on it and find it problematic… I think it’s fucked up, but I like to see both pleasure and pain in response to aggression.” Another male viewer voiced his confusion: “I mean, like, yes, I like it, but its warped. I’m like, ‘Why am I liking this?’”

The findings from the new study reinforce and expand upon earlier work by Shor. In a 2021 study, he interviewed 122 viewers and reported a similar gender pattern, finding that approximately 66% of women enjoyed aggressive content, compared to 40% of men. The current, larger study provides evidence for this same trend.

Both studies also highlight that viewers’ enjoyment is highly conditional. The earlier work noted that participants overwhelmingly rejected nonconsensual aggression, and both investigations found that many women framed their interest as a fantasy that was separate from their real-life sexual desires. The new research builds on this foundation by specifically investigating how viewers interpret performers’ reactions, suggesting that the combination of pleasure and “good pain” is a key element of the appeal for many.

The new study provides a rich dataset due to its large number of interviews. Still, the sample was not representative of the broader population, as it relied on volunteers who may have been more willing to discuss unconventional views. Older individuals and people from working-class backgrounds were underrepresented.

Future research could explore how these patterns manifest across different age groups and cultural contexts. The gender differences in responses also raise questions about how sexual scripts are evolving. Researchers might also investigate how viewers interpret the line between fantasy and harm, and what factors influence whether aggression is seen as acceptable or troubling.

The study, “‘It’s a Way to Let Go’: The Intersection of Pleasure and Pain in Pornography,” was authored by Eran Shor.

Dolphins exposed to Florida algal blooms show gene changes linked to Alzheimer’s disease

Dolphins living along Florida’s coast appear to be affected by the same types of environmental factors that are being investigated for their potential role in human neurodegenerative diseases. A study published in Communications Biology provides evidence that repeated seasonal exposure to algal bloom toxins is associated with molecular and cellular changes in dolphin brains that mirror some features of Alzheimer’s disease. The findings raise concerns about the long-term effects of harmful algal blooms on marine wildlife and possibly on human health in areas where such blooms are common.

Harmful algal blooms, also called HABs, occur when colonies of algae grow out of control, often fueled by warm water temperatures and nutrient pollution. These blooms release toxins that can accumulate through the food chain, affecting both aquatic animals and land mammals. Acute effects of these toxins are well documented, but their long-term impact on brain health is less well understood.

The Indian River Lagoon, a large estuary stretching along Florida’s east coast, has experienced recurring algal blooms over the last two decades. As climate change warms the region, these events are occurring more frequently and lasting longer.

Dolphins that inhabit the lagoon are long-lived apex predators. Because they share similar exposure risks with humans and display age-related brain changes, they serve as a valuable species for studying the possible neurological consequences of environmental toxin exposure.

Dolphins have been shown to develop some of the same brain abnormalities seen in people with Alzheimer’s disease, including accumulations of amyloid-beta plaques and tau tangles. The co-occurrence of these markers and the dolphins’ repeated exposure to algal bloom toxins offered a unique opportunity to study how environmental changes may affect brain biology over time.

“In this study, we aimed to determine if seasonal change can have an impact on brain health. We focused on harmful algal bloom season, since these blooms can produce a number of neurotoxins. We hypothesized that exposure to neurotoxins would be higher during bloom seasons and neurotoxicity would parallel seasonal change,” explained study author David A. Davis, a research associate professor and associate director of the Brain Endowment Bank at the University of Miami Miller School of Medicine.

The research team analyzed brain tissue from 20 bottlenose dolphins that had stranded and died in the Indian River Lagoon area between 2010 and 2019. The dolphins were divided into two groups based on when they died: bloom season (June through November) and non-bloom season (December through May). The researchers aimed to determine whether the timing of death, which served as a proxy for toxin exposure, was associated with measurable changes in the brain.

To measure toxin levels, the team used a highly sensitive mass spectrometry method to detect the presence of 2,4-diaminobutyric acid (2,4-DAB), a neurotoxin produced by algae. They found that 2,4-DAB was present in all dolphin brains but was almost 3,000 times more concentrated in those that died during bloom season.

Next, the researchers examined changes in gene expression across the brain. They used RNA sequencing to map which genes were turned on or off in the cerebral cortex. The dolphins that died during bloom season showed altered expression in more than 500 genes, including many involved in nervous system function and immune response. Several of these genes have also been linked to Alzheimer’s disease in humans.
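
A bare-bones version of such a comparison can be sketched as a per-gene test between the two seasonal groups followed by a correction for multiple testing. The example below uses simulated log-expression values and simple Welch t-tests purely for illustration; real RNA sequencing analyses, including the one in this study, rely on dedicated count-based pipelines.

```python
# Simplified differential-expression sketch: per-gene Welch t-tests between
# bloom-season and non-bloom-season samples, with FDR correction.
# Simulated data; not the study's RNA-seq workflow.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(7)
n_genes, n_bloom, n_nonbloom = 2000, 10, 10

# Log-expression matrices (genes x samples); a subset of genes is shifted upward
# in the simulated bloom-season group.
bloom = rng.normal(5.0, 1.0, (n_genes, n_bloom))
nonbloom = rng.normal(5.0, 1.0, (n_genes, n_nonbloom))
bloom[:100] += 1.5  # simulate 100 upregulated genes

t, p = stats.ttest_ind(bloom, nonbloom, axis=1, equal_var=False)
reject, p_adj, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")
print("genes passing FDR < 0.05:", reject.sum())
```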

In addition to transcriptomic changes, the team performed immunohistochemical analysis on brain sections from a subset of the dolphins. These tests revealed deposits of amyloid-beta and phosphorylated tau proteins, which are considered hallmarks of Alzheimer’s pathology. These changes were found in dolphins from both seasons, but the data suggest that the factors associated with bloom season, including higher 2,4-DAB concentrations, may interact with and worsen this underlying pathology.

The study provides evidence that seasonal exposure to 2,4-DAB is linked to biological processes that resemble those seen in Alzheimer’s disease. Specifically, dolphins exposed during bloom seasons showed altered signaling in GABA-producing neurons and changes in genes involved in the brain’s basement membrane, both of which play a role in maintaining brain function and structure.

“Our study shows a direct relationship between seasonal change, toxin exposure and brain health,” Davis told PsyPost. “This study is a clue to the Alzheimer’s disease exposome which is believed to consist of multiple types of exposures throughout the lifespan that contribute to the development of the disease.”

One of the most consistent findings was a seasonal increase in the expression of the APOE gene, a well-known genetic risk factor for Alzheimer’s. APOE was upregulated in dolphins from the bloom season, which is characterized by high 2,4-DAB exposure, and this increase tended to rise with each successive bloom season.

“The most surprising observations were the temporal increase in Alzheimer’s disease risk factor genes with each sequential bloom season,” Davis said.

Other Alzheimer’s-related genes, such as APP and MAPT, were also elevated during bloom seasons, and their expression correlated with the concentration of 2,4-DAB in the brain.

In addition, dolphins exposed to more bloom seasons showed increased expression of genes linked to inflammation and cell death. Several of these genes, including TNFRSF25 and CIRBP, are also involved in human neurodegenerative diseases and stress responses.

As with all research, there are limitations. The study relied on opportunistic samples collected from stranded dolphins, which means the causes of death were not controlled by the researchers. While the dolphins selected had similar age, sex, and health profiles, other unknown factors may have contributed to their deaths or the observed brain changes.

The sample size was relatively small, in part due to the challenge of obtaining high-quality brain tissue from wild marine mammals. Despite this, the researchers applied rigorous standards for RNA integrity and used multiple validation techniques to strengthen the reliability of the data.

Importantly, while the study found gene expression changes that match those seen in human Alzheimer’s cases, it did not assess cognitive function in the dolphins. The presence of Alzheimer’s-like markers suggests a similarity in biological response, but it does not confirm that dolphins experience dementia in the same way humans do.

The findings point to the need for more detailed investigations into how 2,4-DAB affects brain cells over time, and whether this toxin interacts with other environmental or genetic risk factors. Future studies could also explore the prevalence of these changes in dolphins that have not stranded, using non-lethal sampling methods, or in controlled laboratory models.

“We plan to investigate the 2,4-DAB toxin that was detected in the dolphin brain in more detail to investigate its role in triggering neurodegeneration,” Davis said.

Because dolphins are considered sentinel species, their health can provide early warning signs of environmental risks that may also affect humans. Given that South Florida has one of the highest rates of Alzheimer’s disease in the United States, the findings suggest a need to explore whether chronic exposure to algal bloom toxins contributes to regional patterns in human neurodegenerative diseases.

“Our study highlights the relationship between environment and brain health,” Davis concluded. “South Florida had the highest prevalence of Alzheimer’s disease in 2024. Our study focused on dolphins found beached in Florida. The data here could provide a link between increased prevalence of the disease in certain geographics.”

The study, “Alzheimer’s disease signatures in the brain transcriptome of estuarine dolphins,” was authored by Wendy Noke Durden, Megan K. Stolen, Susanna P. Garamszegi, Sandra Anne Banack, Daniel J. Brzostowicki, Regina T. Vontell, Larry E. Brand, Paul Alan Cox, and David A. Davis.

Neuroscientists discover a key brain signal that predicts reading fluency in children

A new study has discovered a direct link between the number of milliseconds it takes a child’s brain to process the form of a printed word and how well that child understands what they are reading. The finding provides a new way to measure this neural timing in individual children with millisecond precision, a breakthrough that could advance our understanding of how reading skills develop. The research was published in the journal Developmental Cognitive Neuroscience.

The investigation was led by a team of researchers at Stanford University who were interested in the brain changes that support the development of fluent reading from late childhood into early adolescence. During this period, reading often transforms from a slow, effortful task into an automatic and engaging activity. The speed of recognizing individual words is known to be a key element in this transition, but the neural mechanisms behind it are not fully understood.

Previous methods for measuring the brain’s processing speed for words, often using a technique called event-related potentials, have been limited by low reliability when applied to individuals. This makes it difficult to connect brain activity directly to a specific child’s reading ability. The researchers aimed to develop and validate a more precise and stable method to measure this neural timing.

“This study emerged from a unique ‘research practice partnership’ between an innovative Bay Area K-8 school, the Synapse School, and the Stanford Educational Neuroscience Initiative (SENSI),” explained senior author Bruce D. McCandliss, the Pigott Family Graduate School of Education Professor in Educational Neuroscience at Stanford University.

“The collaboration began with a series of roundtable discussions involving teachers and researchers to find synergies between my long-term research goals and the topics that educators found most meaningful. This effort was also informed by my multi-year reflections on the challenges that prevent neuroscience from making a meaningful connection with education.”

“Our first collaborative focus was on how reading changes the brain. We knew we could bring brainwave technology to the school, but a significant limitation of current science is its difficulty in delivering what teachers value most: information that is meaningful at the individual student level. Standard approaches are not yet able to provide this, as their conclusions tend to apply to groups rather than individuals.”

“The teachers stressed the importance of protocols brief enough for students to complete within a single class period,” McCandliss continued. “As a science team, we took this as a design challenge, and created innovative approaches that required only a few minutes of data collection for each measure. The science team also contributed back to the teachers the value that can come from measuring reading skills in units of physics… such as capturing the duration of a specific neural computation with millisecond-level precision.”

To achieve this, the researchers and school staff enabled 68 typically developing children between the ages of 8 and 15 to volunteer for the study during their ordinary school day. Each child participated in a session where their brain activity was recorded using electroencephalography, or EEG, a method that measures electrical signals from the brain through sensors on the scalp.

The children also completed a series of standardized tests to assess their reading abilities, including their speed in reading single words and their fluency and comprehension of sentences.

During the EEG recording, participants viewed rapid streams of four-character stimuli presented at a steady rhythm of precisely three items per second. These stimuli included real words, nonwords made of jumbled letters, and strings of unfamiliar symbols called pseudofonts.

This steady, rhythmic presentation is part of a technique known as Steady-State Visual Evoked Potentials. It is designed to elicit a brain response that follows the same rhythm as the flashing images. The brain produces signals not only at the primary frequency of the stimulus, in this case 3 Hz, but also at its multiples, known as harmonics, such as 6 Hz and 9 Hz.

The researchers analyzed the timing, or phase, of the brainwaves produced at these different frequencies. By examining how the phase of the response changed across the harmonics, they were able to calculate a precise processing delay for each child. This delay, called cortical latency, represents the time it takes for information to travel from the eyes to the brain regions that process visual word forms. This approach allowed for the calculation of a stable neural timing marker for each individual participant.
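
If the phase of the response grows linearly with frequency, the latency is simply the slope of that phase-versus-frequency line divided by 2π. The sketch below shows the arithmetic with invented phase values at 3, 6, and 9 Hz; it is a simplified illustration of the idea rather than the authors' estimation procedure.

```python
# Minimal sketch of estimating a processing delay from SSVEP harmonic phases.
# Assumes phase varies linearly with frequency (phi = -2*pi*f*latency + const);
# the phase values below are invented for illustration.
import numpy as np

freqs = np.array([3.0, 6.0, 9.0])       # stimulation frequency and harmonics (Hz)
phases = np.array([-2.6, -5.5, -8.3])   # unwrapped response phases (radians)

# Fit a line to phase vs. frequency; the slope gives the latency.
slope, _ = np.polyfit(freqs, phases, 1)
latency_ms = -slope / (2 * np.pi) * 1000
print(f"estimated cortical latency ~ {latency_ms:.0f} ms")
```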

The researchers found that the measurement of cortical latency was consistent for each child. This neural processing speed remained stable regardless of whether the child was viewing actual words, nonwords, or abstract symbols. This high reliability suggests the method is capturing a fundamental aspect of an individual’s visual processing system.

The researchers also found a strong relationship between cortical latency and the participants’ reading skills and age. Children with faster brain processing speeds, indicated by shorter latencies, tended to have higher scores on tests of both single-word reading efficiency and sentence-level reading comprehension. Similarly, older children generally exhibited shorter latencies than younger children, suggesting that this neural process becomes more efficient with age and experience. These relationships held even after accounting for non-verbal intelligence.

A third finding provided insight into how these processes are connected. The study suggests that the link between rapid neural processing and fluent reading comprehension is largely explained by single-word reading speed. In other words, a faster initial neural response to visual text appears to facilitate quicker and more automatic recognition of individual words. This efficiency at the word level may then free up cognitive resources, allowing the reader to focus on understanding the meaning of entire sentences and passages.

“Your brain is operating at multiple time-scales at the same time,” McCandliss told PsyPost. “You might be aware of how your thoughts or feelings are changing from one moment to the next, and how going to school and learning things allows you to recall new facts. But there’s also many faster time-scales, like the time it takes information to get from your eye to computations that route information to the part of your brain that can recognize words.”

“Remarkably, tiny little differences in neural computation speed for visual words is powerfully tied to your fluency in reading comprehension. As reading improves, this neural timing tends to get faster. Education progressively shapes the speed of this rapid neural process.”

The practical significance of these findings lies in both the strength of the relationship and the reliability of the measurement. The connection between cortical latency and reading fluency suggests that a meaningful portion of the differences seen in children’s reading ability, from a struggling third-grader to a proficient eighth-grader, can be accounted for by millisecond-level variations in neural processing speed.

More importantly, the method used to measure this neural timing proved to be exceptionally stable for individuals. This high reliability is a key advance, as it makes it feasible to track subtle changes in a single child’s brain function over time.

“Because we collaborated with a school in designing and carrying out this study, we know that we can now measure this neural speed incredibly precisely and reliably, like a mechanic timing the sparks in your car’s carburetor, in nearly every school child, within schools, without missing anything more than a single class,” McCandliss explained. “This means kids can see the results of their hard work as learning to read progressively refines this core neural function.”

One of the most surprising outcomes for the researchers was the success of the research-practice partnership itself.

“Collaborating with schools to do brain science was not thought to be viable by most, both on the science side and on the education side,” McCandliss said. “When I told them what I was envisioning, I had people in both science and educational practices look at me in ways that made me think ‘this is pretty crazy,’ or at best ‘that’s a pretty whimsical way to invest your research bets after tenure.’ Looking at the number of scientific discoveries we’ve published, I think the surprise is really how promising these sorts of research-practice partnerships can really be, both for science, and for education.”

A second surprise emerged from the data. The research team was initially uncertain whether their technique would yield a meaningful and stable measurement for any single person. There was a genuine concern that an individual’s brainwave data might be too inconsistent to provide anything other than random noise. “We literally had no idea how well this could work at an individual level,” McCandliss said. Fortunately, the results showed a precise and reliable signal at the individual level “in a way that surpassed what we could have hoped for.”

But there are still some limitations to consider. The study’s design is correlational, which means it identifies a relationship between neural processing speed and reading skill but cannot establish causality. For instance, it remains unclear whether faster processing is a cause of proficient reading, or if, conversely, extensive reading practice is what leads to more efficient neural responses.

“Of course, finding a ‘link’ between measures of a child’s cortical latency and academic achievement in reading is really just the beginning of untangling the dynamics of how this relationship develops,” McCandliss said. “It begs for new studies exploring both how increasingly engaging in reading changes neural timing as well as exploring how differences in neural timing might bias the experience of fluent reading, and how each of these causal pathways may play out over the course of reading development.”

Also, because the study observed children across a wide age range at a single point in time, it is difficult to fully separate the effects of natural brain maturation from the effects of accumulating reading experience.

“Given how this science centers around one of our most vulnerable populations — developing children — it is critical to put this in a developmental context as well as an educational context, which means these are values in flux within a person — they are likely changing as they get more experience in reading and more experience in general,” McCandliss told PsyPost. “The true value of these measures will ultimately be in how we can better understand the way they are changing over time within an individual when given highly valuable learning opportunities.”

Future research could build on these findings in several ways, including by testing the same set of children over several years to observe how their neural processing speed changes in relation to their reading instruction and practice. Such longitudinal studies could help clarify the distinct contributions of age and reading experience to the development of neural efficiency.

The researchers are now conducting additional studies to better understand the nature of this neural signal. They plan to test whether the brain’s processing speed changes in response to different types of stimuli, such as high- versus low-frequency words, or other visual forms like faces and cars. A key goal is to determine if this rapid neural response is truly specific to written language or if it also extends to other complex visual categories that the brain specializes in processing, such as faces.

“One of our next scientific goals is to bring education and developmental neuroscience closer together, which means bringing portable EEG tools into schools, collaborating with the schools to devise precise metrics that can track growth over time in neural computation speed, and ultimately relate time series changes within an individual to variations that matter in their educational experiences,” McCandliss said.

“We also plan to expand these findings into research aimed at the specific challenges facing individuals living with clinically significant neurobiological challenges, such as developmental dyslexia, attention challenges, and autistic spectrum disorder.”

“This study is one part of a suite of papers that resulted from the collaboration between my group at Stanford and the Synapse School,” he added. “I encourage readers to look at them as an interlocking set that shows the true potential of research practice partnerships in advancing developmental cognitive neuroscience research related to education.”

“For example, our group is ecstatic that our latest study was just accepted this month for publication in one of the Nature publishing group journals (npj Science of Learning). This new study shows an actual causal impact of two weeks of teacher’s vocabulary activities on neural responses to words that show up in children’s books less than one in a million times, and brings them to levels of cortical responses equivalent to the highest frequency words we’ve ever tested.”

The current study, “Cortical latency predicts reading fluency from late childhood to early adolescence,” was authored by Fang Wang, Quynh Trang H. Nguyen, Blair Kaneshiro, Anthony M. Norcia, and Bruce D. McCandliss.

Higher fluid intelligence is associated with more structured cognitive maps

People with higher reasoning skills appear to be better at forming internal maps of how different objects are related in space, according to a study published in Cell Reports. The research provides evidence that a key feature of intelligence may stem from the way the brain encodes relationships between experiences, especially through a region called the hippocampus.

The findings point to a link between general cognitive ability and how well people integrate separate pieces of information into a structured whole. Rather than focusing on how smart individuals perform in specific tasks, the researchers examined how their brains encode the structure of experiences. The results suggest that intelligence may involve the ability to form a mental map of the world that helps guide flexible thinking and decision making.

The study was conducted by researchers Rebekka Tenderra and Stephanie Theves at the Max Planck Institute for Empirical Aesthetics and the Max Planck Institute for Human Cognitive and Brain Sciences in Germany. They were interested in exploring how intelligence, especially fluid intelligence, might be related to specific patterns of brain activity during learning.

Fluid intelligence refers to the capacity to solve new problems and recognize patterns, often measured by reasoning tests. It has long been considered one of the core components of general intelligence and is strongly associated with performance across a wide range of tasks. Although past studies have identified brain areas linked to intelligence, such as the frontal and parietal cortices, the specific neural processes that underlie it are less well understood.

The researchers hypothesized that one way people differ in intelligence may be through how well they organize information into relational structures. In other words, smarter individuals might be more likely to represent different pieces of information as part of a broader map, capturing how elements relate to each other in space or conceptually. Previous studies had suggested that the hippocampus, a part of the brain known for memory and spatial navigation, plays a role in creating these kinds of cognitive maps.

To test their hypothesis, the researchers used brain imaging to observe how participants learned the locations of various objects placed within a virtual arena. Participants saw six different objects, each assigned to a specific spot in a circular space. They practiced placing the objects in the correct locations and received feedback after each attempt.

As the learning progressed, participants became more accurate at remembering where each object belonged. After completing the task, they were asked to arrange the objects from a bird’s-eye view, demonstrating how well they had internalized the layout. Their answers closely matched the actual object positions, suggesting that most participants were able to form a fairly accurate mental map of the environment.

Meanwhile, the researchers used functional magnetic resonance imaging (fMRI) to record activity in the hippocampus while participants viewed the objects. They looked for patterns indicating whether the brain represented the spatial relationships between the objects. Specifically, they examined whether neural activity patterns were more similar for objects that were closer together in the learned layout, and more different for objects that were farther apart.
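The logic of this kind of analysis can be sketched in a few lines. The example below is a simplified, hypothetical illustration with simulated data and invented variable names, not the authors' pipeline: it asks whether the dissimilarity between activity patterns tracks the distance between the learned object locations.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not the study's data): six object locations in a
# 2D arena, and one activity pattern per object, simulated so that voxels
# are weakly tuned to position plus noise.
locations = rng.uniform(-1, 1, size=(6, 2))
spatial_tuning = rng.normal(size=(2, 500))            # each voxel's sensitivity to x/y
patterns = locations @ spatial_tuning + rng.normal(size=(6, 500))

# Pairwise dissimilarity between neural patterns (correlation distance)
neural_dissimilarity = pdist(patterns, metric="correlation")

# Pairwise Euclidean distance between the learned object locations
spatial_distance = pdist(locations, metric="euclidean")

# Map-like coding: objects farther apart should evoke more dissimilar
# patterns, giving a positive rank correlation between the two vectors.
rho, p = spearmanr(neural_dissimilarity, spatial_distance)
print(f"map-likeness (Spearman rho) = {rho:.2f}, p = {p:.3f}")
```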

The key finding was that individuals with higher fluid intelligence scores showed stronger signs of this “map-like” encoding in the right hippocampus. That is, their brain activity reflected a clearer sense of the distances between object locations. This connection between relational encoding and intelligence remained even after accounting for how well the participants performed on the memory tasks, suggesting that the brain patterns were not simply a reflection of who remembered more accurately.

Additional analyses showed that this brain-behavior link was consistent across various cognitive tasks, especially those that were more strongly related to fluid intelligence. The correlation was not driven by any single test but rather reflected a broad tendency among smarter individuals to organize their learning experiences in a more structured, map-like way.

When the researchers compared people with higher versus lower fluid intelligence scores, they found differences in how consistently the brain represented object relationships. Those in the lower intelligence group had neural representations that were less consistent with a two-dimensional map. In practical terms, this suggests they may have encoded some object pairs in ways that didn’t align well with an overall spatial layout, pointing to lapses in integrating relationships across the whole scene.

To further investigate this idea, the researchers asked participants to estimate distances between object pairs on a sliding scale. Again, the results showed that people with higher intelligence scores provided more geometrically consistent estimates, reinforcing the idea that they were better at integrating object relations into a cohesive map.

The researchers also tested whether the observed relationship between intelligence and brain activity was specific to relational encoding. They did this by including a separate memory task where participants simply had to recognize whether they had seen an object before. This task measured basic item memory, not how the objects related to each other.

While participants performed well overall, the strength of brain responses in the hippocampus during this task did not correlate with intelligence scores. This suggests that general reasoning ability is specifically tied to the ability to encode relationships, not just memory in general.

These results add to a growing body of work that views the hippocampus not only as a center for memory and spatial navigation but also as a hub for organizing information in ways that support flexible thinking. The ability to represent how different elements of an experience are related may provide a foundation for solving problems in new contexts or drawing inferences from limited data.

The researchers acknowledge that their study focused on a specific type of relational learning involving spatial arrangements. It remains to be seen whether the same principles apply to other kinds of abstract relationships, such as those involving concepts or rules. In addition, the study was conducted with a relatively homogenous group of healthy adults, which may limit how broadly the findings apply.

Since the research was cross-sectional, it cannot speak to causality. It’s unclear whether having a more structured way of encoding relationships contributes to higher intelligence or whether people with higher intelligence naturally develop better strategies for organizing information. Long-term studies could help clarify how these abilities develop over time and interact.

The researchers suggest that future studies could explore whether these relational encoding patterns show up in other brain regions involved in reasoning or generalization, such as the prefrontal cortex. There is some evidence that these areas also represent structured information, although it is not yet clear how their role compares to that of the hippocampus.

The study, “Human intelligence relates to neural measures of cognitive map formation,” was authored by Rebekka M. Tenderra and Stephanie Theves.

AI roots out three key predictors of terrorism support

A new study suggests that individuals who justify terrorism tend to share a distinct worldview characterized by the normalization of violence, moral flexibility, and anti-democratic sentiments. Using a machine learning analysis of data from over 90,000 people across 65 countries, the research provides evidence that support for terrorism is embedded in a broader set of attitudes rather than being an isolated belief. The findings were published in the journal Aggressive Behavior.

Public discourse often links terrorism to specific religious or political ideologies, yet extremist violence emerges from a wide range of backgrounds, including white supremacist and anti-government movements. This suggests that the psychological and social factors underlying radicalization may be more general than commonly assumed.

Previous research has identified many potential risk and protective factors for aggression, but these studies have often been limited to smaller, nationally specific samples. The current research aimed to apply a comprehensive, data-driven approach on a global scale to identify which attitudes and beliefs are most consistently associated with the justification of terrorism.

“The study was motivated by evidence that extremist ideologies are evolving in many parts of the world. For example, in recent years, pro-violence right-wing movements, white supremacist groups, and other forms of radicalization within established democracies have gained strength and attracted increasing numbers of followers,” said study author Mohsen Joshanloo, an associate professor at Keimyung University and honorary principal fellow at the Centre for Wellbeing Science at the University of Melbourne.

“These developments suggest that some traditional assumptions about who justifies violence, including terrorism, may no longer hold. Public discourse has not fully kept pace with these changes and requires revision. The aim of this study was to identify patterns in a global dataset and provide an updated, evidence-based understanding of the factors associated with justifying terrorism.”

To conduct the analysis, the researcher utilized data from the seventh wave of the World Values Survey, a large-scale project that gathers information on the beliefs and values of people around the globe. The final sample included 91,659 participants from 65 nations. The study’s outcome of interest was a single question that asked respondents to rate the justifiability of “terrorism as a political, ideological or religious means” on a 10-point scale.

A machine learning algorithm known as Random Forest was employed to analyze the predictive power of 360 different variables from the survey. This method is well-suited for identifying complex patterns in large datasets without being constrained by predefined assumptions. The algorithm builds hundreds of decision trees to determine which factors are the most important predictors of an outcome, in this case, the justification of terrorism.
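For readers curious about the mechanics, the sketch below shows what this style of analysis looks like in practice: fit a random forest to survey-style predictors and rank them by how much predictive accuracy drops when each one is shuffled. The data, variable names, and settings are invented for illustration and are not the study's actual model, which drew on 360 candidate predictors from the World Values Survey.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Made-up stand-ins for survey items (purely illustrative)
rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "justify_political_violence": rng.integers(1, 11, n),
    "justify_bribery": rng.integers(1, 11, n),
    "importance_of_free_elections": rng.integers(1, 11, n),
    "trust_in_neighbors": rng.integers(1, 5, n),
})
# Synthetic outcome: a 1-10 rating of how justifiable terrorism is
y = (0.5 * X["justify_political_violence"]
     + 0.3 * X["justify_bribery"]
     - 0.2 * X["importance_of_free_elections"]
     + rng.normal(0, 1, n))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)
print("variance explained (R^2):", round(model.score(X_test, y_test), 2))

# Rank predictors by how much shuffling each one degrades the model
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(zip(X.columns, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```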

The final model, which included 271 of the most relevant predictors, was able to explain approximately 64% of the variation in people’s attitudes toward terrorism justification. The analysis revealed that a small number of predictors were especially powerful. These top factors fell into three distinct but related domains.

The first domain was a general normalization of violence. Individuals who were more likely to justify terrorism also tended to believe that other forms of aggression are acceptable. This included the justification of political violence, violence against other people, a man beating his wife, and parents beating their children. This pattern suggests that support for terrorism is not an isolated belief but part of a wider cognitive framework where aggression is viewed as a legitimate tool for resolving conflicts or achieving goals.

A second major pattern involved what the study describes as moral flexibility and a tendency to violate rules. People who justified terrorism were also more likely to approve of dishonest behaviors such as accepting a bribe, stealing property, cheating on taxes, and claiming government benefits one is not entitled to.

These individuals may operate with a moral code that permits breaking established social and ethical norms when it appears personally or ideologically useful. This mindset aligns with psychological concepts of moral disengagement, where individuals cognitively separate their actions from their moral principles, allowing them to engage in unethical behavior without self-censure.

The third key domain was a preference for religious and political authoritarianism. Respondents who justified terrorism were more likely to support a system where religious authorities interpret laws and were less likely to see free elections as an essential characteristic of democracy. They also tended to view a system of governance based on religious law, without political parties or elections, more favorably.

This combination of attitudes points to a skepticism toward democratic institutions and a preference for strong, centralized, and non-pluralistic forms of rule. The authoritarian preference appeared to be focused on political structures, as these same individuals showed more permissive attitudes toward certain personal behaviors like casual sex and homosexuality.

“Those who consider terrorism acceptable often share a worldview in which violence is normalized, moral rules are flexible, and democratic principles are devalued,” Joshanloo told PsyPost. “This worldview is not confined to any single group, religion, or region. Anyone can hold it or adopt it, regardless of location or identity. Outdated and simplistic explanations ignore the fact that pro-violence views are not limited to any particular demographic. To effectively reduce support for terrorism (and violence more broadly), we must address the underlying attitudes that legitimize violence everywhere, without assuming that one’s own group or oneself is immune to such beliefs.”

“Strategies to reduce violence should be guided by current evidence, not outdated stereotypes. For example, strengthening democratic institutions, addressing their weaknesses, and building community trust and cooperation are crucial steps, even though these measures are far more challenging than relying on easy but ineffective assumptions and prejudices.”

The findings can also be interpreted through the lens of the “Dark Triad” of personality traits: Machiavellianism, psychopathy, and narcissism. Although these traits were not measured directly, the observed patterns align with them conceptually.

“For example, endorsing bribery, fraud, and other rule-breaking behaviors for personal gain may reflect Machiavellian tendencies, that is, strategic, self-serving disregard for norms,” Joshanloo explained. “The normalization of violence and lack of empathy align with psychopathic traits, which involve callousness and instrumental use of aggression.”

“Narcissism may be reflected in the preference for authoritarian governance and rejection of democratic principles, signaling an inflated sense of ideological entitlement and disdain for pluralism. Taken together, these connections suggest that support for terrorism is not only ideological but may also stem from deeper psychological dispositions that erode empathy and moral constraints.”

While the study offers a robust global perspective, it has some limitations. The analysis was restricted to the variables available in the World Values Survey, meaning other potentially important factors, such as exposure to extremist propaganda or group grievances, were not included. Because the data was cross-sectional, it identifies associations but cannot establish causality.

Future research could build on these findings by incorporating a wider range of variables, using more detailed psychological measures, and employing longitudinal designs to better understand how these attitudes develop over time.

“It is important to note that this study does not examine engagement in terrorism, but rather the extent to which individuals view terrorism as justifiable,” Joshanloo noted. “The results should be understood as general global patterns, offering a broad picture of what tends to predict justification of terrorism globally. They are not meant to describe any single country, region, group, or context in detail.

“A factor that appears unimportant at the global level might be highly significant in a specific local setting. Therefore, these findings should serve as a backdrop for more nuanced local research rather than a substitute for it. The value of global insights is that they reveal common psychological and ideological tendencies that transcend borders, helping us understand what is broadly associated with terrorism justification worldwide. This big-picture perspective provides a reference point for interpreting local findings and designing strategies that combine global insights with context-specific solutions.”

The study, “Who Considers Terrorism Justifiable? A Machine Learning Analysis Across 65 Countries,” was authored by Mohsen Joshanloo.

Shyness linked to spontaneous activity in the brain’s cerebellum

A recent study provides new evidence on the neural basis of shyness, suggesting a link between this personality trait and spontaneous activity in the cerebellum. The research indicates that the strength of this relationship is partly explained by an individual’s sensitivity to potential social threats. The findings were published in the journal Personality and Individual Differences.

Previous research has explored connections between shyness and brain regions involved in emotion and social processing, such as the prefrontal cortex and the amygdala. However, findings have been inconsistent, leaving the specific neural architecture of shyness unclear.

One prominent model suggests shyness emerges from a conflict between the motivation to approach social situations and the motivation to avoid them. To investigate this, researchers often use the concepts of the Behavioral Inhibition System (BIS) and the Behavioral Activation System (BAS).

The BIS is associated with avoidance motivation, making individuals more sensitive to potential punishment or negative outcomes, while the BAS is tied to approach motivation and sensitivity to rewards. The present study aimed to connect these motivational systems to the spontaneous, or resting-state, brain activity associated with shyness.

“Shyness is a common personality trait, but its neural basis has remained elusive. Most existing research has focused on the prefrontal cortex and amygdala, while the role of the cerebellum—traditionally viewed as a ‘motor’ region—has been largely overlooked,” said study author Hong Li, a psychology professor at South China Normal University.

“Yet recent evidence shows that the cerebellum also contributes to emotion and social processing. We wanted to understand whether the cerebellum plays a meaningful role in shyness and how motivational systems—especially the Behavioral Inhibition System (BIS), which governs our sensitivity to threat—might link brain activity to shy behavior. This question bridges an important gap between biological mechanisms and everyday emotional experience.”

The researchers recruited 42 healthy university students. Participants completed questionnaires to measure their levels of trait shyness. They also filled out surveys to assess the sensitivity of their Behavioral Inhibition System and Behavioral Activation System. For example, a high BIS score might reflect agreement with a statement like, “If I think something unpleasant is going to happen, I usually get pretty ‘worked up.’”

Each participant also underwent a resting-state functional magnetic resonance imaging (fMRI) scan. This technique measures brain activity while a person is at rest and not performing any specific task, allowing scientists to observe the brain’s baseline or spontaneous neural patterns. The researchers then analyzed the fMRI data using a method called Regional Homogeneity, or ReHo. This technique measures the degree of synchronized activity among neighboring points in the brain, essentially gauging the local functional harmony within a specific area.
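ReHo is conventionally computed as Kendall's coefficient of concordance across the time series of a voxel and its immediate neighbors. The sketch below illustrates that calculation for a single simulated neighborhood; it is a minimal example under that standard definition, not the preprocessing pipeline used in this study.

```python
import numpy as np
from scipy.stats import rankdata

def kendalls_w(time_series: np.ndarray) -> float:
    """Kendall's coefficient of concordance for a (k voxels x n timepoints) array.

    Values near 1 mean the neighboring time series rise and fall together
    (high local synchrony); values near 0 mean they are unrelated.
    """
    k, n = time_series.shape
    # Rank each voxel's time series across time, then sum ranks per timepoint
    ranks = np.apply_along_axis(rankdata, 1, time_series)
    rank_sums = ranks.sum(axis=0)
    s = ((rank_sums - rank_sums.mean()) ** 2).sum()
    return 12 * s / (k ** 2 * (n ** 3 - n))

# Hypothetical 27-voxel neighborhood (a voxel plus its 26 neighbors), 200 volumes
rng = np.random.default_rng(1)
shared_signal = rng.normal(size=200)
neighborhood = shared_signal + 0.5 * rng.normal(size=(27, 200))
print(f"ReHo (Kendall's W) = {kendalls_w(neighborhood):.2f}")
```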

The analysis first looked for direct correlations between shyness scores and ReHo values across the entire brain. The results pointed to a significant association in one particular area: the right posterior lobe of the cerebellum. Specifically, individuals who reported higher levels of shyness tended to have lower ReHo values in this region. This suggests that greater shyness is associated with less synchronized local neural activity in this part of the cerebellum when the brain is at rest.

No other brain regions showed a significant relationship with shyness in this analysis.

“We initially expected the prefrontal cortex to play a stronger role, given previous findings,” Li told PsyPost. “Instead, the cerebellum showed a clear and specific association with shyness. This was surprising and exciting—it suggests that the cerebellum contributes not only to coordination and timing, but also to the fine-tuning of emotional and social responses.”

The researchers also examined the relationships between the personality measures. They found that shyness scores were strongly and positively correlated with scores on the Behavioral Inhibition System. This aligns with the idea that shy individuals tend to be more sensitive to potential threats and social punishments. In contrast, there was no significant correlation between shyness and the Behavioral Activation System, which relates to reward-seeking.

With these connections established, the team performed a mediation analysis to see if the BIS or BAS could explain the link between cerebellar activity and shyness. This statistical method examines whether one factor helps explain the relationship between two others.
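A common regression-based version of this procedure estimates path a (brain measure to mediator), path b (mediator to outcome, controlling for the brain measure), and a bootstrapped confidence interval for the indirect effect a times b. The sketch below illustrates that approach with simulated placeholder data, not the study's measurements or exact model.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 42  # same size as the study's sample, but with synthetic values
reho = rng.normal(size=n)                              # cerebellar ReHo (predictor X)
bis = -0.5 * reho + rng.normal(size=n)                 # behavioral inhibition (mediator M)
shyness = 0.6 * bis - 0.1 * reho + rng.normal(size=n)  # trait shyness (outcome Y)

def indirect_effect(x, m, y):
    # Path a: X -> M; Path b: M -> Y controlling for X; indirect effect = a * b
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Bootstrap a confidence interval for the indirect effect
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boot.append(indirect_effect(reho[idx], bis[idx], shyness[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(reho, bis, shyness):.2f}, "
      f"95% CI [{lo:.2f}, {hi:.2f}]")
```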

The analysis revealed that the Behavioral Inhibition System did indeed play a mediating role. The data suggest that lower synchronized activity in the right posterior cerebellum is associated with a more sensitive behavioral inhibition system, which in turn is linked to higher levels of shyness. The BIS appears to function as a partial bridge connecting the neural pattern to the personality trait.

The Behavioral Activation System, on the other hand, did not show any significant mediating effect. This result provides evidence that shyness may be more strongly driven by avoidance and inhibition motivations than by a lack of approach or reward-seeking motivations. The findings refine the motivational conflict model of shyness, pointing to the primary influence of the brain’s threat-detection system.

“Our results show that people who are more shy tend to have lower spontaneous neural activity in a specific part of the cerebellum (the right posterior lobe),” Li explained. “This relationship is partly explained by higher activity in the Behavioral Inhibition System, which makes people more cautious or anxious in social situations.”

“In simpler terms, shyness may not just come from ‘overthinking’ or lack of confidence—it might also reflect how certain brain systems regulate our sensitivity to potential social threat. This understanding can help us view shyness not as a flaw, but as a meaningful difference in how the brain balances safety and connection.”

The study does have some limitations to consider. The sample size was relatively modest and consisted only of university students, which may limit how broadly the findings can be applied to the general population. The study’s cross-sectional design identifies associations between brain activity and personality, but it cannot establish a direct causal relationship. It remains unclear whether the brain patterns contribute to shyness or if experiences related to shyness shape the brain over time.

“It’s important not to interpret these findings as showing that shyness is ‘caused’ by a single brain region,” Li noted. “The cerebellum does not make someone shy by itself. Rather, shyness arises from complex interactions among brain systems, personality, and experience. Our data are correlational, so we can’t infer direct causality—but they point to a promising direction for future longitudinal and experimental research.”

“While the effect sizes in our mediation model are moderate—indicating a partial but meaningful role for the BIS in linking cerebellar activity to shyness—readers should view them as foundational rather than definitive, given our exploratory approach and sample size. Practically, this suggests that targeting the BIS through therapies could have tangible benefits for reducing shyness, though the effects might vary across individuals; it’s not a ‘cure-all’ but a stepping stone toward personalized interventions that could improve social functioning in everyday contexts like work or relationships.”

The researchers also noted that resting-state fMRI captures only one aspect of brain function. Incorporating task-based fMRI, where participants engage in social tasks during the scan, could provide a more complete picture of the neural processes at play.

“We are currently planning to explore how training or modulation of the cerebellum and BIS-related circuits might reduce excessive social inhibition,” Li explained. “For example, neurofeedback and real-time fMRI could be used to help individuals gain more control over their behavioral inhibition responses. We also aim to examine different subtypes of shyness—such as ‘positive shyness’ and ‘fearful shyness’—to see whether they involve distinct neural patterns.”

“I hope this study encourages people to think about shyness with greater compassion. Being shy does not mean being socially deficient—it often reflects a heightened sensitivity to social cues and a desire to interact carefully and meaningfully. Understanding the brain basis of shyness helps us appreciate it as a form of emotional intelligence, rather than simply a barrier to overcome.”

The study, “Associations between trait shyness and cerebellar spontaneous neural activity are mediated by behavioral inhibition,” was authored by Liang Li, Yujie Zhang, Benjamin Becker, and Hong Li.

Scientists pinpoint genetic markers that signal higher Alzheimer’s risk

A new study has uncovered evidence suggesting that a person’s inherited predisposition for higher levels of the tau protein in their blood is associated with an increased likelihood of developing Alzheimer’s disease or its precursor stage. The findings, which also point to potential differences in risk based on sex and age, were published in the journal Neurology.

Alzheimer’s disease is a progressive brain disorder that gradually impairs memory and thinking skills. At the molecular level, it is characterized by the accumulation of two key proteins in the brain: amyloid-beta, which forms plaques between nerve cells, and tau, which forms tangles inside them. While tau protein normally helps stabilize the internal skeleton of brain cells, in Alzheimer’s disease it becomes abnormal and aggregates, disrupting cell function and contributing to neurodegeneration.

Because elevated tau levels in the blood can reflect ongoing damage to brain cells, they are considered an important biomarker for the disease. In the new study, led by geneticist Niki Mourtzi and neurology professor Nikolaos Scarmeas of the National and Kapodistrian University of Athens Medical School, researchers sought to move beyond measuring current tau levels and instead investigate the underlying genetic factors.

“Early detection of Alzheimer’s disease remains challenging, as most biomarkers require invasive procedures or expensive imaging. We aimed to fill this gap by investigating whether a polygenic risk score for plasma tau, a minimally invasive biomarker, could identify individuals at higher risk for developing Alzheimer’s disease or amnestic mild cognitive impairment,” the researchers told PsyPost.

A polygenic risk score is a tool that estimates an individual’s inherited susceptibility to a specific condition. This single numerical value is calculated by combining the small effects of numerous genetic variants from across a person’s entire genome.
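In its simplest form, such a score is a weighted sum: the number of risk alleles a person carries at each variant (0, 1, or 2) is multiplied by that variant's estimated effect size, and the products are summed and standardized. The sketch below illustrates the arithmetic with invented weights and genotypes; it is not the 21-variant score used in the study.

```python
import numpy as np

# Hypothetical per-allele effect sizes for a handful of variants,
# as might come from a genome-wide association study of plasma tau
effect_sizes = np.array([0.12, -0.05, 0.08, 0.20, 0.03])

# Genotypes for 4 people: number of risk alleles (0, 1, or 2) at each variant
genotypes = np.array([
    [0, 1, 2, 0, 1],
    [2, 2, 1, 1, 0],
    [1, 0, 0, 2, 2],
    [0, 0, 1, 0, 1],
])

# Raw score: weighted sum of allele counts for each person
raw_scores = genotypes @ effect_sizes

# Standardize so results can be expressed per standard deviation,
# matching how risk increases are typically reported ("per 1-SD increase")
prs = (raw_scores - raw_scores.mean()) / raw_scores.std()
print(np.round(prs, 2))
```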

“An important advantage of a polygenic risk score is that it captures inherited genetic variation, allowing us to predict disease risk from birth, decades before amyloid and tau start to accumulate in the brain,” Mourtzi and Scarmeas explained. “Unlike prior studies that focused on cognitive scores, our study evaluated a clinically meaningful outcome over time, providing a more direct link between genetic risk and disease development.”

The investigation was conducted in two main phases, using data from two distinct populations. The first part of the study involved the Hellenic Longitudinal Investigation of Aging and Diet (HELIAD), a community-based study in Greece. The researchers analyzed data from 618 participants, who were 65 years or older and did not have Alzheimer’s or amnestic mild cognitive impairment at the beginning of the study. Amnestic mild cognitive impairment is a condition involving memory loss that is often a precursor to Alzheimer’s disease.

For each participant, the team calculated a polygenic risk score for tau based on 21 genetic variations located near the gene that provides the instructions for making the tau protein. The participants were followed for an average of about three years. During this period, 73 individuals were diagnosed with either Alzheimer’s disease or amnestic mild cognitive impairment.

The analysis provided evidence of an association between the genetic score and disease risk. The results showed that for every one standard deviation increase in the polygenic risk score, there was an associated 29% higher risk of developing one of the cognitive conditions. This relationship appeared to be independent of other known risk factors, including age, sex, education, and the presence of the APOE e4 gene, which is the most well-established genetic risk factor for Alzheimer’s.

“Our study is among the first to link a polygenic risk score for plasma tau directly to clinical outcomes rather than cognitive scores,” Mourtzi and Scarmeas said.

When the researchers examined specific subgroups, they observed that the association was not uniform. The link between a higher genetic score and disease risk was stronger in women, who showed a 45% increase in risk for every standard deviation increase in their score. The association also tended to be more pronounced in younger participants (those below the group’s median age of 73), who had an 87% higher risk. In contrast, the associations were not statistically significant for men or for the older participants in the cohort.

“People with a higher genetic predisposition to elevated plasma tau levels face an increased risk of Alzheimer’s disease or its prodromal stage,” Mourtzi and Scarmeas told PsyPost. “In the HELIAD study, those with higher genetic risk had about a 28.5% greater chance of developing Alzheimer’s disease or amnestic mild cognitive impairment. The effect was stronger in women and younger individuals, suggesting that both sex and age influence how genetic risk translates into disease. Early identification of those at higher genetic risk could enable earlier interventions, lifestyle modifications, or monitoring, potentially improving outcomes.”

“We were somewhat surprised by the pronounced sex- and age-specific effects. This may be influenced by sex-specific genetic mechanisms: for example, X-linked genes such as USP11 are more highly expressed in female brains and can promote tau accumulation, while other X chromosome loci like CHST7 may facilitate tau fibril formation and propagation. We also found that genetic risk was more relevant in younger participants, suggesting that inherited tau-related risk is more influential earlier in life before lifestyle, comorbidities, or other environmental factors become dominant.”

To see if these findings were robust, the researchers sought to replicate them in a much larger and more diverse group of people from the UK Biobank. This second part of the analysis included over 142,000 individuals aged 60 and older who were free of dementia at the start of the study. These participants were followed for an average of nearly 13 years, during which 2,737 developed Alzheimer’s disease.

In this large cohort, a higher polygenic risk score for tau was also associated with an increased risk of an Alzheimer’s diagnosis, which supports the initial findings. The effect size was smaller, with a one standard deviation increase in the score corresponding to about a 5% increase in risk.

The subgroup analyses by sex and age did not produce significant results in this larger sample. However, when the researchers created a smaller UK Biobank subsample that was statistically matched to the Greek cohort based on age, sex, and other characteristics, the results were more aligned. In this matched group, a higher score was linked to a 50% increased risk of developing Alzheimer’s.

“Although the individual effect of the tau PRS is modest, it remained consistent across two large, independent cohorts, reinforcing its potential utility,” Mourtzi and Scarmeas said. “When combined with established risk factors such as APOE genotype, age, sex, or genetic risk for other Alzheimer’s-related biomarkers (e.g., amyloid, hippocampal atrophy, white matter hyperintensities), [it] can help identify people who may be at higher risk for Alzheimer’s disease, potentially years or even decades before symptoms appear.”

It is important to note that polygenic risk scores are predictive tools, not diagnostic certainties. They are based on common genetic variants and do not account for the influence of rare genes, lifestyle choices, or environmental factors, all of which play a part in the development of complex diseases like Alzheimer’s. The score was also developed and tested in populations of European ancestry, meaning its predictive power might not be the same in individuals from other backgrounds.

“It represents only one factor, as other variables like lifestyle, environment, and chance also play a significant role,” the researchers noted. “A high polygenic risk score does not guarantee a person will develop Alzheimer’s disease, and a low polygenic risk score does not exclude the possibility of developing it. However, polygenic risk scores can be seen as an important tool to identify individuals at higher risk and take early preventive actions, such as lifestyle modifications, monitoring, or participation in clinical studies aimed at reducing risk.”

Looking ahead, the research team suggests that this polygenic risk score for tau could be combined with other genetic scores, such as those for amyloid buildup or brain atrophy, to create a more comprehensive risk assessment model. Such a multifaceted approach could improve the ability to stratify individuals by their overall genetic risk, helping to target preventive strategies and guide enrollment in clinical trials for new therapies.

“We aim to integrate tau-related polygenic risk scores with additional genetic and imaging biomarkers to develop comprehensive, multifactorial models for Alzheimer’s disease risk prediction,” Mourtzi and Scarmeas explained. “We have already computed polygenic risk scores for other relevant endophenotypes, including amyloid deposition, hippocampal atrophy, and white matter hyperintensities, and intend to combine these scores into a single composite measure that captures overall genetic and neuroimaging risk. This integrative approach has the potential to enable early, personalized interventions and to refine risk stratification strategies in both research and clinical settings.”

The study, “Longitudinal Association of a Polygenic Risk Score for Plasma T-Tau With Incident Alzheimer Dementia and Mild Cognitive Impairment,” was authored by Niki Mourtzi, Sokratis Charisis, Eva Ntanasi, Alexandros Hatzimanolis, Alfredo Ramirez, Stefanos N. Sampatakakis, Mary Yannakoulia, Mary H. Kosmidis, Efthimios Dardiotis, George Hadjigeorgiou, Paraskevi Sakka, Eirini Mamalaki, Christopher Papandreou, Marios K. Georgakis, and Nikolaos Scarmeas.

COVID-19 exposure during pregnancy may increase child’s autism risk

Children whose mothers had COVID-19 during pregnancy appear to have an increased likelihood of being diagnosed with developmental conditions by age three, including speech delays and autism. This new research, published in Obstetrics & Gynecology, suggests that maternal COVID-19 infection may influence fetal brain development.

The study provides evidence that exposure to the SARS-CoV-2 virus in the womb may be associated with neurodevelopmental differences in early childhood. This association appears more pronounced in male children and when the infection occurred in the third trimester of pregnancy.

COVID-19 is a respiratory illness caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The pandemic that began in early 2020 raised many questions about the virus’s impact on various aspects of health, including pregnancy and child development.

Previous research on other maternal infections during pregnancy has indicated a potential link to various neurodevelopmental conditions in children. For instance, studies have shown that immune system activation in a pregnant individual can disrupt the developing brain of the fetus and affect offspring behavior in animal models.

Researchers at Mass General Brigham conducted this study to examine whether SARS-CoV-2 infection during pregnancy could be associated with similar outcomes. They had previously observed an elevated risk of neurodevelopmental diagnoses at 12 and 18 months in children exposed to maternal SARS-CoV-2 infection during pregnancy. The current study aimed to determine if these potential effects persisted into early childhood, specifically looking at diagnoses by age three.

The researchers analyzed data from 18,124 live births that occurred within the Mass General Brigham health system between March 2020 and May 2021. This period was selected because it featured universal SARS-CoV-2 testing in labor and delivery units and widespread screening for COVID-19 symptoms during pregnancy, which helped ensure reliable identification of both positive and negative cases. The team linked data from mothers and their children, examining maternal medical history, vaccination status, and sociodemographic information.

The main factor of interest was a positive SARS-CoV-2 PCR test result during pregnancy. The primary outcome the researchers looked for was at least one neurodevelopmental diagnosis within the first three years after birth, identified through specific diagnostic codes from medical records. These codes covered a range of conditions, including disorders of speech and language, motor function, and autism spectrum disorder. They also considered potential influencing factors such as maternal age, race, ethnicity, insurance type, and whether the birth was preterm.

Among the 18,124 live births included in the study, 861 children were exposed to maternal SARS-CoV-2 infection during pregnancy. The researchers found that 140 of these 861 children (16.3%) received a neurodevelopmental diagnosis by age three.

In comparison, among the 17,263 children whose mothers did not have SARS-CoV-2 infection during pregnancy, 1,680 (9.7%) received such a diagnosis. After accounting for other factors that could influence neurodevelopment, maternal SARS-CoV-2 infection during pregnancy was associated with a 29% higher odds of a child receiving a neurodevelopmental diagnosis by age three.
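As a point of reference, the unadjusted comparison can be reproduced directly from the counts reported above, as in the short calculation below. The study's 29% figure is smaller because it comes from a model adjusting for factors such as maternal age, race, ethnicity, insurance type, and preterm birth.

```python
# Counts reported in the study
exposed_with_dx, exposed_total = 140, 861
unexposed_with_dx, unexposed_total = 1680, 17263

# Unadjusted (crude) odds ratio from the raw counts
odds_exposed = exposed_with_dx / (exposed_total - exposed_with_dx)
odds_unexposed = unexposed_with_dx / (unexposed_total - unexposed_with_dx)
crude_or = odds_exposed / odds_unexposed
print(f"crude odds ratio ~= {crude_or:.2f}")  # roughly 1.8

# The adjusted figure reported in the study (about 29% higher odds, or an
# odds ratio near 1.29) accounts for covariates, which helps explain why it
# is smaller than this unadjusted comparison.
```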

“These findings highlight that COVID-19, like many other infections in pregnancy, may pose risks not only to the mother, but to fetal brain development,” said senior author Andrea Edlow, MD MSc, a Maternal-Fetal Medicine specialist in the Department of Obstetrics and Gynecology at Mass General Brigham.

The study also investigated specific patterns within these findings. The association between maternal SARS-CoV-2 infection and neurodevelopmental diagnoses was found to be more pronounced when the infection occurred during the third trimester of pregnancy. Children exposed during the third trimester had a significantly increased risk of a neurodevelopmental diagnosis compared to the unexposed group. However, exposure during the first or second trimesters did not show a statistically significant difference in risk from the unexposed group.

Additionally, the researchers observed a difference in risk between male and female offspring. Third-trimester maternal SARS-CoV-2 infection was significantly associated with an increased risk of neurodevelopmental diagnosis in male children.

The magnitude of risk in female offspring with third-trimester exposure was smaller and did not reach statistical significance in this study. The most frequently identified neurodevelopmental diagnoses included disorders of speech and language, developmental disorder of motor function, autistic disorder, and other specified or unspecified disorders of psychological development.

While reducing risk is important, co-senior author Roy Perlis of the Mass General Brigham Department of Psychiatry noted that the “overall risk of adverse neurodevelopmental outcomes in exposed children likely remains low.”

While these findings suggest an association, there are some aspects to consider. The study relied on medical record diagnoses, which may not capture all neurodevelopmental conditions and could potentially lead to some misclassification. However, such misclassification would likely lead to results that underestimate the true effect. Children who received diagnoses outside the Mass General Brigham health system would not have been included in the dataset. Also, asymptomatic SARS-CoV-2 infections during pregnancy might not have been consistently detected, which could also lead to an underestimation of the effects.

Future research could involve continued follow-up of these children to assess the long-term persistence and clinical impact of these early neurodevelopmental observations. Further studies may also explore the underlying biological mechanisms in greater detail.

First author and Maternal-Fetal Medicine specialist Lydia Shook added: “Parental awareness of the potential for adverse child neurodevelopmental outcomes after COVID-19 in pregnancy is key. By understanding the risks, parents can appropriately advocate for their children to have proper evaluation and support.”

The study, “Neurodevelopmental Outcomes of 3-Year-Old Children Exposed to Maternal Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) Infection in Utero,” was authored by Lydia L. Shook, Victor Castro, Laura Ibanez-Pintor, Roy H. Perlis, and Andrea G. Edlow.

Disgust sensitivity is linked to a sexual double standard, study finds

A new study provides evidence that negative attitudes toward sexually expressive people may apply to adults of all ages, rather than being a bias specifically aimed at older adults. The research also suggests that an individual’s sensitivity to disgust can influence these judgments differently depending on whether they are evaluating a man or a woman. The findings were published in The Journal of Sex Research.

The motivation for this research stems from the rapidly aging population in the United States and the need to better understand the sexual and relationship needs of older adults. Previous research has shown that negative attitudes about the sexuality of older people, a phenomenon known as sexual ageism, can be a barrier to their well-being. However, the existing body of research on this topic has produced mixed results.

Some studies indicate that older adults are often stereotyped as asexual or are viewed negatively when they do express sexuality. Other studies suggest that people hold neutral or even positive views. The researchers behind this new work noted that much of the prior research lacked important comparison groups.

Without comparing judgments of older adults to judgments of younger adults, or judgments of sexual behavior to non-sexual behavior, it is difficult to determine if negative reactions are due to a person’s age or simply due to their sexual expression. This study was designed to disentangle these possibilities.

“Older adults frequently report that others treat them as asexual or dismiss their sexuality, and we wanted to design a study that would test whether older adults faced stigma for their sexual expression, but also whether this sexuality-based stigma was present for younger adults as well,” said study author Gabriella Rose Petruzzello, a PhD student at the University of New Brunswick and member of the Sex Meets Relationships Research Lab.

“Simultaneously, a bunch of research has shown that people who have higher levels of the emotion of disgust tend to report more homophobia, transphobia, and more negative attitudes towards individuals who violate sexual norms. We live in a sex-saturated world, but one where a sizable portion of individuals, knowingly or unknowingly, continue to stigmatize certain forms of sexual expression. This research sought to identify this sexual stigma and determine whether an individual’s likelihood of experiencing disgust was one factor that could predict this stigmatization.”

The research consisted of two separate experiments. In the first study, 303 participants were recruited online and randomly assigned to read one of four different informational flyers. The flyers were designed to introduce a new neighbor, a woman named Elizabeth, who was either 25 or 65 years old. The content of the flyer was also varied; it described Elizabeth as having either a vibrant “romantic life” or a vibrant “sex life” with her husband.

After reading one of the four versions of the flyer, participants rated Elizabeth on several scales. These scales were used to assess their general interpersonal evaluations of her, such as viewing her as good or bad, and their perceptions of her lifestyle, such as viewing it as safe or risky. Participants also completed a questionnaire to measure their own level of disgust sensitivity, a personality trait related to how easily a person feels disgust in response to various situations.

The results of the first study did not show strong evidence for ageism. The 65-year-old woman was not evaluated more negatively or as being riskier than the 25-year-old woman. A clear pattern did emerge regarding sexual expression. The woman described as having a vibrant sex life was rated more negatively and as riskier than the woman described as having a vibrant romantic life, and this was true for both the younger and older targets.

The researchers also found a connection with the participants’ own disgust sensitivity. Individuals who were more easily disgusted tended to judge the sexually open women more harshly, viewing them more negatively and as being riskier. This relationship was not present when they evaluated the women described in romantic terms. This suggests that for women, being openly sexual invites more negative scrutiny from people who are high in disgust sensitivity.

The second study followed a nearly identical design to investigate whether these patterns would hold true for male targets. A new group of 375 participants read one of four flyers introducing a man who was either 25 or 65 years old and was described as having either a vibrant romantic or sex life. Participants then provided the same types of ratings as in the first study.

Consistent with the first study, the results provided evidence for a general sexual stigma. Men described as being sexual were rated more negatively and as riskier than men described as being romantic, regardless of their age. The findings on age were slightly different. The younger men tended to be perceived slightly more negatively and as riskier than the older men, a finding that runs contrary to the concept of ageism against older adults.

A notable difference emerged in the role of disgust sensitivity. When evaluating the male targets, participants higher in disgust sensitivity tended to rate the sexually open men more positively. This is the opposite of the pattern observed in the first study, where higher disgust sensitivity was linked to more negative evaluations of sexually open women. Disgust sensitivity was not found to be related to how risky the men were perceived to be.

“The key takeaways are 1) sexual stigma appears to transcend age and gender related boundaries with both younger and older men and women being rated as more negative and as riskier for their sexual expression compared to their romantic expression,” Petruzzello told PsyPost. “We were surprised by just how present this stigma appeared to be and how this stigma was present across age and gender groups. 2) Disgust sensitivity was not universally related to negative perceptions of sexuality. Individuals higher in disgust sensitivity appear to penalize sexually open women but actually reward sexually open men. This would point to disgust as a factor reinforcing sexual double standard beliefs, whereby women are penalized for their sexuality and men are rewarded.”

“Collectively, this study offers evidence that sexual stigma is alive and well and that individual difference variables like disgust can perpetuate harmful sexual stigmas against some groups (like women) more so than other groups (like men).”

The researchers acknowledge some limitations of their work. The use of an experimental flyer might not perfectly reflect real-world social interactions, as such a direct disclosure about one’s sex life to a new neighbor could be seen as a violation of social norms. The negative reactions might be partly due to this oversharing, rather than simply the sexual content itself.

Additionally, the flyers in the study only featured White individuals. This means the findings may not be generalizable to people of other racial or ethnic backgrounds, where cultural norms and stereotypes around sexuality and aging might be different. Future research could explore how factors like race and sexual orientation intersect with age and sexual expression to shape people’s perceptions. Future studies could also include male and female targets in a single experiment to more directly compare judgments and confirm the patterns observed.

“Our lab wants to continue understanding how feelings of disgust contribute to sexual stigma and can also undermine individual’s general and sexual well-being,” Petruzzello said. “We’re excited that we were able to add to the body of research showing that disgust sensitivity is related to harmful beliefs about certain groups, which has important implications for future efforts to mitigate the effects of these biases.”

The study, “Sexual Ageism or Sexual Stigma? Sexual Double Standards and Disgust Sensitivity in Judgments of Sexual and Romantic Behavior,” was authored by Gabriella Petruzzello, Lucia F. O’Sullivan, and Randall A. Renstrom.

New review questions the evidence for common depression treatments

A new review of depression treatments suggests that the scientific evidence for many common strategies used when a first antidepressant fails is not as strong as widely believed. The findings, which reexamine the influential Sequenced Treatment Alternatives to Relieve Depression (STAR*D) trial, indicate that the benefits observed in that study may stem more from factors like patient expectations than the specific pharmacological action of the medications. The analysis was published in the Journal of Clinical Psychopharmacology.

The new review was conducted by a team of researchers led by Kevin P. Kennedy at the Corporal Michael J. Crescenz VA Medical Center. Their work was prompted by the widespread influence of the STAR*D trial.

Before STAR*D, most clinical trials for antidepressants studied medications in carefully selected patient groups. These trials often excluded individuals with other medical conditions, co-occurring psychiatric disorders, or chronic depression, meaning the results were not always applicable to the more complex patients seen in everyday clinical practice.

Published in the mid-2000s, STAR*D was a large and ambitious study designed to fill this knowledge gap. As a pragmatic trial, it was conducted in real-world primary care and psychiatric clinics and enrolled a diverse group of over 4,000 patients, making it the largest study of its kind.

All participants began treatment with the antidepressant citalopram. Those who did not achieve remission after this first step could proceed through a sequence of up to three additional treatment levels, where they were offered different strategies, such as switching to another antidepressant or augmenting their current medication with a second one.

The findings from STAR*D have shaped depression treatment for years. The study reported that while only about a third of patients recovered after the first treatment, sequential treatment steps offered continued hope. The most widely cited conclusion was that by trying up to four different strategies, a cumulative remission rate of nearly 70% could be achieved among patients who remained in the study.

However, the original STAR*D study was open-label, meaning both patients and their doctors knew which medication was being prescribed, and there was no placebo group for comparison. This design makes it difficult to separate the true effect of a drug from other factors like a patient’s expectations or the natural course of the illness.

Since STAR*D’s publication, many of its treatment steps have been tested in double-blind, placebo-controlled randomized trials, which are considered a more rigorous way to measure effectiveness. Kennedy and his colleagues set out to compare the results from STAR*D with this newer body of evidence.

The researchers performed a detailed review of the scientific literature, searching for meta-analyses and high-quality randomized controlled trials that investigated the specific treatment strategies used in the different stages, or “levels,” of the STAR*D study.

They then systematically compared the findings from these blinded, controlled studies to the outcomes reported in the original STAR*D trial for each corresponding strategy. These strategies included increasing the dose of an initial antidepressant, switching to a different one, and augmenting an antidepressant with a second medication.

The first strategy examined was the practice of increasing an antidepressant dose if a patient does not respond to the initial starting dose. In STAR*D, patients who did not achieve remission on the antidepressant citalopram had their dose systematically increased.

The review by Kennedy and colleagues found that this common practice is not well supported by subsequent controlled trials. Multiple meta-analyses that pooled data from thousands of patients provided evidence that increasing the dose of a selective serotonin reuptake inhibitor (SSRI), the class of drug that includes citalopram, offers no significant benefit over simply continuing the original, standard dose.

These analyses also suggested that higher doses tend to be associated with a greater likelihood of side effects, potentially making the treatment less tolerable for patients without providing additional antidepressant effects. While a couple of analyses identified a very modest benefit at certain higher doses, the overall body of evidence points toward a flat dose-response relationship for SSRIs, meaning that once a standard therapeutic dose is reached, higher doses do not appear to provide a clinically meaningful improvement.

The next strategy evaluated was switching to a different antidepressant after the first one proved ineffective. This was a core component of Levels 2 and 3 in the STAR*D trial.

The review found a similar lack of supporting evidence from blinded trials for this approach. A meta-analysis of studies in which patients were randomly assigned to either switch to a new antidepressant or continue their original one found no advantage for the switching strategy. In these controlled settings, patients who switched medications did not experience greater symptom reduction than those who stayed on their initial medication.

The researchers also examined the strategy of augmentation, which involves adding a second medication to the first antidepressant. In STAR*D’s Level 2, patients could have their citalopram augmented with either bupropion or buspirone. For buspirone, the review found consistent evidence from blinded trials that it performs no better than a placebo when added to an SSRI. This finding stands in contrast to STAR*D, where buspirone augmentation was associated with remission rates nearly identical to bupropion augmentation.

The evidence for bupropion augmentation was more complex but generally did not replicate STAR*D’s positive results. A comprehensive meta-analysis found that when all trials were considered, adding bupropion was not superior to antidepressant monotherapy. While a small subset of trials involving patients who had previously not responded to treatment showed a marginal benefit, these studies had limitations. The larger, higher-quality trials failed to show a clear advantage for the combination treatment.

The review then moved to the augmentation strategies used in STAR*D’s Level 3, which were reserved for patients who had not responded to two previous treatment attempts. These strategies involved adding either T3 thyroid hormone or lithium. For T3, the available evidence from controlled trials is limited, but existing meta-analyses do not suggest that it outperforms a placebo. Studies looking at both T3 augmentation and co-prescribing it with an antidepressant from the start have not found a significant benefit in remission or response rates.

Lithium augmentation, on the other hand, appeared to be one of the few STAR*D strategies with some support from controlled trials. Meta-analyses of placebo-controlled studies have consistently found that adding lithium to an antidepressant is an effective strategy for treatment-resistant depression. However, the researchers noted an important limitation. The evidence base is surprisingly small, and very few of these trials have specifically studied lithium in combination with the modern SSRIs that are most commonly prescribed today.

Finally, the researchers looked at the Level 4 strategy of combining the antidepressants venlafaxine and mirtazapine for highly treatment-resistant patients. A large meta-analysis provides evidence of a benefit for this type of combination therapy compared to monotherapy. This finding seems to support the strategy used in STAR*D.

Yet, the review authors point to significant limitations within that meta-analysis. They note that the positive result appears to be heavily influenced by many small studies, while the five largest and highest-quality trials on the topic were all negative. This suggests the possibility of publication bias, where smaller studies with positive results are more likely to be published than larger studies with negative results. After accounting for this potential bias, the benefit of the combination was reduced to a level that may not be clinically meaningful.

The authors of the review acknowledge several limitations in their own analysis. The patients included in randomized controlled trials are often different from the “real-world” patients in STAR*D, who had more co-occurring medical and psychiatric conditions. It is possible that treatments that fail in controlled trials could still have an effect in a more diverse population.

Additionally, the specific treatment protocols in the controlled trials did not always perfectly match the steps taken in STAR*D, and the review itself was not a formal systematic one, meaning some relevant studies may have been missed.

The findings from this review have several important implications. They suggest that many treatment guidelines, which were shaped by STAR*D, may be based on strategies whose effectiveness is not confirmed by blinded, placebo-controlled evidence. The discrepancy between STAR*D’s outcomes and the results of controlled trials highlights the powerful role of non-pharmacological factors in treating depression. These factors, such as patient expectancy and the therapeutic relationship, may account for much of the improvement seen in open-label settings.

Future research should focus on conducting high-quality, blinded trials for second- and third-step depression treatments to provide clinicians and patients with clearer guidance. The review also suggests that findings from pragmatic trials should be interpreted with caution until they are validated by more rigorous studies.

The study, “What if STAR*D Had Been Placebo-Controlled? A Critical Reexamination of a Foundational Study in Depression Treatment,” was authored by Kevin P. Kennedy, Jonathan P. Heldt, and David W. Oslin.

In shock discovery, scientists link mother’s childhood trauma to specific molecules in her breast milk

A new study published in Translational Psychiatry reports that mothers with a history of adverse childhood experiences tend to have a distinct molecular profile in their breast milk. These differences in specific microRNAs and fatty acids were also associated with aspects of their infants’ temperament in the first year of life.

A growing body of evidence suggests that the health consequences of early life stress can be transmitted across generations. Adverse childhood experiences, such as abuse, neglect, or household dysfunction, can have lasting effects on an individual’s mental and physical health. Research also indicates that children of parents who were exposed to such adversity are at a higher risk for developing their own behavioral and metabolic issues.

Scientists are working to identify the biological pathways through which these effects might be passed down. Breast milk, a complex fluid rich in bioactive compounds that influence infant development, presents a plausible route for this transmission.

“Our main motivation was to examine the relevance of breast milk to the emerging concept of ‘transgenerational trauma’. Our previous work identified a role for sperm epigenetics in potential biological transmission of psychiatric disease susceptibility through the patriline (fathers),” said study author Ali Jawaid, principal investigator at the Translational Neuropsychiatry Research Group (TREND Lab) at the Polish Center for Technology Development.

“Breast milk introduces an additional pathway that is relevant for matrilineal (mothers) influences. We wanted to test whether epigenetic signatures of adverse childhood experiences in mothers could be detected in their breast milk, and whether they are associated with early behavioral measures in their infants. This is, indeed, what the study showed.”

The researchers conducted a prospective study with 103 mother-child pairs from Wroclaw, Poland. The participants were assessed at birth, and then again at 5 and 12 months after birth. At the 5-month visit, mothers provided breast milk samples and completed questionnaires about their infant’s temperament.

At the 12-month visit, mothers completed a questionnaire to assess their own history of adverse childhood experiences before the age of 12. This timing was chosen to prevent any stress from the questionnaire from influencing the composition of the milk samples.

The research team analyzed the breast milk for two types of molecules: microRNAs, which are small molecules that help regulate which genes are turned on or off, and fatty acids, which are fundamental components of fats. The mothers were categorized into a “high adversity” group if they had experienced two or more traumatic events in childhood, and a “low adversity” group if they had experienced zero or one. The scientists then compared the molecular composition of the milk between these two groups.
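To make the grouping step concrete, here is a minimal sketch of that kind of two-group comparison in Python. The file name, column names, and the choice of a Mann-Whitney U test are illustrative assumptions, not the study’s actual statistical pipeline.

```python
# Illustrative sketch only: the file, column names, and the Mann-Whitney U
# test are assumptions, not the authors' actual analysis.
import pandas as pd
from scipy.stats import mannwhitneyu

# Hypothetical table: one row per mother, with an ACE count and a measured
# milk microRNA level (e.g., normalized expression of miR-142-3p).
df = pd.read_csv("milk_samples.csv")  # columns: ace_count, mir_142_3p

# Dichotomize adversity as described above: two or more events = high.
df["adversity"] = (df["ace_count"] >= 2).map({True: "high", False: "low"})

high = df.loc[df["adversity"] == "high", "mir_142_3p"]
low = df.loc[df["adversity"] == "low", "mir_142_3p"]

# Nonparametric comparison of microRNA levels between the two groups.
stat, p = mannwhitneyu(high, low, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```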

The analysis revealed distinct differences in the milk’s microRNA content. Milk from mothers in the high adversity group showed elevated levels of three specific microRNAs, identified as miR-142-3p, miR-142-5p, and miR-223-3p.

Further analysis indicated a positive correlation, suggesting that as a mother’s number of adverse childhood experiences increased, the levels of these three microRNAs in her breast milk also tended to increase. These associations were present even when accounting for symptoms of postpartum depression, which did not differ between the groups.

“We were surprised that the alterations of microRNAs in milk were not mediated or confounded by postpartum depression,” Jawaid told PsyPost. “One might expect that mothers with more adverse childhood experiences would also have higher postpartum depression, and that this could explain the effects observed in milk. However, this was not the case in our cohort.”

The researchers also identified differences in the fatty acid composition of the breast milk. Specifically, mothers in the high adversity group had lower concentrations of medium-chain fatty acids in their milk compared to mothers in the low adversity group. This finding held true even after the researchers statistically controlled for other factors that could influence fatty acid levels, such as the mother’s dietary fat intake and body mass index.

“Signatures of childhood traumatic experiences can persist biologically for a long time and can be detectable even in body fluids such as breast milk,” Jawaid said. “A next step will be to examine whether enriching experiences or therapy before or during pregnancy can modify these signals.”

The researchers then explored potential links between these molecular signatures in the milk and the infants’ temperament, which was assessed through maternal reports. The results suggest a connection. For example, higher expression of miR-142-5p in breast milk was associated with infants showing more high-intensity pleasure at 12 months. At the same time, lower expression of this microRNA was linked to infants showing more distress when faced with limitations.

Similarly, the levels of medium-chain fatty acids in the milk were associated with certain infant behaviors. Higher concentrations of these fatty acids were correlated with a greater “falling reactivity,” which reflects an infant’s reaction to loss of support or balance, at 5 months of age. Other types of fatty acids in the milk were also linked to temperamental traits such as activity level and soothability at 12 months.

“Epigenetic signatures in milk were associated with different early temperaments in newborns,” Jawaid told PsyPost. “However, this should NOT be interpreted as breastfeeding being harmful. Breast milk is protective in many ways. We need more work to clarify whether these epigenetic signals in the milk that are impacted by mothers’ childhood adversity are just biomarkers or transmit risk or adaptability and resilience to the next generation.”

The authors note some limitations of their work. The findings are based on correlations, which means the study identifies associations but cannot prove that the changes in breast milk directly cause the differences in infant temperament. The study was also conducted with a specific group of participants from an urban Polish population, so the results may not apply to all populations.

“The study involved 103 mother-child dyads from a highly educated urban Polish cohort, so the implications should be interpreted with nuance,” Jawaid noted. “Still, the findings show that maternal adverse childhood experiences are associated with measurable epigenetic alterations in human breast milk, and that these signatures relate to early infant behavioral profiles.”

The researchers also emphasize that these findings should not be interpreted to discourage breastfeeding, as breast milk provides numerous established benefits for infant health. The study instead points to early life trauma as a public health issue with long-lasting biological consequences, highlighting the need to develop strategies that might mitigate the transmission of risk across generations.

“This study should not be misused to blame mothers or to argue for formula feeding,” Jawaid explained. “Breast milk provides many protective factors, and we cannot say — at this point — that altered epigenetic factors in milk lead to psychiatric disease risk. Importantly, our previous work has also identified trauma related epigenetic changes in sperm. The biological and psychosocial contributions of both mothers and fathers matter. Trauma is the problem, not mothers or fathers.”

For future research, the scientists are planning studies with animal models to better understand the potential causal links between milk components and offspring outcomes. They are also continuing to follow the children from this study as they grow older to see how these early associations relate to later health and behavior.

“Our long-term goal is to identify biomarkers and mechanisms of intergenerational transmission of psychiatric disease risk in humans,” Jawaid said. “We are following this cohort longitudinally, and are studying parallel cohorts in Bosnia, Pakistan, and Rwanda. Ultimately, we hope to develop biomarkers, guidelines and mitigation strategies to prevent the transmission of psychiatric risk across generations.”

The study, “Differential microRNAs and metabolites in the breast milk of mothers with adverse childhood experiences,” was authored by Weronika Tomaszewska, Anna Apanasewicz, Magdalena Gomółka, Maja Matyas, Patrycja Rojek, Marek Szołtysik, Magdalena Babiszewska-Aksamit, Bartlomiej Gielniewski, Bartosz Wojtas, Anna Ziomkiewicz and Ali Jawaid.

Altered sense of self in psychosis traced to the spinal cord

A recent study suggests that individuals with psychotic disorders process sensations they produce themselves, such as their own touch or heartbeat, differently from people without these conditions. This altered processing appears not only in the brain but also at the level of the spinal cord, potentially affecting the fundamental sense of self. The findings, published in Molecular Psychiatry, provide a deeper look into the biological underpinnings of self-disturbance in psychosis.

Psychotic disorders like schizophrenia are often characterized by a disrupted sense of self. This can manifest in symptoms like hallucinations or delusions, where individuals might misattribute their own inner thoughts or actions to an outside source. Researchers have long theorized that these complex symptoms may originate from more fundamental difficulties in processing basic bodily signals.

A team of scientists, primarily from Linköping University in Sweden, sought to investigate this idea by examining how the nervous system handles sensations that are self-generated compared to those that come from the external world. Their goal was to use a variety of methods to get a comprehensive picture of self-related processing across different sensory systems.

“Schizophrenia is a complex disorder, and its underlying neurobiological mechanisms are still not understood. Especially, how hallucinations and delusions develop and are maintained remains unclear,” said study author Rebecca Böhme, an associate professor at the Center for Social and Affective Neuroscience at Linköping University.

“We hypothesized that a disturbance in the ability to identify self-produced sensations can underlie these symptoms, for example when one’s own thoughts are not identified as ‘self-produced,’ then they might cause the experience of voices in the head or being controlled by outside forces. Similar for touch: not identifying self-evoked tactile sensations can cause the feeling of ‘something else’ touching you, which the brain then will try to explain – potentially with a quite irrational story, because the brain always looks for causes to its experiences. It might for example come up with the idea that an invisible demon is following and controlling you through touch.”

The research team conducted a series of experiments with 35 patients diagnosed with psychotic disorders and 35 healthy control participants who were matched for age and sex. The experiments were designed to measure neural and behavioral responses to both touch and internal body signals.

“A very common misconception is that schizophrenia means having two or more personalities,” Böhme noted. “Schizophrenia is a complicated psychiatric condition, where affected individuals experience symptoms like hallucinations and delusions, but also depression, executive dysfunction, and difficulties in social interactions.”

In one part of the study, participants underwent functional magnetic resonance imaging, or fMRI, which measures brain activity by detecting changes in blood flow. While in the scanner, they were asked to perform or receive gentle strokes on their left forearm. The conditions included touching their own arm, being touched by an experimenter, and touching a pillow as a control for movement.

The results indicated that when participants with psychosis touched themselves, a brain region known as the right superior temporal gyrus showed significantly higher activation compared to the control group. In healthy individuals, the brain tends to show reduced activity in response to self-touch, a phenomenon thought to occur because the sensation is predictable. The heightened activation in the patient group suggests there may be a mismatch between the brain’s prediction of the sensation and the sensory information it actually receives.

To investigate sensory processing at an even earlier stage, the researchers used a technique to measure somatosensory evoked potentials. This method involves delivering small, non-painful electrical pulses to a nerve in the hand and then recording the speed and strength of that signal as it travels up the spinal cord and into the brain. These measurements were taken during different conditions, including self-touch and other-touch.

In the control group, there was a measurable difference in the timing of the signal at the spinal cord level between self-touch and other-touch. For the patient group, this distinction was significantly smaller, providing evidence that the ability to differentiate between self and other may be altered at a very basic level of the nervous system.

A behavioral experiment provided further support for these neural findings. The researchers measured participants’ tactile thresholds, or the lightest touch they could feel, using a set of fine filaments. This test was also conducted during self-touch and other-touch.

Healthy controls tended to be less sensitive to the filaments while touching themselves, consistent with the idea that the brain dampens the perception of predictable, self-produced sensations. The patients with psychosis did not show this difference in sensitivity between the two conditions, suggesting an alteration in this sensory filtering mechanism.

The researchers also explored interoception, which is the sense of the internal state of the body. Participants performed a heartbeat detection task where they tried to tap a button in sync with their own heartbeat without feeling their pulse. The patients were found to be less accurate in this task compared to controls. Both groups performed equally well on a control task where they tapped along to a recorded heartbeat, indicating that the difficulty was specific to perceiving internal signals and not related to general attention or motor coordination.

Finally, the researchers measured heartbeat-evoked potentials, which are the brain’s electrical responses to the signals from the heart. The analysis showed that patients with psychosis had a reduced brain response to their own heartbeat signals.

Together, these interoceptive findings point to a broad disruption in the processing of self-generated internal signals, which are essential for maintaining a stable sense of one’s own body. The researchers also found that the degree of alteration in touch-related measures was associated with the severity of certain symptoms, and the brain activity during self-touch was a strong predictor of group membership.

The results show “that it is crucial for all of us to be able to differentiate between ‘self’ and ‘other,'” Böhme told PsyPost. “This basic self-other-distinction forms the basis of our self-experience. If this ability is altered or disrupted in some way, also our higher order sense of self will be affected.”

“Schizophrenia, the condition we studied here, is an example of such an alteration. It has been suggested before that schizophrenia can be understood as a disorder of the self. The other key take-away is that this difference is not only sensed in the brain but already affects earlier processing, like in our study the neural processing of self- or other-touch in the spinal cord.”

But the study, like all research, has some limitations. The patient sample was taking medication, which could potentially influence sensory processing, although the researchers conducted additional analyses that did not suggest a clear medication effect. The patients also had relatively low levels of active symptoms and included a mix of different psychotic disorders.

The researchers suggest that future work could examine individuals at earlier stages of the illness or before they begin treatment to see if these findings hold. Investigating these mechanisms further could open new avenues for therapies aimed at correcting the fundamental sensory and self-processing disruptions seen in psychosis.

“My lab studies the sense of self in many different psychiatric conditions and life situations,” Böhme said. “We have also investigated self-other-distinction for example in autism, ADHD, and anorexia. In my newest study, I will investigate how the sense of self is altered in times of grief, i.e. when losing a loved one, and whether people with complicated grief can be better supported using the psychedelic substance psilocybin.”

The study, “Altered processing of self-produced sensations in psychosis at cortical and spinal levels,” was authored by Paula C. Salamone, Adam Enmalm, Reinoud Kaldewaij, Marie Åman, Charlotte Medley, Michal Pietrzak, Håkan Olausson, Andrea Johansson Capusan, and Rebecca Boehme.

Wikipedia’s news sources show a moderate liberal leaning

A recent study suggests that the news media sources cited across the English version of Wikipedia have a moderate but consistent liberal bias. This pattern appears to persist even when the factual reliability of the sources is taken into account. The research was published in 2024 in the journal Online Information Review.

Wikipedia is a vast, collaboratively edited online encyclopedia that has become a primary source of information for millions of people worldwide. The platform operates on three core content policies: maintaining a neutral point of view, ensuring information is verifiable through reliable sources, and prohibiting original research. These principles are intended to make its articles balanced and accurate.

Because of these policies, the quality and neutrality of Wikipedia depend heavily on the external sources its volunteer editors cite. News media outlets are a significant source of these citations. The researchers behind this study sought to investigate whether the selection of these news sources introduced a political leaning into the encyclopedia, potentially affecting its commitment to a neutral point of view.

To conduct their analysis, the researchers began with a large public dataset called Wikipedia Citations, which contains over 29 million citations from more than 6 million articles in the English Wikipedia. They extracted the domain name, such as nytimes.com or foxnews.com, from each cited link. This process allowed them to identify the specific news outlets being referenced.

Next, the team enriched this data using two external rating systems. For political leaning, they used data from the Media Bias Monitor, a system that calculates a political polarization score for a news outlet based on the self-reported political leanings of its audience on Facebook. This score ranges from -2 (very liberal) to +2 (very conservative), with 0 representing a moderate or balanced audience.

For factual reliability, they turned to ratings from Media Bias Fact Check, an organization that assesses the accuracy of news sources. This service rates outlets on a scale from “VERY HIGH” for sources that are consistently factual to “VERY LOW” for those that rarely use credible information. By matching the domains from Wikipedia to these two databases, the researchers could assign both a political polarization score and a reliability rating to millions of individual citations.
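As an illustration of this kind of enrichment step, the sketch below reduces each cited URL to a domain and joins it to two hypothetical rating tables. The file names, column names, and matching rules are assumptions for illustration; they are not the authors’ code or the real dataset schemas.

```python
# Illustrative sketch: file and column names are assumptions, not the
# authors' pipeline or the actual dataset schemas.
from urllib.parse import urlparse
import pandas as pd

citations = pd.read_csv("wikipedia_citations.csv")      # column: cited_url
polarization = pd.read_csv("media_bias_monitor.csv")    # columns: domain, polarization (-2 to +2)
reliability = pd.read_csv("media_bias_fact_check.csv")  # columns: domain, reliability

def extract_domain(url: str) -> str:
    """Reduce a cited URL to a bare domain such as 'nytimes.com'."""
    netloc = urlparse(url).netloc.lower()
    return netloc[4:] if netloc.startswith("www.") else netloc

citations["domain"] = citations["cited_url"].map(extract_domain)

# Attach a polarization score and a reliability rating to each citation.
enriched = (
    citations
    .merge(polarization, on="domain", how="inner")
    .merge(reliability, on="domain", how="inner")
)

print("Mean polarization of matched citations:", enriched["polarization"].mean())
```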

The analysis of political polarization provided evidence of a consistent lean. The average score for all news sources cited in Wikipedia was -0.51, which falls on the liberal side of the spectrum. The distribution of scores showed that the majority of news citations came from outlets with polarization scores between -1 (liberal) and 0 (moderate).

This tendency was not confined to articles on political topics. The liberal-leaning pattern was observed across broad subject categories, including Culture, Geography, and STEM (science, technology, engineering, and math). It also appeared in articles associated with various editor communities, known as WikiProjects, from Politics and India to Biography and Military History. This suggests the effect is widespread across the encyclopedia.

The researchers then explored if this political leaning was connected to the factual reliability of the sources. One might speculate that editors favor sources with a certain political leaning because they perceive them as more factually reliable. To examine this relationship, they used a statistical technique known as multiple linear regression, which can help determine how different factors, like reliability and article topic, are associated with an outcome, in this case, the political polarization score.
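A minimal sketch of such a model, treating reliability rating and article topic as categorical predictors of each citation’s polarization score, might look like the following. The formula, column names, and input file are assumptions for illustration; the study’s actual model specification may differ.

```python
# Illustrative sketch: an ordinary least squares regression with categorical
# predictors; C() dummy-codes each category. Column names are assumptions.
import pandas as pd
import statsmodels.formula.api as smf

enriched = pd.read_csv("enriched_citations.csv")  # columns: polarization, reliability, topic

model = smf.ols("polarization ~ C(reliability) + C(topic)", data=enriched).fit()
print(model.summary())
```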

The model indicated a complex relationship rather than a simple one. For instance, sources rated as “High” in reliability tended to lean liberal, while those rated “Very High” tended to lean conservative. At the other end of the spectrum, sources with “Mixed” reliability were associated with a liberal leaning, while sources with “Low” and “Very Low” reliability were associated with a conservative leaning.

This outcome suggests there is no simple, direct line connecting higher reliability with one particular political viewpoint in Wikipedia’s sources. The tendency to cite liberal-leaning outlets does not appear to be a byproduct of a straightforward preference for the most reliable sources available. The moderate liberal bias in sourcing appears to be a distinct phenomenon.

The authors acknowledge several limitations to their work. The study’s conclusions depend on the specific methodologies of the external rating services used to measure political leanings and reliability. Additionally, the analysis focused on a one-dimensional political spectrum from liberal to conservative, which does not capture the full complexity of political viewpoints.

Another limitation is that the analysis was conducted at the domain level, meaning it assessed the general leaning of an entire news outlet, not the content of a specific article cited. A news article from a generally liberal-leaning outlet could itself be perfectly neutral. The study also focused exclusively on the English version of Wikipedia, and the patterns may differ in other languages.

Future research could expand on this study by analyzing other language versions of Wikipedia to see if similar patterns exist. A more granular analysis that examines the content of individual news articles and how they are used to support claims within Wikipedia would also offer deeper insights. Understanding the underlying reasons for this sourcing bias, whether it stems from the media landscape or the demographics of Wikipedia’s editors, remains an open area for investigation.

The study, “Polarization and reliability of news sources in Wikipedia,” was authored by Puyu Yang and Giovanni Colavizza.

New psychology research sheds light on the dark side of intimate touch

A new study suggests that certain personality traits and past relationship patterns are linked to whether an individual avoids intimate touch or uses it to control their romantic partner. The research also indicates that the mechanisms behind these behaviors may differ between men and women. The findings were published in the journal Current Psychology.

The study, led by University of Virginia PhD student Emily R. Ives and Binghamton University Professor Richard Mattson, focused on a set of personality characteristics known as the Dark Triad: Machiavellianism, psychopathy, and narcissism. Machiavellianism refers to a strategic and cynical approach to manipulating others for personal gain. Psychopathy is primarily marked by a lack of empathy, impulsivity, and shallow emotions, while narcissism involves a grandiose sense of entitlement and a persistent need for admiration.

The researchers also examined attachment theory, which proposes that our early life experiences with caregivers shape our expectations and behaviors in adult relationships. These experiences can lead to insecure attachment styles, such as an anxious style characterized by a fear of rejection, or an avoidant style marked by discomfort with closeness.

The researchers sought to understand if there was a connection between these concepts and the less-studied aspects of physical touch. While touch is often seen as a positive force in relationships, some people experience touch aversion, finding physical affection intrusive. Others may engage in coercive touch, using physical contact not for affection but to exert dominance or manipulate a partner.

“I was interested in touch as a tool for communication in relationships. While research in this area often describes touch as a means to communicate positive emotions and provide support, it can also be used to communicate ‘darker’ messages—those that convey power over one’s partner and facilitate self-serving motivations,” explained Ives.

“My research team and I were also interested in whether those who would use touch in a coercive fashion would also themselves demonstrate a discomfort with being touched affectionately. If so, one potential factor that could explain this overall negative orientation toward touch was a characterological discomfort with interpersonal closeness and proximity, known as attachment style, which is further linked to psychopathic, narcissistic, and Machiavellian personality traits.”

“These dispositions are collectively referred to as the ‘Dark Triad’ and center on being manipulative and self-oriented. We reasoned that individuals would be the most likely to wield touch in untoward ways towards others, including romantic partners, to the extent they were in some way insecure in their interpersonal relationships and also demonstrated these Dark Triad traits.”

For their study, the researchers recruited 526 undergraduate students who were currently in a romantic relationship. Participants completed a series of questionnaires designed to measure their attachment style, assessing their levels of attachment anxiety and avoidance. They also completed a survey to measure their levels of the three Dark Triad personality traits.

Finally, they answered questions about their experiences with physical intimacy, specifically focusing on touch aversion and the use of coercive touch. Questions related to touch aversion assessed the extent to which participants found physical contact from their partner to be intrusive or uncomfortable, often leading them to actively avoid being touched. Coercive touch, on the other hand, was evaluated by asking whether participants ever used physical contact as a tool to assert control, express dominance, or manipulate their partner into compliance.

The initial results provided evidence for their main hypothesis. The analysis showed that the shared element among the three Dark Triad traits, which can be described as an antagonistic interpersonal style, was associated with both greater touch aversion and a higher tendency to use coercive touch.

The findings also suggested a pathway from early relationship patterns to these touch behaviors. Individuals with higher levels of either anxious or avoidant attachment tended to report higher levels of Dark Triad traits. These personality traits then appeared to function as an intermediary, linking the insecure attachment styles to the negative touch outcomes.

“Touch is a powerful tool used to communicate many things in relationships, from love and support to control over one’s partner, and not all people are receptive to touch in either case,” Ives and Mattson told PsyPost. “The use of touch as a form of manipulation can vary over time within and across different relationships, but for some may represent a more stable interpersonal approach or trait.”

Interestingly, when the researchers examined the data separately for men and women, a more complex picture emerged. The proposed pathway appeared to hold true mainly for women. For female participants, insecure attachment styles were associated with higher scores on the Dark Triad traits, and these traits fully accounted for their increased likelihood of reporting touch aversion and using coercive touch.

The pattern for men was different. For them, attachment insecurity seemed to have a more direct impact on touch behaviors, with Dark Triad traits playing a less significant role. Men who reported a more avoidant attachment style also tended to report greater touch aversion directly, independent of their personality scores. Similarly, men with a more anxious attachment style were more likely to use coercive touch, a connection that did not appear to be explained by their Dark Triad traits.

This suggests that for women in the study, a tendency toward manipulative or antagonistic personality traits may be a key factor driving negative touch behaviors. For men, these same behaviors might be more directly tied to their underlying insecurities and fears about relationships, such as a fear of abandonment or a discomfort with emotional vulnerability. An additional analysis confirmed that coercive touch was distinct from outright physical aggression, suggesting it is a unique form of manipulation within relationships.

“We were interested in potential gender differences in one’s orientation towards touch, but this line of research is so new that we did not have much on which to build specific predictions,” the researchers explained. “In that sense, I think we could say that finding stark differences in the reasons why women and men used or oriented to touch in problematic ways was somewhat surprising!”

“Put simply, issues related to touch for men boiled down to relationship insecurity regardless of other traits whereas this emerged for relationally insecure women only when they were also elevated on Dark Triad personality characteristics. In hindsight, it is possible that women are more socialized in our society to use touch to communicate and therefore, women high in Dark Triad traits may feel more comfortable using this communication method to manipulate their partners.”

“This is not to say men do not have methods of manipulating their partners, but that potentially they do this in different ways. For instance, it is also possible that men higher on Dark Triad traits use methods other than touch to manipulate or ensure compliance, such as physical or psychological aggression.”

The study has some limitations to consider. The participants were primarily white, heterosexual undergraduate students from one university, so the findings may not apply to other populations. The study design was also correlational, meaning it identifies associations between variables but cannot prove that one causes another.

“Our sample was made up entirely of undergraduates whose psychiatric history is not known to us,” Ives and Mattson noted. “That is to say, we have no idea if any of these people would be diagnosable as psychopathic, for instance, or if there are other traits that run alongside the Dark Triad that can better explain our findings.”

“Correspondingly, it is also important to highlight that coercive touch and touch aversion were normally distributed, meaning that many individuals in the sample reported some use of coercive touch or times when they reacted negatively to touch. Our findings suggest that this is more prevalent as individuals are more insecure in relationships and/or carry certain personality characteristics. Simply because your partner used touch in a coercive way or withdrew from a hug does not therefore imply that they are Machiavellian, psychopathic, or narcissistic.”

“Finally, this is just one study on a relatively restricted group. It would be great if more research could be done on this relationship as well as the relationships between personality, attachment, and touch as a whole.”

The study, “The dark side of touch: how attachment style impacts touch through dark triad personality traits,” was authored by Emily R. Ives, Bridget N. Jules, Samantha L. Anduze, Samantha Wagner, and Richard E. Mattson.

A woman’s choice of words for her genitals is tied to her sexual well-being, study finds

A new study suggests that the names women use for their genitals are associated with their body image, sexual pleasure, and certain health behaviors. The research indicates that using playful or childish terms for genitals in everyday life is linked to more negative outcomes, while using vulgar terms during sex is connected to more positive sexual experiences. The findings were published in the journal Sex Roles.

While many scholars and educators believe language shapes our body image, this idea has rarely been put to the test. A team of researchers led by Tanja Oschatz of Johannes-Gutenberg-University and Rotem Kahalon of Bar-Ilan University aimed to provide the missing scientific evidence, particularly regarding the terms women use for their genitals.

“For years, both feminist scholars and sex educators have emphasized that language matters—that the words we use to talk about our bodies can shape how we feel about them. Yet, despite this widely accepted idea, there was surprisingly little empirical evidence showing how this plays out with regards to women’s genitals,” Oschatz told PsyPost.

“Although there were studies from 20 years ago that catalogued the many terms women use to describe their genitals, no one had examined whether using different terms is actually linked to women’s feelings, attitudes, or behaviors. Our first goal was to close this gap.”

“Secondly, we wanted to update previous findings on women’s genital naming. Language is constantly evolving—especially around gender, sexuality, and the body. What women call their genitals today may carry different meanings and social implications than it did two decades ago, and we wanted to capture this contemporary picture.”

To conduct their study, the researchers surveyed 457 women of diverse ages from the United States. Participants were asked what terms they most commonly use to refer to their genitals in two different scenarios: a general, non-sexual context and a partnered, sexual context.

The women also completed a series of questionnaires designed to measure their feelings and attitudes. These included scales assessing their genital self-image, their overall sexual pleasure, orgasm frequency, attitudes toward oral sex, and certain health-related behaviors, such as the use of vaginal cleaning products and their openness to labiaplasty, a type of cosmetic genital surgery.

After collecting the terms, the researchers performed a content analysis and grouped the words into nine distinct categories. These categories included anatomical (e.g., “vagina,” “vulva”), vulgar (e.g., “pussy”), playful/childish (e.g., “hoo-ha,” “vajayjay”), and euphemisms (e.g., “down there,” “private parts”), among others. The team then used statistical analyses to see if using terms from a particular category was associated with the participants’ self-reported attitudes and behaviors.

In the general, non-sexual context, the study found that a majority of women, about 75%, reported using at least one anatomical term, with “vagina” being the most frequent. However, playful/childish terms and euphemisms were also common, each used by roughly 15% of the participants.

“We found that genital naming among women is very diverse—and that it depends strongly on the context,” Oschatz explained. “For example, when women were asked, ‘What term do you generally use?’, the majority mentioned at least one anatomical term such as ‘vagina’ or ‘vulva.’ In contrast, when asked what term they use in a sexual context, most women reported using more informal or vulgar terms like ‘pussy.’ Compared to data from twenty years ago, we also found that the term ‘vulva’ (referring to the outer parts of women’s genitals) and words referring to the clitoris have become more common, suggesting a more differentiated and anatomically informed vocabulary today.”

The researchers found that women who used playful/childish terms tended to report a more negative genital self-image. This connection appeared to extend to other areas as well. The use of these terms was also linked to a lower perception of a partner’s enjoyment of giving oral sex, a greater likelihood of using vaginal cleaning products, and a higher interest in getting labiaplasty.

The researchers found that a more negative genital self-image helped explain the connection between using playful terms and the greater openness to labiaplasty, as well as the lower perceived partner enjoyment of oral sex. This suggests that the negative feelings women have about their genitals may be a key factor driving these other outcomes.

“Our findings show that the words women use are indeed related to their attitudes and experiences,” Oschatz told PsyPost. “Women who used childish terms such as ‘hoo-ha’ or ‘vajayjay’ tended to report more negative feelings about their genitals. These terms were also linked to sexual and health behaviors and attitudes—such as a more negative perception of partner’s oral sex enjoyment, greater use of vaginal cleaning products, and higher openness to labiaplasty.”

When the researchers analyzed the terms women used in a sexual context with a partner, the linguistic landscape changed significantly. In this setting, the most common category was vulgar terms, with nearly 45% of women reporting their use. The most frequent word in this category was “pussy.” Anatomical terms were the second most common.

The analysis showed that using vulgar terms during sex was associated with positive sexual outcomes. Women who used these terms reported experiencing greater general sexual pleasure, more frequent orgasms, and a stronger desire to receive oral sex.

“Context really matters,” Oschatz emphasized. “The associations between language and attitudes differed depending on when the terms were used. For instance, childish terms were linked to more negative feelings only when used in non-sexual contexts, but not during sexual ones. Interestingly, using the word ‘pussy’ in sexual contexts was associated with greater sexual pleasure and more frequent orgasms. This suggests that a word once considered derogatory may now be reclaimed by many women and carry an element of empowerment.”

Contrary to what researchers expected, the use of euphemisms was not associated with a negative genital self-image or any other adverse outcomes in the study.

“We were surprised to find that using euphemisms—vague and indirect terms like ‘down there’ or ‘private area’—was not associated with more negative attitudes toward women’s own genitals,” Oschatz said. “We had expected that these terms might carry an element of shame or discomfort, which could be linked to a more negative genital self-image. However, our findings suggest otherwise. Instead, it was really the use of childish language that was related to negative feelings and attitudes.”

As with all research, the study has some limitations. The participants were predominantly white and highly educated, so the findings may not apply to women from other racial, ethnic, or socioeconomic backgrounds where language and cultural norms may differ. The research was also focused exclusively on cisgender women.

Additionally, because the study shows a correlation, it cannot determine causation. It is unclear if using certain words influences a woman’s feelings and behaviors, or if her existing feelings and behaviors influence her choice of words. It is also possible that the relationship works in both directions.

Future research could explore these dynamics in more diverse populations and use methods that help establish the direction of the relationship over time. Researchers also suggest a deeper exploration into the complex nature of reclaimed terms to better understand how and when they contribute to a sense of genuine empowerment.

The study’s authors note that the findings have practical implications, particularly for health and education. Discouraging infantilizing language and promoting the use of accurate anatomical terms in medical, educational, and family settings could help reduce shame and improve body literacy and well-being among women.

The study, “Vagina, Pussy, Vulva, Vag – Women’s Names for Their Genitals are Differentially Associated with Sexual and Health Outcomes,” was authored by Tanja Oschatz, Verena Klein, Veronica Kovalcik, and Rotem Kahalon.

Neural synchrony is shaped by both relationship type and task demands

A new study finds that the alignment of brain activity between two people, known as interpersonal neural synchrony, varies based on their relationship and what they are doing together. The research, published in NeuroImage, found that mother-child pairs tend to have lower synchrony than adult friends or romantic partners, and that passively sharing an experience can sometimes generate more neural alignment than active cooperation.

Scientists have long been interested in how our brains coordinate during social interactions, but research findings have often been difficult to compare. Studies have examined different types of relationships, from strangers to romantic partners, and used a wide range of tasks, from simple cooperation games to open-ended conversations. This variability has made it challenging to build a cohesive understanding of the factors that govern brain-to-brain synchrony.

To address this, a team of researchers from the University of Trento in Italy and the University of Vienna in Austria, led by PhD student Alessandro Carollo and Professor Gianluca Esposito, designed a study to systematically investigate two key dimensions: interpersonal closeness and the level of social interactivity.

“Most neuroscience research on social interaction has relied on highly controlled, artificial tasks where people are tested alone and exposed to social stimuli such as faces or voices on a screen. While this work has deepened our understanding of how the brain processes social information, it tells us little about what happens during real interactions between people,” the researchers told PsyPost.

“In this study, we wanted to explore how the brain supports active coordination and communication in natural settings — across different kinds of relationships, from close friends to romantic partners and mothers with their children. Our goal was to identify the neural mechanisms that allow people to connect with one another in everyday life.”

The researchers recruited 142 pairs of participants, divided into three groups based on their relationship: 70 dyads of close friends, 39 romantic partner dyads, and 33 mother-child dyads. Each pair’s brain activity was recorded simultaneously using a technique called functional near-infrared spectroscopy, or fNIRS. This non-invasive method involves wearing a cap with sensors that use light to measure changes in blood oxygen levels in the brain, providing a proxy for neural activity.

During the experiment, each pair engaged in three distinct activities designed to elicit different levels of interaction. In a passive condition, they watched a short animated video together without speaking. In a structured active condition, they played a cooperative game of Jenga. In an unstructured active condition, they engaged in a five-minute free-form conversation.

The researchers focused on activity in two brain areas on both sides of the brain: the inferior frontal gyrus and the temporoparietal junction. These regions are known to be involved in social cognitive processes like understanding others’ actions and intentions.

First, the team confirmed that the synchrony they observed was meaningful. They compared the brain alignment in the real pairs to that of “surrogate” pairs, created by randomly matching data from individuals who had not actually interacted. The real pairs showed significantly higher neural synchrony, indicating that the alignment was a genuine product of the shared social experience. This effect was particularly strong for connections involving the right inferior frontal gyrus, a brain area associated with action observation and imitation.
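The surrogate-pair logic can be illustrated with a short sketch. Here a plain Pearson correlation stands in for the synchrony measure; fNIRS hyperscanning studies typically use a dedicated metric such as wavelet transform coherence, so treat the correlation measure, array shapes, and names below as assumptions rather than the study’s method.

```python
# Illustrative sketch: Pearson correlation stands in for the synchrony
# metric; real fNIRS analyses typically use wavelet transform coherence.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one time series per participant, with person A of
# pair i in a_signals[i] and person B of pair i in b_signals[i].
n_pairs, n_timepoints = 142, 3000
a_signals = rng.standard_normal((n_pairs, n_timepoints))
b_signals = rng.standard_normal((n_pairs, n_timepoints))

def synchrony(x: np.ndarray, y: np.ndarray) -> float:
    """Stand-in synchrony measure: Pearson correlation of two time series."""
    return float(np.corrcoef(x, y)[0, 1])

# Synchrony within the pairs that actually interacted.
real = np.array([synchrony(a_signals[i], b_signals[i]) for i in range(n_pairs)])

# Surrogate pairs: shift the B members so no one is matched with their
# actual partner, giving a baseline for chance-level alignment.
shifted = np.roll(np.arange(n_pairs), 1)
surrogate = np.array([synchrony(a_signals[i], b_signals[shifted[i]]) for i in range(n_pairs)])

# If genuine interaction drives alignment, real-pair synchrony should
# exceed the surrogate baseline on average.
print("real mean:", real.mean(), "surrogate mean:", surrogate.mean())
```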

“We found that people’s brains do tend to ‘sync up’ when they interact, and that this synchrony is influenced by both who we are interacting with and how,” Carollo and Esposito explained. “Even simple, everyday moments of connection can lead to similar patterns of brain activity among people.”

When examining the effect of relationship type, the study produced an unexpected outcome. The researchers had hypothesized that mother-child pairs, representing a foundational attachment bond, would show the highest levels of synchrony.

Instead, they found that both close friends and romantic partners exhibited significantly higher synchrony than mother-child pairs. This could suggest that the brains of close-knit adults, which are fully mature and have a long history of attuning to social equals, may coordinate more readily. The lower synchrony in mother-child pairs might also reflect ongoing developmental processes in the child’s brain, which is still maturing in its ability to engage in complex social coordination.

The results related to social activity were also contrary to the team’s initial predictions. They had expected that more active and unstructured interactions would require greater neural coordination, leading to higher synchrony. However, the data revealed that across all groups, the passive task of watching a video together produced the highest overall synchrony. The structured cooperative game ranked second, while the unstructured free conversation was associated with the lowest levels of synchrony. This pattern was most clear in the adult-adult pairs.

“We actually expected the opposite pattern regarding interpersonal closeness and social interactivity,” Carollo and Esposito told PsyPost. “We thought that closer relationships and more interactive contexts would show higher levels of brain synchrony. Instead, synchrony was sometimes stronger in less interactive settings, such as when close friends watched a movie together. This suggests that simply sharing an experience, being present together, can promote alignment between brains, even without active communication.”

The researchers suggest that when two people passively observe the same dynamic stimulus, their brains process the information in a similar way and at a similar pace, leading to a strong, stimulus-driven alignment. In more open-ended interactions like conversation, the social signals are less predictable and more complex, potentially leading to less consistent moment-to-moment alignment across the measured brain regions.

“The level of synchrony isn’t always higher in closer or more interactive relationships,” Carollo and Esposito noted. “In some cases, greater alignment may actually reflect the brain’s effort to coordinate during newer or less familiar interactions.”

“It’s tempting to think that higher neural synchrony means ‘better’ communication or a stronger emotional bond, but that’s not always the case. Synchrony reflects coordination at the neural level, not the depth or quality of a relationship. It’s also shaped by factors like attention, task structure, and developmental stage.”

“And importantly, neural synchrony is not a form of telepathy. It doesn’t mean people are literally sharing thoughts. Instead, it likely reflects how the brain aligns with shared rhythms of communication, things like gaze, speech, gestures, and mutual attention, that help us stay ‘in tune’ with and possibly understand each other during social interactions.”

A more detailed analysis showed that the effect of the task depended on the specific brain regions involved. While many brain connections were most synchronized during the passive video-watching task, synchrony between the left inferior frontal gyrus of both participants, and between one person’s left inferior frontal gyrus and the other’s right temporoparietal junction, peaked during the cooperative Jenga game.

This suggests that while passive shared experience drives one form of neural alignment, active, goal-directed cooperation relies on the coordination of a different set of neural pathways involved in joint action and strategic thinking.

“Our results suggest that neural synchrony can act as a kind of ‘neural signature’ of social coordination, much like the behavioral, physiological, or hormonal synchrony seen in earlier studies,” Carollo and Esposito said. “Over time, these brain-to-brain measures could help us better understand how social experiences shape development, relationships, and individual differences in social functioning. Ultimately, our findings reinforce the idea that the human brain is profoundly social. It is wired to respond to, adapt to, and resonate with the dynamics of our interpersonal world.”

The study has some limitations. The data for the adult pairs and mother-child pairs were collected in two different countries using slightly different equipment, though procedures were standardized as much as possible. The study also did not include fine-grained behavioral analysis, which could link specific actions like eye contact or gestures to fluctuations in neural synchrony. The inherent age difference between mother-child pairs and adult pairs also makes it difficult to completely separate the effects of relationship type from developmental factors.

Future research could build on these findings by including other types of relationships, such as non-parental adult-child pairs or child-child friendships, to better isolate the influence of development and interpersonal closeness.

“We hope to contribute to a more standardized framework for studying neural synchrony across labs,” the researchers said. “Future work will look more closely at how synchrony relates to specific relational variables, for instance, co-regulation in mother–child pairs or the quality of friendship among peers. In the longer term, we aim to understand how brain-to-brain alignment functions in larger groups and whether it might predict collective performance in collaborative or team settings.”

The study, “Interpersonal neural synchrony across levels of interpersonal closeness and social interactivity,” was authored by Alessandro Carollo, Andrea Bizzego, Verena Schäfer, Carolina Pletti, Stefanie Hoehl, and Gianluca Esposito.

Hair shine linked to perceptions of youth and health in women

A new study provides evidence that specific hair characteristics, namely alignment and shine, play a significant part in how a woman’s age, health, and attractiveness are perceived. The research, published in the International Journal of Cosmetic Science, suggests that women with straighter and shinier hair are consistently judged as being younger, healthier, and more attractive.

Scientific investigations into female physical appearance have historically concentrated on facial features like symmetry or skin condition. In many of these studies, information about hair is intentionally removed, either by having participants wear a hairband or by digitally editing it out of images. This approach has left a gap in understanding how hair, which is easily altered, contributes to social perceptions.

“Research investigating female physical appearance mostly considered the role of facial features in assessments of, for example, attractiveness. Hair has typically been removed or covered in rating studies,” said study author Bernhard Fink, who is affiliated with the Department of Evolutionary Anthropology at the University of Vienna and is the CEO of Biosocial Science Information.

“Yet people report high concern with the appearance of their hair, and poor hair condition can impact self-perception and self-esteem. We had evidence from previous research using computer-generated (rendered) female hair that human observers are sensitive to even subtle variations of hair diameter, density, and style. Here, we extend this evidence to the study of natural hair wigs, worn by female models, and the systematic manipulation of hair alignment, shine, and volume.”

The research consisted of two experiments in which female participants rated images of a woman wearing different wigs. In the first experiment, the researchers focused specifically on the impact of hair shine. They prepared 10 pairs of natural Caucasian hair wigs that varied in color, length, and style, including straight and curly options. For each pair, one wig was treated to be high-shine, while the other was treated with a dry shampoo to appear low-shine.

A female model wore each of the 20 wigs and was photographed from a three-quarter back view, so that her facial features were not visible. These image pairs were then shown to 1,500 female participants from three countries: the United States, Germany, and Spain. Participants were asked to look at each pair and choose which image showed a woman who appeared younger, healthier, or more attractive.

The results of this first experiment were consistent across all three countries. For nearly all wig styles, the high-shine version was selected as appearing more youthful and more attractive. The preference for high-shine hair was even stronger when participants were asked to judge health, with the shiny version being chosen for all 10 hair types. This suggests that hair shine is a potent signal of health and vitality that is recognizable across different Western cultures.

The second experiment was designed to explore a more complex picture by adding two more hair features: alignment and volume. The researchers prepared wigs in both neutral blonde and dark brown. They created eight different versions for each color by combining high and low levels of shine, alignment, and volume. For example, one wig might have high alignment (very straight), high shine, and low volume.

A model was photographed wearing each of these wigs from three different angles: front, three-quarter front, and three-quarter back. A group of 2,000 women in the United States then rated the resulting images for youth, health, and attractiveness. This design allowed the researchers to determine the relative importance of each hair feature and see if the effects changed with hair color or viewing angle.

The findings from this experiment pointed to hair alignment as the most influential factor. Hair that was straight-aligned was consistently perceived as the most youthful, healthy, and attractive, regardless of its color or the angle from which it was viewed. High shine also had a positive effect on ratings, though its impact was not as strong as that of straight alignment.

“Most participants provided their assessments of hair images on mobile devices,” Fink noted. “One would assume that subtle variations in hair condition are evident only when presented and viewed on larger screens. This was not the case. Although the hair manipulations were subtle, especially those of alignment and shine, participants were sensitive and provided systematic responses. This has practical implications, as consumers’ assessment of ‘beautiful hair,’ e.g., through viewing on the Internet, influences their wishes for their own hair.”

In contrast, high volume did not receive such positive assessments. The combination that was rated most favorably across the board was hair with high alignment, high shine, and low volume. The study also detected some minor interactions between the hair features and the viewing angle, but the main effects of alignment and shine were far more significant. These results suggest that the smooth, orderly appearance of straight, shiny hair sends powerful positive signals.

“The key message of the study is that female head hair plays a role in assessments of age, health, and attractiveness,” Fink told PsyPost. “Straight hair and shiny hair are perceived as youthful, healthy, and attractive. This observation was made by systematically manipulating hair features using natural hair wigs, thus carefully controlling hair alignment, shine, and volume. High volume did not have as positive an impact on hair assessments as alignment and shine had. The positive effect of shiny hair on assessments was observed in raters from three countries (USA, Germany, Spain).”

But as with all research, there are limitations to consider. The experiments used only natural Caucasian hair wigs, so the findings may not apply to hair from other ethnic groups, which can have different fiber characteristics. It is also worth noting that the participants in both experiments were women judging images of other women. This means the results capture a female-to-female perspective, and it remains unclear whether these preferences would be shared by male observers.

The researchers also disclosed that several authors are employees of or consultants for The Procter & Gamble Company, a leading manufacturer of hair care products. “I would like to note that this study was conducted in collaboration with partners from Procter & Gamble,” Fink said. “This is important because the systematic manipulations of hair were made by professional stylists. Likewise, the imaging setup required work, resulting in a system dedicated to capturing the subtle hair feature variations, especially those that result from light interacting with hair fibers.”

For future directions, the researchers aim to expand this research to non-Western populations, such as in Asian countries, to see if the preference for shiny, aligned hair is a more universal phenomenon. Examining how hair features interact with other variables, like skin pigmentation, is another avenue for further investigation.

The study, “Perceptions of female age, health and attractiveness vary with systematic hair manipulations,” was authored by Susanne Will, Mandy Beckmann, Kristina Kunstmann, Julia Kerschbaumer, Yu Lum Loh, Samuel Stofel, Paul J. Matts, Todd K. Shackelford, and Bernhard Fink.

Cognitive issues in ADHD and learning difficulties appear to have different roots

A new study reports that the widespread cognitive difficulties in children with learning problems appear to be a core feature of their condition, independent of their attentional behaviors. In contrast, the more limited cognitive challenges found in children with Attention Deficit Hyperactivity Disorder (ADHD) who do not have learning difficulties may be consequences of their inattention and hyperactivity. The research was published in the Journal of Attention Disorders.

Children with ADHD and those with specific learning difficulties often exhibit overlapping challenges with attention and certain thinking skills. This has led researchers to question the nature of this relationship: Are the difficulties with memory and processing simply a side effect of being inattentive or hyperactive? A team of researchers sought to disentangle these factors to better understand the underlying cognitive profiles of these distinct but frequently co-occurring conditions.

“While there have been previous studies that examined the link between ADHD symptoms and learning or cognitive skills in groups of children with ADHD or learning difficulties, there has been no study that examined how ADHD symptoms influence cognitive skills that are key to learning in these neurodivergent groups,” said study author Yufei Cai, a PhD researcher in the Department of Psychiatry at the University of Cambridge.

“Understanding how ADHD attentional behaviors influence these cognitive skills that are essential for successful learning in these neurodivergent populations can offer suggestions for designing interventions that might improve cognitive or learning functioning in these neurodivergent groups.”

To investigate, the researchers performed a detailed analysis of existing data from the Centre for Attention, Learning, and Memory, a large cohort of children referred for concerns related to attention, memory, or learning. They selected data from 770 children, aged 5 to 18, and organized them into four distinct groups. These groups included children with a diagnosis of ADHD only, children with learning difficulties only, children with both ADHD and learning difficulties, and a comparison group of children with no known neurodevelopmental conditions.

Each child had completed a broad range of standardized tests. These assessments measured fundamental cognitive skills such as verbal and visuospatial short-term memory, working memory (the ability to hold and manipulate information temporarily), processing speed, and sustained attention. Higher-level executive functions, like the ability to flexibly shift between different tasks or rules, were also evaluated. Alongside these direct assessments, parents provided ratings of their child’s daily behaviors related to inattention and hyperactivity or impulsivity.

“Our study is the first to date that has (1) a relatively large neurodivergent sample size, (2) a comprehensive battery of cognitive and learning measures, and (3) the inclusion of a co-occurring condition group of those with both ADHD and learning difficulties to examine the extent to which elevated scores of ADHD symptoms can account for the group differences in cognitive skills that are key to learning between these neurodivergent and comparison groups,” Cai told PsyPost.

“The study aims to characterize the cognitive profiles of these three neurodivergent groups, as well as examine the associations between ADHD symptoms (i.e., inattentive and hyperactive/impulsive behaviors) and cognitive skills that are key to learning in children with ADHD, learning difficulties, and those with both conditions.”

The core of the analysis involved a two-step comparison. First, the researchers compared the performance of the four groups across all the cognitive tests to identify where differences existed. Next, they applied a statistical approach to see what would happen to these differences if they mathematically adjusted for each child’s level of parent-rated inattention and hyperactivity. If a group’s cognitive weakness disappeared after this adjustment, it would suggest the cognitive issue might be a consequence of attentional behaviors. If the weakness remained, it would point to a more fundamental cognitive deficit.
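The logic of that two-step comparison can be illustrated with a small, hypothetical analysis. The sketch below is not the authors' code; the data file and column names are invented, and the study's actual models were more elaborate.

```python
# Hypothetical illustration of the two-step comparison described above.
# Variable and column names are invented; this is not the study's code.
import pandas as pd
import statsmodels.formula.api as smf

# Assume one row per child with:
#   group         - "ADHD", "LD", "ADHD+LD", or "comparison"
#   cog_score     - a standardized cognitive test score
#   inattention   - parent-rated inattention
#   hyperactivity - parent-rated hyperactivity/impulsivity
df = pd.read_csv("calm_subsample.csv")  # hypothetical file

# Step 1: do the groups differ on the cognitive measure?
unadjusted = smf.ols("cog_score ~ C(group)", data=df).fit()
print(unadjusted.summary())

# Step 2: do those differences survive adjustment for ADHD symptoms?
adjusted = smf.ols(
    "cog_score ~ C(group) + inattention + hyperactivity", data=df
).fit()
print(adjusted.summary())

# If the group terms shrink toward zero in the adjusted model, the deficit
# may be accounted for by attentional behaviors; if they persist, it looks
# more like a core feature of the condition.
```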

The results revealed a clear divergence between the groups. Children with learning difficulties, both with and without a co-occurring ADHD diagnosis, displayed a broad pattern of lower performance across many cognitive domains. They showed weaknesses in short-term memory, working memory, processing speed, sustained attention, and the ability to sequence information.

When the researchers statistically accounted for levels of inattention and hyperactivity, these cognitive deficits largely persisted. This outcome suggests that for children with learning difficulties, these cognitive challenges are likely foundational to their condition, not just a byproduct of attentional issues.

The profile for children with ADHD only was quite different. This group performed at age-appropriate levels on many of the cognitive tasks, including verbal short-term memory, working memory, processing speed, and sustained attention. They did show some specific difficulties, particularly in visuospatial short-term memory and the ability to quickly sequence numbers or letters.

However, these particular challenges were no longer apparent after the statistical analysis adjusted for their levels of inattention and hyperactivity. This finding indicates that for these children, their attentional behaviors may directly interfere with performance on certain cognitive tasks.

One specific challenge did appear to be independent of attentional behaviors for the ADHD only group. Their difficulty with set shifting, or mentally switching between different task rules, remained even after accounting for inattention and hyperactivity. This points to a more specific executive function challenge in ADHD that may not be fully explained by its primary behavioral symptoms.

Overall, the findings paint a picture of two different neurodevelopmental pathways. For children with learning difficulties, core cognitive weaknesses appear to drive their academic struggles. For many children with ADHD alone, their primary attentional challenges may be what creates more limited and specific hurdles in their cognitive performance.

“Children with learning difficulties, either with or without ADHD, had lower levels of cognitive skills than children with ADHD without co-occurring learning difficulties and those in the comparison group,” Cai explained. “Elevated levels of inattention and hyperactive/impulsive behaviors did not influence the low cognitive performance observed in these children. Instead, this lower cognitive performance may be more closely associated with their learning ability, which is central to their neurodevelopmental characteristics.”

“However, these attentional behaviors are closely linked to the more limited cognitive challenges observed in children with ADHD without co-occurring learning difficulties. Understanding whether neurodivergent children with ADHD, learning difficulties, or both experience cognitive or learning-related challenges provides a valuable framework for designing targeted intervention and support strategies.”

But as with all research, the study includes some limitations. The group with learning difficulties was identified based on low scores on academic tests rather than formal clinical diagnoses of conditions like dyslexia or dyscalculia, which might not be perfectly equivalent. The study’s design provides a snapshot at a single point in time, so it cannot capture how these relationships might evolve as children develop.

Future research could build on these findings by following children over several years to observe these developmental trajectories directly. Incorporating a wider array of cognitive measures and gathering behavioral information from multiple sources, including teachers, could also help create an even more detailed understanding. Such work could help refine support strategies, ensuring that interventions are targeted to a child’s specific profile of cognitive and behavioral needs.

The study, “Associations Between ADHD Symptom Dimensions and Cognition in Children With ADHD and Learning Difficulties,” was authored by Yufei Cai, Joni Holmes, and Susan E. Gathercole.

Men’s brains shrink faster with age, deepening an Alzheimer’s mystery

A new large-scale brain imaging study suggests that the normal process of aging does not affect female brains more severely than male brains. In fact, the findings indicate that men tend to experience slightly greater age-related decline in brain structure, a result that challenges the idea that brain aging patterns explain the higher prevalence of Alzheimer’s disease in women. The research was published in the Proceedings of the National Academy of Sciences.

Alzheimer’s disease is a progressive neurodegenerative condition that impairs memory and other essential cognitive functions. It is the most common cause of dementia, and women account for a significant majority of cases worldwide. Because advancing age is the single greatest risk factor for developing Alzheimer’s, researchers have long wondered if sex-based differences in how the brain ages might contribute to this disparity.

Previous studies on this topic have produced mixed results, with some suggesting men’s brains decline faster and others indicating the opposite. To provide a clearer picture, an international team of researchers led by scientists at the University of Oslo set out to investigate this question using an exceptionally large and diverse dataset. They aimed to determine if structural changes in the brain during healthy aging differ between men and women, and if any such differences become more pronounced with age.

“Women are diagnosed with Alzheimer’s disease more often than men, and since aging is the main risk factor, we wanted to test whether men’s and women’s brains change differently with age. If women’s brains declined more, that could have helped explain their higher Alzheimer’s prevalence,” said study author Anne Ravndal, a PhD candidate at the University of Oslo.

To conduct their investigation, the researchers combined data from 14 separate long-term studies, creating a massive dataset of 12,638 magnetic resonance imaging (MRI) scans from 4,726 cognitively healthy participants. The individuals ranged in age from 17 to 95 years old. The longitudinal nature of the data, with each person being scanned at least twice over an average interval of about three years, allowed the team to track brain changes within individuals over time.

Using this information, they measured changes in several key brain structures, including the thickness and surface area of the cortex, which is the brain’s outer layer responsible for higher-level thought.

The analysis began by examining the raw changes in brain structure without any adjustments. In this initial step, the team found that men experienced a steeper decline than women in 17 different brain measures. These included reductions in total brain volume, gray matter, white matter, and the volume of all major brain lobes. Men also showed a faster thinning of the cortex in visual and memory-related areas and a quicker reduction in surface area in other regions.

Recognizing that men typically have larger heads and brains than women, the researchers performed a second, more nuanced analysis that corrected for differences in head size. After this adjustment, the general pattern held, though some specifics changed. Men still showed a greater rate of decline in the occipital lobe volume and in the surface area of the fusiform and postcentral regions of the cortex. In contrast, women only exhibited a faster decline in the surface area of a small region within the temporal lobe.

The findings were in line with the researchers' expectations. “Although earlier studies have shown mixed findings, especially for cortical regions, our results align with the overall pattern that men show slightly steeper age-related brain decline,” Ravndal told PsyPost. “Still, it was important to demonstrate this clearly in a large longitudinal multi-cohort sample covering the full adult lifespan.”

The study also revealed age-dependent effects, especially in older adults over 60. In this age group, men showed a more rapid decline in several deep brain structures, including the caudate, nucleus accumbens, putamen, and pallidum, which are involved in motor control and reward. Women in this age group, on the other hand, showed a greater rate of ventricular expansion, meaning the fluid-filled cavities within the brain enlarged more quickly.

Notably, after correcting for head size, there were no significant sex differences in the rate of decline of the hippocampus, a brain structure central to memory formation that is heavily affected by Alzheimer’s disease.

The researchers also conducted additional analyses to test the robustness of their findings. When they accounted for the participants’ years of education, some of the regions showing faster decline in men were no longer statistically significant.

Another analysis adjusted for life expectancy. Since women tend to live longer than men, a man of any given age is, on average, closer to the end of his life. After accounting for this “proximity to death,” several of the cortical regions showing faster decline in men became non-significant, while some areas in women, including the hippocampus in older adults, began to show a faster rate of decline. This suggests that differences in longevity and overall biological aging may influence the observed patterns.

“Our findings add support to the idea that normal brain aging doesn’t explain why women are more often diagnosed with Alzheimer’s,” Ravndal said. “The results instead point toward other possible explanations, such as differences in longevity and survival bias, detection and diagnosis patterns, or biological factors like APOE-related vulnerability and differential susceptibility to pathological processes, though these remain speculative.”

The study, like all research, has some caveats to consider. The data were collected from many different sites, which can introduce variability. The follow-up intervals for the brain scans were also relatively short in the context of a human lifespan. A key consideration is that the participants were all cognitively healthy, so these findings on normal brain aging may not apply to the changes that occur in the pre-clinical or early stages of Alzheimer’s disease.

It is also important to note that although the study identified several statistically significant differences in brain aging between the sexes, the researchers characterized the magnitude of these effects as modest. For example, in the pericalcarine cortex, men showed an annual rate of decline of 0.24% compared to 0.14% for women, a difference of just one-tenth of a percentage point per year.
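To put those numbers in perspective, under the simplifying assumption that the annual rates compound unchanged, a man's pericalcarine cortex would retain roughly (1 − 0.0024)^20 ≈ 95.3 percent of its starting thickness after 20 years, versus roughly 97.2 percent for a woman, a cumulative gap of about two percentage points over two decades.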

“The sex differences we found were few and small,” Ravndal told PsyPost. “Importantly, we found no evidence of greater decline in women that could help explain their higher Alzheimer’s disease prevalence. Hence, if corroborated in other studies, the practical significance is that women don’t need to think that their brain declines faster, but that other reasons underlie this difference in prevalence.”

Future research could explore factors such as differences in longevity, potential biases in how the disease is detected and diagnosed, or biological variables like the APOE gene, a known genetic risk factor that may affect men and women differently.

“We are now examining whether similar structural brain changes relate differently to memory function in men and women,” Ravndal said. “This could help reveal whether the same degree of brain change has different cognitive implications across sexes.”

The study, “Sex differences in healthy brain aging are unlikely to explain higher Alzheimer’s disease prevalence in women,” was authored by Anne Ravndal, Anders M. Fjell, Didac Vidal-Piñeiro, Øystein Sørensen, Emilie S. Falch, Julia Kropiunig, Pablo F. Garrido, James M. Roe, José-Luis Alatorre-Warren, Markus H. Sneve, David Bartrés-Faz, Alvaro Pascual-Leone, Andreas M. Brandmaier, Sandra Düzel, Simone Kühn, Ulman Lindenberger, Lars Nyberg, Leiv Otto Watne, Richard N. Henson, for the Australian Imaging Biomarkers and Lifestyle flagship study of ageing (AIBL), the Alzheimer’s Disease Neuroimaging Initiative (ADNI), Kristine B. Walhovd, and Håkon Grydeland.

Your politics are just as hot as your profile picture, according to new online dating study

A new study has found that a person’s political affiliation is a powerful factor in online dating choices, carrying about as much weight as physical attractiveness. At the same time, the research suggests that a willingness to date someone from an opposing party, a signal of political tolerance, is an even more desirable trait. The findings, published in Political Science Research and Methods, provide a nuanced look at how political divisions are shaping our most personal relationships.

The research was conducted by a team from Queen Mary University of London and the London School of Economics and Political Science. They were motivated by the observation that political polarization has begun to influence decisions far outside the voting booth, from hiring to personal friendships.

The researchers questioned whether this bias is purely about party labels, or if those labels act as a shorthand for other assumed characteristics, such as values or lifestyle. By focusing on the complex world of online dating, they sought to disentangle the raw effect of partisanship from the many other factors that guide the search for a partner.

To investigate these questions, the scientists designed a realistic online dating simulation for 3,000 participants in the United Kingdom. Each participant was shown a series of paired dating profiles and asked to choose which person they would prefer to date. The profiles were generated with a mix of randomly assigned traits, creating a wide variety of potential partners. This method, known as a conjoint experiment, allows researchers to precisely measure the independent influence of each characteristic on a person’s choice.

The profiles included key political attributes, such as party affiliation (Labour or Conservative) and political tolerance. The tolerance attribute was presented as a statement in the profile’s bio, either expressing openness (“Open to match with anyone”) or intolerance (“No Tories/Labour!”). Profiles also featured nonpolitical traits common on dating apps, including physical appearance, race, education level, height, and even dietary habits, such as being vegetarian. The use of actual photographs, pre-rated for attractiveness, was intended to make the experience more similar to using a real dating app.
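Because every attribute is randomized independently, effects like the percentage-point differences reported below can be estimated with a simple regression of the binary choice on the attributes. The sketch below is a simplified, hypothetical stand-in for the authors' estimation strategy; the data file and column names are invented.

```python
# Hypothetical sketch of estimating attribute effects in a conjoint experiment.
# Column names and the data file are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

# One row per profile shown; "chosen" is 1 if that profile was picked.
df = pd.read_csv("dating_conjoint.csv")

# A linear probability model: with randomized attributes, each coefficient
# approximates the average change (in probability of being chosen) caused
# by that attribute, holding the others constant.
model = smf.ols(
    "chosen ~ C(same_party) + C(tolerant_bio) + C(attractive_photo) "
    "+ C(university_degree) + C(vegetarian)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["respondent_id"]})
print(model.summary())
```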

The results showed that political identity has a substantial effect on dating decisions. On average, a person was 18.2 percentage points more likely to be chosen if they shared the same party affiliation as the participant. This effect was similar in magnitude to the preference for a physically attractive person and was twice as strong as the preference for a potential date with a university degree. This suggests that in the modern dating market, political alignment can be just as important as conventional standards of attraction.

However, the single most influential trait was not party affiliation, but political tolerance. A profile that signaled an openness to dating people from any political background was nearly 20 percentage points more likely to be chosen than a profile expressing intolerance. This preference for open-mindedness was slightly stronger than the preference for a shared party. Participants appeared to value tolerance even when evaluating someone from their own party, indicating a genuine appreciation for the trait rather than just an aversion to being rejected themselves.

The study also uncovered a notable asymmetry in partisan behavior. While supporters of both major parties preferred to date within their own political group, this tendency was much stronger on the left. Labour supporters were approximately twice as likely to choose a fellow Labour supporter compared to the rate at which Conservatives chose other Conservatives. This finding points to different social dynamics within the two partisan groups in the UK.

Another surprising asymmetry emerged when participants encountered profiles that defied political stereotypes. Conservative participants were more likely to select a Labour supporter who broke from the typical mold, for example, by being White or holding “traditional” values.

In contrast, Labour supporters were less likely to choose a Conservative profile that broke stereotypes, such as a Black or vegetarian Conservative. The researchers suggest this could be related to a negative reaction against individuals who violate strong group expectations, making them seem unfamiliar.

The researchers acknowledge certain limitations. The study focused only on Labour and Conservative supporters, which may not capture the full complexity of the UK’s multiparty political system. While the experiment identifies these differing preferences between partisan groups and genders, it does not fully explain the underlying psychological reasons for them. Future research could explore these motivations in greater depth.

Additional work might also examine the role of geography, as dating pool size and composition in urban versus rural areas could alter how people weigh political and nonpolitical traits. The influence of other major political identities, such as a person’s stance on Brexit, could also be a productive area for investigation.

The study’s findings suggest that while partisan divides are real and affect relationship formation, they are not absolute. An expressed sense of tolerance may be one of the most effective ways to bridge these political gaps in the personal sphere.

The study, “‘Sleeping with the enemy’: partisanship and tolerance in online dating,” was authored by Yara Sleiman, Georgios Melios, and Paul Dolan.

New study finds CBD worsens cannabis effects in schizophrenia

A new study has found that, contrary to expectations, pre-treatment with cannabidiol, or CBD, exacerbated the acute memory impairment and psychotic symptoms caused by cannabis in patients with schizophrenia. This research, which offers a more complex picture of how cannabinoids interact in this clinical population, was published in Neuropsychopharmacology.

Researchers have long observed that cannabis use can worsen symptoms and increase the risk of relapse in people diagnosed with schizophrenia. The adverse effects of cannabis are largely attributed to one of its main components, delta-9-tetrahydrocannabinol, or THC. Another major component of the cannabis plant is cannabidiol, or CBD.

While structurally similar to THC, CBD acts quite differently in the body and does not produce an intoxicating “high.” Its exact mechanism of action is still an area of active investigation, but it is thought to interact with the body’s endocannabinoid system in complex ways. One leading theory suggests CBD alters the function of the brain’s primary cannabinoid receptor, known as CB1, changing how it responds to THC and the body’s own cannabinoid molecules.

Because of these properties, CBD has been investigated as a potential treatment for psychosis. Several clinical trials have suggested that high doses of CBD can help reduce psychotic symptoms in people with schizophrenia. It also appears to have a favorable safety profile and is generally well-tolerated by patients, making it a promising candidate for a new therapeutic approach.

The question remained, however, whether CBD could also protect against the acute negative effects of THC. Previous experimental studies in healthy volunteers have produced mixed results. Some found that CBD could lessen THC-induced impairment, while others reported no effect or even an increase in some adverse effects. These discrepancies could be due to variations in dosage, the timing of administration, and whether the substances were inhaled or taken orally.

The new study was designed to clarify this relationship in a clinically relevant population: individuals with schizophrenia who also regularly use cannabis. The researchers hypothesized that a high dose of CBD given before cannabis use would protect against THC-induced memory problems and psychotic symptoms.

“Cannabis addiction is fairly common in people with schizophrenia and is linked to poor outcomes. I always encourage my patients to try and reduce their use, as this should improve their quality of life and risk of relapse, but there’s a large group of people who don’t want to stop,” said study author Edward Chesney, a clinical lecturer at King’s College London.

“Since CBD is being developed as a treatment for schizophrenia, and for cannabis addiction too, we designed this laboratory study to see if CBD could be used to prevent or reduce cannabis-induced psychosis. We therefore recruited people with schizophrenia who use cannabis, randomized them to treatment with a clinical dose of CBD or a placebo, and then gave them a large dose of vaporized cannabis.”

A randomized, double-blind, placebo-controlled, crossover trial is considered a robust method for testing interventions. Thirty participants, all diagnosed with schizophrenia or schizoaffective disorder and a co-occurring cannabis use disorder, completed the main part of the study. Each participant attended two separate experimental sessions.

In one session, they received a 1000 mg oral dose of CBD. In the other session, they received an identical-looking placebo capsule. The order in which they received CBD or placebo was random, and neither the participants nor the researchers knew which treatment was given on which day. Three hours after taking the capsule, to allow the CBD to reach its peak concentration in the body, participants inhaled a controlled dose of vaporized cannabis containing THC.

The researchers measured several outcomes. The primary measure of cognitive function was a test of delayed verbal recall, which assesses the ability to remember a list of words after a short delay. To measure psychotic symptoms, they used a standardized clinical interview called the Positive and Negative Syndrome Scale, focusing on the positive symptoms subscale which includes items like paranoia and disorganized thinking. The team also collected blood samples to measure the concentrations of THC and CBD in the participants’ systems.
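In a crossover design like this, each participant serves as their own control, so the natural analysis compares a person's outcome after CBD with the same person's outcome after placebo. A minimal sketch of that within-subject comparison is shown below; the numbers are invented, and the study's actual statistical models were likely more involved.

```python
# Hypothetical within-subject comparison for a two-condition crossover design.
# The arrays are invented; this is not the study's analysis code.
import numpy as np
from scipy.stats import ttest_rel

# Delayed recall scores (words remembered) for the same participants
# under each pre-treatment condition, in matching order.
recall_placebo = np.array([9, 7, 11, 8, 10, 6, 12, 9, 8, 10])
recall_cbd     = np.array([8, 6,  9, 7,  9, 5, 10, 8, 7,  9])

t_stat, p_value = ttest_rel(recall_cbd, recall_placebo)
print(f"Mean difference (CBD - placebo): {np.mean(recall_cbd - recall_placebo):.2f} words")
print(f"Paired t = {t_stat:.2f}, p = {p_value:.3f}")
```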

The results of the experiment were the opposite of what the researchers had predicted. When participants were pre-treated with CBD, their performance on the memory test was worse than when they were pre-treated with the placebo. On average, they recalled about 1.3 fewer words after receiving CBD compared to the placebo condition.

Similarly, the psychotic symptoms induced by cannabis were more severe following CBD pre-treatment. The average increase in the psychosis rating scale score was 5.0 points after CBD, compared to an increase of 2.9 points after the placebo. The researchers noted that large increases in these symptoms were observed in seven participants in the CBD condition. Specifically, CBD appeared to heighten cannabis-induced conceptual disorganization and feelings of suspiciousness.

“The effects were very clear and clinically meaningful,” Chesney told PsyPost. “Almost all the large psychotic reactions we observed were in the CBD pre-treatment group. The results were completely unexpected. We thought CBD would reduce the effects of THC, but the opposite happened — CBD actually increased THC’s adverse effects.”

“Interestingly, CBD didn’t change how strong or long the high felt, nor did it affect anxiety levels. I had initially assumed that CBD had increased all the effects of the cannabis, but it seems to have specifically increased the psychotic and cognitive symptoms for reasons we don’t yet understand.”

To understand why this might be happening, the researchers examined the blood samples. They looked for a pharmacokinetic interaction, which would occur if CBD changed the way the body metabolizes THC, perhaps by increasing the levels of THC in the blood. They found no evidence for this. The plasma concentrations of THC and its main active metabolite, 11-hydroxy-THC, were not significantly different between the CBD and placebo conditions. This suggests the effect was likely pharmacodynamic, meaning it relates to how the two substances interact with receptors and systems in the brain, rather than how they are processed by the body.

The findings highlight “that cannabinoids and the endocannabinoid system are very complex,” Chesney said. “We didn’t observe a pharmacokinetic interaction between CBD and THC, so perhaps there’s something more interesting at play – perhaps there’s something different about the brains of people with schizophrenia, or heavy cannabis users, which makes them sensitive to the effects of CBD as well as THC.”

The study has some limitations. The findings apply to a specific population of patients with both schizophrenia and a cannabis use disorder, and the results may not generalize to people with schizophrenia who do not use cannabis regularly. The experiment used a single high dose of CBD, and the effects could be different at other doses. Also, the cannabis dose was fixed by the researchers, which differs from real-world scenarios where users can adjust their intake.

Future research could explore whether these effects are present in people with schizophrenia who do not have a cannabis use disorder, or in people with a cannabis use disorder who do not have schizophrenia. This would help determine if the observed interaction is specific to the combination of these two conditions.

Despite these limitations, the study provides important information about the complex interactions between cannabinoids, particularly in a vulnerable clinical population. The results suggest that for patients with schizophrenia who use cannabis, taking CBD may not be a safe strategy to mitigate the harms of THC and could potentially make them worse.

“I don’t think this makes it less likely that CBD will work as a treatment for schizophrenia,” Chesney added. “It’s just a single study, and we only used a single dose of CBD. With antidepressants, for example, you often see an initial increase in anxiety levels and restlessness before you start to see some benefit. The results of clinical trials of CBD, where patients have received treatment for several weeks, are still very encouraging. I still come across lots of people who think that CBD is just a placebo; the results of my study suggest that it is definitely doing something.”

The study, “Does cannabidiol reduce the adverse effects of cannabis in schizophrenia? A randomised, double-blind, cross-over trial,” was authored by Edward Chesney, Dominic Oliver, Ananya Sarma, Ayşe Doğa Lamper, Ikram Slimani, Millie Lloyd, Alex M. Dickens, Michael Welds, Matilda Kråkström, Irma Gasparini-Andre, Matej Orešič, Will Lawn, Natavan Babayeva, Tom P. Freeman, Amir Englund, John Strang, and Philip McGuire.

Google’s AI co-scientist just solved a biological mystery that took humans a decade

Can artificial intelligence function as a partner in scientific discovery, capable of generating novel, testable hypotheses that rival those of human experts? Two recent studies highlight how a specialized AI developed by Google not only identified drug candidates that showed significant anti-fibrotic activity in a laboratory model of chronic liver disease but also independently deduced a complex mechanism of bacterial gene transfer that had taken human scientists years to solve.

The process of scientific discovery has traditionally relied on human ingenuity, combining deep expertise with creative insight to formulate new questions and design experiments. However, the sheer volume of published research makes it challenging for any single scientist to connect disparate ideas across different fields. A new wave of artificial intelligence tools aims to address this challenge by augmenting, and accelerating, human-led research.

One such tool is Google’s AI co-scientist, which its developers hope will significantly alter the landscape of biomedical research. Recent studies published in Advanced Science and Cell provide early evidence of this potential, showing the system’s ability to not only sift through vast datasets but also to engage in a reasoning process that can lead to high-impact discoveries.

Google’s AI Co-scientist: A Multi-Agent System for Discovery

Google’s AI co-scientist is a multi-agent system built upon the Gemini 2.0 large language model, designed to mirror the iterative process of the scientific method. It operates not as a single entity, but as a team of specialized AI agents working together. This structure is intended to help scientists generate new research ideas, create detailed proposals, and plan experiments.

The system operates through a “scientist-in-the-loop” model, where human experts can provide initial research goals, offer feedback, and guide the AI’s exploration using natural language. The specialized agents each handle a distinct part of the scientific reasoning process. The Generation Agent acts as a brainstormer, exploring scientific literature and engaging in simulated debates to produce initial ideas. The Reflection Agent serves as a peer reviewer, critically assessing these ideas for quality, novelty, and plausibility.

Other agents contribute to refining the output. The Ranking Agent runs an Elo-based tournament, similar to chess rankings, to prioritize the most promising hypotheses. The Evolution Agent works to improve top-ranked ideas by combining concepts or thinking in unconventional ways. A Meta-review Agent synthesizes all the feedback to improve the performance of the other agents over time. This collaborative, self-improving cycle is designed to produce increasingly novel and high-quality scientific insights.
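For readers curious how an Elo-style tournament might rank competing hypotheses, here is a minimal, hypothetical sketch. It is not Google's implementation; it only illustrates the standard Elo update applied to pairwise comparisons between hypotheses.

```python
# Minimal, hypothetical sketch of Elo-style ranking of hypotheses.
# Not Google's implementation; just the standard Elo update rule.
def expected_score(r_a: float, r_b: float) -> float:
    """Expected probability that A beats B given their ratings."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a: float, r_b: float, a_won: bool, k: float = 32.0):
    """Return updated ratings after one pairwise comparison."""
    e_a = expected_score(r_a, r_b)
    s_a = 1.0 if a_won else 0.0
    return r_a + k * (s_a - e_a), r_b + k * ((1.0 - s_a) - (1.0 - e_a))

# Every hypothesis starts at the same rating; pairwise "matches" (e.g., a
# judge deciding which of two hypotheses is stronger) adjust the ratings.
ratings = {"hypothesis_A": 1200.0, "hypothesis_B": 1200.0, "hypothesis_C": 1200.0}
matches = [("hypothesis_A", "hypothesis_B", True),   # A judged better than B
           ("hypothesis_C", "hypothesis_A", False),  # A judged better than C
           ("hypothesis_B", "hypothesis_C", True)]   # B judged better than C

for a, b, a_won in matches:
    ratings[a], ratings[b] = update(ratings[a], ratings[b], a_won)

for name, rating in sorted(ratings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {rating:.1f}")
```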

AI Pinpoints New Drug Candidates for Liver Fibrosis

In the study published in Advanced Science, researchers partnered with Google to explore new ways of treating liver fibrosis, a progressive condition marked by excessive scarring in the liver. Current treatment options are extremely limited, in part because existing models for studying the disease do not accurately replicate how fibrosis develops in the human liver. These limitations have hindered drug development for years.

To address this gap, the research team asked the AI co-scientist to generate new, testable hypotheses for treating liver fibrosis. Specifically, they tasked the AI with exploring how epigenomic mechanisms—chemical changes that influence gene activity without altering the DNA sequence—might be targeted to reduce or reverse fibrosis.

“For the data used in the paper, we provided a single prompt and received a response from AI co-scientist, which are shown in supplemental data file 1,” explained Gary Peltz, a professor at Stanford University School of Medicine. “The prompt was carefully prepared, providing the area (epigenomic effects in liver fibrosis) and experimental methods (use of our hepatic organoids) to focus on. However, in most cases, it is important to iteratively engage with an AI in order to better define the question and enable it to provide a more complete answer.”

The AI system scanned the scientific literature and proposed that three classes of epigenomic regulators could be promising targets for anti-fibrotic therapy: histone deacetylases (HDACs), DNA methyltransferase 1 (DNMT1), and bromodomain protein 4 (BRD4). It also outlined experimental techniques for testing these ideas, such as single-cell RNA sequencing to track how the drugs might affect different cell populations. The researchers incorporated these suggestions into their experimental design.

To test the AI’s proposals, the team used a laboratory system based on human hepatic organoids—three-dimensional cell cultures derived from stem cells that resemble key features of the human liver. These mini-organs contain a mix of liver cell types and can model fibrosis when exposed to fibrotic triggers like TGF-beta, a molecule known to promote scarring. The organoid system allowed researchers to assess not just whether a drug could reduce fibrosis, but also whether it would be toxic or promote regeneration of liver tissue.

The findings provided evidence that two of the drug classes proposed by AI (HDAC inhibitors and BRD4 inhibitors) showed strong anti-fibrotic effects. One of the tested compounds, Vorinostat, is an FDA-approved cancer drug. In the organoid model, it not only suppressed fibrosis but also appeared to stimulate the growth of healthy liver cells.

“Since I was working on the text for a grant submission in this area, I was surprised by the AI co-scientist output,” Peltz told PsyPost.

In particular, Peltz was struck by how little prior research had explored this potential. After checking PubMed, he found over 180,000 papers on liver fibrosis in general, but only seven that mentioned Vorinostat in this context. Of those, four turned out to be unrelated to fibrosis, and another only referenced the drug in a data table without actually testing it. That left just two studies directly investigating Vorinostat for liver fibrosis.

While the HDAC and BRD4 inhibitors showed promising effects, the third AI-recommended class, DNMT1 inhibitors, did not. One compound in this category was too toxic to the organoids to be considered viable for further study.

To evaluate the AI’s performance, Peltz also selected two additional drug targets for comparison based on existing literature. These were chosen precisely because they had more published support suggesting they might work against fibrosis.

But when tested in the same organoid system, the inhibitors targeting those well-supported pathways did not reduce fibrosis. This outcome suggested that the AI was able to surface potentially effective treatments that human researchers might have missed, despite extensive literature reviews.

Looking ahead, Peltz said his team is “developing additional data with our liver organoid system to determine if Vorinostat can be effective for reducing an established fibrosis, and we are talking with some organizations and drug companies about the potential for Vorinostat being tested as an anti-fibrotic agent.”

An AI Recapitulates a Decade-Long Discovery in Days

In a separate demonstration of its reasoning power, the AI co-scientist was challenged to solve a biological mystery that had taken a team at Imperial College London over a decade to unravel. The research, published in Cell, focused on a peculiar family of mobile genetic elements in bacteria known as capsid-forming phage-inducible chromosomal islands, or cf-PICIs.

Scientists were puzzled by how identical cf-PICIs were found in many different species of bacteria. This was unexpected because these elements rely on viruses called phages to spread, and phages typically have a very narrow host range, often infecting only a single species or strain. The human research team had already solved the puzzle through years of complex experiments, but their findings were not yet public.

They had discovered a novel mechanism they termed “tail piracy,” where cf-PICIs produce their own DNA-filled “heads” (capsids) but lack tails. These tailless particles are then released and can hijack tails from a wide variety of other phages infecting different bacterial species, creating chimeric infectious particles that can inject the cf-PICI’s genetic material into a new host.

To test the AI co-scientist, the researchers provided it only with publicly available information from before their discovery was made and posed the same question: how do identical cf-PICIs spread across different bacterial species?

The AI co-scientist generated five ranked hypotheses. Its top-ranked suggestion was that cf-PICIs achieve their broad host range through “capsid-tail interactions,” proposing that the cf-PICI heads could interact with a wide range of phage tails. This hypothesis almost perfectly mirrored the “tail piracy” mechanism the human team had spent years discovering.

The AI, unburdened by the researchers’ initial assumptions and biases from existing scientific models, arrived at the core of the discovery in a matter of days. When the researchers benchmarked this result, they found that other leading AI models were not able to produce the same correct hypothesis, suggesting a more advanced reasoning capability in the AI co-scientist system.

Limitations and the Path Forward

Despite these promising results, researchers involved in the work caution that significant limitations remain. The performance of the AI co-scientist has so far been evaluated on a small number of specific biological problems. More testing is needed to determine if this capability can be generalized across other scientific domains. The AI’s reasoning is also dependent on the quality and completeness of the publicly available data it analyzes, which may contain its own biases or gaps in knowledge.

Perhaps most importantly, human expertise remains essential. While an AI can generate a large volume of plausible hypotheses, it lacks the deep contextual judgment that comes from years of hands-on experience. An experienced scientist is still needed to evaluate which ideas are truly worth pursuing and to design the precise experiments required for validation. The challenge of how to prioritize AI-generated ideas is substantial, as traditional experimental pipelines are not fast or inexpensive enough to test every promising lead.

“Generally, AI output must be evaluated by people with knowledge in the area; and AI output is most valuable to those with domain-specific expertise because they are best positioned to assess it and to make use of it,” Peltz told PsyPost.

Nevertheless, these two studies provide evidence that AI systems are evolving from helpful assistants into true collaborative partners in the scientific process. By generating novel and experimentally verifiable hypotheses, tools like the AI co-scientist have the potential to supercharge human intuition and accelerate the pace of scientific and biomedical breakthroughs.

“I believe that AI will dramatically accelerate the pace of discovery for many biomedical areas and will soon be used to improve patient care,” Peltz said. “My lab is currently using it for genetic discovery and for drug re-purposing, but there are many other areas of bioscience that will soon be impacted. At present, I believe that AI co-scientist is the best in this area, but this is a rapidly advancing field.”

The study, “AI-Assisted Drug Re-Purposing for Human Liver Fibrosis,” was authored by Yuan Guan, Lu Cui, Jakkapong Inchai, Zhuoqing Fang, Jacky Law, Alberto Alonzo Garcia Brito, Annalisa Pawlosky, Juraj Gottweis, Alexander Daryin, Artiom Myaskovsky, Lakshmi Ramakrishnan, Anil Palepu, Kavita Kulkarni, Wei-Hung Weng, Zhuanfen Cheng, Vivek Natarajan, Alan Karthikesalingam, Keran Rong, Yunhan Xu, Tao Tu, and Gary Peltz.

The study, “Chimeric infective particles expand species boundaries in phage-inducible chromosomal island mobilization,” was authored by Lingchen He, Jonasz B. Patkowski, Jinlong Wang, Laura Miguel-Romero, Christopher H.S. Aylett, Alfred Fillol-Salom, Tiago R.D. Costa, and José R. Penadés.

The study, “AI mirrors experimental science to uncover a mechanism of gene transfer crucial to bacterial evolution,” was authored by José R. Penadés, Juraj Gottweis, Lingchen He, Jonasz B. Patkowski, Alexander Daryin, Wei-Hung Weng, Tao Tu, Anil Palepu, Artiom Myaskovsky, Annalisa Pawlosky, Vivek Natarajan, Alan Karthikesalingam, and Tiago R.D. Costa.

In neuroscience breakthrough, scientists identify key component of how exercise triggers neurogenesis

A recent study suggests that some of exercise’s brain-enhancing benefits can be transferred through tiny particles found in the blood. Researchers discovered that injecting these particles, called extracellular vesicles, from exercising mice into sedentary mice promoted the growth of new neurons in the hippocampus, a brain region important for learning and memory. The findings were published in the journal Brain Research.

Aerobic exercise can enhance cognitive functions, in part by stimulating the birth of new neurons in the hippocampus. This process, known as adult neurogenesis, is linked to improved learning, memory, and mood regulation. Understanding the specific mechanisms connecting physical activity to brain health could lead to new therapies for age-related cognitive decline and other neurological conditions.

However, the exact biological conversation between active muscles and the distant brain has remained largely a mystery. A leading hypothesis suggests that exercise releases specific factors into the bloodstream that travel to the brain and initiate these changes. Previous work has shown that blood plasma from exercising animals can be transferred to sedentary ones, resulting in cognitive benefits. This observation has narrowed the search for the responsible agents.

Among these potential messengers are extracellular vesicles, which are minuscule sacs released by cells that carry a diverse cargo of proteins, lipids, and genetic material. During exercise, tissues like muscle and liver release these vesicles into circulation at an increased rate. Because these particles are capable of crossing the protective blood-brain barrier, researchers proposed they might be a key vehicle for delivering exercise’s benefits directly to the brain.

“The hippocampus is critical for learning and memory and atrophy of the hippocampus is associated with common mental health problems like depression, anxiety, PTSD, epilepsy, Alzheimer’s disease and normal aging. So figuring out ways of increasing the integrity of the hippocampus is an avenue to pursue for addressing these problems,” explained study author Justin Rhodes, a professor at the University of Illinois, Urbana-Champaign.

“It is known that exercise increases the formation of new neurons in the hippocampus, and is a natural way to combat all the aforementioned mental health problems. It is further known that there are factors released into the blood that contribute to adult hippocampal neurogenesis. But most likely a mixture of factors rather than one magic chemical. The idea that extracellular vesicles containing lots of different kinds of molecules could communicate with complex chemical signatures from the blood to the brain and contribute to neurogenesis was not known until our study.”

To examine this, the researchers used two groups of mice. One group had unlimited access to running wheels for four weeks, while the other group was housed in cages with locked wheels, serving as a sedentary control. As expected, the running mice showed a significant increase in new brain cells in their own hippocampi, confirming the effectiveness of the exercise regimen.

After four weeks, the team collected blood from both the exercising and sedentary donor mice. From this blood, they isolated the extracellular vesicles using a filtration method that separates particles by size. This process yielded two distinct batches of vesicles: one from the exercising mice and one from the sedentary mice.

The team then administered these vesicles to a new set of sedentary recipient mice over a four-week period. These recipients were divided into three groups. One group received injections of vesicles from the exercising donors, a second group received vesicles from the sedentary donors, and a third group received a simple phosphate-buffered saline solution as a placebo.

To track the creation of new cells in the recipients’ brains, the mice were given injections of a labeling compound known as BrdU. This substance is incorporated into the DNA of dividing cells, effectively tagging them so they can be identified and counted later under a microscope. To ensure the reliability of their results, the entire experiment was conducted twice with two independent groups of mice.

The researchers found that mice that received vesicles from the exercising donors exhibited an approximately 50 percent increase in the number of new, BrdU-labeled cells in the hippocampus compared to mice that received vesicles from sedentary donors or the placebo solution. The findings were consistent across both independent cohorts, strengthening the conclusion that something within the exercise-derived vesicles was promoting cell proliferation.

“I was surprised that the vesicles were sufficient to increase neurogenesis,” Rhodes told PsyPost. “That is because when you exercise, there is not only the contribution of blood factors, but things going on in the brain like large amounts of neuronal activity in the hippocampus that I thought would be necessary for neurogenesis to occur. The results suggest apparently not, the vesicles alone without the other physiological components of actual exercise, are sufficient to increase neurogenesis to a degree, not the full degree, but to a degree.”

The researchers also examined the identity of these new cells to determine what they were becoming. Using fluorescent markers, they identified that, across all groups, about 89 percent of the new cells developed into neurons. A smaller fraction, about 6 percent, became a type of support cell called an astrocyte. This indicates that the vesicles from exercising mice increased the quantity of new neurons being formed, rather than changing what type of cells they became.

Finally, the team assessed whether the treatment affected the density of blood vessels in the hippocampus, as exercise is also known to promote changes in brain vasculature. By staining for a protein found in blood vessel walls, they measured the total vascular area. They found no significant differences in vascular coverage among the three groups, suggesting that the neurogenesis-promoting effect of the vesicles may be independent of vascular remodeling.

“One of the reasons exercise improves mental health is that it stimulates new neurons to form in an area of your brain that is important for learning and memory and for inhibiting stress, and now we know a big piece of the puzzle as to how exercise does this,” Rhodes said. “Exercise causes tissues like muscle and liver to secrete vesicles (sacs that contain lots of different kinds of chemicals) that reach the brain and stimulate neurogenesis.”

“Those vesicles can be taken from an animal that exercises and placed into an animal that is not exercising, and it can increase neurogenesis, not to the full level that exercise does, but significantly increase it. That strongly suggests the vesicles themselves are carrying critical information. One can imagine a therapy in the future where either vesicles are harvested from exercising humans from their blood and introduced into individuals or synthetic vesicles are made that carry the unique mixture of chemicals that are identified in the exercise vesicles.”

While the findings point to extracellular vesicles as key players in exercise-induced brain plasticity, the study also highlights several areas for future inquiry. A primary question is whether the vesicles directly act on the brain or if their effects are mediated by peripheral organs. It is not yet known what fraction of the injected vesicles crossed the blood-brain barrier. The vesicles could potentially trigger signals in other parts of the body that then influence the brain.

The specific molecular cargo within the vesicles responsible for the neurogenic effect also needs to be identified. A companion study by the same research group found that vesicles from exercising mice were enriched with proteins related to brain plasticity, antioxidant defense, and cellular signaling. Future work will be needed to pinpoint which of these molecules, or combination of molecules, is responsible for the observed increase in new neurons.

“I think there are a lot of ways this could go,” Rhodes told PsyPost. “First, it is a pretty big black box between injecting the animals with vesicles and neurogenesis happening in the hippocampus. How many of the extracellular vesicles make it to the brain? Are they acting in the brain or in the periphery, i.e., maybe via peripheral nerves, mesenteric nervous system, immune activation or other ways we didn’t think of yet.”

“If they are reaching the brain, how do they merge with brain cells? Do they reach astrocytes first? How do the vesicles get taken up by the brain cells? Is it through phagocytosis? What do the chemical signals do to the brain cells that causes increased neurogenesis? Do they act directly on neural progenitor cells, or astrocytes or mature neurons? What are the signaling mechanisms involved in the communication from the extracellular vesicles to the neurons/astrocytes/progenitor cells that causes neurogenesis to occur?”

The study, “Exercise-induced plasma-derived extracellular vesicles increase adult hippocampal neurogenesis,” was authored by Meghan G. Connolly, Alexander M. Fliflet, Prithika Ravi, Dan I. Rosu, Marni D. Boppart, and Justin S. Rhodes.

Scientists question caffeine’s power to shield the brain from junk food

A recent study provides evidence that while a diet high in fat and sugar is associated with memory impairment, habitual caffeine consumption is unlikely to offer protection against these negative effects. The findings, which come from two related experiments, help clarify the complex interplay between diet, stimulants, and cognitive health in humans. The research was published in Physiology & Behavior.

Researchers have become increasingly interested in the connection between nutrition and brain function. A growing body of scientific work, primarily from animal studies, has shown that diets rich in fat and sugar can impair memory, particularly functions related to the hippocampus, a brain region vital for learning and recall.

Human studies have started to align with these findings, linking high-fat, high-sugar consumption with poorer performance on memory tasks and with more self-reported memory failures. Given these associations, scientists are searching for potential protective factors that might lessen the cognitive impact of a poor diet.

Caffeine is one of the most widely consumed psychoactive substances in the world, and its effects on cognition have been studied extensively. While caffeine is known to improve alertness and reaction time, its impact on memory has been less clear. Some research in animal models has suggested that caffeine could have neuroprotective properties, potentially guarding against the memory deficits induced by a high-fat, high-sugar diet. These animal studies hinted that caffeine might work by reducing inflammation or through other brain-protective mechanisms. However, this potential protective effect had not been thoroughly investigated in human populations, a gap this new research aimed to address.

To explore this relationship, the researchers conducted two experiments. In the first experiment, they recruited 1,000 healthy volunteers between the ages of 18 and 45. Participants completed a series of online questionnaires designed to assess their dietary habits, memory, and caffeine intake. Their consumption of fat and sugar was measured using the Dietary Fat and Free Sugar Questionnaire, which asks about the frequency of eating various foods over the past year.

To gauge memory, participants filled out the Everyday Memory Questionnaire, a self-report measure where they rated how often they experience common memory lapses, such as forgetting names or misplacing items. Finally, they reported their daily caffeine consumption from various sources like coffee, tea, and soda.

The results from this first experiment confirmed a link between diet and self-perceived memory. Individuals who reported eating a diet higher in fat and sugar also reported experiencing more frequent everyday memory failures. The researchers then analyzed whether caffeine consumption altered this relationship. The analysis suggested a potential, though not statistically strong, moderating effect.

When the researchers specifically isolated the fat component of the diet, they found that caffeine consumption did appear to weaken the association between high fat intake and self-reported memory problems. At low levels of caffeine intake, a high-fat diet was strongly linked to memory complaints, but this link was not present for those with high caffeine intake. This provided preliminary evidence that caffeine might offer some benefit.
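In statistical terms, this kind of moderation is usually tested with an interaction term in a regression model. The sketch below is only an illustration of that general setup, using simulated data and hypothetical variable names rather than anything from the study itself:

```python
# Illustrative moderation (interaction) analysis; data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "fat_sugar": rng.normal(50, 10, n),     # dietary fat/sugar score
    "caffeine_mg": rng.normal(200, 80, n),  # self-reported daily caffeine intake
})
# Simulate memory complaints that rise with fat/sugar intake, with the diet effect
# weakening slightly as caffeine intake increases.
df["memory_failures"] = (
    20 + 0.4 * df["fat_sugar"]
    - 0.0005 * df["fat_sugar"] * df["caffeine_mg"]
    + rng.normal(0, 5, n)
)

# A reliable fat_sugar:caffeine_mg coefficient would indicate moderation, i.e. the
# diet-memory association changes depending on caffeine intake.
model = smf.ols("memory_failures ~ fat_sugar * caffeine_mg", data=df).fit()
print(model.summary().tables[1])
```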

The second experiment was designed to build upon the initial findings with a more robust assessment of memory. This study involved 699 healthy volunteers, again aged 18 to 45, who completed the same questionnaires on diet, memory failures, and caffeine use. The key addition in this experiment was an objective measure of memory called the Verbal Paired Associates task. In this task, participants were shown pairs of words and were later asked to recall the second word of a pair when shown the first. This test provides a direct measure of episodic memory, which is the ability to recall specific events and experiences.

The findings from the second experiment once again showed a clear association between diet and memory. A higher intake of fat and sugar was linked to more self-reported memory failures, replicating the results of the first experiment. The diet was also associated with poorer performance on the objective Verbal Paired Associates task, providing stronger evidence that a high-fat, high-sugar diet is connected to actual memory impairment, not just the perception of it.

When the researchers examined the role of caffeine in this second experiment, the results were different from the first. This time, caffeine consumption did not moderate the relationship between a high-fat, high-sugar diet and either of the memory measures. In other words, individuals who consumed high amounts of caffeine were just as likely to show diet-related memory deficits as those who consumed little or no caffeine.

This lack of a protective effect was consistent for both self-reported memory failures and performance on the objective word-pair task. The findings from this more comprehensive experiment did not support the initial suggestion that caffeine could shield memory from the effects of a poor diet.

The researchers acknowledge certain limitations in their study. The data on diet and caffeine consumption were based on self-reports, which can be subject to recall errors. The participants were also relatively young and generally healthy, and the effects of diet on memory might be more pronounced in older populations or those with pre-existing health conditions. Since the study was conducted online, it was not possible to control for participants’ caffeine intake right before they completed the memory tasks, which could have influenced performance.

For future research, the scientists suggest using more objective methods to track dietary intake. They also recommend studying different populations, such as older adults or individuals with obesity, where the links between diet, caffeine, and memory may be clearer. Including a wider array of cognitive tests could also help determine if caffeine has protective effects on other brain functions beyond episodic memory, such as attention or executive function. Despite the lack of a protective effect found here, the study adds to our understanding of how lifestyle factors interact to influence cognitive health.

The study, “Does habitual caffeine consumption moderate the association between a high fat and sugar diet and self-reported and episodic memory impairment in humans?,” was authored by Tatum Sevenoaks and Martin Yeomans.

Vulnerability to stress magnifies how a racing mind disrupts sleep

A new study provides evidence that a person’s innate vulnerability to stress-induced sleep problems can intensify how much a racing mind disrupts their sleep over time. While daily stress affects everyone’s sleep to some degree, this trait appears to make some people more susceptible to fragmented sleep. The findings were published in the Journal of Sleep Research.

Scientists have long understood that stress can be detrimental to sleep. One of the primary ways this occurs is through pre-sleep arousal, a state of heightened mental or physical activity just before bedtime. Researchers have also identified a trait known as sleep reactivity, which describes how susceptible a person’s sleep is to disruption from stress. Some individuals have high sleep reactivity, meaning their sleep is easily disturbed by stressors, while others have low reactivity and can sleep soundly even under pressure.

Despite knowing these factors are related, the precise way they interact on a daily basis was not well understood. Most previous studies relied on infrequent, retrospective reports or focused on major life events rather than common, everyday stressors. The research team behind this new study sought to get a more detailed picture. They aimed to understand how sleep reactivity might alter the connection between daily stress, pre-sleep arousal, and objectively measured sleep patterns in a natural setting.

“Sleep reactivity refers to an individual’s tendency to experience heightened sleep disturbances when faced with stress. Those with high sleep reactivity tend to show increased pre-sleep arousal during stressful periods and are at greater risk of developing both acute and chronic insomnia,” explained study authors Ju Lynn Ong and Stijn Massar, who are both research assistant professors at the National University of Singapore Yong Loo Lin School of Medicine.

“However, most prior research on stress, sleep, and sleep reactivity has relied on single, retrospective assessments, which may fail to capture the immediate and dynamic effects of daily stressors on sleep. Another limitation is that previous studies often examined either the cognitive or physiological components of pre-sleep arousal in isolation. Although these two forms of arousal are related, they may differ in their predictive value and underlying mechanisms, highlighting the importance of evaluating both concurrently.”

“To address these gaps, the current study investigated how day-to-day fluctuations in stress relate to sleep among university students over a two-week period and whether pre-sleep cognitive and physiological arousal mediate this relationship—particularly in individuals with high sleep reactivity.”

The research team began by recruiting a large group of full-time university students. They had the students complete a questionnaire called the Ford Insomnia Response to Stress Test, which is designed to measure an individual’s sleep reactivity. From this initial pool, the researchers selected two distinct groups for a more intensive two-week study: 30 students with the lowest scores, indicating low sleep reactivity, and 30 students with the highest scores, representing high sleep reactivity.

Over the following 14 days, these 60 participants were monitored using several methods. They wore an actigraphy watch on their wrist, which uses motion sensors to provide objective data on sleep patterns. This device measured their total sleep time, the amount of time it took them to fall asleep, and the time they spent awake after initially drifting off. Participants also wore an ŌURA ring, which recorded their pre-sleep heart rate as an objective indicator of physiological arousal.

Alongside these objective measures, participants completed daily surveys on their personal devices. Each evening before going to bed, they rated their perceived level of stress. Upon waking the next morning, they reported on their pre-sleep arousal from the previous night. These reports distinguished between cognitive arousal, such as having racing thoughts or worries, and somatic arousal, which includes physical symptoms like a pounding heart or muscle tension.

The first part of the analysis examined within-individual changes, that is, how a person’s sleep on a high-stress day compared to their own personal average. The results showed that on days when participants felt more stressed than usual, they also experienced a greater degree of pre-sleep cognitive arousal. This increase in racing thoughts was, in turn, associated with getting less total sleep and taking longer to fall asleep that night. This pattern was observed in both the high and low sleep reactivity groups.

This finding suggests that experiencing a more stressful day than usual is likely to disrupt anyone’s sleep to some extent, regardless of their underlying reactivity. It appears to be a common human response for stress to activate the mind at bedtime, making sleep more difficult. The trait of sleep reactivity did not seem to alter this immediate, day-to-day effect.

“We were surprised to find that at the daily level, all participants did in fact exhibit a link between higher perceived stress and poorer sleep the following night, regardless of their level of sleep reactivity,” Ong and Massar told PsyPost. “This pattern may reflect sleep disturbances as a natural—and potentially adaptive—response to stress.”

The researchers then turned to between-individual differences, comparing the overall patterns of people in the high-reactivity group to those in the low-reactivity group across the entire two-week period. In this analysis, a key distinction became clear. Sleep reactivity did in fact play a moderating role, amplifying the negative effects of stress and arousal.

Individuals with high sleep reactivity showed a much stronger connection between their average stress levels, their average pre-sleep cognitive arousal, and their sleep quality. For these highly reactive individuals, having higher average levels of cognitive arousal was specifically linked to spending more time awake after initially falling asleep. In other words, their predisposition to stress-related sleep disturbance made their racing thoughts more disruptive to maintaining sleep throughout the night.
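The within-person versus between-person distinction used here is commonly implemented by splitting each day’s stress score into a personal average and a daily deviation from it. The toy example below, with made-up numbers and hypothetical column names, shows that person-mean centering step:

```python
# Person-mean centering to separate within-person from between-person stress effects;
# numbers and column names are hypothetical.
import pandas as pd

daily = pd.DataFrame({
    "participant": ["p1", "p1", "p1", "p2", "p2", "p2"],
    "stress": [30, 50, 40, 60, 80, 70],                  # daily perceived stress
    "wake_after_sleep_onset": [10, 18, 12, 22, 35, 30],  # minutes awake after falling asleep
})

# Between-person component: each participant's average stress across the study period.
daily["stress_between"] = daily.groupby("participant")["stress"].transform("mean")
# Within-person component: how much a given day deviates from that personal average.
daily["stress_within"] = daily["stress"] - daily["stress_between"]

print(daily)
# Multilevel models built on these two components can then ask whether a trait such as
# sleep reactivity moderates the between-person path, the within-person path, or both.
```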

The researchers also tested whether physiological arousal played a similar role in connecting stress to poor sleep. They examined both the participants’ self-reports of physical tension and their objectively measured pre-sleep heart rate. Neither of these measures of physiological arousal appeared to significantly mediate the relationship between stress and sleep, for either group. The link between stress and sleep disruption in this study seemed to operate primarily through mental, not physical, arousal.

“On a day-to-day level, both groups exhibited heightened pre-sleep cognitive arousal and greater sleep disturbances in response to elevated daily stress,” the researchers explained. “However, when considering the study period as a whole, individuals with high sleep reactivity consistently reported higher average levels of stress and pre-sleep cognitive arousal, which in turn contributed to more severe sleep disruptions compared to low-reactive sleepers. Notably, these stress → pre-sleep arousal → sleep associations emerged only for cognitive arousal, not for somatic arousal—whether assessed through self-reports or objectively measured via pre-sleep heart rate.”

The researchers acknowledged some limitations of their work. The study sample consisted of young university students who were predominantly female and of Chinese descent, so the results may not be generalizable to other demographic groups or age ranges. Additionally, the study excluded individuals with diagnosed sleep disorders, meaning the findings might differ in a clinical population. The timing of the arousal survey, completed in the morning, also means it was a retrospective report that could have been influenced by the night’s sleep. It is also important to consider the practical size of these effects.

While statistically significant, the changes were modest: a day with stress levels 10 points higher than usual was linked to about 2.5 minutes less sleep, and the amplified effect in high-reactivity individuals amounted to about 1.2 additional minutes of wakefulness during the night for every 10-point increase in average stress.

Future research could build on these findings by exploring the same dynamics in more diverse populations. The study also highlights pre-sleep cognitive arousal as a potential target for intervention, especially for those with high sleep reactivity. Investigating whether therapies like cognitive-behavioral therapy for insomnia can reduce this mental activation could offer a path to preventing temporary, stress-induced sleep problems from developing into chronic conditions.

The study, “Sleep Reactivity Amplifies the Impact of Pre-Sleep Cognitive Arousal on Sleep Disturbances,” was authored by Noof Abdullah Saad Shaif, Julian Lim, Anthony N. Reffi, Michael W. L. Chee, Stijn A. A. Massar, and Ju Lynn Ong.

Public Montessori preschool yields improved reading and cognition at a lower cost

The debate over the most effective models for early childhood education is a longstanding one. While the benefits of preschool are widely accepted, researchers have observed that the academic advantages gained in many programs tend to diminish by the time children finish kindergarten, a phenomenon often called “fade-out.” Some studies have even pointed to potential negative long-term outcomes from certain public preschool programs, intensifying the search for approaches that provide lasting benefits.

This situation prompted researchers to rigorously examine the Montessori method, a well-established educational model that has been in practice for over a century. Their new large-scale study found that children offered a spot in a public Montessori preschool showed better outcomes in reading, memory, executive function, and social understanding by the end of kindergarten.

The research also revealed that this educational model costs public school districts substantially less over three years compared to traditional programs. The findings were published in the Proceedings of the National Academy of Sciences.

The Montessori method is an educational approach developed over a century ago by Maria Montessori. Its classrooms typically feature a mix of ages, such as three- to six-year-olds, learning together. The environment is structured around child-led discovery, where students choose their own activities from a curated set of specialized, hands-on materials. The teacher acts more as a guide for individual and small-group lessons rather than a lecturer to the entire class.

The Montessori model, which has been implemented in thousands of schools globally, had not previously been evaluated in a rigorous, national randomized controlled trial. This study was designed to provide high-quality evidence on its impact in a public school setting.

“There have been a few small randomized controlled trials of public Montessori outcomes, but they were limited to 1-2 schools, leaving open the question of whether the more positive results were due to something about those schools aside from the Montessori programming,” said study author Angeline Lillard, the Commonwealth Professor of Psychology at the University of Virginia.

“This national study gets around that by using 24 different schools, which each had 3-16 Montessori Primary (3-6) classrooms. In addition, the two prior randomized controlled trials that had trained Montessori teachers (making them more valid) compromised the randomized controlled trial in certain ways, including not using intention-to-treat designs that are preferred by some.”

To conduct the research, the team took advantage of the admissions lotteries at 24 oversubscribed public Montessori schools across the United States. When a school has more applicants than available seats, a random lottery gives each applicant an equal chance of admission. This process creates a natural experiment, allowing for a direct comparison between the children who were offered a spot (the treatment group) and those who were not (the control group). Nearly 600 children and their families consented to participate.

The children were tracked from the start of preschool at age three through the end of their kindergarten year. Researchers administered a range of assessments at the beginning of the study and again each spring to measure academic skills, memory, and social-emotional development. The primary analysis was a conservative type called an “intention-to-treat” analysis, which measures the effect of simply being offered a spot in a Montessori program, regardless of whether the child actually attended or for how long.

The results showed no significant differences between the two groups after the first or second year of preschool. But by the end of kindergarten, a distinct pattern of advantages had emerged for the children who had been offered a Montessori spot. This group demonstrated significantly higher scores on a standardized test of early reading skills. They also performed better on a test of executive function, which involves skills like planning, self-control, and following rules.

The Montessori group also showed stronger short-term memory, as measured by their ability to recall a sequence of numbers. Their social understanding, or “theory of mind,” was also more advanced, suggesting a greater capacity to comprehend others’ thoughts, feelings, and beliefs. The estimated effects for these outcomes were considered medium to large for this type of educational research.

The study found no significant group differences in vocabulary or a math assessment, although the results for math trended in a positive direction for the Montessori group.

In a secondary analysis, the researchers estimated the effects only for the children who complied with their lottery assignment, meaning those who won and attended Montessori compared to those who lost and did not. As expected, the positive effects on reading, executive function, memory, and social understanding were even larger in this analysis.

“For example, a child who scored at the 50th percentile in reading in a traditional school would have been at the 62nd percentile had they won the lottery to attend Montessori; had they won and attended Montessori, they would have scored at the 71st percentile,” Lillard told PsyPost.

Alongside the child assessments, the researchers performed a detailed cost analysis. They followed a method known as the “ingredients approach,” which accounts for all the resources required to run a program. This included teacher salaries and training, classroom materials, and facility space for both Montessori and traditional public preschool classrooms. One-time costs, such as the specialized Montessori materials and extensive teacher training, were amortized over their expected 25-year lifespan.

The analysis produced a surprising finding. Over the three-year period from ages three to six, public Montessori programs were estimated to cost districts $13,127 less per child than traditional programs. The main source of this cost savings was the higher child-to-teacher ratio in Montessori classrooms for three- and four-year-olds. This is an intentional feature of the Montessori model, designed to encourage peer learning and independence. These savings more than offset the higher upfront costs for teacher training and materials.
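The amortization step in the ingredients approach is simple arithmetic: a one-time expense is spread across the years it is expected to remain useful before being compared per child and per year. The figures in this sketch are hypothetical placeholders, not the study’s actual estimates:

```python
# Toy illustration of amortizing one-time costs in an "ingredients approach" cost analysis.
# All dollar figures are hypothetical placeholders, not values from the study.
one_time_training_and_materials = 30_000  # per classroom, paid once (hypothetical)
lifespan_years = 25                       # amortization period used in the study
children_per_classroom = 25               # hypothetical class size

annualized_cost = one_time_training_and_materials / lifespan_years
per_child_per_year = annualized_cost / children_per_classroom

print(f"Annualized one-time cost: ${annualized_cost:,.0f} per classroom")
print(f"Added cost per child per year: ${per_child_per_year:,.2f}")
```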

“I thought Montessori would cost the same, once one amortized the cost of teacher training and materials,” Lillard said. “Instead, we calculated that (due to intentionally higher ratios at 3 and 4, which predicted higher classroom quality in Montessori) Montessori cost less.”

“Even when including a large, diverse array of schools, public Montessori had better outcomes. These findings were robust to many different approaches to the data. And, the cost analysis showed these outcomes were obtained at significantly lower cost than was spent on traditional PK3 through kindergarten programs in public schools.”

But as with all research, there are limitations. The research included only families who applied to a Montessori school lottery, so the findings might not be generalizable to the broader population. The consent rate to participate in the study was relatively low, at about 21 percent of families who were contacted. Families who won a lottery spot were also more likely to consent than those who lost, which could potentially introduce bias into the results.

“Montessori is not a trademarked term, so anyone can call anything Montessori,” Lillard noted. “We required that most teachers be trained by the two organizations with the most rigorous training — AMI or the Association Montessori Internationale, which Dr. Maria Montessori founded to carry on her work, and AMS or the American Montessori Society, which has less rigorous teacher-trainer preparation and is shorter, but is still commendable. Our results might not extend to all schools that call themselves Montessori. In addition, we had low buy-in as we recruited for this study in summer 2021 when COVID-19 was still deeply concerning. We do not know if the results apply to families that did not consent to participation.”

The findings are also limited to the end of kindergarten. Whether the observed advantages for the Montessori group persist, grow, or fade in later elementary grades is a question for future research. The study authors expressed a strong interest in following these children to assess the long-term impacts of their early educational experiences.

“My collaborators at the American Institutes for Research and the University of Pennsylvania and University of Virginia are deeply appreciative of the schools, teachers, and families who participated, and to our funders, the Institute for Educational Sciences, Arnold Ventures, and the Brady Education Foundation,” Lillard added.

The study, “A national randomized controlled trial of the impact of public Montessori preschool at the end of kindergarten,” was authored by Angeline S. Lillard, David Loeb, Juliette Berg, Maya Escueta, Karen Manship, Alison Hauser, and Emily D. Daggett.

Familial link between ADHD and crime risk is partly genetic, study suggests

A new study has found that individuals with ADHD have a higher risk of being convicted of a crime, and reveals this connection also extends to their family members. The research suggests that shared genetics are a meaningful part of the explanation for this link. Published in Biological Psychology, the findings show that the risk of a criminal conviction increases with the degree of genetic relatedness to a relative with ADHD.

The connection between ADHD and an increased likelihood of criminal activity is well-documented. Past research indicates that individuals with ADHD are two to three times more likely to be arrested or convicted of a crime. Scientists have also established that both ADHD and criminality have substantial genetic influences, with heritability estimates around 70-80% for ADHD and approximately 50% for criminal behavior. This overlap led researchers to hypothesize that shared genetic factors might partly explain the association between the two.

While some previous studies hinted at a familial connection, they were often limited to specific types of crime or a small number of relative types. The current research aimed to provide a more complete picture. The investigators sought to understand how the risk for criminal convictions co-aggregates, or clusters, within families across a wide spectrum of relationships, from identical twins to cousins. They also wanted to examine potential differences in these patterns between men and women.

“ADHD is linked to higher rates of crime, but it’s unclear why. We studied families to see whether shared genetic or environmental factors explain this connection, aiming to better understand how early support could reduce risk,” said study author Sofi Oskarsson, a researcher and senior lecturer in criminology at Örebro University.

To conduct the investigation, researchers utilized Sweden’s comprehensive national population registers. They analyzed data from a cohort of over 1.5 million individuals born in Sweden between 1987 and 2002. ADHD cases were identified through clinical diagnoses or prescriptions for ADHD medication recorded in national health registers. Information on criminal convictions for any crime, violent crime, or non-violent crime was obtained from the National Crime Register, with the analysis beginning from an individual’s 15th birthday, the age of criminal responsibility in Sweden.

The study design allowed researchers to estimate the risk of a criminal conviction for an individual based on whether a relative had ADHD. By comparing these risks across different types of relatives who share varying amounts of genetic material—identical twins (100%), fraternal twins and full siblings (average 50%), half-siblings (average 25%), and cousins (average 12.5%)—the team could infer the potential role of shared genes and environments.

The results first confirmed that individuals with an ADHD diagnosis had a substantially higher risk of being convicted of a crime compared to those without ADHD. The risk was particularly elevated for violent crimes.

The analysis also revealed a significant gender difference: while men with ADHD had higher absolute numbers of convictions, women with ADHD had a greater relative increase in risk compared to women without the disorder. For violent crime, the risk was over eight times higher for women with ADHD, while it was about five times higher for men with ADHD.

“Perhaps not a surprise given what we know today about ADHD, but the stronger associations found among women were very interesting and important,” Oskarsson told PsyPost. “ADHD is not diagnosed as often in females (or is mischaracterized), so the higher relative risk in women suggests that when ADHD is present, it may reflect a more severe or concentrated set of risk factors.”

The central finding of the study was the clear pattern of familial co-aggregation. Having a relative with ADHD was associated with an increased personal risk for a criminal conviction. This risk followed a gradient based on genetic relatedness.

The highest risk was observed in individuals whose identical twin had ADHD, followed by fraternal twins and full siblings. The risk was progressively lower for half-siblings and cousins. This pattern, where the association weakens as genetic similarity decreases, points toward the influence of shared genetic factors.

“Close relatives of people with ADHD were much more likely to have criminal convictions, especially twins, supporting a genetic contribution,” Oskarsson explained. “But the link is not deterministic; most individuals with ADHD or affected relatives are not convicted, emphasizing shared risk, not inevitability.”

The study also found that the stronger relative risk for women was not limited to individuals with ADHD. A similar pattern appeared in some familial relationships, specifically among full siblings and full cousins, where the association between a relative’s ADHD and a woman’s conviction risk was stronger than for men. This suggests that the biological and environmental mechanisms connecting ADHD and crime may operate differently depending on sex.

“People with ADHD are at a higher risk of criminality, but this risk also extends to their relatives,” Oskarsson said. “This pattern suggests that some of the link between ADHD and crime stems from shared genetic and/or environmental factors. Importantly, this does not mean that ADHD causes crime, but that the two share underlying vulnerabilities. Recognizing and addressing ADHD early, especially in families, could reduce downstream risks and improve outcomes.”

As with any study, the researchers acknowledge some limitations. The study’s reliance on official medical records may primarily capture more severe cases of ADHD, and conviction data does not account for all criminal behavior. Because the data comes from Sweden, a country with universal healthcare, the findings may not be directly generalizable to countries with different social or legal systems. The authors also note that the large number of statistical comparisons means the overall consistency of the patterns is more important than any single result.

Future research could explore these associations in different cultural and national contexts to see if the patterns hold. Further investigation is also needed to identify the specific genetic and environmental pathways that contribute to the shared risk between ADHD and criminal convictions. These findings could help inform risk assessment and prevention efforts, but the authors caution that such knowledge must be applied carefully to avoid stigmatization.

“I want to know more about why ADHD and criminality are connected, which symptoms or circumstances matter most, and whether early support for individuals and families can help break that link,” Oskarsson added. “This study underscores the importance of viewing ADHD within a broader family and societal context. Early support for ADHD doesn’t just help the individual, it can have ripple effects that extend across families and communities.”

The study, “The Familial Co-Aggregation of ADHD and Criminal Convictions: A Register-Based Cohort Study,” was authored by Sofi Oskarsson, Ralf Kuja-Halkola, Anneli Andersson, Catherine Tuvblad, Isabell Brikell, Brian D’Onofrio, Zheng Chang, and Henrik Larsson.

New study shows that a robot’s feedback can shape human relationships

A new study has found that a robot’s feedback during a collaborative task can influence the feeling of closeness between the human participants. The research, published in Computers in Human Behavior, indicates that this effect changes depending on the robot’s appearance and how it communicates.

As robots become more integrated into workplaces and homes, they are often designed to assist with decision-making. While much research has focused on how robots affect the quality of a group’s decisions, less is known about how a robot’s presence might alter the personal relationships between the humans on the team. The researchers sought to understand this dynamic by exploring how a robot’s agreement or disagreement impacts the sense of interpersonal connection people feel.

“Given the rise of large language models in recent years, we believe robots of different forms will soon be equipped with non-scripted verbal language to help people make decisions in various contexts. We conducted our research to call for careful consideration and control over the precise behaviors robots should use to provide feedback in the future,” said study author Ting-Han Lin, a computer science PhD student at the University of Chicago.

The investigation centered on two established psychological ideas. One, known as Balance Theory, suggests that people feel more positive toward one another when they are treated similarly by a third party, even if that treatment is negative. The other concept, the Influence of Negative Affect, proposes that a negative tone or criticism can damage the general atmosphere of an interaction and harm relationships.

To test these ideas, the researchers conducted two separate experiments, each involving pairs of participants who did not know each other. In both experiments, the pairs worked together to answer a series of eight personal questions, such as “What is the most important factor contributing to a life well-lived?” For each question, participants first gave their own individual answers before discussing and agreeing on a joint response.

A robot was present to mediate the task. After each person gave their initial answer, the robot would provide feedback. This feedback varied in two ways. First was its positivity, meaning the robot would either agree or disagree with the person’s statement. Second was its treatment of the pair, meaning the robot would either treat both people equally (agreeing with both or disagreeing with both) or unequally (agreeing with one and disagreeing with the other).

The first experiment involved 172 participants interacting with a highly human-like robot named NAO. This robot could speak, use gestures like nodding or shaking its head, and employed artificial intelligence to summarize a person’s response before giving its feedback. Its verbal disagreements were designed to grow in intensity, beginning with mild phrases and ending with statements like, “I am fundamentally opposed with your viewpoint.”

The results from this experiment showed that the positivity of the robot’s feedback had a strong effect on the participants’ relationship. When the NAO robot gave positive feedback, the two human participants reported feeling closer to each other. When the robot consistently gave negative feedback, the participants felt more distant from one another.

“A robot’s feedback to two people in a decision-making task can shape their closeness,” Lin told PsyPost.

This outcome supports the theory regarding the influence of negative affect. The robot’s consistent negativity seemed to create a less pleasant social environment, which in turn reduced the feeling of connection between the two people. The robot’s treatment of the pair, whether equal or unequal, did not appear to be the primary factor shaping their closeness in this context. Participants also rated the human-like robot as warmer and more competent when it was positive, though they found it more discomforting when it treated them unequally.

The second experiment involved 150 participants and a robot with a very low degree of human-like features. This robot resembled a simple, articulated lamp and could not speak. It communicated its feedback exclusively through minimal gestures, such as nodding for agreement or shaking its head from side to side for disagreement.

With this less-human robot, the findings were quite different. The main factor influencing interpersonal closeness was the robot’s treatment of the pair. When the robot treated both participants equally, they reported feeling closer to each other, regardless of whether the feedback was positive or negative. Unequal treatment, where the robot agreed with one person and disagreed with the other, led to a greater sense of distance between them.

This result aligns well with Balance Theory. The shared experience of being treated the same by the robot, either through mutual agreement or mutual disagreement, seemed to create a bond. The researchers also noted a surprising finding. When the lamp-like robot disagreed with both participants, they felt even closer than when it agreed with both, suggesting that the robot became a “common enemy” that united them.

“Heider’s Balance Theory dominates when a low anthropomorphism robot is present,” Lin said.

The researchers propose that the different outcomes are likely due to the intensity of the feedback delivered by each robot. The human-like NAO robot’s use of personalized speech and strong verbal disagreement was potent enough to create a negative atmosphere that overshadowed other social dynamics. Its criticism was taken more seriously, and its negativity was powerful enough to harm the human-human connection.

“The influence of negative affect prevails when a high anthropomorphism robot exists,” Lin said.

In contrast, the simple, non-verbal gestures of the lamp-like robot were not as intense. Because its disagreement was less personal and less powerful, it did not poison the overall interaction. This allowed the more subtle effects of balanced versus imbalanced treatment to become the main influence on the participants’ relationship. Interviews with participants supported this idea, as people interacting with the machine-like robot often noted that they did not take its opinions as seriously.

Across both experiments, the robot’s feedback did not significantly alter how the final joint decisions were made. Participants tended to incorporate each other’s ideas fairly evenly, regardless of the robot’s expressed opinion. This suggests the robot’s influence was more on the social and emotional level than on the practical outcome of the decision-making task.

The study has some limitations, including the fact that the two experiments were conducted in different countries with different participant populations. The first experiment used a diverse group of museum visitors in the United States, while the second involved university students in Israel. Future research could explore these dynamics in more varied contexts.

The study, “The impact of a robot’s agreement (or disagreement) on human-human interpersonal closeness in a two-person decision-making task,” was authored by Ting-Han Lin, Yuval Rubin Kopelman, Madeline Busse, Sarah Sebo, and Hadas Erel.

New research explores the biopsychology of common sexual behaviors

Recent research provides new insight into the functions of common sexual behaviors, revealing how they contribute not just to physical pleasure but also to emotional bonding. A trio of studies, two published in the WebLog Journal of Reproductive Medicine and one in the International Journal of Clinical Research and Reports, examines the physiological and psychological dimensions of why men hold their partners’ legs and stimulate their breasts, what men gain from these acts, and how women experience them.

Researchers pursued these lines of inquiry because many frequently practiced sexual behaviors remain scientifically underexplored. While practices like a man holding a woman’s legs or performing oral breast stimulation are common, the specific reasons for their prevalence and their effects on both partners were not fully understood from an integrated perspective. The scientific motivation was to create a more comprehensive picture that combines biology, psychology, and social factors to explain what happens during these intimate moments.

“Human sexual behavior is often discussed socially, but many aspects of it lack meaningful scientific exploration,” said study author Rehan Haider of the University of Karachi. “We noticed a gap connecting physiological responses, evolutionary psychology, and relationship intimacy to why certain tactile behaviors are preferred during intercourse. Our goal was to examine these mechanisms in a respectful, evidence-based manner rather than rely on anecdote or cultural assumptions.”

The first study took a broad, mixed-methods approach to understand why men often hold women’s legs and engage in breast stimulation during intercourse. The researchers combined a review of existing literature with observational studies and self-reported surveys from adult heterosexual couples aged 18 to 50. This allowed them to assemble a model that connected male behaviors with female responses and relational outcomes.

The research team reported that 68 percent of couples practiced leg holding during intercourse. This position was found to facilitate deeper vaginal penetration and improve the alignment of the bodies, which in turn enhanced stimulation of sensitive areas like the clitoris and G-spot. Women in the study associated this act with higher levels of sexual satisfaction.

The research also affirmed the significance of breast stimulation, noting that manual stimulation occurred in 60 percent of encounters and oral stimulation in 54 percent. This contact activates sensory pathways in the nipple-areolar complex, promoting the release of the hormones oxytocin and prolactin, which are associated with increased sexual arousal and emotional bonding. From a psychological standpoint, these behaviors appeared to reinforce feelings of intimacy, trust, and connection between partners.

“We were surprised by the consistency of emotional feedback among participants, particularly how strongly feelings of closeness and security were linked to these behaviors,” Haider told PsyPost. “It suggests an underestimated psychological component beyond pure physical stimulation.”

“The core message is that sexual touch preferences are not random—many are supported by biological reward pathways, emotional bonding hormones, and evolutionary reproductive strategies. Leg-holding and breast stimulation, for example, can enhance feelings of safety, intimacy, and arousal for both partners. Healthy communication and consent around such behaviors strengthen relational satisfaction.”

A second, complementary study focused specifically on the male experience of performing oral stimulation on a partner’s nipples. The goal was to understand the pleasure and psychological satisfaction men themselves derive from this act. To do this, researchers conducted a cross-sectional survey, recruiting 500 heterosexual men between the ages of 18 and 55. Participants completed a structured and anonymous questionnaire designed to measure the frequency of the behavior, their self-rated level of arousal from it, and its association with feelings of intimacy and overall sexual satisfaction.

The analysis of this survey data revealed a strong positive association between the frequency of performing nipple stimulation and a man’s own sense of sexual fulfillment and relational closeness. The results indicated that men do not engage in this behavior solely for their partner’s benefit. They reported finding the act to be both highly erotic and emotionally gratifying. The researchers propose that the behavior serves a dual function for men, simultaneously enhancing their personal arousal while reinforcing the psychological bond with their partner, likely through mechanisms linked to the hormone oxytocin, which plays a role in social affiliation and trust.

The third study shifted the focus to the female perspective, examining women’s physical and psychological responses to breast and nipple stimulation during penetrative intercourse. This investigation used a clinical and observational design, collecting data from 120 sexually active women aged 21 to 50. The methodology involved structured interviews, clinical feedback from counseling sessions, and the use of validated questionnaires, including the well-established Female Sexual Function Index (FSFI), a self-report tool used to assess key dimensions of female sexual function.

This research confirmed that stimulation of the breasts and nipples consistently contributed to a more positive sexual experience for women. Women with higher reported nipple sensitivity showed significantly better scores across the FSFI domains of arousal, orgasm, and satisfaction. Physically, this type of stimulation was associated with enhanced vaginal lubrication and clitoral responsiveness during intercourse.

Psychologically, the researchers found a connection between a woman’s perception of her breasts and her emotional experience. Women who described their breasts as “zones of intimacy” or “trust-enhancing touchpoints” reported a greater sense of emotional connection and reduced anxiety during sex. However, the study also identified that 23 percent of participants experienced discomfort during breast stimulation.

“This research does not imply that these behaviors are necessary or universally preferred,” Haider noted. “It’s also not about objectification. Rather, it focuses on how touch patterns can reinforce mutual trust, pleasure, and bonding when consensual and respectful. Not everyone will experience the same responses, and preferences vary widely. The study highlights trends—not prescriptions—and should be interpreted as an invitation for communication rather than a standard everyone must follow.”

While these studies offer a more detailed understanding of sexual behavior, the researchers acknowledge certain limitations. All three studies relied heavily on self-reported data, which can be influenced by memory recall and social desirability biases. The research was also primarily cross-sectional, capturing a snapshot in time, which can identify associations but cannot establish cause-and-effect relationships. For instance, it is unclear if frequent breast stimulation leads to higher intimacy or if more intimate couples simply engage in the behavior more often.

For future research, scientists suggest incorporating longitudinal designs that follow couples over an extended period to better understand the development of these behavioral patterns and their long-term effects on relationship satisfaction. There is also a need for more cross-cultural comparisons, as sexual scripts and preferences can vary significantly across different societies.

“Future work will explore female perspectives more deeply, neuroendocrine changes during different types of touch, and how cultural factors shape sexual comfort and preference,” Haider said. “We’d like to compare findings across age groups and relationship durations as well. Sexual well-being is an important aspect of overall health, but it is rarely discussed scientifically. By approaching these topics with sensitivity and rigor, we hope to normalize evidence-based conversation and encourage couples to communicate openly.”

The studies, “Physiological Basis of Male Preference for Holding Women’s Legs and Breast Stimulation during Intercourse,” “Nipple Sucking and Male Sexual Response: Perceived Pleasure and Psychological Satisfaction,” and “Women’s Physical and Psychological Responses during Penetrative Sexual Intercourse: The Role of Breast and Nipple Sensitivity” were authored by Rehan Haider, Geetha Kumari Das, and Zameer Ahmed.

Scientists are discovering more and more about the spooky psychology behind our love of horror

The human fascination with fear is a long-standing puzzle. From ghost stories told around a campfire to the latest blockbuster horror film, many people actively seek out experiences designed to frighten them. This seemingly contradictory impulse, where negative feelings like terror and anxiety produce a sense of enjoyment and thrill, has intrigued psychologists for decades. Researchers are now using a variety of tools, from brain scans to personality surveys, to understand this complex relationship.

Their work is revealing how our brains process fear, what personality traits draw us to the dark side of entertainment, and even how these experiences might offer surprising psychological benefits. Here is a look at twelve recent studies that explore the multifaceted psychology of horror, fear, and the paranormal.

Your Brain on Horror: A New Theory Suggests We’re Training for Uncertainty

A new theory proposes that horror films appeal to us because they provide a safe, controlled setting for our brains to practice managing uncertainty. This idea is based on a framework known as predictive processing, which suggests the brain operates like a prediction engine. It constantly makes forecasts about what will happen next, and when reality doesn’t match its predictions, it generates a “prediction error” that it works to resolve.

This process doesn’t mean we only seek out calm, predictable situations. Instead, our brains are wired to find ideal opportunities for learning, which often exist at the edge of our understanding. We are drawn toward a “Goldilocks zone” of manageable uncertainty that is neither too simple nor too chaotic. The rewarding feeling comes not just from being correct, but from the rate at which we reduce our uncertainty.

Horror films appear to be engineered to place us directly in this zone. They manipulate our predictive minds with a mix of the familiar and the unexpected. Suspenseful music and classic horror tropes build our anticipation, while jump scares suddenly violate our predictions. By engaging with this controlled chaos, we get to experience and resolve prediction errors in a low-stakes environment, which the brain can find inherently gratifying.

A Good Scare: Enjoying Horror May Be an Evolved Trait for Threat Simulation

Research from an evolutionary perspective suggests that our enjoyment of horror serves a practical purpose: it prepares us for real-world dangers. This “threat-simulation hypothesis” posits that engaging with scary media is an adaptive trait, allowing us to explore threatening scenarios and rehearse our responses from a position of safety. Through horror, we can learn about predators, hostile social encounters, and other dangers without facing any actual risk.

A survey of over 1,100 adults found that a majority of people consume horror media and more than half enjoy it. The study revealed that people who enjoy horror expect to experience a range of positive emotions like joy and surprise alongside fear. This supports the idea that the negative emotion of fear is balanced by positive feelings, a phenomenon some call “benign masochism.”

The findings also showed that sensation-seeking was a strong predictor of horror enjoyment, as was a personality trait related to intellect and imagination. It seems those who seek imaginative stimulation are particularly drawn to horror. By providing a vast space for emotional and cognitive play, frightening entertainment allows us to build and display mastery over situations that would be terrifying in real life.

The Thrill of the Kill: Fear and Realism Drive Horror Enjoyment

To better understand what makes a horror movie entertaining, researchers surveyed nearly 600 people about their reactions to short scenes from various horror subgenres. The study found that three key factors predicted both excitement and enjoyment: the intensity of fear the viewer felt, their curiosity about morbid topics, and how realistic they perceived the scenes to be.

The experience of fear itself was powerfully linked to both excitement and enjoyment, showing that the thrill of being scared is a central part of the appeal. Morbid curiosity also played a significant role, indicating that people with a natural interest in dark subjects are more likely to find horror entertaining. The perceived realism of a scene heightened the experience as well.

However, not all negative emotions contributed to the fun. Scenes that provoked high levels of disgust tended to decrease enjoyment, even if they were still exciting. This finding suggests that while fear can be a source of pleasure for horror fans, disgust often introduces an element that makes the experience less enjoyable overall.

Scary Fun: Nearly All Children Enjoy Playful Fear

Fear is not just for adults. A large-scale survey of 1,600 Danish parents has revealed that “recreational fear,” or the experience of activities that are both scary and fun, is a nearly universal part of childhood. An overwhelming 93% of children between the ages of 1 and 17 were reported to enjoy at least one type of scary yet fun activity, with 70% engaging in one weekly.

The study identified clear developmental trends in how children experience recreational fear. Younger children often find it in physical and imaginative play, such as being playfully chased or engaging in rough-and-tumble games. As they grow into adolescence, their interest shifts toward media-based experiences like scary movies, video games, and frightening online content. One constant across all ages was the enjoyment of activities involving high speeds, heights, or depths, like swings and amusement park rides.

These experiences are predominantly social. Young children typically engage with parents or siblings, while adolescents turn to friends. This social context may provide a sense of security that allows children to explore fear safely. The researchers propose that this type of play is beneficial, helping children learn to regulate their emotions, test their limits, and build psychological resilience.

Decoding Your Watchlist: Film Preferences May Reflect Personality

A study involving 300 college students suggests that your favorite movie genre might offer clues about your personality. Using the well-established Big Five personality model, researchers found consistent links between film preferences and traits like extraversion, conscientiousness, and neuroticism.

Fans of horror films tended to score higher in extraversion, agreeableness, and conscientiousness, suggesting they may be outgoing, cooperative, and organized. They also scored lower in neuroticism and openness, which could indicate they are less emotionally reactive and less drawn to abstract ideas. In contrast, those who favored drama scored higher in conscientiousness and neuroticism, while adventure film fans were more extraverted and spontaneous.

While these findings point to a relationship between personality and media choice, the study has limitations. The sample was limited to a specific age group and cultural background, so the results may not apply to everyone. The research also cannot determine whether personality shapes film choice or if the films we watch might influence our personality over time.

Dark Beats: Morbid Curiosity Linked to Enjoyment of Violent Music

Morbid curiosity, a trait defined by an interest in dangerous phenomena, may help explain why some people are drawn to music with violent themes, like death metal or certain subgenres of rap. A recent study found that people with higher levels of morbid curiosity were more likely to listen to and enjoy music with violent lyrics.

In an initial survey, researchers found that fans of music with violent themes scored higher on a scale of morbid curiosity than fans of other genres. In a second experiment, participants listened to musical excerpts. The results showed that morbid curiosity predicted enjoyment of extreme metal with violent lyrics, but not of rap with violent lyrics, suggesting that different factors may be at play for different genres.

The study authors propose that morbid curiosity is not a deviant trait, but an adaptive one that helps people learn about threatening aspects of life in a safe, simulated context. Music with violent themes can act as one of these simulations, allowing listeners to explore dangerous ideas and the emotions they evoke without any real-world consequences.

Pandemic Practice: Horror Fans Showed More Resilience During COVID-19

People who enjoy horror movies may have been better equipped to handle the psychological stress of the COVID-19 pandemic. A study conducted in April 2020 surveyed 322 U.S. adults about their genre preferences, morbid curiosity, and psychological state during the early days of the pandemic.

The researchers found that fans of horror movies reported less psychological distress than non-fans. They were less likely to agree with statements about feeling more depressed or having trouble sleeping since the pandemic began. Fans of “prepper” genres, such as zombie and apocalyptic films, also reported less distress and said they felt more prepared for the pandemic.

The study’s authors speculate that horror fans may have developed better emotion-regulation skills by repeatedly exposing themselves to frightening fiction in a controlled way. This “practice” with fear in a safe setting could have translated into greater resilience when faced with a real-world crisis.

A Frightening Prescription? Scary Fun May Briefly Shift Brain Activity in Depression

Engaging with frightening entertainment might temporarily alter brain network patterns associated with depression. A study found that in individuals with mild-to-moderate depression, a controlled scary experience was linked to a brief reduction in the over-connectivity between two key brain networks: the default mode network (active during self-focused thought) and the salience network (which detects important events).

This over-connectivity is thought to contribute to rumination, a cycle of negative thoughts common in depression. By demanding a person’s full attention, the scary experience appeared to pull focus away from this internal loop and onto the external threat. The greater this reduction in connectivity, the more enjoyment participants reported.

The study also found that individuals with moderate depression needed a more intense scare to reach their peak enjoyment compared to those with minimal symptoms. While the observed brain changes were temporary, the findings raise questions about the interplay between fear, pleasure, and emotion regulation.

Believe What You Watch: Some Horror Might Bolster Paranormal Beliefs

A recent study has found a connection between the type of horror media people watch and their beliefs in the paranormal. After surveying over 600 Belgian adults, researchers discovered that consumption of horror content claiming to be based on “true events” or presented as reality was associated with stronger paranormal beliefs.

Specifically, people who frequently watched paranormal reality TV shows and horror films marketed as being based on a true story were more likely to endorse beliefs in things like ghosts, spiritualism, and psychic powers. Other fictional horror genres, such as monster movies or psychological thrillers, did not show a similar connection.

This finding aligns with media effect theories suggesting that when content is perceived as more realistic or credible, it can have a stronger impact on a viewer’s attitudes. However, the study’s design means it is also possible that people who already believe in the paranormal are simply more drawn to this type of content.

Brainwaves of Believers: Paranormal Beliefs Linked to Distinct Neural Patterns

Individuals who strongly believe in paranormal phenomena may exhibit different brain activity and cognitive patterns than skeptics. A study using electroencephalography (EEG) to record the brain’s electrical activity found that paranormal believers showed reduced power in the alpha, beta, and gamma frequency bands, particularly over the frontal, parietal, and occipital regions of the brain.

Participants also completed a cognitive task designed to measure inhibitory control, which is the ability to suppress impulsive actions. Paranormal believers made more errors on this task than skeptics, suggesting reduced inhibitory control. They also reported experiencing more everyday cognitive failures, such as memory slips and attention lapses.

The researchers found that activity in one specific frequency band, beta2 in the frontal lobe, appeared to mediate the relationship between paranormal beliefs and inhibitory control. This suggests that differences in brain function, particularly in regions involved in high-level cognitive processes, may be connected to a person’s conviction in the paranormal.

A Sixth Sense? Unusual Experiences Tied to a Trait Called Subconscious Connectedness

Unusual events like premonitions, vivid dreams, and out-of-body sensations are surprisingly common, and people who report them often share certain psychological traits. A series of three studies involving over 2,200 adults found a strong link between anomalous experiences and a trait called “subconscious connectedness,” which describes the degree to which a person’s conscious and subconscious minds influence each other.

People who scored high in subconscious connectedness reported having anomalous experiences far more frequently than those with low scores. In one national survey, 86% of participants said they had at least one type of anomalous experience more than once. The most commonly reported was déjà vu, followed by correctly sensing they were being stared at and having premonitions that came true.

These experiences were also associated with other traits, including absorption, dissociation, vivid imagination, and a tendency to trust intuition. While people who reported more anomalous experiences also tended to report more stress and anxiety, these associations were modest, suggesting such experiences are a normal part of human psychology for many.

Someone There? How Our Brains Create a ‘Feeling of Presence’ in the Dark

The eerie sensation that someone is nearby when you are alone may be a product of your brain trying to make sense of uncertainty. A study found that this “feeling of presence” is more likely to occur when people are in darkness with their senses dulled. Under these conditions, the brain may rely more on internal cues and expectations, sometimes generating the impression of an unseen agent.

In an experiment, university students sat alone in a darkened room for 30 minutes while wearing a sleeping mask and earplugs. The results showed that participants who reported higher levels of internal uncertainty were more likely to feel that another person was with them. This suggests that when sensory information is limited, the brain may interpret ambiguous bodily sensations or anxious feelings as evidence of an outside presence.

This cognitive process might be an evolutionary holdover. From a survival standpoint, it is safer to mistakenly assume a predator is hiding in the dark than to ignore a real one. This bias toward detecting agents could help explain why ghostly encounters and beliefs in invisible beings are so common across human cultures, especially in situations of isolation and vulnerability.
