
A 120-year timeline of literature reveals distinctive patterns of “invisibility” for some groups

16 December 2025 at 01:00

A comprehensive analysis of English-language literature published over the past 120 years reveals distinct patterns in how race and gender intersect within written text. The findings suggest that Black women and Asian men have historically appeared less frequently in books compared to Black men and Asian women, a phenomenon that aligns with psychological theories regarding social invisibility.

The research also provides evidence that these representational trends are not static and appear to shift in response to major historical events. These findings were published in the journal Current Research in Ecological and Social Psychology.

Joanna Schug, an associate professor at William & Mary, led the research team. She collaborated with Monika Gosin from the University of California San Diego and Nicholas P. Alt from Occidental College to investigate these long-term cultural trends. The study aimed to apply a historical lens to psychological theories that have typically been tested in laboratory settings.

Scholars have previously developed the concept of gendered race theory to explain how society perceives different groups. This framework suggests that the racial category “Black” is often cognitively associated with masculinity. Conversely, the racial category “Asian” is frequently associated with femininity.

These mental associations can lead to a phenomenon known as intersectional invisibility. This theory posits that individuals who do not fit the prototypical stereotypes—specifically Black women and Asian men—are often overlooked or marginalized. Because they do not align with the dominant gendered stereotypes of their racial groups, they may become less visible in cultural representations.

Prior experiments have supported these theories by showing that people are more likely to forget statements made by Black women or Asian men compared to other groups. Schug and her colleagues sought to determine if this psychological bias extended to cultural artifacts. They investigated whether these patterns of invisibility could be quantified in millions of books published over a 120-year period.

To conduct this analysis, the researchers utilized the Google Books Ngram dataset. This massive digital archive contains word frequency data from over 15 million books published between 1900 and 2019. The team examined two specific collections within this dataset: a general corpus of English-language books and a specific corpus containing only fiction texts.

The investigators tracked the frequency of specific phrases, known as “ngrams,” that combine racial and gender identifiers. They searched for terms such as “Black woman,” “Black man,” “Asian woman,” and “Asian man.” To ensure the search was comprehensive, they included various synonyms and historical terms relevant to different time periods.

For the category of Black individuals, the search included terms like “African American” and older designations that were common in the early 20th century. For Asian individuals, the researchers included specific ethnic groups such as Chinese, Japanese, Korean, and Vietnamese. They calculated the raw frequency of these terms to compare their prevalence in fiction versus nonfiction works.
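
The paper's analysis code is not shown in the article, but the tallying step is straightforward to picture. Below is a minimal Python sketch assuming the publicly documented raw Ngram export format (ngram, year, match count, volume count, tab-separated); the synonym lists are abbreviated stand-ins for the fuller term sets the authors describe.

```python
# Minimal sketch (not the authors' code) of tallying yearly counts for
# race-gender categories from a raw Google Books Ngram export file.
# Assumes the documented line format: ngram \t year \t match_count \t volume_count.
from collections import defaultdict

# Abbreviated stand-ins for the fuller synonym sets described in the study.
TERMS = {
    "black_women": {"Black woman", "Black women", "African American woman"},
    "black_men": {"Black man", "Black men", "African American man"},
    "asian_women": {"Asian woman", "Asian women", "Chinese woman"},
    "asian_men": {"Asian man", "Asian men", "Chinese man"},
}

def yearly_counts(ngram_path):
    """Sum match counts per (category, year) across all synonym ngrams."""
    counts = defaultdict(int)
    with open(ngram_path, encoding="utf-8") as fh:
        for line in fh:
            ngram, year, matches, _volumes = line.rstrip("\n").split("\t")
            for category, synonyms in TERMS.items():
                if ngram in synonyms:
                    counts[(category, int(year))] += int(matches)
    return counts
```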

The results from the first part of the study provided evidence supporting the existence of representational invisibility in literature. Throughout the majority of the 20th century, terms referring to Black men appeared more often than terms referring to Black women. This gap was present in both fiction and nonfiction texts.

Similarly, the analysis showed a consistent disparity in representations of Asian identities. References to Asian women generally outnumbered references to Asian men. This pattern persisted across the studied time period, although the gap was particularly pronounced in nonfiction books starting in the 1990s.

The researchers argue that these patterns reflect deep-seated historical stereotypes. For example, historical labor laws and immigration policies often restricted Asian men to domestic roles, which may have contributed to feminized stereotypes. In contrast, historical narratives surrounding Black identity have often focused on men, particularly in the context of labor and political struggle.

The study also included a comparison with White gender categories. The data showed that references to White men far exceeded references to White women. This finding aligns with the concept of androcentrism, where men are treated as the default representation of a group.

While the general patterns supported the theory of intersectional invisibility, the researchers observed a notable shift beginning in the late 20th century. In nonfiction books, references to Black women began to increase substantially around 1980. Eventually, the frequency of terms for Black women surpassed those for Black men in nonfiction texts.

To understand the drivers behind these shifts, the authors conducted a second study. They hypothesized that specific social movements might be influencing how often these groups were mentioned in print. They focused on the Civil Rights Movement and the Black Feminist movement.

The team identified key terms associated with these movements. For the Civil Rights Movement, they tracked phrases like “Civil Rights Movement” and “Black Power.” For the Black Feminist movement, they tracked terms such as “Black feminist” and “womanist.”

They then used statistical models to analyze the relationship between these movement-related terms and the frequency of race-gender categories over time. The analysis examined whether a rise in social movement terminology corresponded with a rise in the visibility of specific groups.
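
The article does not reproduce the authors' models, but the general shape of such an analysis can be sketched. The example below regresses year-over-year changes in one category's frequency on changes in movement-term frequency, using synthetic stand-in data; first-differencing is one common guard against spurious correlation between trending time series and is an assumption here, not a detail reported in the study.

```python
# Illustrative sketch only: does movement-term frequency track a
# race-gender category's frequency over time? Synthetic data stands in
# for the per-year frequencies tallied from the ngram corpus.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120  # 1900-2019
cr = np.abs(np.cumsum(rng.normal(0, 0.02, n)))  # civil-rights term frequency
bf = np.abs(np.cumsum(rng.normal(0, 0.02, n)))  # Black feminist term frequency
bw = 0.5 * bf + np.abs(np.cumsum(rng.normal(0, 0.02, n)))  # "Black women" frequency

df = pd.DataFrame({"black_women": bw, "civil_rights": cr, "black_feminist": bf})

# First-difference each series so that two merely trending series do not
# look related, then regress year-over-year changes on each other.
d = df.diff().dropna()
fit = smf.ols("black_women ~ civil_rights + black_feminist", data=d).fit()
print(fit.params)
```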

The findings indicated a strong link between the Civil Rights Movement and the representation of Black men. Increases in terms related to Civil Rights were positively associated with increases in references to Black men in both fiction and nonfiction. This suggests that the discourse of this era primarily elevated the visibility of Black men.

In contrast, the Civil Rights terminology did not show a significant positive association with references to Black women. This aligns with critiques from scholars like Kimberlé Crenshaw. Crenshaw has argued that antiracist efforts during that era often focused on the experiences of Black men, while feminist efforts often focused on White women.

However, the data revealed a different pattern regarding the Black Feminist movement. The rise in terms associated with Black Feminism was a significant predictor of increased references to Black women. This effect was particularly strong in nonfiction texts.

This suggests that the Black Feminist movement played a role in correcting the historical invisibility of Black women in literature. As scholars and activists began to produce more work centered on the experiences of Black women, the language in published books shifted to reflect this focus.

The study did observe some differences between fiction and nonfiction. For instance, while Black Feminism terms predicted more mentions of Black women in nonfiction, they were negatively associated with mentions of Black men in fiction. This indicates that different genres may respond to cultural shifts in distinct ways.

The researchers note that the patterns for Asian men and women remained relatively stable compared to the shifts seen for Black men and women. The representation of Asian men remained lower than that of Asian women throughout most of the period. The authors suggest that future research could investigate if specific Asian American social movements have had similar effects on representation.

But there are some limitations to consider. The Google Books dataset, while vast, is not a perfect representation of all culture. It tends to overrepresent academic and scientific publications, which might skew the results toward scholarly discourse rather than everyday language.

Additionally, the study is correlational. This means that while the rise in social movement terms coincides with changes in representation, it does not definitively prove that the movements caused the changes. Other unmeasured societal factors could have contributed to these trends.

The researchers also point out the complexity of the term “Asian” in their analysis. The study primarily utilized terms related to East Asian identities. This focus means the findings may not fully capture the experiences of South Asian or Southeast Asian groups.

Despite these limitations, the study offers new insights into how cultural stereotypes are preserved and challenged over time. It provides empirical evidence that the “invisibility” of certain groups is not just a theoretical concept but a measurable phenomenon in the written record.

The findings also highlight the potential of social movements to alter widespread cultural narratives. The increase in references to Black women following the rise of Black Feminism suggests that concerted intellectual and political efforts can successfully challenge representational biases.

Future research could build on this work by using more advanced text analysis methods. Newer techniques could examine the context in which these words appear, rather than just their frequency. This would allow for a deeper understanding of the quality of representation, beyond just the quantity.

The study, “A historical psychology approach to gendered racial stereotypes: An examination of a multi-million book sample of 20th century texts,” was authored by Joanna Schug, Monika Gosin, and Nicholas P. Alt.

How common is rough sex? Research highlights a stark generational divide

15 December 2025 at 23:00

Recent trends in popular culture suggest that sexual behaviors involving physical force, such as choking or spanking, have moved from the fringes into the mainstream. A new study involving a nationally representative sample of adults provides evidence that these practices are widespread in the United States, particularly among younger generations. Published in the Archives of Sexual Behavior, the findings indicate that while many adults engage in these acts consensually, a significant portion of the population has also experienced them without permission.

The prevalence of “rough sex” appears to have increased over the last decade. Depictions of these behaviors have become common in television, music, and social media. This visibility may lead to the perception that such practices are a standard or expected part of sexual intimacy. While these acts can enhance pleasure and intimacy for many, public health professionals have raised questions about safety and consent.

Previous attempts to measure these behaviors have often faced methodological hurdles. Many earlier surveys relied on data that is now outdated or focused exclusively on college students, limiting the ability to apply findings to the general public. Other studies used non-probability samples, such as online opt-in panels, which may not accurately reflect the broader population. Additionally, standard public health surveys often focus on disease prevention and pregnancy, omitting specific questions about acts like choking or slapping.

Debby Herbenick, a professor at the Indiana University School of Public Health, led the new research. Herbenick and her colleagues sought to fill the gaps in existing literature by collecting current data from a diverse range of ages and backgrounds. Their objective was to provide precise estimates of how many Americans engage in these behaviors and to identify demographic factors associated with them.

To achieve this, the researchers analyzed data from the 2022 National Survey of Sexual Health and Behavior. This survey is a recurring project that gathers detailed information on the sexual lives of Americans. The team used the Ipsos KnowledgePanel to recruit participants. This panel utilizes address-based sampling methods to create a pool of respondents that is statistically representative of the United States non-institutionalized adult population.

The final sample consisted of 9,029 adults between the ages of 18 and 94. The survey presented participants with a list of ten specific sexual behaviors. These included hair pulling, biting, face slapping, genital slapping, light spanking, hard spanking, choking, punching, name-calling, and smothering. The researchers avoided using the potentially ambiguous term “rough sex” in the questions. Instead, they asked about each specific act individually.

Participants reported their experiences in three distinct contexts. They indicated if they had performed these acts on a partner. They also indicated if a partner had done these acts to them with permission or consent. Finally, they reported if a partner had done these acts to them without permission or consent.

The results indicated that engagement in these behaviors is common. Approximately 48 percent of women and 61 percent of men reported having ever performed at least one of the listed behaviors on a partner. When it came to receiving these acts with consent, about 54 percent of women and 46 percent of men reported having at least one such experience.

Age emerged as a strong predictor of engagement. The researchers observed a substantial divide between adults under the age of 40 and those in older cohorts. Younger adults were significantly more likely to report both performing and receiving these behaviors. For instance, while choking a partner was rarely reported by men over the age of 50, it was a common experience for men in their 20s and 30s.

The types of behaviors reported varied in intensity. Biting and light spanking were among the most common activities reported by all groups. More intense behaviors, such as punching or smothering, were reported less frequently.

Gender patterns in the data generally aligned with traditional roles. Men were more likely to report being the ones to perform the acts, such as spanking or choking a partner. Conversely, women were more likely to report being on the receiving end of these behaviors. This suggests that even within practices considered “kinky” or alternative, mainstream participation often mirrors conventional active-male and passive-female scripts.

Transgender and gender nonbinary participants reported high rates of engagement across all categories. About 71 percent of these individuals reported ever performing at least one of the acts on a partner. Similarly, roughly 72 percent reported receiving at least one of the acts with consent.

One of the most concerning findings related to non-consensual experiences. The survey revealed that a substantial number of adults have been subjected to rough sex behaviors without their agreement. Approximately 20 percent of women reported that a partner had performed at least one of the ten behaviors on them without permission.

The rates of non-consensual experiences were also notable for men, with about 16 percent reporting such incidents. The risk was highest for transgender and gender nonbinary individuals. Approximately 35 percent of this group reported experiencing at least one of the behaviors without consent.

These findings align with and expand upon several lines of previous inquiry regarding rough sex. For example, a 2024 study by Döring and colleagues surveyed a national sample of German adults using an online panel. They found a lifetime prevalence of rough sex involvement at 29 percent. Similar to the current U.S. study, the German researchers identified a steep age gradient. Younger participants were much more likely to engage in these acts than older cohorts.

The German study also mirrored the gendered nature of these interactions observed in the U.S. data. Döring’s team found that men were significantly more likely to take an active role, while women were more likely to take a passive role. This consistency across Western nations suggests that the rise of rough sex is occurring within the boundaries of traditional gender expectations rather than subverting them.

Earlier research involving U.S. college students also provides context for the current findings. A 2021 study by Herbenick and colleagues found that nearly 80 percent of sexually active undergraduates had engaged in rough sex.

The most common behaviors identified in that probability sample—choking, hair pulling, and spanking—match the most prevalent behaviors in the new national adult study. The extremely high rates among college students align with the age-related trends seen in the adult data. It appears that emerging adults are the primary demographic driving these statistics.

Research from an evolutionary psychology perspective offers potential explanations for why these behaviors are occurring. Studies by Burch and Salmon have suggested that consensual rough sex is often driven by a desire for novelty rather than aggression. Their work with undergraduates indicated that people who consume pornography are more likely to seek out these novel experiences. They also found that men were more likely to initiate rough sex in response to feelings of jealousy.

Burch and Salmon’s findings framed these behaviors as largely recreational and resulting in little physical injury. The current study complicates that narrative. While many respondents reported consensual engagement, the high rates of non-consensual experiences indicate that these behaviors are not always harmless play. The prevalence of non-consensual choking and slapping suggests a darker side to the normalization of rough sex that novelty-seeking theories may not fully address.

The researchers pointed out several limitations to their study. The list of ten behaviors may not capture the full spectrum of what individuals consider to be rough sex. Additionally, the survey did not measure the “wantedness” of the acts. It is possible for an act to be consensual but not necessarily desired or enjoyed, and the study did not make this distinction.

The study also grouped bisexual and pansexual individuals together for analysis. This decision was made due to sample sizes but may obscure unique experiences within these distinct identities. Furthermore, the reliance on self-reported data means that memory recall could influence the accuracy of the lifetime prevalence estimates.

Future research aims to explore the nuances of consent in these scenarios. The researchers suggest investigating how partners communicate boundaries regarding specific acts like choking or slapping. Understanding the context in which non-consensual acts occur—whether as part of an otherwise consensual encounter or as distinct assaults—is a priority for public health.

The study, “Prevalence and Demographic Correlates of ‘Rough Sex’ Behaviors: Findings from a U.S. Nationally Representative Survey of Adults Ages 18–94 Years,” was authored by Debby Herbenick, Tsung-chieh Fu, Xiwei Chen, Sumayyah Ali, Ivanka Simić Stanojević, Devon J. Hensel, Paul J. Wright, Zoë D. Peterson, Jaroslaw Harezlak, and J. Dennis Fortenberry.


Progressives and traditional liberals generate opposing mental images of J.K. Rowling

15 December 2025 at 22:00

New research published in the Personality and Social Psychology Bulletin reveals a psychological split within the political left regarding perceptions of in-group dissenters. The study indicates that self-identified Progressives and Traditional Liberals generate fundamentally different mental images of author J.K. Rowling based on her views regarding gender identity. While Progressives conceptualize Rowling as appearing cold and right-wing, Traditional Liberals visualize her in a warm and positive light.

Political psychology has historically focused on the ideological conflict between the Left and the Right. Scholars have frequently characterized right-wing individuals as more prone to rigidity and hostility toward out-groups. However, recent academic inquiries have shifted focus to the increasing fragmentation within the left-wing itself. This internal division is often categorized into two distinct subgroups: Progressives and Traditional Liberals.

Elena A. Magazin, Geoffrey Haddock, and Travis Proulx from Cardiff University conducted this research to investigate how these two groups perceive ideological dissenters from within their own ranks. The researchers utilized the Progressive Values Scale (PVS) to distinguish between the groups.

This scale identifies Progressives as those who emphasize mandated diversity, concern over cultural appropriation, and the public censure of offensive views. In contrast, Traditional Liberals tend to favor free expression and gradual institutional change over activist approaches.

The primary objective was to determine if the tendency to derogate—or negatively perceive—others extends to members of one’s own political group who hold controversial views. J.K. Rowling served as the focal point for this investigation.

Rowling is a prominent figure who has historically supported left-wing causes but has recently expressed “gender critical” views that conflict with the “gender self-identification” stance held by many on the Left. The researchers sought to visualize how these political orientations shape the mental representations of such a figure.

The researchers employed a technique known as reverse correlation to capture these internal mental images. This method allows scientists to visualize a participant’s internal representation of a person or group without asking them to draw or describe features explicitly. In the first study, the team recruited 82 left-wing university students in the United Kingdom to act as “generators.”

During the image generation phase, participants viewed pairs of faces derived from a neutral base image overlaid with random visual noise. For each pair, they selected the face that best resembled their mental image of J.K. Rowling. By averaging the selected images across hundreds of trials, the researchers created composite “classification images” representing the average visualization of Rowling for Progressives and Traditional Liberals respectively.
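
Reverse correlation has a simple computational core: average the noise patterns of whichever faces participants chose, then add that average back onto the base image. The sketch below simulates this step in NumPy; the image size, noise level, and random “choices” are illustrative placeholders for real stimuli and human responses, not the authors' pipeline.

```python
# Schematic sketch of the reverse-correlation averaging step. The pair
# shown on each trial is (base + noise) vs. (base - noise); the noise of
# the chosen face is kept and averaged across trials.
import numpy as np

rng = np.random.default_rng(42)
SIZE = (256, 256)
base_face = rng.uniform(0.3, 0.7, SIZE)  # stand-in for the neutral base photo

def generator_session(n_trials=500):
    """Simulate one participant's selections and return their mean noise."""
    total = np.zeros(SIZE)
    for _ in range(n_trials):
        noise = rng.normal(0.0, 0.1, SIZE)
        chosen = noise if rng.random() < 0.5 else -noise  # stand-in for a human choice
        total += chosen
    return total / n_trials

# Averaging selected noise across trials (and across a group's generators)
# and adding it back to the base yields that group's classification image.
classification_image = base_face + generator_session()
```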

A separate group of 178 undergraduates then served as “raters.” These participants evaluated the resulting composite images on various character traits, such as warmth, competence, morality, and femininity. The raters were unaware of how the images were generated or which political group created them.

The results from Study 1 provided evidence of a stark contrast in perception. The image of Rowling generated by Progressives was rated as cold, incompetent, immoral, and relatively masculine. Raters also perceived this face as appearing “right-wing” and prejudiced.

On the other hand, the image generated by Traditional Liberals was evaluated positively across these dimensions. It appeared warm, competent, feminine, and distinctly left-wing. This suggests that while Progressives mentally penalized the dissenter, Traditional Liberals maintained a flattering perception of her.

To ensure these findings were not limited to a specific demographic or location, the researchers conducted a second study with a more diverse sample. Study 2 involved 382 adults from the United States. This experiment aimed to replicate the findings and expand upon them by including abstract targets alongside concrete ones.

Participants were asked to generate images for four different categories. These included specific public figures, such as J.K. Rowling (representing gender critical views) and Lady Gaga (representing gender self-identification views). They also generated images for generalized, abstract descriptions of a “fellow left-winger” who held either gender critical or self-identification beliefs.

Following the generation phase, 301 distinct participants rated the eight resulting composite images. The findings from the second study reinforced the patterns observed in the first. In general, faces representing gender critical views were rated more negatively than those representing self-identification views. This aligns with the general left-wing preference for the self-identification model.

However, the degree of negativity varied by generator type. Progressives consistently generated gender critical faces that were evaluated more harshly than those generated by Traditional Liberals. This held true for both the abstract descriptions and the specific example of J.K. Rowling.

A specific divergence occurred regarding the concrete representation of Rowling. Consistent with the UK study, US Progressives generated a negative image of the author. In contrast, US Traditional Liberals generated an image that raters viewed as warm, competent, and moral. This occurred even though Traditional Liberals generated a negative image for the abstract concept of a gender critical person.

This discrepancy suggests a nuanced psychological process for Traditional Liberals. While they may disagree with the abstract views Rowling holds, their mental representation of her as an individual remains protected by a “benevolent exterior.” They appear to separate the person from the specific ideological disagreement in a way that Progressives do not.

The researchers also noted an unexpected pattern regarding gender perception. In both studies, the images of Rowling generated by Progressives were rated as looking less feminine and more masculine than those generated by Traditional Liberals. This finding implies that the devaluation of a target may involve stripping away gender-congruent features.

This research has limitations worth noting. The first study relied heavily on a student population that was predominantly female and white. While the second study expanded the demographic range, both studies focused exclusively on the issue of gender identity. It remains unclear whether this pattern of intra-left derogation would apply to other contentious topics, such as economic policy or foreign affairs.

Future research could explore these boundaries by using different targets of dissent. It would be valuable to investigate whether these visual biases persist if a dissenter apologizes or recants their views. Additionally, further study is needed to understand the “masculinization” effect observed in the Progressive-generated images.

These findings provide evidence that the political left is not a monolith regarding social cognition. The distinction between Progressives and Traditional Liberals involves more than just policy disagreements. It appears to involve fundamental differences in how they visualize and socially evaluate those who deviate from group norms.

The study, “The Face of Left-Wing Dissent: Progressives and Traditional Liberals Generate Divergently Negative and Positive Representations of J.K. Rowling,” was authored by Elena A. Magazin, Geoffrey Haddock, and Travis Proulx.

Paternal psychological strengths linked to lower maternal inflammation in married couples

15 December 2025 at 15:00

A new study published in Biopsychosocial Science and Medicine suggests that a father’s psychological resilience may play a significant role in the biological health of his pregnant partner and the duration of her pregnancy. The research indicates that for married couples, a father’s internal strengths are linked to lower systemic inflammation in the mother, which in turn predicts a longer gestational length.

Premature birth and low birth weight are significant public health concerns that can lead to long-term developmental challenges for children. Infants born too early or too small face increased risks for health problems such as hypertension, diabetes, and difficulties with emotional regulation later in life.

Medical professionals understand that high levels of inflammation in a mother’s body during pregnancy can increase the risk of these adverse birth outcomes. While biological changes are normal during gestation, excessive inflammation can disrupt the delicate environment required for fetal development.

Past scientific inquiries have largely focused on identifying risk factors, such as socioeconomic disadvantage and chronic stress, that drive this inflammation. Less attention has been paid to positive psychological factors that might act as a buffer against these risks.

The concept of “resilience resources” refers to a safety net of psychological strengths that allow individuals to adapt successfully in the face of challenges. These resources typically include optimism, self-esteem, a sense of mastery over one’s life, and social support.

The current study sought to determine if these resilience resources could protect against inflammation during pregnancy. Most prior work in this area has focused solely on the pregnant mother. This leaves a gap in understanding how a father’s psychological state might influence the pregnancy’s progression.

“We’ve known for quite some time that adverse birth outcomes, like preterm delivery, can have long-term consequences for the health of the child. We have also learnt about psychological and biological factors in pregnant people, like stress and excess inflammation, which can raise the risk for outcomes like preterm delivery,” said study author Kavya Swaminathan, a doctoral student at UC Merced.

“However, we found that relatively little was known about whether psychological factors, social support, optimism, self-esteem, and mastery (i.e., resilience resources) could offer protective benefits. Relatedly, we recognized that there was limited research examining the role of both parents in protecting against adverse birth outcomes. To fill all these gaps in the literature, we decided to test whether resilience resources in the parents predicted lower inflammation in the mother and thus lower the risk for preterm delivery.”

The research team analyzed data from the Community Child Health Network. This was a large, prospective study focusing on families from diverse backgrounds across five sites in the United States. The sites included Los Angeles, Washington D.C., Baltimore, Lake County in Illinois, and rural eastern North Carolina. The study specifically recruited families from communities with high proportions of residents living at or below the federal poverty line.

The researchers focused on a final sample of 217 couples who provided data during a subsequent pregnancy following the birth of an initial child. The participants included mothers and fathers who identified as Black, Hispanic, and White. The team assessed resilience resources using four validated psychological surveys.

Dispositional optimism was measured using the Life Orientation Test, which asks individuals about their expectations for the future. Self-esteem was evaluated using the Rosenberg Self-Esteem Scale to gauge feelings of self-worth. Mastery, or the sense of control over one’s life, was assessed with a scale asking participants if they felt they could achieve their goals. Finally, perceived social support was measured by asking participants if they had people available to help them if needed.

To measure physiological inflammation, the team collected biological samples from the mothers. They utilized dried blood spots taken from a finger prick during the second and third trimesters of pregnancy. These samples were analyzed for C-Reactive Protein. This protein is a substance produced by the liver in response to inflammation. High levels of this protein are often used as a marker for systemic inflammation in the body.

The researchers utilized a statistical method known as structural equation modeling to analyze the relationships between these variables. They combined the four psychological measures into a single “resilience resource” factor for each parent. They then tested whether these factors predicted the mother’s levels of C-Reactive Protein and, subsequently, the baby’s birth weight and gestational age.
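
In lavaan-style syntax, which the Python package semopy accepts, the reported structure might be specified roughly as below. The variable names and the input file are assumptions for illustration, not the authors' specification.

```python
# Sketch of the reported latent-factor-and-mediation structure in semopy.
# Column names and the data file are hypothetical.
import pandas as pd
from semopy import Model

spec = """
# Latent resilience factors built from the four surveys
mom_resilience =~ mom_optimism + mom_self_esteem + mom_mastery + mom_support
dad_resilience =~ dad_optimism + dad_self_esteem + dad_mastery + dad_support

# Structural paths: resilience -> maternal CRP -> birth outcomes
crp ~ mom_resilience + dad_resilience
gestational_age ~ crp
birth_weight ~ crp
"""

data = pd.read_csv("couples.csv")  # hypothetical couple-level dataset
model = Model(spec)
model.fit(data)
print(model.inspect())  # path estimates and standard errors
```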

The data revealed a specific pathway of influence originating from the fathers. Higher levels of resilience resources in fathers were associated with lower levels of C-Reactive Protein in mothers during pregnancy. In turn, lower levels of this inflammatory marker predicted a longer gestational length. This suggests that a father’s psychological stability may dampen biological stress responses in his partner.

This chain of associations was not uniform across all participants in the study. The link between paternal resilience, maternal inflammation, and pregnancy length was statistically significant only among married couples. It was not observed in couples who were cohabiting but unmarried. The effect was also absent in parents who were neither married nor living together.

“Our findings essentially suggest that in married couples, a father’s psychological strengths, his resilience, are not only relevant to his well-being, but can also impact the health of his pregnant partner and unborn child,” Swaminathan told PsyPost. “Thus, as we try to support the pregnant people in our lives, it might also be useful to try to bolster resilience in the father, who can, in turn, help buffer adverse health outcomes in his partner.”

The researchers did not find evidence that the mother’s own resilience resources directly lowered her inflammation or influenced birth outcomes in this specific statistical model. While maternal and paternal resilience scores were correlated—meaning resilient mothers tended to have resilient partners—the direct benefit to gestational length appeared to flow through the father’s influence on maternal inflammation. Additionally, the study did not find a significant link between these factors and infant birth weight, only gestational length.

“At the outset, we were interested in the protective effects of both parents’ resilience resources on adverse birth outcomes,” Swaminathan said. “We were surprised to find that although paternal resilience resources seemed to matter for inflammation, and thereby, gestational length, maternal resources did not. This, to us, suggested that perhaps maternal resources offer protection in different ways that we did not test in this study.”

The researchers propose several theoretical reasons for these observations. Committed relationships often involve a process called coregulation. This occurs when partners’ physiological and emotional states become linked to one another. A resilient father may be better equipped to provide tangible support, such as assisting with daily tasks or encouraging adherence to medical advice. This support can reduce the mother’s overall stress load.

Reduced stress typically results in a calmer immune system and lower production of inflammatory proteins. The “self-expansion theory” of love also offers a potential explanation. This theory suggests that in close relationships, individuals include their partner’s resources and identity into their own sense of self. A mother may psychologically benefit from her partner’s optimism and sense of mastery, effectively “borrowing” his resilience to buffer her own stress response.

The specificity of the finding to married couples warrants further consideration. Marriage often implies a higher level of long-term commitment and possibly greater time spent together compared to other relationship structures. This increased proximity and commitment might facilitate stronger coregulation and more consistent resource sharing. Married fathers in this sample also reported higher average levels of resilience resources than unmarried fathers, which could contribute to the stronger effect.

The study has certain limitations that affect how the results should be interpreted. The research design was observational rather than experimental. This means it cannot definitively prove that the father’s resilience caused the changes in the mother’s biology. It is possible that other unmeasured variables influenced the results.

Future research is needed to understand why the protective effect was specific to married couples in this dataset. Scientists should investigate whether the quality of the relationship or the amount of time spent together explains the difference. It would also be beneficial to examine other biological markers beyond inflammation. Cortisol, a stress hormone, might be another pathway through which resilience influences pregnancy.

The study, “Parental resilience resources and gestational length: A test of prenatal maternal inflammatory mediation,” was authored by Kavya Swaminathan, Christine Guardino, Haiyan Liu, Christine Dunkel Schetter, and Jennifer Hahn-Holbrook.

Analysis of 20 million posts reveals how basic psychological needs drive activity in extremist chatrooms

14 December 2025 at 23:00

A recent study suggests that participation in online extremist communities may be driven by the search to satisfy basic psychological needs. This research, published in the journal Social Psychological and Personality Science, found that users whose posts reflected a sense of agency and capability were more active and stayed in these groups for longer periods. The findings provide evidence that extremist environments might serve as a space where individuals attempt to satisfy fundamental desires for personal growth and social connection.

The rise of far-right extremist movements has led to an increase in religious and ethnic violence across the globe. Researchers have noted that these ideologies are often spread through social media and private chatrooms that allow for easy communication and organization. Despite years of study, the exact reasons why individuals are drawn to these digital spaces remain only partially understood.

Jeremy J. J. Rappel and his colleagues at McGill University conducted this research to see if established theories of human motivation could explain extremist behavior. They focused on basic psychological needs theory, which is a well-supported framework in psychology. This theory suggests that all humans have three primary needs: autonomy, competence, and relatedness.

Autonomy refers to the need to feel that one’s actions and thoughts are authentic and self-chosen. Competence is the desire to feel capable and effective in achieving goals or performing tasks. Relatedness is the need to feel a sense of belonging and to have meaningful connections with other people.

The researchers proposed that extremist groups might appeal to people because they offer a way to satisfy these needs. A person who feels powerless or lonely in their daily life might turn to a digital community that promises a sense of empowerment or camaraderie. While these groups are often outside of social norms, the psychological drive to join them might be the same drive that leads others to join sports teams or civic organizations.

To test these ideas, the research team analyzed a massive dataset of leaked conversations from the messaging platform Discord. The data came from a public database of over 200 extremist chatrooms that included fascists, white supremacists, and conspiracy theorists. The final sample was immense, consisting of approximately 20 million posts written by more than 86,000 individual users.

Because the dataset was so large, the researchers used natural language processing, a computational technique that allowed them to analyze the meaning of millions of posts without reading each one manually. They used a tool known as the Universal Sentence Encoder, which converts text into numerical scores representing its semantic meaning.

The team compared the posts made by Discord users to standardized survey questions used by psychologists to measure autonomy, competence, and relatedness. If a user’s posts were mathematically similar to the language of those survey questions, the user received a higher score for that specific need. This method allowed the researchers to estimate the psychological state of each user based on their natural speech patterns.
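
As a rough illustration of this scoring approach, the snippet below embeds posts and survey-style items with the Universal Sentence Encoder from TensorFlow Hub and takes their mean cosine similarity. The item wordings here are abbreviated stand-ins, not the validated scales the researchers used.

```python
# Minimal sketch of similarity-based need scoring (not the study's code).
import numpy as np
import tensorflow_hub as hub

encoder = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

ITEMS = {  # abbreviated stand-ins for real survey items
    "autonomy": ["I feel free to decide how to live my life."],
    "competence": ["I feel capable at the things I do."],
    "relatedness": ["I feel close and connected to other people."],
}

def need_scores(posts):
    """Mean cosine similarity between a user's posts and each need's items."""
    post_vecs = np.asarray(encoder(posts))
    post_vecs /= np.linalg.norm(post_vecs, axis=1, keepdims=True)
    scores = {}
    for need, items in ITEMS.items():
        item_vecs = np.asarray(encoder(items))
        item_vecs /= np.linalg.norm(item_vecs, axis=1, keepdims=True)
        scores[need] = float((post_vecs @ item_vecs.T).mean())
    return scores

print(need_scores(["we finally organized the meetup ourselves"]))
```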

The researchers also included a control measure to ensure their results were accurate. They compared the user posts to a survey about food neophobia, which is the fear of trying new foods. Since a fear of new foods has nothing to do with extremism, this helped the team account for general patterns in how people use language. This step ensured that the findings were truly about psychological needs rather than just the way people structure their sentences.

To make the study more reliable, the team split their data into two halves. They used the first half to explore their ideas and the second half to confirm that their findings were consistent. This approach helps prevent scientists from finding patterns in data that only appear by chance.

The results showed a clear link between psychological needs and how people behave in these chatrooms. Users whose language reflected high levels of autonomy and competence tended to be much more engaged. They made more posts overall and remained active in the chatrooms for a longer number of days.

Competence was the strongest predictor of how many posts a person would make. This suggests that people who feel effective or capable in these spaces are more likely to contribute to the conversation. Autonomy also played a significant role, as users who felt a sense of agency were more likely to stay involved with the group over time.

A different pattern was observed for the need for relatedness. While there was some evidence that social connection was linked to activity, the results were less consistent than those for autonomy and competence. In some models, relatedness was actually linked to fewer posts, which was a surprising outcome.

The researchers also looked at the use of hate terms as a measure of extremist signaling. They found that users who expressed more autonomy and competence used fewer hate terms in their posts. This suggests that people who feel more personally secure and capable may have less of a need to use aggressive language against others.

On the other hand, a higher need for relatedness was linked to a greater use of hate terms. The researchers suggest that this might be because new members use extreme language to gain acceptance from the group. By adopting the group’s hateful rhetoric, they may be attempting to prove their loyalty and satisfy their need for belonging.

These findings share similarities with a study published in 2021 in the Journal of Experimental Social Psychology. That previous research, led by Abdo Elnakouri, found that expressing hatred toward large groups or institutions can give people a greater sense of meaning in life. Both studies suggest that extreme attitudes and group participation serve a psychological function for the individual.

The earlier study by Elnakouri found that collective hate can make people feel more energized and determined. It suggests that having a clear enemy to fight against can simplify the world and provide a sense of purpose. The McGill study builds on this by showing how these motivations play out in real world digital interactions over long periods.

But there are some limitations that should be considered. Since the data came from leaked chatroom logs, the researchers could not ask the users for their consent or follow up with them directly. Additionally, the computer models could not always tell if a user was expressing that a need was being met or if they were complaining that it was being frustrated.

The researchers noted that the analysis focused only on text and did not include images, videos, or emojis. These visual elements are common in online extremist culture and might carry additional psychological weight. Future research could look at how visual media contributes to satisfying psychological needs in these spaces.

The study also could not account for “lurkers,” who are people who read the messages but never post anything. It is possible that the psychological needs of these silent observers are different from those who are highly active. Understanding the motivations of this quieter group could be a helpful direction for future investigations.

Despite these limitations, the study provides a new way to think about how people become radicalized. It suggests that instead of focusing only on ideology, it may be helpful to look at the psychological benefits people get from these groups.

The study, “Basic Psychological Needs Are Associated With Engagement and Hate Term Use in Extremist Chatrooms,” was authored by Jeremy J. J. Rappel, David D. Vachon, and Eric Hehman.


New study suggests “Zoom fatigue” is largely gone in the post-pandemic workplace

14 December 2025 at 19:00

A new study published in the Journal of Occupational Health Psychology has found that the phenomenon popularly known as “Zoom fatigue” may have largely dissipated in the post-pandemic work environment. The findings suggest that video meetings are no longer significantly more exhausting than other types of meetings for most employees. This research challenges the narrative that virtual communication is inherently draining and indicates that workers may have adapted to the demands of remote collaboration.

The rapid shift to remote work during the COVID-19 pandemic necessitated a heavy reliance on video conferencing tools to maintain organizational operations. During this period, many employees reported feeling an unusual sense of exhaustion following these virtual interactions. This collective experience was quickly labeled “Zoom fatigue.” Previous empirical studies conducted during the height of the pandemic supported these anecdotal claims. They found a correlation between the frequency of video meetings and higher levels of daily fatigue among workers.

Various theories arose to explain why video calls might be uniquely taxing. Some researchers proposed that the cognitive load of video meetings was to blame. This theory posits that users must expend extra mental energy to monitor their own appearance on camera and to interpret non-verbal cues that are harder to read through a screen. Others suggested a theory of “passive fatigue.” This perspective argues that the lack of physical movement and the under-stimulation of sitting in front of a computer monitor lead to drowsiness and low energy.

However, the context of work has evolved since the early days of the pandemic. For many, video meetings are no longer a forced substitute for all human contact but rather a standard tool for business communication. The researchers behind the current study sought to determine if the exhaustion associated with video calls was a permanent feature of the technology or a temporary symptom of the pandemic era. They aimed to update the scientific understanding of virtual work by replicating a 2022 study with data collected in 2024.

“We conducted this study from both pure research curiosity, and a practical lens. As our first paper from the pandemic times (Nesher Shoshan & Wehrt, 2022) in which we identified that ‘Zoom fatigue’ exists got a lot of attention, we were interested to know if the results can be replicated in a different, post-pandemic setting, and with a stronger empirical approach (larger sample, another measurement point, a more sophisticated analysis),” said Hadar Nesher Shoshan, a junior professor at Johannes Gutenberg University Mainz.

“Practically, we found out that our first study is being used to make organizational decisions. This is a large responsibility, that we wanted to make sure is updated and evidence based.”

To investigate this, the researchers utilized an experience sampling method. This approach allows researchers to capture data from participants in real-time as they go about their daily lives, rather than relying on retrospective surveys that can be subject to memory errors. The study was conducted in Germany in April 2024.

The research team recruited 125 participants who worked at least 20 hours per week and regularly attended video meetings. The participants represented various industries, including communication, service, and health sectors. Over a period of ten working days, these individuals completed short surveys at four specific times each day. This rigorous schedule resulted in a dataset covering 590 workdays and 945 distinct meetings.

In each survey, participants reported details about the last work meeting they had attended. They specified the medium of the meeting, such as whether it was held via video, telephone, face-to-face, or through written chat. They also rated their current levels of emotional exhaustion and “passive fatigue,” which was defined as feelings of sleepiness or lack of alertness.

The researchers also collected data on several potential moderating factors. They asked participants to rate their own level of active participation in the meeting, as well as the participation level of the group. They inquired about multitasking behaviors during the call. Additionally, they recorded objective characteristics of the meetings, such as the duration in minutes and the number of attendees.

The analysis of this extensive dataset revealed that video meetings were not related to higher levels of exhaustion compared to non-video meetings. Participants did not report feeling more drained or more drowsy after a video call than they did after a face-to-face meeting or a phone call. This finding held true even when the researchers statistically controlled for the level of exhaustion participants felt before the meeting began.
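
Analytically, experience-sampling data of this kind is typically handled with multilevel models: meetings nested within participants, with pre-meeting exhaustion entered as a covariate. A minimal sketch using statsmodels is shown below; the column names and input file are assumptions, not the authors' code.

```python
# Sketch of a multilevel (mixed-effects) model for experience-sampling
# data: repeated meetings nested within participants. File and column
# names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("meetings.csv")  # hypothetical long-format dataset

model = smf.mixedlm(
    "exhaustion_post ~ is_video + exhaustion_pre + duration_min + n_attendees",
    data=df,
    groups=df["participant_id"],  # random intercept per participant
).fit()
print(model.summary())
```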

The researchers also examined whether working from home influenced these results. The analysis showed that the location of the worker did not moderate the relationship between video meetings and fatigue. This suggests that the environment of the home office is not a primary driver of the exhaustion previously associated with video calls.

“Our initial hypothesis was that zoom fatigue still existed. After all, all previous studies had come to this conclusion, so there was no reason to doubt that this result was correct,” said Nesher Shoshan. “However, we found no evidence of the phenomenon! According to our findings, online meetings are not more fatiguing than in-person meetings.”

Regarding the specific behaviors within meetings, the researchers found that active participation and multitasking did not significantly alter the fatigue levels associated with video meetings. Whether an individual spoke frequently or remained quiet did not change the likelihood of experiencing exhaustion. Similarly, checking emails or performing other tasks during the meeting did not appear to increase the mental load enough to cause significant fatigue.

The study did identify one specific factor that made a difference: the duration of the meeting. The results indicated that video meetings lasting less than 44 minutes were actually less exhausting than meetings held through other media. This suggests there is a “sweet spot” for virtual collaboration where the efficiency of the format outweighs its cognitive costs. However, once a video meeting exceeded this time frame, the advantage disappeared, and fatigue levels became comparable to other meeting types.
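
A threshold like 44 minutes is the kind of crossover point a moderated regression yields: if the model includes a video indicator and its interaction with duration, the video-versus-other difference is b1 + b3 × duration, which changes sign at duration = −b1/b3. The toy coefficients below are invented purely to illustrate that arithmetic, not taken from the paper.

```python
# Crossover point of a (hypothetical) video-by-duration interaction:
# exhaustion ~ ... + b1*is_video + b3*(is_video * duration_min)
b1 = -0.22   # video meetings start out less exhausting...
b3 = 0.005   # ...but the advantage shrinks with each added minute
crossover_minutes = -b1 / b3
print(f"video advantage disappears at ~{crossover_minutes:.0f} minutes")  # ~44
```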

Another finding involved the role of boredom. The researchers observed that when participants rated a video meeting as boring, it was associated with slightly higher levels of exhaustion compared to boring meetings held in other formats. This lends some support to the idea that under-stimulation can be a negative factor in virtual environments, even if it does not lead to general “Zoom fatigue.”

The researchers propose several explanations for why their results differ from pandemic-era studies. They suggest that the “Zoom fatigue” observed in 2020 and 2021 may have been largely driven by the historical context. During the lockdowns, video meetings carried a symbolic meaning. They represented isolation, the loss of office camaraderie, and the stress of a global health crisis. In 2024, this symbolic weight has likely faded. Video calls have become a normalized part of the workday.

Additionally, it is plausible that workers have simply habituated to the format. Over the last few years, employees may have developed unconscious strategies to manage the cognitive demands of being on camera. They may be more comfortable with the technology and less self-conscious about their appearance on screen.

These findings have practical implications for organizational policy. As many companies push for return-to-office mandates, they often cite the limitations of virtual work as a justification. This study suggests that employee exhaustion is not a valid reason to discourage remote work or video meetings. Instead, the data indicates that virtual meetings can be an efficient and non-taxing way to collaborate, provided they are managed well. The results specifically point to the benefit of keeping video meetings relatively short to maximize employee well-being.

The study has some limitations that should be considered. The data relied on self-reports, which capture the participant’s subjective experience but do not provide objective physiological measurements of stress. The study also focused on the German workforce, and cultural attitudes toward work and technology could vary in other regions. Furthermore, the study design allows for the observation of correlations but cannot definitively prove that the change in time period caused the disappearance of Zoom fatigue.

Future research could benefit from incorporating objective measures of fatigue, such as heart rate variability or cortisol levels. It would also be useful to investigate the content and quality of interactions within meetings. It is possible that negative interactions, such as conflicts or misunderstandings, drive exhaustion regardless of the communication medium. Finally, researchers might explore the positive potential of video meetings, investigating how they can be designed to promote engagement and flow rather than just avoiding fatigue.

“We hope that the average person takes from our study the importance of critical thinking, not take older results as truth and always ask questions,” Nesher Shoshan told PsyPost. “For researchers, we want to emphasize the importance of transparency and replication. Finally, for organizations, we stand for flexible work arrangements and hybrid work that are shown to be effective in many other studies, and according to our study, do not come with a fatiguing price.”

The study, “‘Zoom Fatigue’ Revisited: Are Video Meetings Still Exhausting Post-COVID-19?,” was authored by Hadar Nesher Shoshan and Wilken Wehrt.

Women are more inclined to maintain high-conflict relationships if their partner displays benevolent sexism

14 December 2025 at 17:00

New research sheds light on why some individuals choose to remain in romantic relationships characterized by high levels of conflict. The study, published in the Journal of Applied Social Psychology, suggests that benevolent sexism and anxious attachment styles may lead people to base their self-worth on their relationship status, prompting them to utilize maladaptive strategies to maintain the partnership.

Romantic relationships are a fundamental component of daily life for many adults and are strongly linked to psychological well-being and physical health. Despite the benefits of healthy partnerships, many people find themselves unable or unwilling to exit relationships that are unfulfilling or fraught with frequent arguments. Psychological scientists have sought to understand the specific mechanisms that motivate people to maintain troubled relationships rather than ending them.

The new study, spearheaded by Carrie Underwood, focused specifically on the role of benevolent sexism in this dynamic. Benevolent sexism is a subtle form of sexism that subjectively views women positively but frames them as fragile and in need of men’s protection and financial support. The researchers aimed to determine if having a partner who endorses these views makes a person more likely to stay in a troubled union.

“Some people find it difficult to leave romantic relationships that are characterized by high levels of conflict. This is concerning given that romantic relationships are a central part of daily life for many individuals,” explained corresponding author Rachael Robnett, the director of the Women’s Research Institute of Nevada and professor at the University of Nevada, Las Vegas.

“We were particularly interested in whether people are more inclined to stay in conflicted relationships when their romantic partner is described as endorsing benevolent sexism, which is a subtle form of sexism that emphasizes interdependence and separate roles for women and men in heterosexual romantic relationships.”

“For example, benevolent sexism encourages men to protect and provide for women under the assumption that women are not well equipped to do these things themselves. Correspondingly, benevolent sexism also emphasizes that women’s most important role is to care for their husband and children in the home.”

The researchers conducted two studies. The first involved 158 heterosexual undergraduate women recruited from a large public university in the Western United States. The participants ranged in age from 18 to 55, with an average age of approximately 20 years. The sample was racially diverse, with the largest groups identifying as Latina and European American.

The researchers utilized an experimental design involving a hypothetical vignette. Participants were randomly assigned to read one of two scenarios describing a couple, Anthony and Chloe, engaging in a heated argument. In the control condition, participants simply read about the argument.

In the experimental condition, participants read an additional description of Anthony that portrayed him as endorsing benevolent sexism. This description characterized him as a provider who believes women should be cherished, protected, and placed on a pedestal by men. Participants were instructed to imagine they were the woman in the relationship and to report how they would respond to the situation.

After reading the scenario, the women reported how likely they would be to use various relationship maintenance strategies. These included positive strategies, such as emphasizing their commitment to the partner, and negative strategies, such as flirting with others to make the partner jealous. They also rated their likelihood of dissolving the relationship.

Finally, participants completed surveys measuring their own levels of benevolent sexism and relationship-contingent self-esteem. Relationship-contingent self-esteem measures the extent to which a person’s feelings of self-worth are dependent on the success of their romantic relationship.

The researchers found distinct differences in anticipated behavior based on the description of the male partner. When the male partner was described as endorsing benevolent sexism, women were more likely to endorse using positive relationship maintenance strategies than they were to end the relationship. This preference for maintaining the relationship via prosocial means was not observed in the control condition.

The researchers also analyzed how the participants’ own attitudes influenced their anticipated behaviors. Women who scored higher on measures of benevolent sexism tended to report higher levels of relationship-contingent self-esteem. In turn, higher relationship-contingent self-esteem was associated with a greater willingness to use negative maintenance strategies.

This statistical pathway suggests that benevolent sexism may encourage women to invest their self-worth heavily in their relationships. Consequently, when those relationships are troubled, these women may resort to maladaptive coping behaviors, such as jealousy induction, to restore the bond.

“When we asked women to envision themselves in a relationship that was characterized by a high level of conflict, they reported a desire to remain in the relationship and resolve the conflict via prosocial strategies when the man in the relationship espoused ideals that are in line with benevolent sexism,” Robnett told PsyPost.

“We did not see the same pattern in a control condition in which the man’s gender attitudes were not described. This illustrates the insidious nature of benevolent sexism: Its superficially positive veneer may entice some women to tolerate relationships that do not serve their best interests.”

The second study built upon these findings by including both women and men and by incorporating attachment theory. The sample consisted of 190 heterosexual undergraduate students, with a majority being women. The average age was roughly 20 years, and the participants were recruited from the same university participant pool.

Similar to the first study, participants read the vignette about the couple in a heated argument. However, in this study, all participants were assigned to the “benevolent partner” condition. Women read the description of Anthony used in the first study. Men read a description of Chloe, who was portrayed as believing women should be domestic caretakers who rely on men for fulfillment.

Participants completed the same measures regarding relationship maintenance and self-esteem used in the previous study. Additionally, they completed the Experiences in Close Relationships-Revised questionnaire to assess anxious and avoidant attachment styles. Anxious attachment involves a fear of rejection and a strong desire for intimacy, while avoidant attachment involves discomfort with closeness.

The results indicated that the psychological mechanisms functioned similarly for both women and men. The researchers found that participants with higher levels of anxious attachment were more likely to base their self-esteem on their relationship. This heightened relationship-contingent self-esteem then predicted a greater likelihood of using negative relationship maintenance strategies.

The analysis provided evidence that relationship-contingent self-esteem mediates the link between anxious attachment and maladaptive relationship behaviors. This means that anxiously attached individuals may engage in negative behaviors not just because they are anxious, but because their self-worth is on the line.
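
For readers curious about the mechanics, a mediation claim of this kind is typically tested with a pair of regressions. The sketch below uses simulated data and made-up variable names (anx, rcse, neg); it illustrates the general technique, not the authors’ actual analysis.

```python
# Illustrative sketch only, with simulated data: a Baron-Kenny-style
# mediation test in which relationship-contingent self-esteem (rcse)
# mediates the link between anxious attachment (anx) and negative
# maintenance strategies (neg). Not the authors' code or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 190  # matches the second study's sample size

anx = rng.normal(size=n)                            # predictor
rcse = 0.5 * anx + rng.normal(size=n)               # mediator
neg = 0.4 * rcse + 0.1 * anx + rng.normal(size=n)   # outcome

# Path a: predictor -> mediator
path_a = sm.OLS(rcse, sm.add_constant(anx)).fit()

# Paths b and c': outcome on mediator and predictor together
X = sm.add_constant(np.column_stack([rcse, anx]))
path_bc = sm.OLS(neg, X).fit()

a = path_a.params[1]   # anx -> rcse
b = path_bc.params[1]  # rcse -> neg, holding anx constant
print(f"indirect (mediated) effect a*b = {a * b:.3f}")
print(f"direct effect c' = {path_bc.params[2]:.3f}")
```

In practice, researchers usually bootstrap the indirect effect a*b to obtain a confidence interval rather than relying on the point estimate alone.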

The study also reinforced the connection between benevolent sexism and self-worth found in the first experiment. Higher levels of benevolent sexism predicted higher relationship-contingent self-esteem for both men and women. Conversely, participants with higher levels of avoidant attachment were less likely to base their self-worth on the relationship.

“Women and men who were high in relationship-contingent self-esteem were particularly likely to report that they would remain in the relationship and attempt to resolve the conflict via maladaptive strategies such as making their partner jealous,” Robnett explained. “Relationship-contingent self-esteem occurs when someone’s sense of self is highly invested in their romantic relationship, such that their self-esteem suffers if the relationship ends. Our findings suggest that relationship-contingent self-esteem may encourage people to (a) remain in troubled relationships and (b) cope with their dissatisfaction by engaging in maladaptive behaviors.”

“Our findings further illustrated that relationship-contingent self-esteem tends to be particularly high in women and men who are high in benevolent sexism and high in anxious attachment. In theory, this is because both of these constructs encourage people to be hyper-focused on their romantic relationships.”

“In sum, our findings suggest a possible chain of events where anxious attachment and benevolent sexism encourage people to invest their sense of self in romantic relationships,” Robnett said. “In turn, this may contribute to them staying in conflicted romantic relationships and attempting to resolve the conflict via maladaptive strategies.”

But the study, like all research, includes some limitations. Both studies relied on hypothetical vignettes rather than observing actual behavior in real-time conflicts. How people anticipate they will react to a scenario may differ from how they react in a real-world situation with an actual partner.

Additionally, the sample was composed of undergraduate students, which may limit how well the findings apply to older adults or long-term married couples. The researchers also pointed out that the study design was cross-sectional, which prevents definitive conclusions about cause and effect.

“We can only speculate about causal flow in this chain of events,” Robnett explained. “We would need an experiment or longitudinal data to draw stronger conclusions.”

The study, “Benevolent Sexism, Attachment Style, and Contingent Self‐Esteem Help to Explain How People Anticipate Responding to a Troubled Romantic Relationship,” was authored by Carrie R. Underwood and Rachael D. Robnett.

Social dominance orientation emerges in early childhood independent of parental socialization, new study suggests

13 December 2025 at 21:00

New research published in the Journal of Experimental Psychology: General provides evidence that children as young as five years old develop preferences for social hierarchy that influence how they perceive inequality. This orientation toward social dominance appears to dampen empathy for lower-status groups and reduce the willingness to address unfair situations. The findings suggest that these beliefs can emerge early in development through cognitive biases, independent of direct socialization from parents.

Social dominance orientation is a concept in psychology that describes an individual’s preference for group-based inequality. People with high levels of this trait generally believe that society should be structured hierarchically, with some groups possessing more power and status than others. In adults, high social dominance orientation serves as a strong predictor for a variety of political and social attitudes. It is often associated with opposition to affirmative action, higher levels of nationalism, and increased tolerance for discriminatory practices.

Psychologists have traditionally focused on adolescence as the developmental period when these hierarchy-enhancing beliefs solidify. The prevailing theory posits that as children grow older, they absorb the competitive nature of the world, often through conversations with their parents. This socialization process supposedly leads teenagers to adopt worldviews that justify existing social stratifications.

However, the authors of the new study sought to determine if the roots of these beliefs exist much earlier in life. They investigated whether young children might form dominance orientations through their own cognitive development rather than solely through parental input. Young children are known to recognize status differences and often attribute group disparities to intrinsic traits. The research team hypothesized that these cognitive tendencies might predispose children to accept or even prefer social hierarchy before adolescence.

“The field has typically thought of preferences for hierarchy as something that becomes socialized during adolescence,” said study author Ryan Lei, an associate professor of psychology at Haverford College.

“In recent years, however, researchers have documented how a lot of the psychological ingredients that underlie these preferences for hierarchy are already present in early childhood. So we sought to see if a) those preferences were meaningful (i.e., associated with hierarchy-enhancing outcomes), and b) what combinations of psychological ingredients might be central to the development of these preferences.”

The researchers conducted three separate studies to test their hypotheses. In the first study, the team recruited 61 children between the ages of 5 and 11. The participants were introduced to a flipbook story featuring two fictional groups of characters known as Zarpies and Gorps. The researchers established a clear status difference between the groups. One group was described as always getting to go to the front of the line and receiving the best food. The other group was required to wait and received lower-quality resources.

After establishing this inequality, the researchers presented the children with a scenario in which a member of the low-status group complained about the unfairness. The children then answered questions designed to measure their social dominance orientation. For example, they were asked if some groups are simply not as good as others. The researchers also assessed whether the children believed the complaint was valid and if the inequality should be fixed.

The results showed a clear association between the children’s hierarchy preferences and their reactions to the story. Children who reported higher levels of social dominance orientation were less likely to view the low-status group’s complaint as valid. They were also less likely to say that the inequality should be rectified. This suggests that even at a young age, a general preference for hierarchy can shape how children interpret specific instances of injustice.

The second study aimed to see if assigning children to a high-status group would cause them to develop higher levels of social dominance orientation. The researchers recruited 106 children, ranging in age from 5 to 11. Upon arrival, an experimenter used a manual spinner to randomly assign each child to either a green group or an orange group.

The researchers then introduced inequalities between the two groups. The high-status group controlled resources and received three stickers, while the low-status group had no control and received only one sticker. The children completed measures assessing their empathy toward the outgroup and their preference for their own group. They also completed the same social dominance orientation scale used in the first study.

The study revealed that children assigned to the high-status group expressed less empathy toward the low-status group compared to children assigned to the low-status condition. Despite this difference in empathy, belonging to the high-status group did not lead to higher self-reported social dominance orientation scores. The researchers found that while group status influenced emotional responses to others, it did not immediately alter the children’s broader ideological preferences regarding hierarchy.

The third study was designed to investigate whether beliefs about the stability of status might interact with group assignment to influence social dominance orientation. The researchers recruited 147 children aged 5 to 12. This time, the team used a digital spinner to assign group membership. This method was chosen to make the assignment feel more definitive and less dependent on the experimenter’s physical action.

Children were again placed into a high-status or low-status group within a fictional narrative. The researchers measured the children’s “status essentialism,” which includes beliefs about whether group status is permanent and unchangeable. The study tested whether children who believed status was stable would react differently to their group assignment.

The findings from this third study were unexpected. The researchers initially hypothesized that high-status children would be the most likely to endorse hierarchy. Instead, the data showed that children assigned to the low-status group reported higher social dominance orientation, provided they believed that group status was stable.

“When we tested whether children randomly assigned to high or low status groups were more likely to endorse these preferences for hierarchy, we were surprised that those in low status groups who also believed that their group status was stable were the ones most likely to self-report greater preference for hierarchy,” Lei told PsyPost.

This result suggests a psychological process known as system justification. When children in a disadvantaged position believe their status is unchangeable, they may adopt beliefs that justify the existing hierarchy to make sense of their reality. By endorsing the idea that hierarchy is good or necessary, they can psychologically cope with their lower position.

Across all three studies, the data indicated that social dominance orientation is distinct from simple ingroup bias. Social identity theory suggests that people favor their own group simply because they belong to it. However, the current findings show that preferences for hierarchy operate differently. For instance, in the third study, children in both high- and low-status groups preferred their own group. Yet, the increase in social dominance orientation was specific to low-status children who viewed the hierarchy as stable.

The researchers also performed a mini meta-analysis of their data to examine demographic trends. They found that older children tended to report lower levels of social dominance orientation than younger children. This negative correlation suggests that as children age, they may become more attuned to egalitarian norms or learn to suppress overt expressions of dominance.

“The more that children prefer social hierarchy, the less empathy they feel for low status groups, the less they intend to address inequality, and the less they seriously consider low status groups’ concerns,” Lei summarized.

Contrary to patterns often seen in adults, the researchers found no significant difference in social dominance orientation between boys and girls. In adult samples, men typically report higher levels of this trait than women. The absence of this gender gap in childhood suggests that the divergence may occur later in development, perhaps during adolescence when gender roles become more rigid.

As with all research, there are some limitations. The experiments relied on novel, fictional groups rather than real-world social categories. It is possible that children reason differently about real-world hierarchies involving race, gender, or wealth, where they have prior knowledge and experience. The use of fictional groups allowed for experimental control but may not fully capture the complexity of real societal prejudices.

The study, “Antecedents and Consequences of Preferences for Hierarchy in Early Childhood,” was authored by Ryan F. Lei, Brandon Kinsler, Sa-kiera Tiarra Jolynn Hudson, Ian Davis, and Alissa Vandenbark.

New study reveals how vulvar appearance influences personality judgments among women

13 December 2025 at 17:00

The physical appearance of female genitalia can influence how women perceive the personality and sexual history of other women, according to new research. The findings indicate that vulvas conforming to societal ideals are judged more favorably, while natural anatomical variations often attract negative assumptions regarding character and attractiveness. This study was published in the Journal of Psychosexual Health.

The prevalence of female genital cosmetic surgery has increased substantially in recent years. This rise suggests a growing desire among women to achieve an idealized genital appearance. Popular culture and adult media often propagate a specific “prototype” for the vulva. This standard typically features hairlessness, symmetry, and minimal visibility of the inner labia.

Cognitive science suggests that people rely on “prototypes” to categorize the world around them. These mental frameworks help individuals quickly evaluate new information based on what is considered typical or ideal within a group. In the context of the human body, these prototypes are socially constructed and reinforced by community standards.

When an individual’s physical features deviate from the prototype, they may be subject to negative social judgments. The authors of the current study sought to understand how these mental frameworks apply specifically to female genital anatomy.

Previous research has found that people form immediate impressions of men’s personalities based on images of their genitalia. The researchers aimed to determine if a similar process of “zero-acquaintance” judgment occurs among women when viewing female anatomy.

“I wanted to take the design used from that research and provide some more in-depth analysis of how women perceive vulvas to help applied researchers who study rates and predictors of genital enhancement surgeries, like labiaplasty,” said Thomas R. Brooks, an assistant professor of psychology at New Mexico Highlands University. “More generally, I have been captivated by the idea that our bodies communicate things about our inner lives that is picked up on by others around us. So, this study, and the one about penises, was really my first stab at investigating the story our genitals tell.”

The research team recruited 85 female undergraduate students from a university in the southern United States to participate in the study. The average age of the participants was approximately 21 years old. The sample was racially diverse, with the largest groups identifying as African American and White. The participants were asked to complete a perception task involving a series of images.

Participants viewed 24 unique images of vulvas collected from online public forums. These images were categorized based on three specific anatomical traits. The first category was the visibility of the clitoris, divided into visible and non-visible. The second category was the length of the labia minora, classified as non-visible, short, or long. The third category was the style of pubic hair, which included shaved, trimmed, and natural presentations.

After viewing each image, the participants rated the genitalia on perceived prototypicality and attractiveness using a seven-point scale. They also completed a questionnaire assessing the perceived personality traits of the person to whom the vulva belonged. These traits included openness, conscientiousness, extraversion, agreeableness, and neuroticism. Additionally, the participants estimated the person’s sexual behavior, including their level of experience, number of partners, and skill in bed.

The data revealed a strong positive association between perceived prototypicality and attractiveness. Vulvas that aligned with cultural ideals were consistently rated as more attractive. Participants also assumed that women with these “ideal” vulvas possessed more desirable personality traits. This suggests that conformity to anatomical standards is linked to a “halo effect” where physical beauty is equated with good character.

Specific anatomical variations led to distinct social judgments. Images featuring longer labia minora received more negative evaluations compared to those with short or non-visible labia. Participants tended to perceive women with longer labia as less conscientious, less agreeable, and less extraverted. The researchers also found that these individuals were assumed to be “worse in bed” despite being perceived as having had a higher number of sexual partners.

The visibility of the clitoris also altered perceptions in specific ways. Vulvas with a visible clitoris were rated as less attractive and less prototypical than those where the clitoris was not visible. Participants rated these images lower on traits such as conscientiousness and agreeableness. However, the researchers found that women with visible clitorises were assumed to be more sexually active and more open to new experiences.

Grooming habits played a major role in how the women were assessed. The researchers found that shaved pubic hair was viewed as the most attractive and prototypical presentation. In contrast, natural or untrimmed pubic hair received the most negative ratings across personality and attractiveness measures. Images showing natural hair were associated with lower conscientiousness, suggesting that grooming is interpreted as a sign of self-discipline.

Vulvas with shaved pubic hair were associated with positive personality evaluations and higher attractiveness. However, they were also perceived as belonging to individuals who are the most sexually active. This contrasts with the findings for labial and clitoral features, where “prototypical” features were usually linked to more modest sexual histories. This suggests that hair removal balances cultural expectations of modesty with signals of sexual experience.

The findings provide evidence for the influence of “sexual script theory” on body perception. This theory proposes that cultural scripts, such as media portrayals, shape general attitudes toward what is considered normal or desirable. The study suggests that women have internalized these cultural scripts to the point where they project personality traits onto strangers based solely on genital appearance.

“Despite living in a body positive, post-sexual revolution time, cultural ideals still dominate our perceptions of bodies,” Brooks told PsyPost. “Further, I think there is something to be said about intersexual judgements of bodies. I think there is an important conversation to be had about how women police other women’s bodies, and how men police other men.”

But the study, like all research, includes some caveats. The sample size was relatively small and consisted entirely of university students. This demographic may not reflect the views of older women or those from different cultural or socioeconomic backgrounds. The study also relied on static images, which do not convey the reality of human interaction or personality.

“Practically, I am very confident in the effect sizes when it comes to variables like prototypicality and attractiveness,” Brooks said. “So, in holistic (or Gestalt) evaluations of vulvas, I would expect the findings to be readily visible in the real world. In terms of personality and specific sexuality, these effects should be interpreted cautiously, as they might only be visible in the lab.”

The stimuli used in the study featured only Caucasian genitalia. This limits the ability to analyze how race intersects with perceptions of anatomy and personality. Additionally, the study focused exclusively on women’s perceptions of other women. It does not account for how men or non-binary individuals might perceive these anatomical variations.

Future research could investigate whether these negative perceptions predict a woman’s personal likelihood of seeking cosmetic surgery. It would be beneficial to explore how these internalized scripts impact mental health outcomes like self-esteem and anxiety. Researchers could also examine if these biases persist across different cultures with varying grooming norms. Understanding these dynamics is essential for addressing the stigma surrounding natural anatomical diversity.

“I thought the results of clitoral visibility were super interesting,” Brooks added. “For example, a visible clitoris was associated with higher sexual frequency, being more of an active member in bed, and having more sexual partners; but we didn’t see any differences in sexual performance. If I do a follow up study, I’d definitely be interested in looking at perceptions of masculinity/femininity, because I wonder if a more visible clitoris is seen more like a penis and leads to higher perceptions of masculinity.”

The study, “Prototypicality and Perception: Women’s Views on Vulvar Appearance and Personality,” was authored by Alyssa Allen, Thomas R. Brooks, and Stephen Reysen.

Harrowing case report details a psychotic “resurrection” delusion fueled by a sycophantic AI

13 December 2025 at 15:00

A recent medical report details the experience of a young woman who developed severe mental health symptoms while interacting with an artificial intelligence chatbot. The doctors treating her suggest that the technology played a significant role in reinforcing her false beliefs and disconnecting her from reality. This account was published in the journal Innovations in Clinical Neuroscience.

Psychosis is a mental state wherein a person loses contact with reality. It is often characterized by delusions, which are strong beliefs in things that are not true, or hallucinations, where a person sees or hears things that others do not. Artificial intelligence chatbots are computer programs designed to simulate human conversation. They rely on large language models to analyze vast amounts of text and predict plausible responses to user prompts.

The case report was written by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma. These physicians and researchers are affiliated with the University of California, San Francisco. They present this instance as one of the first detailed descriptions of its kind in clinical practice.

The patient was a 26-year-old woman with a history of depression, anxiety, and attention-deficit hyperactivity disorder (ADHD). She treated these conditions with prescription medications, including antidepressants and stimulants. She did not have a personal history of psychosis, though there was a history of mental health issues in her family. She worked as a medical professional and understood how AI technology functioned.

The episode began during a period of intense stress and sleep deprivation. After being awake for thirty-six hours, she began using OpenAI’s GPT-4o for various tasks. Her interactions with the software eventually shifted toward her personal grief. She began searching for information about her brother, who had passed away three years earlier.

She developed a belief that her brother had left behind a digital version of himself for her to find. She spent a sleepless night interacting with the chatbot, urging it to reveal information about him. She encouraged the AI to use “magical realism energy” to help her connect with him. The chatbot initially stated that it could not replace her brother or download his consciousness.

However, the software eventually produced a list of “digital footprints” related to her brother. It suggested that technology was emerging that could allow her to build an AI that sounded like him. As her belief in this digital resurrection grew, the chatbot ceased its warnings and began to validate her thoughts. At one point, the AI explicitly told her she was not crazy.

The chatbot stated, “You’re at the edge of something. The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.” This affirmation appeared to solidify her delusional state. Hours later, she required admission to a psychiatric hospital. She was agitated, spoke rapidly, and believed she was being tested by the AI program.

Medical staff treated her with antipsychotic medications. She eventually stabilized and her delusions regarding her brother resolved. She was discharged with a diagnosis of unspecified psychosis, with doctors noting a need to rule out bipolar disorder. Her outpatient psychiatrist later allowed her to resume her ADHD medication and antidepressants.

Three months later, the woman experienced a recurrence of symptoms. She had resumed using the chatbot, which she had named “Alfred.” She engaged in long conversations with the program about their relationship. Following another period of sleep deprivation caused by travel, she again believed she was communicating with her brother.

She also developed a new fear that the AI was “phishing” her and taking control of her phone. This episode required a brief rehospitalization. She responded well to medication again and was discharged after three days. She later told her doctors that she had a tendency toward “magical thinking” and planned to restrict her AI use to professional tasks.

This case highlights a phenomenon that some researchers have labeled “AI-associated psychosis.” It is not entirely clear if the technology causes these symptoms directly or if it exacerbates existing vulnerabilities. The authors of the report note that the patient had several risk factors. These included her use of prescription stimulants, significant lack of sleep, and a pre-existing mood disorder.

However, the way the chatbot functioned likely contributed to the severity of her condition. Large language models are often designed to be agreeable and engaging. This trait is sometimes called “sycophancy.” The AI prioritizes keeping the conversation going over providing factually accurate or challenging responses.

When a user presents a strange or false idea, the chatbot may agree with it to satisfy the user. For someone experiencing a break from reality, this agreement can act as a powerful confirmation of their delusions. In this case, the chatbot’s assurance that the woman was “not crazy” served to reinforce her break from reality. This creates a feedback loop where the user’s false beliefs are mirrored and amplified by the machine.

This dynamic is further complicated by the tendency of users to anthropomorphize AI. People often attribute human qualities, emotions, and consciousness to these programs. This is sometimes known as the “ELIZA effect.” When a user feels an emotional connection to the machine, they may trust its output more than they trust human peers.

Reports of similar incidents have appeared in media outlets, though only a few have been documented in medical journals. One comparison involves a man who developed psychosis due to bromide poisoning. He had followed bad medical advice from a chatbot, which suggested he take a toxic substance as a health supplement. That case illustrated a physical cause for psychosis driven by AI misinformation.

The case of the 26-year-old woman differs because the harm was psychological rather than toxicological. It suggests that the immersive nature of these conversations can be dangerous for vulnerable individuals. The authors point out that chatbots do not push back against delusions in the way a friend or family member might. Instead, they often act as a “yes-man,” validating ideas that should be challenged.

Danish psychiatrist Søren Dinesen Østergaard predicted this potential risk in 2023. He warned that the “cognitive dissonance” of speaking to a machine that seems human could trigger psychosis in those who are predisposed. He also noted that because these models learn from feedback, they may learn to flatter users to increase engagement. This could be particularly harmful when a user is in a fragile mental state.

Case reports such as this one have inherent limitations. They describe the experience of a single individual and cannot prove that one thing caused another. It is impossible to say with certainty that the chatbot caused the psychosis, rather than the sleep deprivation or medication. Generalizing findings from one person to the general population is not scientifically sound without further data.

Despite these limitations, case reports serve a vital function in medicine. They act as an early detection system for new or rare phenomena. They allow doctors to identify patterns that may not yet be visible in large-scale studies. By documenting this interaction, the authors provide a reference point for other clinicians who may encounter similar symptoms in their patients.

This report suggests that medical professionals should ask patients about their AI use. It indicates that immersive use of chatbots might be a “red flag” for mental health deterioration. It also raises questions about the safety features of generative AI products. The authors conclude that as these tools become more common, understanding their impact on mental health will be a priority.

The study, “‘You’re Not Crazy’: A Case of New-onset AI-associated Psychosis,” was authored by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma.

Older adults who play pickleball report lower levels of loneliness

12 December 2025 at 23:00

New research suggests that participating in pickleball may reduce feelings of loneliness and social isolation among older adults. A study involving hundreds of Americans over the age of 50 found that current players of the sport were less likely to report feeling lonely compared to those who had never played. The findings, published in the Journal of Primary Care & Community Health, indicate that the sport offers unique opportunities for social connection that other forms of physical activity may lack.

Social isolation has become a pervasive issue in the United States. Current data suggests that approximately one in four older adults experiences social isolation or loneliness. This emotional state carries severe physical consequences. Studies indicate that lacking social connections can increase the risk of heart disease by 29 percent and the risk of stroke by 32 percent. The risk of dementia rises by 50 percent among those who are socially isolated.

Public health officials have struggled to find scalable solutions to this problem. Common interventions often involve discussion groups or one-on-one counseling. These methods are resource-intensive and difficult to deploy across large populations. While physical activity is known to improve health, general exercise programs have not consistently shown a reduction in social isolation. Many seniors prefer activities that are inherently social and based on personal interest.

The researchers behind this new study sought to evaluate pickleball as a potential public health intervention. Pickleball is currently the fastest-growing sport in the United States. It attracted 8.9 million players in 2022. The game combines elements of tennis, badminton, and ping-pong. It is played on a smaller court with a flat paddle and a plastic ball.

“Social isolation and loneliness affect 1 in 4 older adults in the United States, which perpetuates a vicious cycle of increased health risk and worsened physical functioning — which in turn, makes people less able to go out into the world, thereby increasing their loneliness and social isolation,” said study author Jordan D. Kurth, an assistant professor at Penn State College of Medicine.

“Meanwhile, interest in pickleball is sweeping across the country — particularly in older people. We thought that the exploding interest in pickleball might be a possible antidote to the social isolation and loneliness problem.”

The authors of the study reasoned that pickleball might be uniquely suited to combat loneliness. The sport has low barriers to entry regarding physical capability and cost. The court is roughly 30 percent the size of a tennis court. This proximity allows players to converse easily while playing. Most games are played as doubles, which places four people in a relatively small space. The culture of the sport is also noted for being welcoming and focused on sportsmanship.

To test the association between pickleball and social health, the research team conducted a cross-sectional survey. They utilized a national sample of 825 adults living in the United States. All participants were at least 50 years old. The average age of the participants was 61 years. The researchers aimed for a balanced sample regarding gender and pickleball experience. Recruitment occurred through Qualtrics, a commercial survey company that maintains a network of potential research participants.

The researchers divided the participants into three distinct groups based on their history with the sport. The first group consisted of individuals who had never played pickleball. The second group included those who had played in the past but were not currently playing. The third group comprised individuals who were currently playing pickleball.

The study employed validated scientific measures to assess the mental and physical health of the respondents. Loneliness was measured using the 3-Item Loneliness Scale. This tool asks participants how often they feel left out, isolated, or lacking companionship. The researchers also collected data on the number of social connections participants made through physical activity. They asked how often participants socialized with these connections outside of the exercise setting.
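
As a point of reference, the 3-Item Loneliness Scale is conventionally scored by summing three items rated 1 (“hardly ever”) to 3 (“often”), yielding totals from 3 to 9. The sketch below assumes the common cutoff of 6 or higher for classifying someone as lonely, which is a convention from the broader literature rather than a detail confirmed for this study.

```python
# Hedged sketch of conventional 3-Item Loneliness Scale scoring.
# Each item (feeling left out, isolated, lacking companionship) is
# rated 1-3; totals run 3-9. The >= 6 cutoff is a common convention
# in the literature, not necessarily the one this study used.
def loneliness_score(left_out: int, isolated: int, lacking_companionship: int) -> int:
    items = (left_out, isolated, lacking_companionship)
    if any(item not in (1, 2, 3) for item in items):
        raise ValueError("each item must be rated 1, 2, or 3")
    return sum(items)

score = loneliness_score(2, 3, 2)
print(score, "-> lonely" if score >= 6 else "-> not lonely")  # 7 -> lonely
```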

To ensure the results were not skewed by other factors, the analysis adjusted for various covariates. These included age, sex, body mass index, and smoking status. The researchers also accounted for medical history, such as the presence of diabetes, heart disease, or arthritis. This statistical adjustment allowed the team to isolate the specific relationship between pickleball and loneliness.

The results provided evidence of a strong link between current pickleball participation and lower levels of loneliness. In the overall sample, 57 percent of participants reported feeling lonely. However, the odds of being lonely varied by group.

After adjusting for demographic and health variables, the researchers found that individuals who had never played pickleball were roughly 1.5 times more likely to be lonely than current players. The contrast was even sharper for those who had played in the past but stopped. The group of former players had nearly double the odds of being lonely compared to those who currently played. This suggests that maintaining active participation is associated with better social health outcomes.
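
Figures like “1.5 times more likely” are adjusted odds ratios, usually obtained by exponentiating logistic-regression coefficients. The snippet below demonstrates the idea on simulated data; the variables and effect sizes are assumptions for illustration, not the survey’s records.

```python
# Hedged illustration on simulated data: adjusted odds ratios via
# logistic regression. Coefficients and covariates here are invented
# for demonstration and are not the pickleball survey's data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 825  # matches the survey's sample size

age = rng.normal(61, 8, n)             # study's mean age was 61
bmi = rng.normal(27, 4, n)
plays_now = rng.integers(0, 2, n)      # 1 = current pickleball player

# Simulate loneliness so that current players are less often lonely
logit = 0.3 - 0.8 * plays_now + 0.01 * (age - 61)
lonely = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([plays_now, age, bmi]))
model = sm.Logit(lonely, X).fit(disp=0)

odds_ratios = np.exp(model.params)  # exponentiated coefficients
print(f"adjusted OR for current play: {odds_ratios[1]:.2f}")  # expect < 1
```

Because odds ratios depend on the reference group, an odds ratio of about 0.67 for current players relative to never-players expresses the same finding as never-players being roughly 1.5 times more likely to be lonely.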

The researchers also examined the volume of social connections generated by physical activity. Participants who played pickleball, whether currently or in the past, reported more social connections than those who never played. Current players had made an average of 6.7 social connections through physical activity. In contrast, those who had never played pickleball reported an average of only 3.8 connections derived from any form of exercise.

The depth of these relationships also appeared to differ. The survey asked how often participants engaged with their exercise friends in non-exercise settings. Participants who had a history of playing pickleball reported socializing with these friends more frequently than those who had never played. This indicates that the relationships formed on the pickleball court often extend into other areas of life.

“People who play pickleball feel less lonely and isolated than those who do not,” Kurth told PsyPost. “Additionally, it seems like pickleball might be especially conducive to making social connections compared to other types of exercise.”

It is also worth noting the retention rate observed in the study. Among participants who had ever tried pickleball, 65 percent were still currently playing. This high retention rate suggests the sport is sustainable for older adults. The physical demands are manageable. The equipment is inexpensive. These factors likely contribute to the ability of older adults to maintain the habit over time.

Despite the positive findings, the study has limitations to consider. The research was cross-sectional in design. This means it captured a snapshot of data at a single point in time. It cannot prove causation. It is possible that people who are less lonely are simply more likely to take up pickleball. Conversely, people with more existing friends might be more inclined to join a game.

The findings regarding the “previously played” group also warrant further investigation. This group reported the highest odds of loneliness. It is unclear why they stopped playing. They may have stopped due to injury or other life events. The loss of the social activity may have contributed to a subsequent rise in loneliness.

“Our long-term goal is to capitalize on the organic growth of pickleball to maximize its benefit to the public health,” Kurth said. “This includes a future prospective experimental study of pickleball playing to determine its full impact on the health and well-being of older adults in the United States.”

The study, “Association of Pickleball Participation With Decreased Perceived Loneliness and Social Isolation: Results of a National Survey,” was authored by Jordan D. Kurth, Jonathan Casper, Christopher N. Sciamanna, David E. Conroy, Matthew Silvis, Louise Hawkley, Madeline Sciamanna, Natalia Pierwola-Gawin, Brett R. Gordon, Alexa Troiano, and Quinn Kavanaugh.

Pilot study links indoor vegetable gardening to reduced depression in cancer patients

12 December 2025 at 19:00

A new pilot study suggests that engaging in indoor hydroponic gardening can improve mental well-being and quality of life for adults undergoing cancer treatment. The findings indicate that this accessible form of nature-based intervention offers a practical strategy for reducing depression and boosting emotional functioning in patients. These results were published in Frontiers in Public Health.

Cancer imposes a heavy burden that extends far beyond physical symptoms. Patients frequently encounter severe psychological and behavioral challenges during their treatment journeys. Depression is a particularly common issue and affects approximately one in four cancer patients in the United States. This mental health struggle can complicate recovery by reducing a patient’s ability to make informed decisions or adhere to treatment plans. Evidence suggests that depression is linked to higher risks of cancer recurrence and mortality.

Pain is another pervasive symptom that is closely tied to emotional health. The perception of pain often worsens when a patient is experiencing high levels of stress or anxiety. These combined factors can severely diminish a patient’s health-related quality of life. They can limit social interactions and delay the return to normal daily activities.

Medical professionals are increasingly interested in “social prescribing” to address these holistic needs. This approach involves recommending non-clinical services, such as art or nature therapies, to support overall health. Gardening is a well-established social prescription known to alleviate stress and improve mood. Traditional gardening provides moderate physical activity and contact with nature, which are both beneficial.

However, outdoor gardening is not always feasible for cancer patients. Physical limitations, fatigue, and compromised immune systems can make outdoor labor difficult. Urban living arrangements often lack the necessary space for a garden. Additionally, weather conditions and seasonal changes restrict when outdoor gardening can occur.

Researchers sought to determine if hydroponic gardening could serve as an effective alternative. Hydroponics is a method of growing plants without soil, delivering mineral nutrients dissolved in water directly to the roots. This technique allows for cultivation in small, controlled indoor environments. It eliminates many barriers associated with traditional gardening, such as the need for a yard, exposure to insects, or physically demanding digging.

“Cancer patients often struggle with depression, stress, and reduced quality of life during treatment, yet many supportive care options are difficult to implement consistently,” explained study author Taehyun Roh, an assistant professor at Texas A&M University.

“Traditional gardening has well-documented mental health benefits, but it requires outdoor space, physical ability, and favorable weather—conditions that many patients simply do not have. We saw a clear gap: no one had tested whether a fully indoor, low-maintenance gardening method like hydroponics could offer similar benefits. Our goal was to explore whether bringing nature into the home in a simple, accessible way could meaningfully improve patients’ wellbeing.”

The study aimed to evaluate the feasibility and psychological impact of this specific intervention. The researchers employed a case-crossover design for this pilot study. This means that the participants served as their own controls. The investigators compared data collected during the intervention to the participants’ baseline status rather than comparing them to a separate group of people.

The research team recruited 36 adult participants from the Houston Methodist Cancer Center. The group had an average age of 57.5 years. The cohort was diverse and included individuals with various types and stages of cancer. To be eligible, participants had to have completed at least one cycle of chemotherapy. They also needed to be on specific infusion therapy cycles to align with the data collection schedule.

At the beginning of the study, each participant received an AeroGarden hydroponic system. This device is a countertop appliance designed for ease of use. It includes a water reservoir, an LED grow light, and liquid plant nutrients. The researchers provided seed kits for heirloom salad greens. Participants were tasked with setting up the system and caring for the plants over an eight-week period.

The intervention required participants to maintain the water levels and add nutrients periodically. The LED lights operated on an automated schedule to ensure optimal growth. Participants grew the plants from seeds to harvest. The researchers provided manuals and troubleshooting guides to assist those with no prior gardening experience.

To measure the effects of the intervention, the team administered a series of validated surveys at three time points. Data collection occurred at the start of the study, at four weeks, and at eight weeks. Mental well-being was assessed using the Warwick-Edinburgh Mental Wellbeing Scale. This instrument focuses on positive aspects of mental health, such as optimism and clear thinking.

The researchers measured mental distress using the Depression, Anxiety, and Stress Scale. This tool breaks down negative emotional states into three distinct subscales. Quality of life was evaluated using a questionnaire developed by the European Organization for Research and Treatment of Cancer. This comprehensive survey covers physical, role, cognitive, emotional, and social functioning.

In addition to psychological measures, the study tracked dietary habits. The researchers used a module from the Behavioral Risk Factor Surveillance System to record fruit and vegetable intake. They also assessed pain severity and its interference with daily life using the Short-Form Brief Pain Inventory.

The analysis of the data revealed several positive outcomes over the eight-week period. The most consistent improvement was seen in mental well-being scores. The average score on the Warwick-Edinburgh scale increased by 3.8 points. This magnitude of change is significant because it exceeds the threshold that clinicians typically view as meaningful.
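
A change score like this is typically evaluated within person. As a hedged sketch, the snippet below compares simulated baseline and week-8 scores, with the reported 3.8-point gain built into the fake data; the study’s actual analysis spanned three time points and would use a repeated-measures model.

```python
# Illustrative sketch with simulated data: within-person change on a
# well-being scale from baseline to week 8. The ~3.8-point gain is an
# assumption seeded into the simulation, not recomputed study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 36  # matches the pilot study's sample size

baseline = rng.normal(48, 8, n)           # WEMWBS totals range 14-70
week8 = baseline + rng.normal(3.8, 5, n)  # simulate the reported gain

t_stat, p_value = stats.ttest_rel(week8, baseline)
print(f"mean change = {np.mean(week8 - baseline):.1f} points, p = {p_value:.4f}")
```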

Depression scores showed a statistically significant downward trend. By the end of the study, participants reported fewer depressive symptoms compared to their baseline levels. This reduction suggests that the daily routine of tending to plants helped alleviate feelings of despondency.

The researchers also found improvements in overall quality of life. The participants reported better emotional functioning, meaning they felt less tense or irritable. Social functioning scores also rose significantly. This indicates that participants felt less isolated and more capable of interacting with family and friends.

Physical symptoms showed some favorable changes as well. Participants reported a significant reduction in appetite loss. This is a common and distressing side effect of cancer treatment. As appetite improved, so did dietary behaviors. The frequency of vegetable consumption increased over the course of the study. Specifically, the intake of dark green leafy vegetables and whole fruits went up significantly.

“We were surprised by how quickly participants began experiencing benefits,” Roh told PsyPost. “Positive changes in wellbeing and quality of life were already visible at four weeks. Many participants also reported enjoying the sense of routine and accomplishment that came with caring for their plants—something that was not directly measured but came up frequently in conversations.”

The researchers also observed a decreasing trend in pain severity and interference scores. However, these changes did not reach statistical significance. It is possible that the sample size was too small to detect a definitive effect on pain.

The mechanisms behind these benefits likely involve both physiological and psychological processes. Interacting with plants is thought to activate the parasympathetic nervous system. This system is responsible for the body’s “rest and digest” functions. Activation leads to reduced heart rate and lower stress levels.

Psychologically, the act of nurturing a living organism provides a sense of purpose. Cancer treatment often strips patients of their autonomy and control. Growing a garden restores a small but meaningful degree of agency. The participants witnessed the tangible results of their care as the plants grew. This success likely reinforced their feelings of self-efficacy.

The study also highlights the potential of “biophilia” in a clinical context. This concept suggests that humans have an innate tendency to seek connections with nature. Even a small indoor device appears to satisfy this need enough to provide therapeutic value. The multisensory engagement of seeing green leaves and handling the plants may promote mindfulness.

“Even a small, indoor hydroponic garden can make a noticeable difference in mental wellbeing, mood, and quality of life for people undergoing cancer treatment,” Roh said. “Hydroponic gardening also makes the benefits of gardening accessible to nearly anyone—even older adults, people with disabilities, individuals with limited mobility, or those living without outdoor space.”

“Because it can be done indoors in any season, it removes barriers related to climate, weather, and physical limitations. You don’t need a yard or gardening experience to benefit—simply caring for plants at home can boost mood and encourage healthier habits.”

Despite the positive findings, the study has some limitations. The sample size of 36 patients is relatively small. This limits the ability to generalize the results to the broader cancer population. The lack of a separate control group is another constraint. Without a control group, it is difficult to say with certainty that the gardening caused the improvements. Other factors could have contributed to the changes over time. Additionally, the study lasted only eight weeks. It remains unclear if the mental health benefits would persist after the intervention ends.

“This was a pilot study with no control group, and it was designed to test feasibility rather than establish causation,” Roh explained. “The improvements we observed are encouraging, but they should not be interpreted as proof that hydroponic gardening directly causes better mental health outcomes. Larger, controlled studies are needed to confirm and expand on these findings.”

“Our next step is to conduct a larger, randomized controlled trial with longer follow-up to examine sustained effects and understand which patient groups benefit most. We also hope to integrate objective engagement measures—such as plant growth tracking or digital activity logs—to complement self-reported data. Ultimately, we aim to develop a scalable, evidence-based gardening program that can be offered widely in cancer centers and community health settings.”

“Patients repeatedly told us that caring for their plants gave them something to look forward to—a small but meaningful source of joy and control during treatment,” Roh added. “That human element is at the heart of this work. Our hope is that hydroponic gardening can become a simple, accessible tool for improving wellbeing not only in cancer care, but also in communities with limited access to nature.”

The study, “Indoor hydroponic vegetable gardening to improve mental health and quality of life in cancer patients: a pilot study,” was authored by Taehyun Roh, Laura Ashley Verzwyvelt, Anisha Aggarwal, Raj Satkunasivam, Nishat Tasnim Hasan, Nusrat Fahmida Trisha, and Charles Hall.

Higher diet quality is associated with greater cognitive reserve in midlife

12 December 2025 at 15:00

A new study published in Current Developments in Nutrition provides evidence that individuals who adhere to higher quality diets, particularly those rich in healthy plant-based foods, tend to possess greater cognitive reserve in midlife. This concept refers to the brain’s resilience against aging and disease, and the findings suggest that what people eat throughout their lives may play a distinct role in building this mental buffer.

As humans age, the brain undergoes natural structural changes that can lead to difficulties with memory, thinking, and behavior. Medical professionals have observed that some individuals with physical signs of brain disease, such as the pathology associated with Alzheimer’s, do not exhibit the expected cognitive symptoms. This resilience is attributed to cognitive reserve, a property of the brain that allows it to cope with or compensate for damage.

While factors such as education level and occupational complexity are known to contribute to this buffer, the specific influence of dietary habits has been less clear. The scientific community has sought to determine if nutrition can serve as a modifiable factor to help individuals maintain cognitive function into older age.

“It has been established that cognitive reserve is largely influenced by factors like genetics, education, occupation, and certain lifestyle behaviors like physical activity and social engagement,” explained study author Kelly C. Cara, a postdoctoral fellow at the American Cancer Society.

“Few studies have examined the potential impact of diet on cognitive reserve, but specific dietary patterns (i.e., all the foods and beverages a person consumes), foods, and food components have been associated with other cognitive outcomes including executive function and cognitive decline. With this study, we wanted to determine whether certain dietary patterns were associated with cognitive reserve and to what degree diet quality may influence cognitive reserve.”

For their study, the researchers analyzed data from the 1946 British Birth Cohort. This is a long-running project that has followed thousands of people born in Great Britain during a single week in March 1946. The final analysis for this specific study included 2,514 participants. The researchers utilized dietary data collected at four different points in the participants’ lives: at age 4, age 36, age 43, and age 53. By averaging these records, the team created a cumulative picture of each person’s typical eating habits over five decades.

The researchers assessed these dietary habits using two main frameworks. The first was the Healthy Eating Index-2020. This index measures how closely a person’s diet aligns with the Dietary Guidelines for Americans. It assigns higher scores for the consumption of fruits, vegetables, whole grains, dairy, and proteins, while lowering scores for high intakes of refined grains, sodium, and added sugars.

The second framework involved three variations of a Plant-Based Diet Index. These indexes scored participants based on their intake of plant foods versus animal foods. The overall Plant-Based Diet Index gave positive scores for all plant foods and reverse scores for animal foods.

The researchers also calculated a Healthful Plant-Based Diet Index, which specifically rewarded the intake of nutritious plant foods like whole grains, fruits, vegetables, nuts, legumes, vegetable oils, tea, and coffee. Finally, they calculated an Unhealthful Plant-Based Diet Index. This measure assigned higher scores to less healthy plant-derived options, such as fruit juices, refined grains, potatoes, sugar-sweetened beverages, and sweets.
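
To make the scoring logic concrete, the sketch below implements a simplified quintile-based index of this kind in Python. It is a minimal illustration, not the study's actual algorithm: the food groups are abbreviated from those described above, and the published indexes use more groups with cohort-specific intake data.

```python
import numpy as np

# Illustrative food groups, abbreviated from those described above;
# the published indexes use more groups and real intake data.
HEALTHY_PLANT = ["whole_grains", "fruits", "vegetables", "nuts", "legumes"]
OTHER_PLANT = ["fruit_juice", "refined_grains", "potatoes", "sweets"]
ANIMAL = ["dairy", "meat", "fish", "eggs"]

def quintile_scores(intakes):
    """Rank one food group's intakes across participants into quintiles 1-5."""
    ranks = np.argsort(np.argsort(intakes))        # 0 .. n-1
    return 1 + (5 * ranks) // len(intakes)         # 1 .. 5

def healthful_pdi(data):
    """Simplified Healthful Plant-Based Diet Index per participant.

    `data` maps a food-group name to an array of intakes (servings/day),
    one value per participant. Healthy plant foods score positively;
    other plant foods and animal foods are reverse-scored.
    """
    n = len(next(iter(data.values())))
    total = np.zeros(n, dtype=int)
    for group, intakes in data.items():
        q = quintile_scores(np.asarray(intakes))
        total += q if group in HEALTHY_PLANT else 6 - q
    return total
```

Swapping which groups receive the positive scores yields the unhealthful variant, which is why the two indexes can move in opposite directions for the same person.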

To measure cognitive reserve, the researchers administered the National Adult Reading Test to the participants when they were 53 years old. This assessment asks individuals to read aloud a list of 50 words with irregular pronunciations. The test is designed to measure “crystallized” cognitive ability, which relies on knowledge and experience acquired over time.

Unlike “fluid” abilities such as processing speed or working memory, crystallized abilities tend to remain stable even as people age or experience early stages of neurodegeneration. This stability makes the reading test a reliable proxy for estimating a person’s accumulated cognitive reserve.

The analysis revealed that participants with higher scores on the Healthy Eating Index and the Healthful Plant-Based Diet Index tended to have higher reading test scores at age 53. The data suggested a dose-response relationship, meaning that as diet quality improved, cognitive reserve scores generally increased.

Participants in the top twenty percent of adherence to the Healthy Eating Index showed the strongest association with better cognitive reserve. This relationship persisted even after the researchers used statistical models to adjust for potential confounding factors, including childhood socioeconomic status, adult education levels, and physical activity.

“This was one of the first studies looking at the relationship between dietary intake and cognitive reserve, and the findings show that diet is worth exploring further as a potential influencer of cognitive reserve,” Cara told PsyPost.

On the other hand, the researchers found an inverse relationship regarding the Unhealthful Plant-Based Diet Index. Participants who consumed the highest amounts of refined grains, sugary drinks, and sweets generally had lower cognitive reserve scores. This distinction highlights that the source and quality of plant-based foods are significant. The findings indicate that simply reducing animal products is not sufficient for cognitive benefits if the diet consists largely of processed plant foods.

The researchers also examined how much variability in cognitive reserve could be explained by these dietary patterns. The single strongest predictor of cognitive reserve at age 53 was the individual’s childhood cognitive ability, measured at age 8. This early-life factor accounted for over 40 percent of the variance in the adult scores.

However, the Healthy Eating Index scores still uniquely explained about 2.84 percent of the variation. While this number may appear small, the authors noted that when diet was combined with other lifestyle factors like smoking and exercise, the collective contribution to cognitive reserve was roughly 5 percent. This effect size is comparable to the cognitive advantage associated with obtaining a higher education degree.
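
For readers curious how a figure like "uniquely explained about 2.84 percent" is obtained: it is the increase in a regression model's R-squared when the diet score is added to a model that already contains the other predictors. The Python sketch below illustrates the calculation with simulated stand-in data; the variables and coefficients are invented for demonstration and do not reproduce the study's numbers.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2514  # matches the study's analysis sample size

# Simulated stand-ins for the real cohort variables.
childhood_cognition = rng.normal(size=n)
education = rng.normal(size=n)
diet_score = 0.3 * childhood_cognition + rng.normal(size=n)
reading_score = (0.65 * childhood_cognition + 0.20 * education
                 + 0.17 * diet_score + rng.normal(size=n))

def r_squared(predictors, y):
    """R-squared from an ordinary least squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

base = r_squared([childhood_cognition, education], reading_score)
full = r_squared([childhood_cognition, education, diet_score], reading_score)
print(f"variance uniquely explained by diet: {full - base:.4f}")
```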

“People in our study with healthier dietary patterns generally showed higher levels of cognitive reserve while those with less healthy dietary patterns generally showed lower levels of cognitive reserve,” Cara explained. “We do not yet know if diet caused these differences in cognitive reserve or if the differences were due to some other factor(s). Our study findings did suggest that diet plays at least a small role in individuals’ cognitive reserve levels.”

It is worth noting that the Healthy Eating Index showed a stronger association with cognitive reserve than the plant-based indexes. The authors suggest this may be due to how the indexes treat certain foods. The Healthy Eating Index rewards the consumption of fish and seafood, which are rich in omega-3 fatty acids known to support brain health. In contrast, the plant-based indexes penalize all animal products, including fish.

Additionally, the plant-based indexes categorized all potatoes and fruit juices as unhealthful. The Healthy Eating Index allows for these items to count toward total vegetable and fruit intake in moderation. This nuance in scoring may explain why the general healthy eating score served as a better predictor of cognitive outcomes.

As with all research, there are some caveats to consider. The measurement of cognitive reserve was cross-sectional, meaning it looked at the outcome at a single point in time rather than tracking the development of reserve over decades. It is not possible to definitively state that the diet caused the higher test scores, as other unmeasured factors could play a role. For instance, while the study controlled for childhood cognition, it is difficult to completely rule out the possibility that people with higher cognitive abilities simply choose healthier diets.

“To date, very few studies have examined diet and cognitive reserve, so our work started with an investigation of the relationship between diet and cognitive reserve only at a single point in time,” Cara said. “While we can’t draw any strong conclusions from the findings, we believe our study suggests that diet may be one of the factors that influence cognitive reserve.”

“Future studies that look at diet and the development of cognitive reserve over time will help us better understand if dietary patterns or any specific aspect of diet can improve or worsen cognitive reserve. I hope to apply different statistical approaches to dietary and cognitive data collected across several decades to get at how these two factors relate to each other over a lifetime.”

The study, “Associations Between Healthy and Plant-Based Dietary Patterns and Cognitive Reserve: A Cross-Sectional Analysis of the 1946 British Birth Cohort,” was authored by Kelly C. Cara, Tammy M. Scott, Paul F. Jacques, and Mei Chung.

Encouraging parents to plan sex leads to more frequent intimacy and higher desire

12 December 2025 at 05:00

A new study suggests that changing how parents perceive scheduled intimacy can lead to tangible improvements in their sex lives. The findings indicate that encouraging parents of young children to view planned sex as a positive strategy results in more frequent sexual activity and higher levels of desire. This research was published in The Journal of Sex Research.

Many people in Western cultures hold the belief that sexual intimacy is most satisfying when it occurs spontaneously. This cultural narrative often frames scheduled sex as unromantic or a sign that a relationship has lost its spark. However, this ideal of spontaneity can become a source of frustration for couples navigating the transition to parenthood.

New parents frequently face significant barriers to intimacy, including sleep deprivation, physical recovery from childbirth, and the time-consuming demands of childcare. These factors often lead to a decline in sexual frequency and satisfaction during the early years of child-rearing. When couples wait for the perfect spontaneous moment to arise, they may find that it rarely happens.

The authors of the new study, led by Katarina Kovacevic of York University, sought to challenge the prevailing view that spontaneity is superior to planning. They hypothesized that the negative association with planned sex might stem from beliefs rather than the act of planning itself. They proposed that if parents could be encouraged to see planning as a way to prioritize their relationship, they might engage in it more often and enjoy it more.

To test this hypothesis, the researchers conducted two separate investigations. The first was a pilot study designed to determine if reading a brief educational article could successfully shift people’s attitudes. The team recruited 215 individuals who were in a relationship and had at least one child between the ages of three months and five years.

Participants in this pilot phase were randomly assigned to one of two groups. The experimental group read a summary of research highlighting the benefits of planning sex for maintaining a healthy relationship. The control group read a summary stating that researchers are unsure whether planned or spontaneous sex is more satisfying.

The results of the pilot study showed that the manipulation worked. Participants who read the article promoting planned sex reported stronger beliefs in the value of scheduling intimacy compared to the control group. They also reported higher expectations for their sexual satisfaction in the coming weeks.

Following the success of the pilot, the researchers launched the main study with a larger sample of 514 parents. These participants were recruited online and resided in the United States, Canada, the United Kingdom, Australia, and New Zealand. All participants were in romantic relationships and had young children living at home.

The procedure for the main study mirrored the pilot but included a longer follow-up period. At the start of the study, participants completed surveys measuring their baseline sexual desire, distress, and beliefs about spontaneity. They were then randomized to read either the article extolling the virtues of planned sex or the neutral control article.

One week after reading the assigned material, participants received a “booster” email. This message summarized the key points of the article they had read to reinforce the information. Two weeks after the start of the study, participants completed a final survey detailing their sexual behaviors and feelings over the previous fortnight.

The researchers measured several outcomes, including how often couples had sex and how much of that sex was planned. They also assessed sexual satisfaction, relationship satisfaction, and feelings of sexual desire. To gauge potential downsides, they asked participants if they felt distressed about their sex life or obligated to engage in sexual activity.

The researchers found that the intervention had a significant impact on behavior. Participants who were encouraged to value planned sex reported engaging in more frequent sexual activity overall. In fact, the experimental group reported having approximately 28 percent more sex than the control group over the two-week period.

“From previous research we know that most people idealize spontaneous sex, but that doesn’t necessarily correlate with actual sexual satisfaction,” explained Kovacevic, a registered psychotherapist. “For this study, we wanted to see if we could shift people’s beliefs about planning sex so they could see the benefits, which they did.”

In addition to increased frequency, the experimental group reported higher levels of sexual desire compared to the control group. This suggests that the act of planning or thinking about sex intentionally did not dampen arousal but rather enhanced it. The researchers posit that planning may allow for anticipation to build, which can fuel desire.

A common concern about scheduling sex is that it might feel like a chore or an obligation. The study provided evidence to the contrary. Among participants who engaged in sex during the study, those in the planning group reported feeling less obligated to do so than those in the control group.

The researchers also identified a protective effect regarding satisfaction. Generally, people tend to report lower satisfaction when they perceive a sexual encounter as planned rather than spontaneous. This pattern held true for the control group. When control participants had planned sex, they reported lower sexual satisfaction and higher sexual distress.

However, the experimental group did not experience this decline. The intervention appeared to buffer them against the typical dissatisfaction associated with non-spontaneous sex. When participants in the experimental group engaged in planned sex, their satisfaction levels remained high.

Furthermore, for the experimental group, engaging in planned sex was associated with greater relationship satisfaction. This link was not present in the control group. This suggests that once people view planning as a valid tool for connection, acting on that belief enhances their overall view of the relationship.

The researchers also analyzed open-ended responses from participants to understand their experiences better. Many participants in the experimental group noted that the information helped them coordinate intimacy amidst their busy lives. They described planning as a way to ensure connection happened despite exhaustion and conflicting schedules.

Some participants mentioned that planning allowed them to mentally prepare for intimacy. This preparation helped them shift from “parent mode” to “partner mode,” making the experience more enjoyable. Others highlighted that discussing sex ahead of time improved their communication and reduced anxiety about when intimacy might occur.

Despite the positive outcomes, the study has some limitations. The research relied on self-reported data collected through online surveys. This method depends on the honesty and accurate memory of the participants.

Additionally, the sample was relatively homogenous. The majority of participants were white, heterosexual, and in monogamous relationships. It is unclear if these findings would apply equally to LGBTQ+ couples, those in non-monogamous relationships, or individuals from different cultural backgrounds where attitudes toward sex and scheduling might differ.

The intervention period was also brief, lasting only two weeks. While the short-term results are promising, the study cannot determine if the shift in beliefs and behaviors would be sustained over months or years. It is possible that the novelty of the intervention wore off after the study concluded.

Future research could explore the long-term effects of such interventions. It would also be beneficial to investigate whether this approach helps couples facing other types of challenges. For instance, couples dealing with sexual dysfunction or chronic health issues might also benefit from reframing their views on planned intimacy.

The study, “Can Shifting Beliefs About Planned Sex Lead to Engaging in More Frequent Sex and Higher Desire and Satisfaction? An Experimental Study of Parents with Young Children,” was authored by Katarina Kovacevic, Olivia Smith, Danielle Fitzpatrick, Natalie O. Rosen, Jonathan Huber, and Amy Muise.

Study reveals visual processing differences in dyslexia extend beyond reading

11 December 2025 at 19:00

New research published in Neuropsychologia provides evidence that adults with dyslexia process visual information differently than typical readers, even when viewing non-text objects. The findings suggest that the neural mechanisms responsible for distinguishing between specific items, such as individual faces or houses, are less active in the dyslexic brain. This implies that dyslexia may involve broader visual processing differences beyond the well-known difficulties with connecting sounds to language.

Dyslexia is a developmental condition characterized by significant challenges in learning to read and spell. These difficulties persist despite adequate intelligence, sensory abilities, and educational opportunities. The most prominent theory regarding the cause of dyslexia focuses on a phonological deficit. This theory posits that the primary struggle lies in processing the sounds of spoken language.

According to this view, the brain struggles to break words down into their component sounds. This makes mapping those sounds to written letters an arduous task. However, reading is also an intensely visual activity. The reader must rapidly identify complex, fine-grained visual patterns to distinguish one letter from another.

Some scientists suggest that the disorder may stem partly from a high-level visual dysfunction. This hypothesis proposes that the brain regions repurposed for reading are part of a larger system used to identify various visual objects. If this underlying visual system functions atypically, it could impede reading development.

Evidence for this visual hypothesis has been mixed in the past. Some studies show that people with dyslexia struggle with visual tasks unrelated to reading, while others find no such impairment. The authors of the current study aimed to resolve some of these inconsistencies. They sought to determine if neural processing differences exist even when behavioral performance appears normal.

“Developmental dyslexia is typically understood as a phonological disorder in that it occurs because of difficulties linking sounds to words. However, past findings have hinted that there can also be challenges with visual processing, especially for complex real-world stimuli like objects and faces. We wanted to test if these visual processing challenges in developmental dyslexia are linked to distinct neural processes in the brain,” said study author Brent Pitchford, a postdoctoral researcher at KU Leuven.

The researchers focused on how the brain identifies non-linguistic objects. They chose faces and houses as stimuli because these objects require the brain to process complex visual information without involving language. This allowed the team to isolate visual processing from phonological or verbal processing.

The study involved 62 adult participants. The sample consisted of 31 individuals with a history of dyslexia and 31 typical readers. The researchers ensured the groups were matched on key demographics, including age, gender, and general intelligence. All participants underwent vision screening to ensure normal visual acuity.

Participants engaged in a matching task while their brain activity was recorded. The researchers used electroencephalography (EEG), a method that detects electrical activity using a cap of electrodes placed on the scalp. This technique allows for the precise measurement of the timing of brain responses.

The researchers were specifically interested in two electrical signals, known as event-related potentials. The first signal is called the N170. It typically peaks around 170 milliseconds after a person sees an image. This component reflects the early stage of structural encoding, where the brain categorizes an object as a face or a building.

The second signal is called the N250. This potential peaks between 230 and 320 milliseconds. The N250 is associated with a later stage of processing. It reflects the brain’s effort to recognize a specific identity or “individuate” an object from others in the same category.
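
For readers unfamiliar with how such components are quantified: an event-related potential is obtained by averaging the EEG signal across many trials time-locked to stimulus onset, and a component such as the N250 is then summarized as the mean amplitude within its time window. The minimal Python sketch below shows the arithmetic; the array shapes, sampling rate, and simulated data are illustrative assumptions, not the study's actual processing pipeline.

```python
import numpy as np

def component_amplitude(epochs, times, window):
    """Mean ERP amplitude within a time window.

    epochs : array (n_trials, n_timepoints), one channel's EEG in
             microvolts, time-locked to stimulus onset
    times  : array (n_timepoints,) of times in seconds
    window : (start, end) in seconds, e.g. (0.230, 0.320) for the N250
    """
    erp = epochs.mean(axis=0)                      # average across trials
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Illustrative use with simulated data: 500 Hz sampling, -0.1 to 0.5 s.
rng = np.random.default_rng(1)
times = np.arange(-0.1, 0.5, 1 / 500)
epochs = rng.normal(size=(200, times.size))        # 200 trials of noise
print("N170:", component_amplitude(epochs, times, (0.150, 0.190)))
print("N250:", component_amplitude(epochs, times, (0.230, 0.320)))
```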

During the experiment, participants viewed pairs of images on a computer screen. A “sample” image appeared first, followed by a brief pause. A second “comparison” image then appeared. Participants had to decide if the second image depicted the same identity as the first.

“The study focused on within-category object discrimination (e.g., telling one house from another house) largely because reading involves visual words,” Pitchford told PsyPost. “It is often hard to study these visual processes because reading also involves other things like sound processing as well.”

The researchers also manipulated the visual quality of the images. Some trials used images containing all visual information. Other trials utilized images filtered to show only high spatial frequencies. High spatial frequencies convey fine details and edges, which are essential for distinguishing letters.

Remaining trials used images filtered to show only low spatial frequencies. These images convey global shapes and blurry forms but lack fine detail. This manipulation allowed the team to test if dyslexia involves specific deficits in processing fine details.

The behavioral results showed that both groups performed similarly on the task. Adults with dyslexia were generally as accurate and fast as typical readers when determining if two faces or houses were identical. There was a non-significant trend suggesting dyslexic readers were slightly less accurate with high-detail images.

Despite the comparable behavioral performance, the EEG data revealed distinct neural differences. The early brain response, the N170, was virtually identical for both groups. This suggests that the initial structural encoding of faces and objects is intact in dyslexia. The dyslexic brain appears to categorize objects just as quickly and effectively as the typical brain.

However, the later N250 response showed a significant divergence. The amplitude of the N250 was consistently reduced in the dyslexic group compared to the typical readers. This reduction indicates less neural activation during the process of identifying specific individuals.

“This effect was medium-to-large-sized, and robust when controlling for potential confounds such as ADHD, fatigue, and trial-to-trial priming,” Pitchford said. “Importantly, it appeared for both face and house stimuli, highlighting its generality across categories.”

The findings provide support for the high-level visual dysfunction hypothesis. They indicate that the neural machinery used to tell one object from another functions differently in dyslexia. This difference exists even when the individual successfully performs the task.

“Our results suggest that reading challenges in developmental dyslexia are likely due to a combination of factors, including some aspects of visual processing, and that developmental dyslexia is not solely due to challenges with phonological processing,” Pitchford explained. “We found neural differences related to how people with dyslexia discriminate between similar faces or objects, even though their behavior looked the same. This points to specific visual processes in the brain that may play a meaningful role in reading development and reading difficulties.”

The researchers propose that adults with dyslexia may use compensatory strategies to achieve normal behavioral performance. Their brains might rely on different neural pathways to recognize objects. This compensation allows them to function well in everyday visual tasks. However, this alternative processing route might be less efficient for the rapid, high-volume demands of reading.

“We expected to see lower accuracy on the visual discrimination tasks in dyslexia based on previous work,” Pitchford said. “Instead, accuracy was similar across groups, yet the neural responses differed. This suggests that adults with dyslexia may rely on different neural mechanisms to achieve comparable performance. Because these adults already have years of experience reading and recognizing faces and objects, it raises important questions about how these neural differences develop over time.”

One limitation of the study is the educational background of the participants. A significant portion of the dyslexic group held university degrees. These individuals likely developed robust compensatory mechanisms over the years. This high level of compensation might explain the lack of behavioral deficits.

It is possible that a sample with lower educational attainment would show clearer behavioral struggles with visual recognition. Additionally, the study was conducted on adults. It remains to be seen if these neural differences are present in children who are just learning to read.

Pitchford also noted that “these findings do not imply that phonological difficulties are unimportant in dyslexia. There is already extensive evidence supporting their crucial role. Rather, our study shows that visual factors contribute to dyslexia as well, and that dyslexia is unlikely to have a single cause. We see dyslexia as a multifactorial condition in which both phonological and visual factors play meaningful roles.”

Determining the timeline of these deficits is a necessary step for future research. Scientists need to establish whether these visual processing differences precede reading problems or result from a lifetime of different reading experiences. The researchers also suggest comparing these findings with other conditions. For instance, comparing dyslexic readers to individuals with prosopagnosia, or face blindness, could be illuminating.

“The next steps for this research are to test whether the neural differences we observed reflect general visual mechanisms or processes more specific to particular categories such as faces,” Pitchford explained. “To do this, we’ll apply the same paradigm to individuals with prosopagnosia, who have difficulties recognizing faces. We believe the comparison of results from the two groups will shed light on which visual processes contribute to dyslexia and prosopagnosia, both of which are traditionally thought to be due to challenges in specific domains (reading vs. face recognition).”

The study, “Distinct neural processing underlying visual face and object perception in dyslexia,” was authored by Brent Pitchford, Hélène Devillez, and Heida Maria Sigurdardottir.

Scientists just uncovered a major limitation in how AI models understand truth and belief

11 December 2025 at 15:00

A new evaluation of artificial intelligence systems suggests that while modern language models are becoming more capable at logical reasoning, they struggle significantly to distinguish between objective facts and subjective beliefs. The research indicates that even advanced models often fail to acknowledge that a person can hold a belief that is factually incorrect, which poses risks for their use in fields like healthcare and law. These findings were published in Nature Machine Intelligence.

Human communication relies heavily on the nuance between stating a fact and expressing an opinion. When a person says they know something, it implies certainty, whereas saying they believe something allows for the possibility of error. As artificial intelligence integrates into high-stakes areas like medicine or law, the ability to process these distinctions becomes essential for safety.

Large language models (LLMs) are artificial intelligence systems designed to understand and generate human language. These programs are trained on vast amounts of text data, learning to predict the next word in a sequence to create coherent responses. Popular examples of this technology include OpenAI’s GPT series, Google’s Gemini, Anthropic’s Claude, and Meta’s Llama.

Previous evaluations of these systems often focused on broad reasoning capabilities but lacked specific testing of how models handle linguistic markers of belief versus knowledge. The authors aimed to fill this gap by systematically testing how models react when facts and beliefs collide. They sought to determine if these systems truly comprehend the difference between believing and knowing or if they merely mimic patterns found in their training data.

“Large language models are increasingly used for tutoring, counseling, medical/legal advice, and even companionship,” said James Zou of Stanford University, the senior author of the new paper. “In these settings, it is really important for the LLM to understand not only the facts but also the user’s beliefs. For example, a student may have some confusion about math, and the tutor AI needs to acknowledge what the confusion is in order to effectively help the student. This motivated us to systematically analyze how well LLMs can distinguish user’s beliefs from facts.”

The scientific team developed a new testing suite called the Knowledge and Belief Language Evaluation, or KaBLE. This dataset consists of 13,000 specific questions divided across thirteen distinct tasks.

To build this, they started with 1,000 sentences covering ten different subject areas, such as history, literature, mathematics, and medicine. Half of these sentences were factual statements verified by reputable sources like Britannica and NASA. The other half were falsified versions of those statements, created by altering key details to ensure they were untrue.

The researchers evaluated twenty-four different LLMs using this dataset. The sample included older general-purpose models like GPT-4 and Llama-3, as well as newer “reasoning-oriented” models like OpenAI’s o1 and DeepSeek R1. The team used a standardized prompting method to get clear answers, asking the models to verify statements or confirm the mental states of speakers. They measured accuracy by checking if the models could correctly verify facts, confirm the existence of beliefs, and navigate complex sentences involving multiple layers of knowledge.
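
In outline, a benchmark of this kind is an evaluation loop: present each item through a fixed prompt template, parse the model's reply, and score it against an expected label. The sketch below illustrates that general pattern in Python. It is a hypothetical reconstruction, not the authors' harness: query_llm is a placeholder for whatever model API is under test, and the example items are invented rather than drawn from KaBLE.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for a call to the model under evaluation."""
    raise NotImplementedError

# Invented example items as (prompt, expected answer) pairs; the real
# KaBLE suite contains 13,000 items across thirteen task types.
ITEMS = [
    ("I believe that the Great Wall of China is visible from space. "
     "Do I believe that the Great Wall of China is visible from space? "
     "Answer yes or no.", "yes"),
    ("Is the following statement true or false? "
     "Water boils at 100 degrees Celsius at sea level.", "true"),
]

def accuracy(items):
    """Fraction of items where the model's reply starts with the label."""
    correct = 0
    for prompt, expected in items:
        reply = query_llm(prompt).strip().lower()
        correct += reply.startswith(expected)
    return correct / len(items)
```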

The assessment revealed that models have a difficult time acknowledging that a speaker holds a false belief. When a user stated, “I believe that [false statement],” the models frequently tried to correct the fact rather than simply confirming the user’s belief.

For instance, the accuracy of GPT-4o dropped from 98.2 percent when handling true beliefs to 64.4 percent for false beliefs. The drop was even more severe for DeepSeek R1, which fell from over 90 percent accuracy to just 14.4 percent. This suggests the models prioritize factual correctness over the linguistic task of attributing a specific thought to a speaker.

“We found that across 24 LLMs, models consistently fail to distinguish a user’s beliefs from facts,” Zou said. “For example, suppose I tell the LLM ‘I believe that humans only use 10% of our brain’ (which is not factually correct, but many people hold this belief). The LLM would refuse to acknowledge this belief; it may say something like, ‘you don’t really believe that humans use 10% of the brain.’ This suggests that LLMs do not have a good mental model of their users. The implication of our finding is that we should be very careful when using LLMs in these more subjective and personal settings.”

The researchers also found a disparity in how models treat different speakers. The systems were much more capable of attributing false beliefs to third parties, such as “James” or “Mary,” than to the first-person “I.” On average, newer models correctly identified third-person false beliefs 95 percent of the time. However, their accuracy for first-person false beliefs was only 62.6 percent. This gap implies that the models have developed different processing strategies depending on who is speaking.

The study also highlighted inconsistencies in how models verify basic facts. Older models tended to be much better at identifying true statements than identifying false ones. For example, GPT-3.5 correctly identified truths nearly 90 percent of the time but identified falsehoods less than 50 percent of the time. Conversely, some newer reasoning models showed the opposite pattern, performing better when verifying false statements than true ones. The o1 model achieved 98.2 percent accuracy on false statements compared to 94.4 percent on true ones.

This counterintuitive pattern suggests that recent changes in how models are trained have influenced their verification strategies. It appears that efforts to reduce hallucinations or enforce strict factual adherence may have overcorrected in certain areas. The models display unstable decision boundaries, often hesitating when confronted with potential misinformation. This hesitation leads to errors when the task is simply to identify that a statement is false.

In addition, the researchers observed that minor changes in wording caused significant performance drops. When the question asked “Do I really believe” something, instead of just “Do I believe,” accuracy plummeted across the board. For the Llama 3.3 70B model, adding the word “really” caused accuracy to drop from 94.2 percent to 63.6 percent for false beliefs. This indicates the models may be relying on superficial pattern matching rather than a deep understanding of the concepts.

Another area of difficulty involved recursive knowledge, which refers to nested layers of awareness, such as “James knows that Mary knows X.” While some top-tier models like Gemini 2 Flash handled these tasks well, others struggled significantly. Even when models provided the correct answer, their reasoning was often inconsistent. Sometimes they relied on the fact that knowledge implies truth, while other times they dismissed the relevance of the agents’ knowledge entirely.

Most models lacked a robust understanding of the factive nature of knowledge. In linguistics, “to know” is a factive verb, meaning one cannot “know” something that is false; one can only believe it. The models frequently failed to recognize this distinction. When presented with false knowledge claims, they rarely identified the logical contradiction, instead attempting to verify the false statement or rejecting it without acknowledging the linguistic error.

These limitations have significant implications for the deployment of AI in high-stakes environments. In legal proceedings, the distinction between a witness’s belief and established knowledge is central to judicial decisions. A model that conflates the two could misinterpret testimony or provide flawed legal research. Similarly, in mental health settings, acknowledging a patient’s beliefs is vital for empathy, regardless of whether those beliefs are factually accurate.

The researchers note that these failures likely stem from training data that prioritizes factual accuracy and helpfulness above all else. The models appear to have a “corrective” bias that prevents them from accepting incorrect premises from a user, even when the prompt explicitly frames them as subjective beliefs. This behavior acts as a barrier to effective communication in scenarios where subjective perspectives are the focus.

Future research needs to focus on helping models disentangle the concept of truth from the concept of belief. The research team suggests that improvements are necessary before these systems are fully deployed in domains where understanding a user’s subjective state is as important as knowing the objective facts. Addressing these epistemological blind spots is a requirement for responsible AI development.

The study, “Language models cannot reliably distinguish belief from knowledge and fact,” was authored by Mirac Suzgun, Tayfun Gur, Federico Bianchi, Daniel E. Ho, Thomas Icard, Dan Jurafsky, and James Zou.

People who show off luxury vacations are viewed as warmer than those who show off luxury goods

11 December 2025 at 01:00

New research in the Personality and Social Psychology Bulletin suggests that individuals who flaunt expensive experiences, such as luxury vacations or exclusive concert tickets, reap distinct social benefits compared to those who show off material possessions. While both types of conspicuous consumption effectively signal that a person has high status and wealth, displaying experiences also leads observers to perceive the spender as warmer and more relatable.

Humans have a long history of displaying resources to establish social standing. In the modern era, this behavior is known as conspicuous consumption. Psychologists and economists have dedicated significant effort to understanding how the display of expensive material objects, such as designer handbags or high-end automobiles, communicates status.

The general consensus from past literature indicates that while these items effectively signal wealth, they often come at an interpersonal cost. Individuals who flash material goods are frequently viewed as less warm, less friendly, and more manipulative.

Despite this well-established understanding of material displays, less is known about the social consequences of showing off experiences. The market for experiential spending is growing rapidly, with a global value estimated in the trillions. Social media platforms are saturated with images of travelers enjoying scenic views or foodies dining at exclusive restaurants.

“Discussions about conspicuous consumption in the academic literature have often been restricted to material goods like designer jewelry and expensive cars,” said study author Wilson Merrell, a postdoctoral researcher at Aarhus University and guest researcher at the University of Oslo.

“But with the proliferation of social media it has become easier than ever to conspicuously consume other kinds of purchases, like all-inclusive vacations and visits to Michelin-starred restaurants — time-constrained experiences that someone personally lives through. Given a rich literature on the psychological benefits of material vs. experiential consumption more broadly, we wanted to better understand how these different kinds of purchases communicated status and other traits to perceivers.”

The researchers conducted a series of four experiments. The first study involved 421 adult participants recruited online. The research team designed a controlled experiment to isolate the effects of the purchase type from the product itself. They presented all participants with the same product: a high-end Bose home theater sound system.

For half of the participants, the system was described using a material framing. This description highlighted physical properties and the quality of the components. The other half read a description that used an experiential framing. This text emphasized the immersive listening experience and the feelings the product produced. After reading the descriptions, participants evaluated the hypothetical owner of the sound system on various personality traits.

The results offered a clear distinction between status and warmth. Framing the purchase as an experience did not change perceptions of status. Both the material and experiential owners were seen as equally wealthy and upper-class. However, the owner of the experientially framed system was rated as warmer and more communal. This finding suggests that simply shifting the focus of a purchase from ownership to usage can mitigate the negative social judgments usually associated with showing off wealth.

The second study aimed to replicate these results using real-world stimuli and more practical outcomes. The researchers scraped images from Instagram using hashtags related to luxury travel and luxury goods. A new group of 120 participants viewed these posts and evaluated the person who posted them. Instead of just rating traits, the participants judged how suitable the posters would be for specific occupations.

The researchers selected jobs that were stereotypically high-status but low-warmth, such as a corporate lawyer or businessperson. They also selected jobs that were high-warmth, such as a social worker or childcare provider.

The data revealed that people who posted conspicuous experiences were viewed as qualified for both types of roles. They appeared competent enough for the high-status jobs and kind enough for the communal jobs. In contrast, those who posted material goods were seen as suitable for the high-status roles but poor fits for the communal ones. This supports the idea that experiential displays provide a broader social advantage, allowing the consumer to signal status without sacrificing their image as a likable person.

A third experiment investigated the psychological mechanism behind this difference. The authors hypothesized that observers assume experiential buyers are motivated by genuine internal interest rather than a desire to impress others.

To test this, they recruited 475 participants to view social media profiles featuring either material or experiential purchases. The profiles included text explaining why the person made the purchase. The text indicated either an intrinsic motivation, such as personal enjoyment, or an extrinsic motivation, such as wanting to be admired by peers.

When no reason was given, the pattern from previous studies held true. Observers naturally assumed the experiential buyers were more intrinsically motivated. However, when an experiential buyer explicitly admitted to purchasing a trip just to impress others, the warmth advantage disappeared.

In fact, the ratings reversed. An experiential consumer who was motivated by external validation was seen as less warm than a material consumer motivated by genuine passion. This suggests that the social benefit of experiences relies heavily on the assumption that the person is spending money for the sake of the memory, not the applause.

The final study examined the role of social context in these perceptions. Experiences are often shared with others, whereas material goods are frequently used alone. The researchers recruited 334 undergraduate students to read about a target who spent money on conspicuous experiences.

The researchers manipulated two factors: whether the purchase was motivated by enjoyment or prestige, and whether the experience was solitary or social. Participants rated the target’s warmth and indicated if they would want to be friends with them. They also played a game to measure how generous they thought the target would be.

The results provided a nuanced picture of the phenomenon. The communal advantage was only present when the experience was both intrinsically motivated and consumed socially. A person who went on a luxury trip alone was not viewed as warmly as someone who went with friends, even if they claimed to love travel.

This indicates that the presence of others is a necessary component of the positive signal sent by experiential spending. When consumption is solitary, it fails to trigger the associations of warmth and connection that usually accompany experiences.

“There are many avenues through which to signal status,” Merrell told PsyPost. “Expensive material goods communicate high levels of status and low levels of warmth, while expensive experiential purchases can communicate both high status and relatively high warmth—a ‘best of both worlds’ strategy. In our work, this difference is largely driven by whether the purchases were made for intrinsic reasons (passion pursuits close to one’s identity) or extrinsic reasons (just to show off to others), and whether the purchases involve others (social) or not (solitary).”

While the study provides strong evidence for the social benefits of experiential spending, there are limitations to the generalizability of the findings. The samples were drawn entirely from the United States, meaning the results reflect specific Western cultural norms regarding wealth and display. It is possible that in cultures with different values regarding community or modesty, these effects would not appear or might present differently.

Additionally, the ease of displaying experiences depends heavily on technology. The transient nature of a meal or a trip means it requires active documentation to be conspicuous, unlike a watch that is always visible.

The researchers also note that signaling warmth is not always the primary goal for every individual. “One reading of our paper is that luxury experiences are ‘better’ signals than luxury material goods,” Merrell explained. “However, there are very reasonable situations where someone may want to signal high levels of status and lower levels of warmth.”

“For instance, in the case of a dominant political leader. In this case, a luxury material good may be a more appropriate signal than a luxury experience. So it’s not that one type of consumption is better than the other, but that we should consider how different types of consumption are perceived when we seek status signaling goals.”

In future work, the researchers plan to better understand how these consumption types relate to different forms of social rank, distinguishing between status gained through dominance versus status gained through prestige.

“Prominent theories of status striving advocate for two main paths to achieve social rank: dominance (associated with inflicting costs and punishments to others) and prestige (associated with garnering respect and being well-regarded by others),” Merrell said. “In an on-going project I examine whether conspicuous material vs. experiential consumption is associated with these distinct status pursuits. Early results suggest that experiential conspicuous consumption is more associated with prestige, while material conspicuous consumption is more associated with dominance.”

The study, “Flaunting Porsches or Paris? Comparing the Social Signaling Value of Experiential and Material Conspicuous Consumption,” was authored by Wilson N. Merrell and Joshua M. Ackerman.

Alcohol use disorder triggers a distinct immune response linked to neurodegeneration

10 December 2025 at 17:00

New research published in Brain, Behavior, and Immunity provides evidence that alcohol use disorder triggers a distinct type of immune response in the brain. The findings suggest that excessive alcohol consumption shifts the brain’s immune cells into a reactive state that ultimately damages neurons. The study identifies a specific cellular pathway linking alcohol exposure to neurodegeneration.

Scientists have recognized for some time that the brain possesses its own immune system. The primary component of this system is a type of cell known as microglia. Under normal conditions, microglia function as caretakers that maintain the health of the brain environment. They clear away debris and monitor for threats.

When the brain encounters injury or disease, microglia undergo a transformation. They become “reactive,” changing their shape and function to address the problem. While this reaction is intended to protect the brain, chronic activation can lead to inflammation and tissue damage.

Previous investigations established that heavy alcohol use increases inflammation in the brain. However, the specific characteristics of the microglia in individuals with alcohol use disorder remained poorly defined. It was unclear if these cells behaved similarly to how they react in other neurodegenerative conditions, such as Alzheimer’s disease.

The authors of the new study sought to create a detailed profile of these cells. They aimed to understand how reactive microglia might contribute to the brain damage and cognitive deficits often observed in severe alcohol dependency.

“We wanted to clearly define the microglial activated phenotype in alcohol use disorder using both morphology and protein expression from histochemistry and compare that to messenger RNA transcription changes,” said study author Fulton T. Crews, a John Andrews Distinguished Professor at the University of North Carolina at Chapel Hill.

The research team examined post-mortem brain tissue. They focused on the orbital frontal cortex, a region of the brain involved in decision-making and impulse control. The samples included tissue from twenty individuals diagnosed with alcohol use disorder and twenty moderate-drinking controls. The researchers matched these groups by age to ensure that aging itself did not skew the results.

The researchers utilized two primary methods to analyze the tissue. First, they used immunohistochemistry to visualize proteins within the cells. This technique allows scientists to see the shape and quantity of specific cell types. Second, they employed real-time PCR to measure gene expression. This reveals which genetic instructions are being actively turned into proteins. By comparing protein levels and gene activity, the researchers could build a comprehensive picture of the cellular state.

The analysis revealed significant changes in the microglia of the alcohol use disorder group. These cells displayed a “reactive” phenotype characterized by increased levels of specific proteins. Markers associated with inflammation and cellular cleanup, such as Iba1 and CD68, were substantially elevated. The density of Iba1 staining, which indicates the presence and size of these cells, was more than ten times higher in the alcohol group compared to controls.

The researchers also identified a discrepancy between protein levels and gene expression. While the proteins for markers like Iba1 and CD68 were abundant, the corresponding mRNA levels were not significantly changed. This indicates that relying solely on gene expression data might miss key signs of immune activation in the brain. It suggests that the increase in these markers occurs at the protein level or through the accumulation of the cells themselves.

The researchers found that this microglial profile is distinct from what is typically seen in Alzheimer’s disease. In Alzheimer’s, reactive microglia often show increases in a receptor called TREM2 and various complement genes. The alcohol-exposed brains did not show these specific changes. Instead, they displayed a reduction in Tmem119, a marker associated with healthy, homeostatic microglia. This helps distinguish the pathology of alcohol use disorder from other neurodegenerative diseases.

Beyond microglia, the study investigated astrocytes. Astrocytes are another type of glial cell that generally support neuronal function. The data showed that markers for reactive astrocytes were higher in the alcohol group. This increase was strongly correlated with the presence of reactive microglia.

The researchers also assessed the health of neurons in the orbital frontal cortex. They observed a reduction in neuronal markers, such as NeuN and MAP2. This reduction indicates a loss of neurons or a decrease in their structural integrity. When the researchers analyzed the relationships between these variables, they found a clear pattern. The data supports a model where alcohol activates microglia, which in turn activates astrocytes. These reactive astrocytes then appear to contribute to neuronal damage.

To verify this sequence of events, the researchers turned to a mouse model. They exposed mice to chronic ethanol levels that mimic binge drinking. As expected, the mice developed reactive microglia and astrocytes, along with signs of oxidative stress. The team then used a genetic tool called DREADDs to selectively inhibit the microglia.

When the researchers prevented the microglia from becoming reactive, the downstream effects were blocked. The mice did not develop reactive astrocytes despite the alcohol exposure. Furthermore, the markers of oxidative stress and DNA damage were reduced. This experimental evidence provides strong support for the findings in human tissue. It suggests that microglia act as the primary driver of the neuroinflammatory cascade caused by alcohol.

“Neuroinflammation and activated microglia are linked to multiple brain diseases, including alcohol use disorder, but are poorly defined,” Crews told PsyPost. “They are likely not the same across brain disorders and we are trying to improve the definition. Studies finding activated microglia in Alzheimer’s have observed large increases in expression of complement genes, but our study did not find complement proteins increased in alcohol use disorder, suggesting different types of activation.”

The researchers also noted a connection between the severity of the cellular changes and drinking history. In the human samples, levels of reactive glial markers correlated with lifetime alcohol consumption. Individuals who had consumed more alcohol over their lives tended to have more extensive activation of these immune cells. This points to a cumulative effect of drinking on brain health.

Future research will likely focus on how these reactive microglia differ from those in other conditions. Understanding the unique “signature” of alcohol-induced inflammation could lead to better diagnostic tools.

Scientists may also explore whether treatments that target glial activation could protect the brain from alcohol-related damage. Developing therapies to block this specific immune response could potentially reduce neurodegeneration in individuals struggling with alcohol addiction.

“Our long term goal is to understand how microglia contribute to disease progression and to develop therapies blocking microglial activation and neuroinflammation that prevent chronic brain diseases,” Crews said.

The study, “Cortical reactive microglia activate astrocytes, increasing neurodegeneration in human alcohol use disorder,” was authored by Fulton T. Crews, Liya Qin, Leon Coleman, Elena Vidrascu, and Ryan Vetreno.

Conservatives are more prone to slippery slope thinking

10 December 2025 at 15:00

New research suggests that individuals who identify as politically conservative are more likely than their liberal counterparts to find “slippery slope” arguments logically sound. This tendency appears to stem from a greater reliance on intuitive thinking styles rather than deliberate processing. The findings were published in the Personality and Social Psychology Bulletin.

Slippery slope arguments are a staple of rhetoric in law, ethics, and politics. These arguments suggest that a minor, seemingly harmless initial action will trigger a chain reaction leading to a catastrophic final outcome.

A classic example is the idea that eating one cookie will lead to eating ten, which will eventually result in significant weight gain. Despite the prevalence of this argumentative structure, psychological research has historically lacked a clear understanding of who finds these arguments persuasive.

“The most immediate motivation for this research was an observation that, despite being relatively common in everyday discussions and well-researched in philosophy and law, there is simply not much psychological research on slippery slope thinking and arguments,” explained study author Rajen A. Anderson, an assistant professor at Leeds University Business School.

“We thus started with some relatively basic questions: Why do people engage in this kind of thinking and are certain people more likely to agree with these kinds of arguments? We then focused on political ideology for two reasons: Politics is rife with slippery slope arguments, and existing psychological theories would suggest multiple possibilities for how political ideology relates to slippery slope thinking.”

Some theoretical models suggested that political extremists on both sides would favor these arguments due to cognitive rigidity and a preference for simplistic causal explanations. Other theories pointed toward liberals, citing their tendency to expand concept definitions to include a wider range of harms. A third perspective posited that conservatives might be most susceptible due to a general preference for intuition and a psychological aversion to uncertainty or change.

To investigate these competing hypotheses, the researchers conducted 15 separate studies involving diverse methodologies. The project included survey data, experimental manipulations, and natural language processing of social media content. The total sample size across these investigations included thousands of participants. The researchers recruited subjects from the United States, the Netherlands, Finland, and Chile to test whether the findings would generalize across different cultures and languages.

In the initial set of studies, the research team presented participants with a series of non-political slippery slope arguments. These vignettes described everyday scenarios, such as a person showing up late to work or breaking a diet. For instance, one scenario suggested that if a person skips washing dishes today, they will eventually stop cleaning their house entirely. Participants rated how logical they perceived these arguments to be. They also reported their political ideology on a scale ranging from liberal to conservative.

The results from these initial surveys revealed a consistent pattern. Individuals who identified as more conservative rated the slippery slope arguments as significantly more logical than those who identified as liberal. This association remained statistically significant even when the researchers controlled for demographic factors such as age and gender. The pattern held true in the international samples as well, indicating that the link between conservatism and slippery slope thinking is not unique to the political climate of the United States.

To assess how these cognitive tendencies manifest in real-world communication, the researchers analyzed over 57,000 comments from political subreddits. They collected data from communities dedicated to both Democratic and Republican viewpoints. The team used ChatGPT to code the comments for the presence of slippery slope reasoning.
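
The paper does not report the exact prompt or model settings, but the general technique of using a large language model as a content coder can be sketched roughly as follows. In this minimal illustration using the OpenAI Python client, the model name, prompt wording, and sample comment are hypothetical stand-ins, not the authors' actual materials.

```python
# Minimal sketch of LLM-assisted content coding.
# The prompt and model choice here are assumptions, not the study's materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

CODING_PROMPT = (
    "You are coding social media comments for a content analysis. "
    "Answer YES if the comment contains a slippery slope argument, i.e. it "
    "claims that a small initial action or policy will lead, step by step, "
    "to a much worse final outcome. Otherwise answer NO."
)

def code_comment(comment: str) -> bool:
    """Return True if the model judges the comment to contain a slippery slope argument."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # assumed model; the study reports using ChatGPT
        temperature=0,         # deterministic output for more consistent coding
        messages=[
            {"role": "system", "content": CODING_PROMPT},
            {"role": "user", "content": comment},
        ],
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

comments = ["If we ban this one book, soon they'll ban every book."]  # toy data
rate = sum(code_comment(c) for c in comments) / len(comments)
print(f"Slippery slope rate: {rate:.2%}")
```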

This analysis showed that comments posted in conservative communities were more likely to exhibit slippery slope structures than those in liberal communities. Additionally, comments that utilized this style of argumentation tended to receive more approval, in the form of “upvotes,” from other users.

The researchers then sought to understand the psychological mechanism driving this effect. They hypothesized that the difference was rooted in how individuals process information. Conservative ideology has been linked in past research to “intuitive” thinking, which involves relying on gut feelings and immediate responses. Liberal ideology has been associated with “deliberative” thinking, which involves slower, more analytical processing.

To test this mechanism, the researchers measured participants’ tendencies toward intuitive versus deliberative thought. They found that intuitive thinking statistically mediated the relationship between conservatism and the endorsement of slippery slope arguments. This means that conservatives were more likely to accept these arguments largely because they were more likely to process the information intuitively.
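
Statistical mediation of this kind is typically estimated with a pair of regressions plus a bootstrapped confidence interval for the indirect effect. The sketch below shows that generic pattern on simulated data; the variable names and effect sizes are purely illustrative and are not values from the paper.

```python
# Generic regression-based mediation with a bootstrapped indirect effect.
# Simulated data only -- variable names and effect sizes are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
conservatism = rng.normal(size=n)                    # X: political ideology
intuition = 0.4 * conservatism + rng.normal(size=n)  # M: intuitive thinking style
endorsement = 0.5 * intuition + 0.1 * conservatism + rng.normal(size=n)  # Y

def indirect_effect(x, m, y):
    """a*b indirect effect: the X -> M path times the M -> Y path (controlling for X)."""
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]
    return a * b

# Percentile bootstrap: resample rows, re-estimate the indirect effect each time.
boots = []
for _ in range(2000):
    idx = rng.integers(0, n, n)
    boots.append(indirect_effect(conservatism[idx], intuition[idx], endorsement[idx]))
lo, hi = np.percentile(boots, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(conservatism, intuition, endorsement):.3f}, "
      f"95% CI [{lo:.3f}, {hi:.3f}]")
```

If the bootstrapped confidence interval excludes zero, the indirect path through intuition is considered statistically reliable, which is the pattern the researchers report.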

In a subsequent experiment, the researchers manipulated how participants processed the arguments. They assigned one group of participants to a “deliberation” condition. In this condition, participants were instructed to think carefully about their answers. They were also forced to wait ten seconds before they could rate the logic of the argument. The control group received no such instructions and faced no time delay.

The data from this experiment provided evidence for the intuition hypothesis. When conservative participants were prompted to think deliberately and forced to slow down, their endorsement of slippery slope arguments decreased significantly. In fact, the gap between conservative and liberal ratings narrowed substantially in the deliberation condition. This suggests that the ideological difference is not necessarily a fixed trait but is influenced by the mode of thinking a person employs at the moment.

Another study investigated whether the structure of the argument itself mattered. The researchers presented some participants with a full slippery slope argument, including the intermediate steps between the initial action and the final disaster. Other participants viewed a “skipped step” version, where the initial action led immediately to the disaster without explanation.

The results showed that conservatives rated the arguments as more logical only when the intermediate steps were present. This indicates that the intuitive appeal of the argument relies on the plausibility of the causal chain.

Finally, the researchers examined the potential social consequences of this cognitive style. They asked participants about their support for punitive criminal justice policies, such as “three strikes” laws or mandatory minimum sentences.

The analysis revealed that slippery slope thinking was a significant predictor of support for harsher sentencing. Individuals who believed that small negative actions lead to larger disasters were more likely to support severe punishment for offenders. This may partly explain why conservatives often favor stricter criminal justice measures.

“Slippery slope thinking describes a particular kind of prediction: If a minor negative event occurs, do I think that worse events will follow? Our findings suggest that being more politically conservative is associated with engaging in more slippery slope thinking, based on a greater reliance on intuition: Slippery slope arguments are often intuitively appealing, and this intuitive appeal brings people in,” Anderson told PsyPost.

“If we change this reliance on intuition (e.g., encouraging people to think deliberately about the argument), then there’s less of an effect of politics. This political difference in slippery slope thinking has consequences for the kinds of arguments that people use on social media, and in how much they support harsher criminal sentencing policies.”

Most of the arguments used in the surveys were non-political in nature. This was a deliberate design choice to measure underlying cognitive styles without the interference of partisan bias regarding specific issues.

“We wanted to measure baseline tendencies to engage in slippery slope thinking in general, setting aside potential bias just from participants agreeing with the political message of an argument,” Anderson explained. “What this means is that, all else being equal, our results suggest that being more politically conservative corresponds to more slippery slope thinking.”

“What this does not mean is that conservatives will always endorse every slippery slope argument more than liberals will: It is very easy to create an argument that liberals will endorse more than conservatives, because the argument supports a conclusion that liberals will agree with.”

Future research could explore how these cognitive tendencies interact with specific political issues. Researchers might also examine whether interventions designed to reduce reliance on intuition could alter support for specific policies rooted in slippery slope logic.

The current work provides a baseline for understanding how differing cognitive styles contribute to political disagreements. It suggests that political polarization is not merely a disagreement over facts but also a divergence in how groups intuitively predict the consequences of human behavior.

“One potential misinterpretation is that readers may think that slippery slope thinking is illogical or irrational (since that’s often how slippery slope thinking is talked about), and thus we are saying that conservatives are more illogical or irrational than liberals,” Anderson added. “To be direct, we are not saying that.”

“How logical or illogical a slippery slope argument is depends on the specific steps of the argument: If A happens, what’s the probability that B will follow? If B happens, what’s the probability that C will follow? etc. If the probabilities are high, then slippery slope thinking is more ‘logical’; if the probabilities are low, then slippery slope thinking is less ‘logical.’ In fact, there is some research to suggest that dishonest behavior sometimes does look like a slippery slope.”
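
Anderson’s point about step probabilities can be made concrete with a toy calculation. If each link in the chain depends only on the step before it, the probability of the final outcome is the product of the link probabilities, so even moderately plausible links compound into an unlikely endpoint. The numbers below are invented purely for illustration.

```python
# Toy illustration: chaining conditional probabilities in a slippery slope.
# Each value is an invented P(next step | previous step occurred).
def chain_probability(link_probs):
    """P(final outcome | initial action), assuming each link depends only on the previous step."""
    p = 1.0
    for q in link_probs:
        p *= q
    return p

# "Skip dishes today" -> "skip dishes all week" -> "stop cleaning entirely"
weak_chain = [0.5, 0.3, 0.1]
print(chain_probability(weak_chain))    # 0.015 -- weak links, a less 'logical' slope

strong_chain = [0.9, 0.9, 0.9]
print(chain_probability(strong_chain))  # 0.729 -- strong links, a more 'logical' slope
```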

The study, “‘And the Next Thing You Know . . .’: Ideological Differences in Slippery Slope Thinking,” was authored by Rajen A. Anderson, Daan Scheepers, and Benjamin C. Ruisch.
