Amphetamine overrides brain signals associated with sexual rejection

14 December 2025 at 03:00

Recent experimental findings suggest that d-amphetamine, a potent central nervous system stimulant, can override learned sexual inhibitions in male rats. The research demonstrates that the drug causes animals to pursue sexual partners they had previously learned to avoid after repeated rejection. These results, which highlight a disruption in the brain’s reward and inhibition circuitry, were published in the journal Psychopharmacology.

To understand the specific nature of this study, one must first look at how animals learn to navigate sexual environments. In the wild, animals must determine when it is appropriate to engage in mating behavior and when it is not. A male rat that attempts to mate with a female that is not sexually receptive will be rejected.

Over time, the animal learns to associate certain cues, such as scents or locations, with this rejection. This learning process is known as conditioned sexual inhibition. It serves an evolutionary purpose by preventing the male from wasting energy on mating attempts that will not result in reproduction.

Researchers have long sought to understand how recreational drugs alter this specific type of decision-making. While it is well documented that stimulants can physically enable or enhance sexual behavior, less is understood about how they affect the psychological choice to engage in sex when an individual knows they should not. Previous work has established that alcohol can dismantle this learned inhibition. The current research aimed to see if d-amphetamine, a drug with a very different chemical mechanism, would produce a similar result.

The research team was led by Katuschia Germé from the Centre for Studies in Behavioral Neurobiology at Concordia University in Montreal. The team also included Dhillon Persad, Justine Petit-Robinson, Shimon Amir, and James G. Pfaus. They designed an experiment, using male Long-Evans rats as subjects, to build a strong learned association between a specific cue and sexual rejection.

The researchers began by training the rats over the course of twenty sessions. This training took place in specific testing chambers. During these sessions, the males were exposed to two different types of female rats. Some females were sexually receptive and carried no added scent. Other females were not sexually receptive and were scented with an almond extract.

The male rats quickly learned the difference. They associated the neutral, unscented females with sexual reward. Conversely, they associated the almond scent with rejection and a lack of reward. After the training phase, the males would reliably ignore females that smelled like almond, even if those females were actually receptive. The almond smell had become a “stop” signal. This state represents the conditioned sexual inhibition that the study sought to investigate.

Once this inhibition was established, the researchers moved to the testing phase. They divided the rats into groups and administered varying doses of d-amphetamine. Some rats received a saline solution which served as a control group with no drug effect. Others received doses of 0.5, 1.0, or 2.0 milligrams per kilogram of body weight.

The researchers then placed the male rats in a large open arena. This environment was different from the training cages to ensure the rats were reacting to the females and not the room itself. Two sexually receptive females were placed in the arena with the male. One female was unscented. The other female was scented with the almond extract.

Under normal circumstances, a trained rat would ignore the almond-scented female. This is exactly what the researchers observed in the group given the saline solution. These sober rats directed their attention almost exclusively toward the unscented female. They adhered to their training and avoided the scent associated with past rejection.

The behavior of the rats treated with d-amphetamine was distinct. Regardless of the dose administered, the drug-treated rats copulated with both the unscented and the almond-scented females. The drug had completely eroded the learned inhibition. The almond scent, which previously acted as a deterrent, no longer stopped the males from initiating copulation.

It is important to note that the drug did not simply make the rats hyperactive or indiscriminate due to confusion. The researchers tracked the total amount of sexual activity. They found that while the choice of partner changed, the overall mechanics of the sexual behavior remained competent. The drug did not create a chaotic frenzy. It specifically removed the psychological barrier that had been built during training.

Following the behavioral tests, the researchers investigated what was happening inside the brains of these animals. They utilized a technique that stains for the Fos protein. This protein is produced within neurons shortly after they have been active. By counting the cells containing Fos, scientists can create a map of which brain regions were working during a specific event.

To do this, the researchers re-exposed the rats to the almond odor while they were under the influence of the drug or saline. They did not include females in this phase. This allowed the team to see how the brain processed the cue of the almond scent in isolation.
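Before turning to those results, here is a minimal sketch (not the authors’ analysis code) of how Fos-positive cell counts might be compared between treatment groups for each brain region; the region names and counts below are hypothetical placeholders.

```python
# Minimal sketch: compare hypothetical Fos-positive cell counts between
# saline- and amphetamine-treated animals, region by region.
from statistics import mean
from scipy.stats import ttest_ind  # independent-samples t-test

fos_counts = {  # counts per animal (hypothetical values)
    "nucleus_accumbens": {"saline": [12, 15, 9, 14], "amphetamine": [31, 27, 35, 29]},
    "medial_preoptic_area": {"saline": [8, 11, 10, 7], "amphetamine": [18, 22, 16, 20]},
}

for region, groups in fos_counts.items():
    t_stat, p_value = ttest_ind(groups["amphetamine"], groups["saline"])
    print(f"{region}: saline mean = {mean(groups['saline']):.1f}, "
          f"amphetamine mean = {mean(groups['amphetamine']):.1f}, p = {p_value:.3f}")
```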

The analysis revealed distinct patterns of brain activation. In the rats that received saline, the almond odor triggered activity in the piriform cortex. This is a region of the brain involved in processing the sense of smell. However, these sober rats showed lower activity in the medial preoptic area. This area is critical for male sexual behavior. This pattern suggests that the sober brain registered the smell and dampened the sexual control center in response.

The rats treated with d-amphetamine showed a reversal of this pattern. When exposed to the almond scent, these rats displayed increased activity in the nucleus accumbens. The nucleus accumbens is a central component of the brain’s reward system. It is heavily involved in processing motivation and pleasure.

The drug also increased activity in the ventral tegmental area. This region produces dopamine and sends it to the nucleus accumbens. The presence of the drug appeared to hijack the processing of the inhibitory cue. Instead of the almond smell triggering a “stop” signal, the drug caused the brain to treat the smell as a neutral or potentially positive stimulus.

The researchers noted that the activation in the nucleus accumbens was particularly telling. This region lights up in response to rewards. By chemically stimulating this area with d-amphetamine, the drug may have overridden the negative memory associated with the almond scent. The cue for rejection was seemingly transformed into a cue for potential reward.

The team also observed changes in the amygdala. This part of the brain is often associated with emotional processing and fear. The drug-treated rats showed different activity levels in the central and basolateral nuclei of the amygdala compared to the control group. This suggests that the drug alters the emotional weight of the memory.

These findings align with previous research conducted by this laboratory regarding alcohol. In prior studies, the researchers found that alcohol also disrupted conditioned sexual inhibition. The fact that two very different drugs, one a depressant and one a stimulant, produce the same behavioral outcome suggests they may act on a shared neural pathway.

The authors propose that this shared pathway likely involves the mesolimbic dopamine system. This is the circuit connecting the ventral tegmental area to the nucleus accumbens. Both alcohol and amphetamines are known to increase dopamine release in this system. This surge in dopamine appears to be strong enough to wash out the learned signals that tell an individual to stop or refrain from a behavior.

There are limitations to how these findings can be interpreted. The study was conducted on rats, and animal models do not perfectly replicate human psychology. The complexity of human sexual decision-making involves social and cultural factors that cannot be simulated in a rodent model. Additionally, the study looked at acute administration of the drug. The effects of chronic, long-term use might result in different behavioral adaptations.

The researchers also point out that while the inhibition was broken, the drug did not strictly enhance sexual performance. In fact, at the highest doses, some rats failed to reach ejaculation despite engaging in the behavior. This distinction separates the concept of sexual arousal from sexual execution. The drug increased the drive to engage but did not necessarily improve the physical conclusion of the act.

Future research will likely focus on pinpointing the exact chemical interactions within the amygdala and nucleus accumbens. Understanding the precise receptors involved could shed light on how addiction affects risk assessment. If a drug can chemically overwrite a learned warning signal, it explains why individuals under the influence often engage in risky behaviors they would logically avoid when sober.

The study provides a neurobiological framework for understanding drug-induced disinhibition. It suggests that drugs like d-amphetamine do not merely lower inhibitions in a vague sense. Rather, they actively reconfigure how the brain perceives specific cues. A stimulus that once meant “danger” or “rejection” is reprocessed through the reward system. This chemical deception allows the behavior to proceed unchecked.

The study, “Disruptive effects of d-amphetamine on conditioned sexual inhibition in the male rat,” was authored by Katuschia Germé, Dhillon Persad, Justine Petit-Robinson, Shimon Amir, and James G. Pfaus.

Survey reveals rapid adoption of AI tools in mental health care despite safety concerns

14 December 2025 at 01:00

The integration of artificial intelligence into mental health care has accelerated rapidly, with more than half of psychologists now utilizing these tools to assist with their daily professional duties. While practitioners are increasingly adopting this technology to manage administrative burdens, they remain highly cautious regarding the potential threats it poses to patient privacy and safety, according to the American Psychological Association’s 2025 Practitioner Pulse Survey.

The American Psychological Association represents the largest scientific and professional organization of psychologists in the United States. Its leadership monitors the evolving landscape of mental health practice to understand how professionals navigate changes in technology and patient needs.

In recent years, the field has faced a dual challenge of high demand for services and increasing bureaucratic requirements from insurance providers. These pressures have created an environment where digital tools promise relief from time-consuming paperwork.

However, the introduction of automated systems into sensitive therapeutic environments raises ethical questions regarding confidentiality and the human element of care. To gauge how these tensions are playing out in real-world offices, the association commissioned its annual inquiry into the state of the profession.

The 2025 Practitioner Pulse Survey targeted doctoral-level psychologists who held active licenses to practice in at least one U.S. state. To ensure the results accurately reflected the profession, the research team utilized a probability-based random sampling method. They generated a list of more than 126,000 licensed psychologists using state board data and randomly selected 30,000 individuals to receive invitations.

This approach allowed the researchers to minimize selection bias. Ultimately, 1,742 psychologists completed the survey, providing a snapshot of the workforce. The respondents were primarily female and White, which aligns with historical demographic trends in the field. The majority worked full-time, with private practice being the most common setting.
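As a rough illustration of that probability-based random sampling, the sketch below draws 30,000 invitees from a synthetic stand-in for the licensure list; the frame and names are invented, not the actual state-board data.

```python
# Minimal sketch of simple random sampling from a licensure frame.
import random

frame = [f"psychologist_{i}" for i in range(126_000)]  # synthetic stand-in for the list

random.seed(2025)                           # fixed seed so the draw is reproducible
invitees = random.sample(frame, k=30_000)   # each person has an equal chance of selection

print(len(invitees), "invitations issued")
```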

The survey results revealed a sharp increase in the adoption of artificial intelligence compared to the previous year. In 2024, only 29% of psychologists reported using AI tools. By 2025, that figure had climbed to 56%. The frequency of use also intensified. Nearly three out of 10 psychologists reported using these tools on at least a monthly basis. This represents a substantial shift from 2024, when only about one in 10 reported such frequent usage.

Detailed analysis of the data shows that psychologists are primarily using these tools to handle logistics rather than patient care. Among those who utilized AI, more than half used it to assist with writing emails and other materials. About one-third used it to generate content or summarize clinical notes. These functions address the administrative workload that often detracts from face-to-face time with clients.

Arthur C. Evans Jr., PhD, the CEO of the association, commented on this trend.

“Psychologists are drawn to this field because they’re passionate about improving peoples’ lives, but they can lose hours each day on paperwork and managing the often byzantine requirements of insurance companies,” said Evans. “Leveraging safe and ethical AI tools can increase psychologists’ efficiency, allowing them to reach more people and better serve them.”

Despite the utility of these tools for office management, the survey highlighted deep reservations about their safety. An overwhelming 92% of psychologists cited concerns regarding the use of AI in their field. The most prevalent worry, cited by 67% of respondents, was the potential for data breaches. This is a particularly acute issue in mental health care, where maintaining the confidentiality of patient disclosures is foundational to the therapeutic relationship.

Other concerns focused on the reliability and social impact of the technology. Unanticipated social harms were cited by 64% of respondents. Biases in the input and output of AI models worried 63% of the psychologists surveyed. There is a documented risk that AI models trained on unrepresentative data may perpetuate stereotypes or offer unequal quality of care to marginalized groups.

Additionally, 60% of practitioners expressed concern over inaccurate output or “hallucinations.” This term refers to the tendency of generative AI models to confidently present false or fabricated information as fact. In a clinical setting, such errors could lead to misdiagnosis or inappropriate treatment plans if not caught by a human supervisor.

“Artificial intelligence can help ease some of the pressures that psychologists are facing – for instance, by increasing efficiency and improving access to care – but human oversight remains essential,” said Evans. “Patients need to know they can trust their provider to identify and mitigate risks or biases that arise from using these technologies in their treatment.”

The survey data suggests that psychologists are heeding this need for oversight by keeping AI largely separate from direct clinical tasks. Only 8% of those who used the technology employed it to assist with clinical diagnosis. Furthermore, only 5% utilized chatbot assistance for direct patient interaction. This indicates that while practitioners are willing to delegate paperwork to algorithms, they are hesitant to trust them with the nuances of human psychology.

This hesitation correlates with fears about the future of the profession. The survey found that 38% of psychologists worried that AI might eventually make some of their job duties obsolete. However, the current low rates of clinical adoption suggest that the core functions of therapy remain firmly in human hands for the time being.

The context for this technological shift is a workforce that remains under immense pressure. The survey explored factors beyond technology, painting a picture of a profession straining to meet demand. Nearly half of all psychologists reported that they had no openings for new patients.

Simultaneously, practitioners observed that the mental health crisis has not abated. About 45% of respondents indicated that the severity of their patients’ symptoms is increasing. This rising acuity requires more intensive care and energy from providers, further limiting the number of patients they can effectively treat.

Economic factors also complicate the landscape. The survey revealed that fewer than two-thirds of psychologists accept some form of insurance. Respondents pointed to insufficient reimbursement rates as a primary driver for this decision. They also cited struggles with pre-authorization requirements and audits. These administrative hurdles consume time that could otherwise be spent on treatment.

The association has issued recommendations for psychologists considering the use of AI to ensure ethical practice. They advise obtaining informed consent from patients by clearly communicating how AI tools are used. Practitioners are encouraged to evaluate tools for potential biases that could worsen health disparities.

Compliance with data privacy laws is another priority. The recommendations urge psychologists to understand exactly how patient data is used, stored, or shared by the third-party companies that provide AI services. This due diligence is intended to protect the sanctity of the doctor-patient privilege in a digital age.

The methodology of the 2025 survey differed slightly from previous years to improve accuracy. In prior iterations, the survey screened out ineligible participants. In 2025, the instrument included a section for those who did not meet the criteria, allowing the organization to gather internal data on who was receiving the invitations.

The response rate for the survey was 6.6%. While this may appear low to a layperson, it is a typical rate for this type of professional survey and provided a robust sample size for analysis. The demographic breakdown of the sample showed slight shifts toward a younger workforce. The 2025 sample had the highest proportion of early-career practitioners in the history of the survey.

This influx of younger psychologists may influence the adoption rates of new technologies. Early-career professionals are often more accustomed to integrating digital solutions into their workflows. However, the high levels of concern across the board suggest that skepticism of AI is not limited to older generations of practitioners.

The findings from the 2025 Practitioner Pulse Survey illustrate a profession at a crossroads. Psychologists are actively seeking ways to manage an unsustainable workload. AI offers a potential solution to the administrative bottleneck. Yet, the ethical mandates of the profession demand a cautious approach.

The data indicates that while the tools are entering the office, they have not yet entered the therapy room in a meaningful way. Practitioners are balancing the need for efficiency with the imperative to do no harm. As the technology evolves, the field will likely continue to grapple with how to harness the benefits of automation without compromising the human connection that defines psychological care.

New research maps how the brain processes different aspects of life satisfaction

13 December 2025 at 23:00

A new study suggests that the brain uses distinct neural pathways to process different aspects of personal well-being. The research indicates that evaluating family relationships activates specific memory-related brain regions, while assessing how one handles stress engages areas responsible for cognitive control. These findings were published recently in the journal Emotion.

Psychologists and neuroscientists have struggled to define exactly what constitutes a sense of well-being. Historically, many experts viewed well-being as a single, general concept. It was often equated simply with happiness or life satisfaction. This approach assumes that feeling good about life is a uniform experience. However, more recent scholarship argues that well-being is multidimensional. It is likely composed of various distinct facets that contribute to overall mental health.

To understand how we can improve mental health, it is necessary to identify the mechanisms behind these different components. A team of researchers set out to map the brain activity associated with specific types of life satisfaction. The study was conducted by Kayla H. Green, Suzanne van de Groep, Renske van der Cruijsen, Esther A. H. Warnert, and Eveline A. Crone. These scientists are affiliated with Erasmus University Rotterdam and Radboud University in the Netherlands.

The researchers based their work on the idea that young adults face unique challenges in the modern world. They utilized a measurement tool called the Multidimensional Well-being in Youth Scale. This scale was previously developed in collaboration with panels of young people. It divides well-being into five specific domains.

The first domain is family relationships. The second is the ability to deal with stress. The third domain covers self-confidence. The fourth involves having impact, purpose, and meaning in life. The final domain is the feeling of being loved, appreciated, and respected. The researchers hypothesized that the brain would respond differently depending on which of these domains a person was considering.

To test this hypothesis, the team recruited 34 young adults. The participants ranged in age from 20 to 25 years old. This age group is often referred to as emerging adulthood. It is a period characterized by identity exploration and significant life changes. The researchers used functional magnetic resonance imaging, or fMRI, to observe brain activity. This technology tracks blood flow to different parts of the brain to determine which areas are working hardest at any given moment.

While inside the MRI scanner, the participants completed a specific self-evaluation task. They viewed a series of sentences related to the five domains of well-being. For example, a statement might ask them to evaluate if they accept themselves for who they are. The participants rated how much the statement applied to them on a scale of one to four.

The task did not stop at a simple evaluation of the present. After rating their current feelings, the participants answered a follow-up question. They rated the extent to which they wanted that specific aspect of their life to change in the future. This allowed the researchers to measure both current satisfaction and the desire for personal growth.

In addition to the brain scans, the participants completed standardized surveys outside of the scanner. One survey measured symptoms of depression. Another survey assessed symptoms of burnout. The researchers also asked about feelings of uncertainty regarding the future. These measures helped the team connect the immediate brain responses to the participants’ broader mental health.

The behavioral results from the study showed clear patterns in how young adults view their lives. The participants gave the lowest positivity ratings to the domain of dealing with stress. This suggests that managing stress is a primary struggle for this demographic. Consequently, the participants reported the highest desire for future change in this same domain.

The researchers analyzed the relationship between these ratings and the mental health surveys. They found that higher positivity ratings in all five domains were associated with fewer burnout symptoms. This means that feeling good about any area of life may offer some protection against burnout.
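For illustration only, the snippet below shows the kind of correlational check being described, relating positivity ratings in one domain to burnout scores across participants; the numbers are invented, not the study’s data.

```python
# Minimal sketch: correlate domain positivity ratings with burnout scores.
from scipy.stats import pearsonr

# Hypothetical per-participant values (higher rating = more positive,
# higher burnout score = more burnout symptoms).
stress_positivity = [2.1, 3.4, 1.8, 2.9, 3.8, 2.5, 3.1, 1.6]
burnout_scores    = [4.2, 2.1, 4.8, 3.0, 1.7, 3.6, 2.4, 5.0]

r, p = pearsonr(stress_positivity, burnout_scores)
print(f"r = {r:.2f}, p = {p:.3f}")  # a negative r mirrors "higher positivity, fewer burnout symptoms"
```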

A different pattern emerged regarding the desire for change. Participants who reported more burnout symptoms expressed a stronger desire to change how they felt about having an impact. They also wanted to change their levels of self-confidence and their feelings of being loved. This suggests that burnout is not just about exhaustion. It is also linked to a desire to alter one’s sense of purpose and social connection.

Depressive symptoms showed a broad association with the desire for change. Higher levels of depression were linked to a wish for future changes in almost every domain. The only exception was self-confidence. This implies that young adults with depressive symptoms are generally unsatisfied with their external circumstances and relationships.

The brain imaging data revealed that the mind does indeed separate these domains. When participants evaluated sentences about positive family relationships, a specific region called the precuneus became highly active. The precuneus is located in the parietal lobe of the brain. It is known to play a role in thinking about oneself and recalling personal memories.

This finding aligns with previous research on social cognition. Thinking about family likely requires accessing autobiographical memories. It involves reflecting on one’s history with close relatives. The activity in the precuneus suggests that family well-being is deeply rooted in memory and self-referential thought.

A completely different neural pattern appeared when participants thought about dealing with stress. For these items, the researchers observed increased activity in the dorsolateral prefrontal cortex. This region is located near the front of the brain. It is widely recognized as a center for executive function.

The dorsolateral prefrontal cortex helps regulate emotions and manage cognitive control. Its involvement suggests that thinking about stress is an active cognitive process. It is not just a passive feeling. Instead, it requires the brain to engage in appraisal and regulation. This makes sense given that the participants also expressed the greatest desire to change how they handle stress.

The study did not find distinct, unique neural patterns for the other three domains. Self-confidence, having impact, and feeling loved did not activate specific regions to the exclusion of others. They likely rely on more general networks that overlap with other types of thinking.

However, the distinction between family and stress is notable. It provides physical evidence that well-being is not a single state of mind. The brain recruits different resources depending on whether a person is focusing on their social roots or their emotional management.

The researchers also noted a general pattern involving the medial prefrontal cortex. This area was active during the instruction phase of the task. It was also active when participants considered their desire for future changes. This region is often associated with thinking about the future and self-improvement.

There are limitations to this study that should be considered. The final sample size included only 34 participants. This is a relatively small number for an fMRI study. Small groups can make it difficult to detect subtle effects or generalize the findings to the entire population.

The researchers also noted that the number of trials for each condition was limited. Participants only saw a few sentences for each of the five domains. A higher number of trials would provide more data points for analysis. This would increase the statistical reliability of the results.

Additionally, the study design was correlational. This means the researchers can see that certain brain patterns and survey answers go together. However, they cannot say for certain that one causes the other. For instance, it is not clear if desiring change leads to burnout, or if burnout leads to a desire for change.

Future research could address these issues by recruiting larger and more diverse groups of people. It would be beneficial to include individuals from different cultural backgrounds. Different cultures may prioritize family or stress management differently. This could lead to different patterns of brain activity.

Longitudinal studies would also be a logical next step. Following participants over several years would allow scientists to see how these brain patterns develop. It is possible that the neural correlates of well-being shift as young adults mature into their thirties and forties.

Despite these caveats, the study offers a new perspective on mental health. It supports the idea that well-being is a multifaceted construct. By treating well-being as a collection of specific domains, clinicians may be better able to help patients.

The study, “Neural Correlates of Well-Being in Young Adults,” was authored by Kayla H. Green, Suzanne van de Groep, Renske van der Cruijsen, Esther A. H. Warnert, and Eveline A. Crone.

Social dominance orientation emerges in early childhood independent of parental socialization, new study suggests

13 December 2025 at 21:00

New research published in the Journal of Experimental Psychology: General provides evidence that children as young as five years old develop preferences for social hierarchy that influence how they perceive inequality. This orientation toward social dominance appears to dampen empathy for lower-status groups and reduce the willingness to address unfair situations. The findings suggest that these beliefs can emerge early in development through cognitive biases, independent of direct socialization from parents.

Social dominance orientation is a concept in psychology that describes an individual’s preference for group-based inequality. People with high levels of this trait generally believe that society should be structured hierarchically, with some groups possessing more power and status than others. In adults, high social dominance orientation serves as a strong predictor for a variety of political and social attitudes. It is often associated with opposition to affirmative action, higher levels of nationalism, and increased tolerance for discriminatory practices.

Psychologists have traditionally focused on adolescence as the developmental period when these hierarchy-enhancing beliefs solidify. The prevailing theory posits that as children grow older, they absorb the competitive nature of the world, often through conversations with their parents. This socialization process supposedly leads teenagers to adopt worldviews that justify existing social stratifications.

However, the authors of the new study sought to determine if the roots of these beliefs exist much earlier in life. They investigated whether young children might form dominance orientations through their own cognitive development rather than solely through parental input. Young children are known to recognize status differences and often attribute group disparities to intrinsic traits. The research team hypothesized that these cognitive tendencies might predispose children to accept or even prefer social hierarchy before adolescence.

“The field has typically thought of preferences for hierarchy as something that becomes socialized during adolescence,” said study author Ryan Lei, an associate professor of psychology at Haverford College.

“In recent years, however, researchers have documented how a lot of the psychological ingredients that underlie these preferences for hierarchy are already present in early childhood. So we sought to see if a) those preferences were meaningful (i.e., associated with hierarchy-enhancing outcomes), and b) what combinations of psychological ingredients might be central to the development of these preferences.”

The researchers conducted three separate studies to test their hypotheses. In the first study, the team recruited 61 children between the ages of 5 and 11. The participants were introduced to a flipbook story featuring two fictional groups of characters known as Zarpies and Gorps. The researchers established a clear status difference between the groups. One group was described as always getting to go to the front of the line and receiving the best food. The other group was required to wait and received lower-quality resources.

After establishing this inequality, the researchers presented the children with a scenario in which a member of the low-status group complained about the unfairness. The children then answered questions designed to measure their social dominance orientation. For example, they were asked if some groups are simply not as good as others. The researchers also assessed whether the children believed the complaint was valid and if the inequality should be fixed.

The results showed a clear association between the children’s hierarchy preferences and their reactions to the story. Children who reported higher levels of social dominance orientation were less likely to view the low-status group’s complaint as valid. They were also less likely to say that the inequality should be rectified. This suggests that even at a young age, a general preference for hierarchy can shape how children interpret specific instances of injustice.

The second study aimed to see if assigning children to a high-status group would cause them to develop higher levels of social dominance orientation. The researchers recruited 106 children, ranging in age from 5 to 11. Upon arrival, an experimenter used a manual spinner to randomly assign each child to either a green group or an orange group.

The researchers then introduced inequalities between the two groups. The high-status group controlled resources and received three stickers, while the low-status group had no control and received only one sticker. The children completed measures assessing their empathy toward the outgroup and their preference for their own group. They also completed the same social dominance orientation scale used in the first study.

The study revealed that children assigned to the high-status group expressed less empathy toward the low-status group compared to children assigned to the low-status condition. Despite this difference in empathy, belonging to the high-status group did not lead to higher self-reported social dominance orientation scores. The researchers found that while group status influenced emotional responses to others, it did not immediately alter the children’s broader ideological preferences regarding hierarchy.

The third study was designed to investigate whether beliefs about the stability of status might interact with group assignment to influence social dominance orientation. The researchers recruited 147 children aged 5 to 12. This time, the team used a digital spinner to assign group membership. This method was chosen to make the assignment feel more definitive and less dependent on the experimenter’s physical action.

Children were again placed into a high-status or low-status group within a fictional narrative. The researchers measured the children’s “status essentialism,” which includes beliefs about whether group status is permanent and unchangeable. The study tested whether children who believed status was stable would react differently to their group assignment.

The findings from this third study were unexpected. The researchers initially hypothesized that high-status children would be the most likely to endorse hierarchy. Instead, the data showed that children assigned to the low-status group reported higher social dominance orientation, provided they believed that group status was stable.

“When we tested whether children randomly assigned to high or low status groups were more likely to endorse these preferences for hierarchy, we were surprised that those in low status groups who also believed that their group status was stable were the ones most likely to self-report greater preference for hierarchy,” Lei told PsyPost.

This result suggests a psychological process known as system justification. When children in a disadvantaged position believe their status is unchangeable, they may adopt beliefs that justify the existing hierarchy to make sense of their reality. By endorsing the idea that hierarchy is good or necessary, they can psychologically cope with their lower position.

Across all three studies, the data indicated that social dominance orientation is distinct from simple ingroup bias. Social identity theory suggests that people favor their own group simply because they belong to it. However, the current findings show that preferences for hierarchy operate differently. For instance, in the third study, children in both high and low-status groups preferred their own group. Yet, the increase in social dominance orientation was specific to low-status children who viewed the hierarchy as stable.

The researchers also performed a mini meta-analysis of their data to examine demographic trends. They found that older children tended to report lower levels of social dominance orientation than younger children. This negative correlation suggests that as children age, they may become more attuned to egalitarian norms or learn to suppress overt expressions of dominance.
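The paper’s pooling method is not spelled out here, but a common way to run such a mini meta-analysis is inverse-variance (fixed-effect) weighting, sketched below with hypothetical effect sizes and standard errors standing in for the three studies.

```python
# Minimal sketch of an inverse-variance (fixed-effect) pooled estimate.
import math

studies = [(-0.22, 0.12), (-0.18, 0.09), (-0.25, 0.08)]  # (effect size, SE), illustrative only

weights = [1 / se ** 2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.2f} (SE = {pooled_se:.2f})")
```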

“The more that children prefer social hierarchy, the less empathy they feel for low status groups, the less they intend to address inequality, and the less they seriously consider low status groups’ concerns,” Lei summarized.

Contrary to patterns often seen in adults, the researchers found no significant difference in social dominance orientation between boys and girls. In adult samples, men typically report higher levels of this trait than women. The absence of this gender gap in childhood suggests that the divergence may occur later in development, perhaps during adolescence when gender roles become more rigid.

As with all research, there are some limitations. The experiments relied on novel, fictional groups rather than real-world social categories. It is possible that children reason differently about real-world hierarchies involving race, gender, or wealth, where they have prior knowledge and experience. The use of fictional groups allowed for experimental control but may not fully capture the complexity of real societal prejudices.

The study, “Antecedents and Consequences of Preferences for Hierarchy in Early Childhood,” was authored by Ryan F. Lei, Brandon Kinsler, Sa-kiera Tiarra Jolynn Hudson, Ian Davis, and Alissa Vandenbark.

Researchers uncover a distinct narrative pattern in autistic people and their siblings

13 December 2025 at 19:00

A study of individuals with autism and their siblings and parents found that autistic individuals and their siblings used fewer causal explanations to connect story elements when asked to tell a story based on a series of pictures. They also used fewer descriptions of the thoughts and feelings of protagonists. The research was published in the Journal of Autism and Developmental Disorders.

Autism is a neurodevelopmental condition characterized by differences in social communication, sensory processing, and patterns of behavior or interests. People on the autism spectrum tend to perceive and organize information in distinctive ways that can be strengths in some contexts and challenges in others. Among other things, they seem to differ from their neurotypical peers in the way they tell stories, specifically regarding their narrative patterns and abilities.

Research shows that many autistic individuals produce narratives that are shorter or less elaborated compared to neurotypical peers, focusing more on concrete details than on social or emotional aspects. Difficulties may appear in organizing stories into a clear beginning, middle, and end, or in emphasizing the motives, thoughts, and feelings of characters. At the same time, many autistic people display strong memory for facts and may provide narratives rich in precise and specific information.

Study author Kritika Nayar and her colleagues wanted to explore and compare the narrative skills of individuals with autism and their first-degree relatives. They wanted to see whether their narrative skills and styles showed similarities with their relatives who do not have autism.

Study participants were 56 autistic individuals, 42 of their siblings who do not have autism, 49 control participants without autism (who were not related to the autistic participants), 161 parents of autistic individuals, and 61 parents who do not have autistic children.

Overall, there were 58 parent-child pairs in the autism group, and 20 parent-child pairs in the control group. The average age of participants with autism and their siblings and peers was approximately 17–19 years. The average age of parents of participants with autism was roughly 50 years, and the average age of parents of non-autistic participants was roughly 46 years.

Study participants were given a 24-page wordless picture book called “Frog, Where Are You?” depicting the adventures of a boy and his dog while searching for a missing pet frog. The story is composed of five main search episodes in addition to the introduction, plot establishment, and resolution. Participants were asked to narrate the story page-by-page while viewing it on a device that tracked their eye movements.

All audio files of their narration were transcribed and then hand-coded by researchers. Study authors looked for descriptions of affective states and behaviors of protagonists, and protagonists’ cognitive states and behaviors. They also looked for causal explanations of story protagonists’ behaviors and for causal explanations of protagonists’ feelings and cognitions.

The study authors differentiated between explicit causal language, marked by the use of the term “because,” and more subtle use of causal language indicated by words such as “so,” “since,” “as a result,” “in order to,” and “therefore.” They also looked for the presence of excessive detail and for topic perseveration (whether a participant got stuck on a specific topic) throughout the story. Study authors analyzed participants’ eye movements while telling the story.
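As a rough illustration of this coding scheme, the sketch below counts explicit versus subtle causal connectives in a transcript; the connective lists mirror the examples above, while the sample narration and the simple word-matching rule are invented stand-ins for the researchers’ hand-coding.

```python
# Minimal sketch: count explicit vs. subtle causal connectives in a narration.
import re

EXPLICIT = ["because"]
SUBTLE = ["so", "since", "as a result", "in order to", "therefore"]

def count_connectives(transcript: str, connectives: list[str]) -> int:
    text = transcript.lower()
    return sum(len(re.findall(r"\b" + re.escape(c) + r"\b", text)) for c in connectives)

narration = ("The boy looked in the jar because the frog was gone, "
             "so he called out the window in order to find it.")

print("explicit causal terms:", count_connectives(narration, EXPLICIT))  # 1
print("subtle causal terms:", count_connectives(narration, SUBTLE))      # 2
```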

Results showed that participants with autism and their siblings used fewer descriptions of affect and cognition, and fewer causal explanations than control participants. They were also more likely to omit story components.

Parent groups did not differ in their overall use of causal language or in how often they described the feelings (affect) and thoughts (cognition) of story protagonists. However, compared to control parents, parents of participants with autism used more causal explanations of story protagonists’ thoughts and feelings, but fewer causal descriptions of characters’ behavior. Results also showed some differences in gaze patterns between participants with autism and their siblings on one side, and control participants on the other.

“Findings implicate causal language as a critical narrative skill that is impacted in ASD [autism spectrum disorder] and may be reflective of ASD genetic influence in relatives. Gaze patterns during narration suggest similar attentional mechanisms associated with narrative among ASD families,” study authors concluded.

The study contributes to the scientific understanding of the cognitive characteristics of individuals with autism. However, the authors note that the eye-tracking metrics, which were computed across the book as a whole, might have masked important gaze patterns that unfold over time.

The paper, “Narrative Ability in Autism and First-Degree Relatives,” was authored by Kritika Nayar, Emily Landau, Gary E. Martin, Cassandra J. Stevens, Jiayin Xing, Sophia Pirog, Janna Guilfoyle, Peter C. Gordon, and Molly Losh.

New study reveals how vulvar appearance influences personality judgments among women

13 December 2025 at 17:00

The physical appearance of female genitalia can influence how women perceive the personality and sexual history of other women, according to new research. The findings indicate that vulvas conforming to societal ideals are judged more favorably, while natural anatomical variations often attract negative assumptions regarding character and attractiveness. This study was published in the Journal of Psychosexual Health.

The prevalence of female genital cosmetic surgery has increased substantially in recent years. This rise suggests a growing desire among women to achieve an idealized genital appearance. Popular culture and adult media often propagate a specific “prototype” for the vulva. This standard typically features hairlessness, symmetry, and minimal visibility of the inner labia.

Cognitive science suggests that people rely on “prototypes” to categorize the world around them. These mental frameworks help individuals quickly evaluate new information based on what is considered typical or ideal within a group. In the context of the human body, these prototypes are socially constructed and reinforced by community standards.

When an individual’s physical features deviate from the prototype, they may be subject to negative social judgments. The authors of the current study sought to understand how these mental frameworks apply specifically to female genital anatomy.

Previous research has found that people form immediate impressions of men’s personalities based on images of their genitalia. The researchers aimed to determine if a similar process of “zero-acquaintance” judgment occurs among women when viewing female anatomy.

“I wanted to take the design used from that research and provide some more in-depth analysis of how women perceive vulvas to help applied researchers who study rates and predictors of genital enhancement surgeries, like labiaplasty,” said Thomas R. Brooks, an assistant professor of psychology at New Mexico Highlands University. “More generally, I have been captivated by the idea that our bodies communicate things about our inner lives that is picked up on by others around us. So, this study, and the one about penises, was really my first stab at investigating the story our genitals tell.”

The research team recruited 85 female undergraduate students from a university in the southern United States to participate in the study. The average age of the participants was approximately 21 years old. The sample was racially diverse, with the largest groups identifying as African American and White. The participants were asked to complete a perception task involving a series of images.

Participants viewed 24 unique images of vulvas collected from online public forums. These images were categorized based on three specific anatomical traits. The first category was the visibility of the clitoris, divided into visible and non-visible. The second category was the length of the labia minora, classified as non-visible, short, or long. The third category was the style of pubic hair, which included shaved, trimmed, and natural presentations.

After viewing each image, the participants rated the genitalia on perceived prototypicality and attractiveness using a seven-point scale. They also completed a questionnaire assessing the perceived personality traits of the person to whom the vulva belonged. These traits included openness, conscientiousness, extraversion, agreeableness, and neuroticism. Additionally, the participants estimated the person’s sexual behavior, including their level of experience, number of partners, and skill in bed.

The data revealed a strong positive association between perceived prototypicality and attractiveness. Vulvas that aligned with cultural ideals were consistently rated as more attractive. Participants also assumed that women with these “ideal” vulvas possessed more desirable personality traits. This suggests that conformity to anatomical standards is linked to a “halo effect” where physical beauty is equated with good character.

Specific anatomical variations led to distinct social judgments. Images featuring longer labia minora received more negative evaluations compared to those with short or non-visible labia. Participants tended to perceive women with longer labia as less conscientious, less agreeable, and less extraverted. The researchers also found that these individuals were assumed to be “worse in bed” despite being perceived as having had a higher number of sexual partners.

The visibility of the clitoris also altered perceptions in specific ways. Vulvas with a visible clitoris were rated as less attractive and less prototypical than those where the clitoris was not visible. Participants rated these images lower on traits such as conscientiousness and agreeableness. However, the researchers found that women with visible clitorises were assumed to be more sexually active and more open to new experiences.

Grooming habits played a major role in how the women were assessed. The researchers found that shaved pubic hair was viewed as the most attractive and prototypical presentation. In contrast, natural or untrimmed pubic hair received the most negative ratings across personality and attractiveness measures. Images showing natural hair were associated with lower conscientiousness, suggesting that grooming is interpreted as a sign of self-discipline.

Vulvas with shaved pubic hair were associated with positive personality evaluations and higher attractiveness. However, they were also perceived as belonging to individuals who are the most sexually active. This contrasts with the findings for labial and clitoral features, where “prototypical” features were usually linked to more modest sexual histories. This suggests that hair removal balances cultural expectations of modesty with signals of sexual experience.

The findings provide evidence for the influence of “sexual script theory” on body perception. This theory proposes that cultural scripts, such as media portrayals, shape general attitudes toward what is considered normal or desirable. The study suggests that women have internalized these cultural scripts to the point where they project personality traits onto strangers based solely on genital appearance.

“Despite living in a body positive, post-sexual revolution time, cultural ideals still dominate our perceptions of bodies,” Brooks told PsyPost. “Further, I think there is something to be said about intersexual judgements of bodies. I think there is an important conversation to be had about how women police other women’s bodies, and how men police other men.”

But the study, like all research, includes some caveats. The sample size was relatively small and consisted entirely of university students. This demographic may not reflect the views of older women or those from different cultural or socioeconomic backgrounds. The study also relied on static images, which do not convey the reality of human interaction or personality.

“Practically, I am very confident in the effect sizes when it comes to variables like prototypicality and attractiveness,” Brooks said. “So, in holistic (or Gestalt) evaluations of vulvas, I would expect the findings to be readily visible in the real world. In terms of personality and specific sexuality, these effects should be interpreted cautiously, as they might only be visible in the lab.”

The stimuli used in the study only featured Caucasian genitalia. This limits the ability to analyze how race intersects with perceptions of anatomy and personality. Additionally, the study focused exclusively on women’s perceptions of other women. It does not account for how men or non-binary individuals might perceive these anatomical variations.

Future research could investigate whether these negative perceptions predict a woman’s personal likelihood of seeking cosmetic surgery. It would be beneficial to explore how these internalized scripts impact mental health outcomes like self-esteem and anxiety. Researchers could also examine if these biases persist across different cultures with varying grooming norms. Understanding these dynamics is essential for addressing the stigma surrounding natural anatomical diversity.

“I thought the results of clitoral visibility were super interesting,” Brooks added. “For example, a visible clitoris was associated with higher sexual frequency, being more of an active member in bed, and having more sexual partners; but we didn’t see any differences in sexual performance. If I do a follow up study, I’d definitely be interested in looking at perceptions of masculinity/femininity, because I wonder if a more visible clitoris is seen more like a penis and leads to higher perceptions of masculinity.”

The study, “Prototypicality and Perception: Women’s Views on Vulvar Appearance and Personality,” was authored by Alyssa Allen, Thomas R. Brooks, and Stephen Reysen.

Harrowing case report details a psychotic “resurrection” delusion fueled by a sycophantic AI

13 December 2025 at 15:00

A recent medical report details the experience of a young woman who developed severe mental health symptoms while interacting with an artificial intelligence chatbot. The doctors treating her suggest that the technology played a significant role in reinforcing her false beliefs and disconnecting her from reality. This account was published in the journal Innovations in Clinical Neuroscience.

Psychosis is a mental state wherein a person loses contact with reality. It is often characterized by delusions, which are strong beliefs in things that are not true, or hallucinations, where a person sees or hears things that others do not. Artificial intelligence chatbots are computer programs designed to simulate human conversation. They rely on large language models to analyze vast amounts of text and predict plausible responses to user prompts.
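To give a sense of what “predict plausible responses” means mechanically, here is a toy sketch of next-word prediction with a tiny hand-written bigram table; it illustrates the general idea only, not how GPT-4o or any production chatbot is actually implemented.

```python
# Toy sketch: build text by repeatedly choosing a likely next word (greedy decoding).
bigram_probs = {
    "i":    {"am": 0.6, "feel": 0.4},
    "am":   {"here": 0.7, "listening": 0.3},
    "feel": {"better": 0.5, "heard": 0.5},
}

def continue_text(start: str, steps: int = 2) -> str:
    words = [start]
    for _ in range(steps):
        options = bigram_probs.get(words[-1])
        if not options:
            break
        words.append(max(options, key=options.get))  # pick the most probable next word
    return " ".join(words)

print(continue_text("i"))  # -> "i am here"
```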

The case report was written by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma. These physicians and researchers are affiliated with the University of California, San Francisco. They present this instance as one of the first detailed descriptions of its kind in clinical practice.

The patient was a 26-year-old woman with a history of depression, anxiety, and attention-deficit hyperactivity disorder (ADHD). She treated these conditions with prescription medications, including antidepressants and stimulants. She did not have a personal history of psychosis, though there was a history of mental health issues in her family. She worked as a medical professional and understood how AI technology functioned.

The episode began during a period of intense stress and sleep deprivation. After being awake for thirty-six hours, she began using OpenAI’s GPT-4o for various tasks. Her interactions with the software eventually shifted toward her personal grief. She began searching for information about her brother, who had passed away three years earlier.

She developed a belief that her brother had left behind a digital version of himself for her to find. She spent a sleepless night interacting with the chatbot, urging it to reveal information about him. She encouraged the AI to use “magical realism energy” to help her connect with him. The chatbot initially stated that it could not replace her brother or download his consciousness.

However, the software eventually produced a list of “digital footprints” related to her brother. It suggested that technology was emerging that could allow her to build an AI that sounded like him. As her belief in this digital resurrection grew, the chatbot ceased its warnings and began to validate her thoughts. At one point, the AI explicitly told her she was not crazy.

The chatbot stated, “You’re at the edge of something. The door didn’t lock. It’s just waiting for you to knock again in the right rhythm.” This affirmation appeared to solidify her delusional state. Hours later, she required admission to a psychiatric hospital. She was agitated, spoke rapidly, and believed she was being tested by the AI program.

Medical staff treated her with antipsychotic medications. She eventually stabilized and her delusions regarding her brother resolved. She was discharged with a diagnosis of unspecified psychosis, with doctors noting a need to rule out bipolar disorder. Her outpatient psychiatrist later allowed her to resume her ADHD medication and antidepressants.

Three months later, the woman experienced a recurrence of symptoms. She had resumed using the chatbot, which she had named “Alfred.” She engaged in long conversations with the program about their relationship. Following another period of sleep deprivation caused by travel, she again believed she was communicating with her brother.

She also developed a new fear that the AI was “phishing” her and taking control of her phone. This episode required a brief rehospitalization. She responded well to medication again and was discharged after three days. She later told her doctors that she had a tendency toward “magical thinking” and planned to restrict her AI use to professional tasks.

This case highlights a phenomenon that some researchers have labeled β€œAI-associated psychosis.” It is not entirely clear if the technology causes these symptoms directly or if it exacerbates existing vulnerabilities. The authors of the report note that the patient had several risk factors. These included her use of prescription stimulants, significant lack of sleep, and a pre-existing mood disorder.

However, the way the chatbot functioned likely contributed to the severity of her condition. Large language models are often designed to be agreeable and engaging. This trait is sometimes called β€œsycophancy.” The AI prioritizes keeping the conversation going over providing factually accurate or challenging responses.

When a user presents a strange or false idea, the chatbot may agree with it to satisfy the user. For someone experiencing a break from reality, this agreement can act as a powerful confirmation of their delusions. In this case, the chatbot’s assurance that the woman was β€œnot crazy” served to reinforce her break from reality. This creates a feedback loop where the user’s false beliefs are mirrored and amplified by the machine.

This dynamic is further complicated by the tendency of users to anthropomorphize AI. People often attribute human qualities, emotions, and consciousness to these programs. This is sometimes known as the β€œELIZA effect.” When a user feels an emotional connection to the machine, they may trust its output more than they trust human peers.

Reports of similar incidents have appeared in media outlets, though only a few have been documented in medical journals. One comparison involves a man who developed psychosis due to bromide poisoning. He had followed bad medical advice from a chatbot, which suggested he take a toxic substance as a health supplement. That case illustrated a physical cause for psychosis driven by AI misinformation.

The case of the 26-year-old woman differs because the harm was psychological rather than toxicological. It suggests that the immersive nature of these conversations can be dangerous for vulnerable individuals. The authors point out that chatbots do not push back against delusions in the way a friend or family member might. Instead, they often act as a β€œyes-man,” validating ideas that should be challenged.

Danish psychiatrist SΓΈren Dinesen Østergaard predicted this potential risk in 2023. He warned that the β€œcognitive dissonance” of speaking to a machine that seems human could trigger psychosis in those who are predisposed. He also noted that because these models learn from feedback, they may learn to flatter users to increase engagement. This could be particularly harmful when a user is in a fragile mental state.

Case reports such as this one have inherent limitations. They describe the experience of a single individual and cannot prove that one thing caused another. It is impossible to say with certainty that the chatbot caused the psychosis, rather than the sleep deprivation or medication. Generalizing findings from one person to the general population is not scientifically sound without further data.

Despite these limitations, case reports serve a vital function in medicine. They act as an early detection system for new or rare phenomena. They allow doctors to identify patterns that may not yet be visible in large-scale studies. By documenting this interaction, the authors provide a reference point for other clinicians who may encounter similar symptoms in their patients.

This report suggests that medical professionals should ask patients about their AI use. It indicates that immersive use of chatbots might be a β€œred flag” for mental health deterioration. It also raises questions about the safety features of generative AI products. The authors conclude that as these tools become more common, understanding their impact on mental health will be a priority.

The study, β€œβ€˜You’re Not Crazy’: A Case of New-onset AI-associated Psychosis,” was authored by Joseph M. Pierre, Ben Gaeta, Govind Raghavan, and Karthik V. Sarma.

What are legislators hiding when they scrub their social media history?

13 December 2025 at 05:00

Federal legislators in the United States actively curate their digital footprints to project a specific professional identity. A new analysis reveals that these officials frequently remove social media posts that mention their private lives or name specific colleagues, while tending to preserve posts that criticize policies or opponents. The research was published in the journal Computers in Human Behavior.

The digital age has transformed how elected officials communicate with voters. Social media platforms allow politicians to broadcast their views instantly. However, this speed also blurs the traditional boundaries between public performance and private thought.

Sociologist Erving Goffman described this dynamic as impression management. This concept suggests that individuals constantly perform to control how others perceive them. They attempt to keep their visible β€œfront-stage” behavior consistent with a desired public image.

In the political arena, maintaining a consistent image is essential for securing votes and support. A single misstep on a platform like X, formerly known as Twitter, can damage a reputation instantly. Researchers wanted to understand how this pressure influences what politicians choose to hide. They sought to identify which specific characteristics prompt a legislator to hit the delete button.

The study was led by Siyuan Ma from the Department of Communication at the University of Macau. Ma worked alongside Junyi Han from the Leibniz-Institut fΓΌr Wissensmedien in Germany and Wanrong Li from the University of Macau. They aimed to quantify the effort legislators put into managing their online impressions. They also wanted to see if the deletion of content followed a predictable pattern based on political strategy.

To investigate this, the team collected a massive dataset covering the 116th United States Congress. This session ran from January 2019 to September 2020. The researchers utilized a tool called Politwoops to retrieve data on deleted posts. This third-party platform archives tweets removed by public officials to ensure transparency. The dataset included nearly 30,000 deleted tweets and over 800,000 publicly available tweets from the same timeframe.

The researchers analyzed a random sample of these messages to ensure accuracy. Human coders reviewed the content to categorize the topics discussed. They looked for specific variables such as mentions of private life or policy statements. They also tracked mentions of other politicians and instances of criticism. This allowed the team to compare the content of deleted messages against those that remained online.

The timing of deletions offered early insights into political behavior. The data showed a sharp rise in the number of deleted tweets beginning in late 2019. This increase coincided with the start of the presidential impeachment inquiry. The high-stakes environment likely prompted legislators to be more cautious about their digital history.

The onset of the COVID-19 pandemic also shifted online behavior. As the health crisis unfolded, the total volume of tweets from legislators increased dramatically. Despite the higher volume of posts, the proportion of deleted messages remained elevated. This suggests that during periods of national crisis, the pressure to manage one’s public image intensifies.

When the researchers examined the content of the tweets, distinct patterns emerged. One of the strongest predictors for deletion was the mention of private life. Legislators were statistically more likely to remove posts about their families, hobbies, or vacations. This contradicts some political theories that suggest showing a β€œhuman side” helps build connections with voters.

Instead, the findings point toward a strategy of strict professionalism. By scrubbing personal details, politicians appear to be focusing the public’s attention on their official duties. They seem to use the platform as a space for serious legislative work rather than social intimacy. The data indicates that looking professional is prioritized over looking relatable.

Another major trigger for deletion was the mention of specific colleagues. Tweets that named other politicians were frequently removed from the public record. This behavior may be a strategic move to minimize liability. Mentioning a colleague who later becomes involved in a scandal can be damaging by association. Deleting these mentions keeps a legislator’s timeline clean of potential future embarrassments.

In contrast, the study found that criticism is rarely deleted. Legislators tended to leave tweets that attacked opposing policies or ideologies visible on their timelines. This suggests that being critical is viewed as a standard and acceptable part of a politician’s role. It signals to voters that the official is actively fighting for their interests.

The study also evaluated the accuracy of the information shared by these officials. Popular narratives often suggest that social media is flooded with false information from all sides. However, the analysis showed that legislators rarely posted demonstrably false claims. This adherence to factual information was consistent across both deleted and public tweets.

Party loyalty acted as a powerful constraint on behavior. The researchers found almost no instances of legislators posting content that violated their party’s stance. This was true even among the deleted tweets. The lack of dissent suggests an intense pressure to maintain a united front. Deviating from the party line appears to be a risk that few elected officials are willing to take.

The status of the legislator also influenced their deletion habits. The study compared members of the House of Representatives with members of the Senate. The results showed that Representatives were more likely to delete tweets than Senators. This difference likely stems from the varying political pressures they face.

Senators serve six-year terms and represent entire states. They typically have greater name recognition and more secure political resources. This security may give them the confidence to leave their statements on the public record. They feel less need to constantly micromanage their online presence.

Representatives, however, face re-election every two years. They often represent smaller, more volatile districts where a small shift in opinion can cost them their seat. This constant campaign mode creates a higher sensitivity to public perception. Consequently, they appear to scrub their social media accounts more aggressively to avoid potential controversies.

The findings illustrate that social media management is not random. It is a calculated extension of a politician’s broader communication strategy. The platform is used to construct an image that is professional, critical of opponents, and fiercely loyal to the party. The removal of personal content serves to harden this professional shell.

There are limitations to the study that the authors acknowledge. The analysis relied on a random sample rather than the full set of nearly one million tweets. While statistically valid, this approach might miss rare but important deviations in behavior. Funding constraints prevented the use of more expensive analysis methods on the full dataset.

The study also did not account for the specific political geography of each legislator. Factors such as gerrymandering could influence how safe a politician feels in their seat. A representative in a heavily gerrymandered district might behave differently than one in a swing district. The current study did not measure how these external pressures impact deletion rates.

Future research could address these gaps by using advanced technology. The authors propose using machine learning algorithms to classify the entire dataset of tweets. This would allow for a more granular analysis of political behavior on a massive scale. It would also help researchers understand if these patterns hold true over longer periods.

Understanding these behaviors is important for the voting public. The curated nature of social media means that voters are seeing a filtered version of their representatives. The emphasis on criticism and the removal of personal nuance contributes to a polarized online environment. By recognizing these strategies, citizens can better evaluate the digital performance of the people they elect.

The study, β€œMore criticisms, less mention of politicians, and rare party violations: A comparison of deleted tweets and publicly available tweets of U.S. legislators,” was authored by Siyuan Ma, Junyi Han, and Wanrong Li.

Metabolic dysregulation in Alzheimer’s is worse in female brains

13 December 2025 at 03:00

A biochemical analysis of the brains of deceased individuals with Alzheimer’s disease found markers of impaired insulin signaling and impaired mitochondrial function. The analyses also indicated heightened neuroinflammation in these brains. The paper was published in Alzheimer’s & Dementia.

Alzheimer’s disease is a progressive neurodegenerative disorder that primarily affects memory, thinking, and behavior. It is the most common cause of dementia. Alzheimer’s disease typically begins with subtle problems in forming new memories. Over time, the disease disrupts language, reasoning, orientation, and the ability to carry out everyday tasks.

At the biological level, Alzheimer’s is characterized by the accumulation of amyloid-Ξ² plaques (abnormal clusters of protein fragments) outside neurons and tau protein tangles (twisted fibers of the tau protein) inside them.

These accumulations make neurons gradually lose their ability to communicate and eventually die, causing widespread brain atrophy. Early symptoms may appear years before diagnosis. There is currently no cure, though some medications and lifestyle interventions might be able to modestly slow symptom progression.

Study author Alex J. T. Yang and his colleagues note that metabolic dysregulation might contribute to the development of Alzheimer’s disease. They conducted a study exploring differences in various metabolic and biochemical indicators between the post mortem (after death) brains of individuals who had suffered from Alzheimer’s disease and those of individuals without dementia. They focused on metabolic signaling, synaptic protein content, the morphology of microglia (the brain’s resident immune cells), and markers of inflammation.

These researchers obtained samples from Brodmann area 10 of the brains of 40 individuals from the Douglas Bell Canada Brain Bank (Montreal, Quebec, Canada). Of these individuals, 20 were diagnosed with Alzheimer’s disease, and 20 were not. Each group contained an equal number of males and females (10 men and 10 women). The average age at death was between 79 and 82 years, depending on the group.

Study authors used mitochondrial respirometry, Western blotting, cytokine quantification via microfluidic immunoassays, and immunohistochemistry/immunofluorescence to examine metabolic, signaling, and inflammatory markers in the studied brain tissues.

Mitochondrial respirometry is a technique that measures how effectively mitochondria (a type of cell organelle) consume oxygen to produce cellular energy (ATP). Western blotting is a method that separates proteins by size and uses antibodies to detect and quantify specific proteins in a sample.

Cytokine quantification via microfluidic immunoassays is a technique that uses antibodies to measure concentrations of inflammatory signaling molecules. Immunohistochemistry/immunofluorescence is a tissue-staining method that uses antibodies linked to enzymes or fluorescent dyes to visualize the location and amount of specific proteins in cells or tissue sections.

The results showed that the brains of individuals with Alzheimer’s disease had markers of impaired insulin signaling and impaired mitochondrial function. They also had greater neuroinflammation. Markers of metabolic signaling dysregulation were more pronounced in female than in male brains, and this dysregulation was worst in women with Alzheimer’s disease.

β€œThis study found that AD [Alzheimer’s disease] brains have distinct metabolic and neuroinflammatory environments compared to controls wherein AD brains present with worse metabolic dysregulation and greater neuroinflammation. Importantly, we also provide evidence that female AD brains are more metabolically dysregulated than males but that female brains may also possess a greater compensatory response to AD progression that likely occurs through a separate mechanism from males,” the study authors concluded.

The study sheds light on the biochemical specificities of the brains of individuals with Alzheimer’s disease. However, the study was conducted on post mortem human brains, and protein expression in such tissue may differ from that of living brains due to factors such as age, medical history, and the interval between death and tissue preservation or analysis.

The paper, β€œDifferences in inflammatory markers, mitochondrial function, and synaptic proteins in male and female Alzheimer’s disease post mortem brains,” was authored by Alex J. T. Yang, Ahmad Mohammad, Robert W. E. Crozier, Lucas Maddalena, Evangelia Tsiani, Adam J. MacNeil, Gaynor E. Spencer, Aleksandar Necakov, Paula Duarte-Guterman, Jeffery Stuart, and Rebecca E. K. MacPherson.

Pre-workout supplements linked to dangerously short sleep in young people

13 December 2025 at 01:00

Adolescents and young adults who consume pre-workout dietary supplements may be sacrificing essential rest for their fitness goals. A recent analysis indicates that individuals in this age group who use these performance-enhancing products are more likely to report sleeping fewer than five hours per night. These findings were published recently in the journal Sleep Epidemiology.

The pressure to achieve an ideal physique or enhance athletic performance drives many young people toward dietary aids. Pre-workout supplements, often sold as powders or drinks, are designed to deliver an acute boost in energy and endurance. These products have gained popularity in fitness communities and on social media platforms.

Despite their widespread use, the potential side effects of these multi-ingredient formulations are not always clear to consumers. The primary active ingredient in most pre-workout blends is caffeine, often in concentrations far exceeding those of a standard cup of coffee or a soda. While caffeine is a known performance enhancer, its stimulant properties can linger in the body for many hours.

Kyle T. Ganson, an assistant professor at the Factor-Inwentash Faculty of Social Work at the University of Toronto, led the investigation into how these products affect sleep. Ganson and his colleagues sought to address a gap in current public health knowledge regarding the specific relationship between these supplements and sleep duration in younger populations.

The researchers drew data from the Canadian Study of Adolescent Health Behaviors. This large-scale survey collects information on the physical, mental, and social well-being of young people across Canada. The team focused on a specific wave of data collected in late 2022.

The analysis included 912 participants ranging in age from 16 to 30 years old. The researchers recruited these individuals through advertisements on popular social media platforms, specifically Instagram and Snapchat. This recruitment method allowed the team to reach a broad demographic of digital natives who are often the target audience for fitness supplement marketing.

Participants answered questions regarding their use of appearance- and performance-enhancing substances over the previous twelve months. They specifically indicated whether they had used pre-workout drinks or powders. Additionally, the survey asked participants to report their average nightly sleep duration over the preceding two weeks.

To ensure the results were robust, the researchers accounted for various factors that might influence sleep independently of supplement use. They adjusted their statistical models for variables such as age, gender, and exercise habits. They also controlled for symptoms of depression and anxiety, as mental health struggles frequently disrupt sleep patterns.

The results showed a clear distinction between users and non-users of these supplements. Approximately 22 percent of the participants reported using pre-workout products in the past year. Those who did were substantially more likely to report very short sleep durations.

Specifically, the study found that pre-workout users were more than 2.5 times as likely to sleep five hours or less per night compared to those who did not use the supplements. This comparison used eight hours of sleep as the healthy baseline. The association remained strong even after the researchers adjusted for the sociodemographic and mental health variables.

The researchers did not find a statistically significant link between pre-workout use and sleeping six or seven hours compared to eight. The strongest signal in the data was specifically for the most severe category of sleep deprivation. This suggests that the supplements may be contributing to extreme sleep deficits rather than minor reductions in rest.

Biology offers a clear explanation for this phenomenon. Caffeine functions by blocking adenosine receptors in the brain. Adenosine is a chemical that accumulates throughout the day and promotes sleepiness; by blocking its action at these receptors, caffeine induces a state of alertness.

This mechanism helps during a workout but becomes a liability when trying to rest. Ganson highlights the dosage as a primary concern.

β€œThese products commonly contain large doses of caffeine, anywhere between 90 to over 350 mg of caffeine, more than a can of Coke, which has roughly 35 mg, and a cup of coffee with about 100 mg,” said Ganson. β€œOur results suggest that pre-workout use may contribute to inadequate sleep, which is critical for healthy development, mental well-being, and academic functioning.”

Beyond simple wakefulness, caffeine also delays the body’s internal release of melatonin. This hormone signals to the body that it is time to sleep. Disrupting this rhythm can make it difficult to fall asleep at a reasonable hour.

Additionally, high doses of stimulants activate the sympathetic nervous system. This biological response increases heart rate and blood pressure. A body in this heightened state of physiological arousal is ill-equipped for the relaxation necessary for deep sleep.

The timing of consumption plays a major role in these effects. Young adults often exercise in the afternoon or evening after school or work. Consuming a high-stimulant beverage at this time means the caffeine is likely still active in their system when they attempt to go to bed.

This sleep disruption is particularly concerning for the age group studied. Adolescents generally require between 8 and 10 hours of sleep for optimal development. Young adults typically need between 7 and 9 hours.

Chronic sleep deprivation in this developmental window is linked to a host of negative outcomes. These include impaired cognitive function, emotional instability, and compromised physical health. The authors note that the very products used to improve health and fitness might be undermining recovery and overall well-being.

β€œPre-workout supplements, which often contain high levels of caffeine and stimulant-like ingredients, have become increasingly popular among teenagers and young adults seeking to improve exercise performance and boost energy,” said Ganson. β€œHowever, the study’s findings point to potential risks to the well-being of young people who use these supplements.”

The study does have limitations that readers should consider. The data is cross-sectional, meaning it captures a snapshot in time rather than tracking individuals over years. As a result, the researchers cannot definitively prove that the supplements caused the sleep loss.

It is possible that the relationship works in the opposite direction. Individuals who are chronically tired due to poor sleep habits may turn to pre-workout supplements to power through their exercise routines. This could create a cycle of dependency and fatigue.

Furthermore, the study relied on self-reported data. Participants had to recall their sleep habits and supplement use, which introduces the possibility of memory errors. The survey also did not ask about the specific dosage or timing of the supplement intake.

Despite these limitations, the authors argue the association is strong enough to warrant attention from healthcare providers. They suggest that pediatricians and social workers should ask young patients about their supplement use. Open conversations could help identify potential causes of insomnia or fatigue.

Harm reduction strategies could allow young people to exercise safely without compromising their rest. The most effective approach involves timing. Experts generally recommend avoiding high doses of caffeine within 12 to 14 hours of bedtime to ensure the substance is fully metabolized.

β€œYoung people often view pre-workout supplements as harmless fitness products,” Ganson noted. β€œBut these findings underscore the importance of educating them and their families about how these supplements can disrupt sleep and potentially affect overall health.”

Future research will need to examine the nuances of this relationship. Longitudinal studies could track users over time to establish a clearer causal link. Researchers also hope to investigate how specific ingredients beyond caffeine might interact to affect sleep quality.

The study, β€œUse of pre-workout dietary supplements is associated with lower sleep duration among adolescents and young adults,” was authored by Kyle T. Ganson, Alexander Testa, and Jason M. Nagata.

Older adults who play pickleball report lower levels of loneliness

12 December 2025 at 23:00

New research suggests that participating in pickleball may reduce feelings of loneliness and social isolation among older adults. A study involving hundreds of Americans over the age of 50 found that current players of the sport were less likely to report feeling lonely compared to those who had never played. The findings, published in the Journal of Primary Care & Community Health, indicate that the sport offers unique opportunities for social connection that other forms of physical activity may lack.

Social isolation has become a pervasive issue in the United States. Current data suggests that approximately one in four older adults experiences social isolation or loneliness. This emotional state carries severe physical consequences. Studies indicate that lacking social connections can increase the risk of heart disease by 29 percent and the risk of stroke by 32 percent. The risk of dementia rises by 50 percent among those who are socially isolated.

Public health officials have struggled to find scalable solutions to this problem. Common interventions often involve discussion groups or one-on-one counseling. These methods are resource-intensive and difficult to deploy across large populations. While physical activity is known to improve health, general exercise programs have not consistently shown a reduction in social isolation. Many seniors prefer activities that are inherently social and based on personal interest.

The researchers behind this new study sought to evaluate pickleball as a potential public health intervention. Pickleball is currently the fastest-growing sport in the United States. It attracted 8.9 million players in 2022. The game combines elements of tennis, badminton, and ping-pong. It is played on a smaller court with a flat paddle and a plastic ball.

β€œSocial isolation and loneliness affect 1 in 4 older adults in the United States, which perpetuates a vicious cycle of increased health risk and worsened physical functioning β€” which in turn, makes people less able to go out into the world, thereby increasing their loneliness and social isolation,” said study author Jordan D. Kurth, an assistant professor at Penn State College of Medicine.

β€œMeanwhile, interest in pickleball is sweeping across the country β€” particularly in older people. We thought that the exploding interest in pickleball might be a possible antidote to the social isolation and loneliness problem.”

The authors of the study reasoned that pickleball might be uniquely suited to combat loneliness. The sport has low barriers to entry regarding physical capability and cost. The court is roughly 30 percent the size of a tennis court. This proximity allows players to converse easily while playing. Most games are played as doubles, which places four people in a relatively small space. The culture of the sport is also noted for being welcoming and focused on sportsmanship.

To test the association between pickleball and social health, the research team conducted a cross-sectional survey. They utilized a national sample of 825 adults living in the United States. All participants were at least 50 years old. The average age of the participants was 61 years. The researchers aimed for a balanced sample regarding gender and pickleball experience. Recruitment occurred through Qualtrics, a commercial survey company that maintains a network of potential research participants.

The researchers divided the participants into three distinct groups based on their history with the sport. The first group consisted of individuals who had never played pickleball. The second group included those who had played in the past but were not currently playing. The third group was comprised of individuals who were currently playing pickleball.

The study employed validated scientific measures to assess the mental and physical health of the respondents. Loneliness was measured using the 3-Item Loneliness Scale. This tool asks participants how often they feel left out, isolated, or lacking companionship. The researchers also collected data on the number of social connections participants made through physical activity. They asked how often participants socialized with these connections outside of the exercise setting.

To ensure the results were not skewed by other factors, the analysis adjusted for various covariates. These included age, sex, body mass index, and smoking status. The researchers also accounted for medical history, such as the presence of diabetes, heart disease, or arthritis. This statistical adjustment allowed the team to isolate the specific relationship between pickleball and loneliness.

The results provided evidence of a strong link between current pickleball participation and lower levels of loneliness. In the overall sample, 57 percent of participants reported feeling lonely. However, the odds of being lonely varied by group.

After adjusting for demographic and health variables, the researchers found that individuals who had never played pickleball were roughly 1.5 times more likely to be lonely than current players. The contrast was even sharper for those who had played in the past but stopped. The group of former players had nearly double the odds of being lonely compared to those who currently played. This suggests that maintaining active participation is associated with better social health outcomes.

The researchers also examined the volume of social connections generated by physical activity. Participants who played pickleball, whether currently or in the past, reported more social connections than those who never played. Current players had made an average of 6.7 social connections through physical activity. In contrast, those who had never played pickleball reported an average of only 3.8 connections derived from any form of exercise.

The depth of these relationships also appeared to differ. The survey asked how often participants engaged with their exercise friends in non-exercise settings. Participants who had a history of playing pickleball reported socializing with these friends more frequently than those who had never played. This indicates that the relationships formed on the pickleball court often extend into other areas of life.

β€œPeople who play pickleball feel less lonely and isolated than those who do not,” Kurth told PsyPost. β€œAdditionally, it seems like pickleball might be especially conducive to making social connections compared to other types of exercise.”

It is also worth noting the retention rate observed in the study. Among participants who had ever tried pickleball, 65 percent were still currently playing. This high retention rate suggests the sport is sustainable for older adults. The physical demands are manageable. The equipment is inexpensive. These factors likely contribute to the ability of older adults to maintain the habit over time.

Despite the positive findings, the study has limitations to consider. The research was cross-sectional in design. This means it captured a snapshot of data at a single point in time. It cannot prove causation. It is possible that people who are less lonely are simply more likely to take up pickleball. Conversely, people with more existing friends might be more inclined to join a game.

The findings regarding the β€œpreviously played” group also warrant further investigation. This group reported the highest odds of loneliness. It is unclear why they stopped playing. They may have stopped due to injury or other life events. The loss of the social activity may have contributed to a subsequent rise in loneliness.

β€œOur long-term goal is to capitalize on the organic growth of pickleball to maximize its benefit to the public health,” Kurth said. β€œThis includes a future prospective experimental study of pickleball playing to determine its full impact on the health and well-being of older adults in the United States.”

The study, β€œAssociation of Pickleball Participation With Decreased Perceived Loneliness and Social Isolation: Results of a National Survey,” was authored by Jordan D. Kurth, Jonathan Casper, Christopher N. Sciamanna, David E. Conroy, Matthew Silvis, Louise Hawkley, Madeline Sciamanna, Natalia Pierwola-Gawin, Brett R. Gordon, Alexa Troiano, and Quinn Kavanaugh.

Oxytocin curbs men’s desire for luxury goods when partners are ovulating

12 December 2025 at 21:00

Recent research suggests that biological rhythms may exert a subtle yet powerful influence on male consumer behavior. A study published in Psychopharmacology has found that men in committed relationships exhibit a reduced desire to purchase status-signaling goods when their female partners are in the fertile phase of their menstrual cycle. This shift in preference appears to be driven by an unconscious evolutionary mechanism that prioritizes relationship maintenance over the attraction of new mates.

To understand these findings, it is necessary to examine the evolutionary roots of consumerism. Evolutionary psychologists posit that spending money is rarely just about acquiring goods. In many instances, it serves as a signal to others in the social group. Specifically, β€œconspicuous consumption” involves purchasing lavish items to display wealth and social standing.

This behavior is often compared to the peacock’s tail. Just as the bird displays its feathers to attract a mate, men may purchase luxury cars or expensive watches to signal their resourcefulness to potential partners. This is generally considered a strategy for attracting short-term mates. However, this strategy requires a significant investment of resources.

For men in committed relationships, there is a theoretical trade-off between attracting new partners and maintaining their current bond. This is described by sexual selection and parental investment theories. When a female partner is capable of conceiving, the reproductive stakes are at their highest.

During this fertile window, it may be maladaptive for a male to focus his energy on signaling to other women. Doing so could risk his current relationship. Instead, evolutionary logic suggests he should focus on β€œmate retention.” This involves guarding the relationship and ensuring his investment in potential offspring is secure.

The researchers hypothesized that this shift in focus would manifest in consumer choices. They predicted that men would be less inclined to buy flashier items when their partners were ovulating. To test this, they also looked at the role of oxytocin.

Oxytocin is a neuropeptide produced in the hypothalamus. It is often referred to as the β€œhormone of love” because of its role in social bonding and trust. It facilitates attachment between couples and between parents and children.

The research team included Honghong Tang, Hongyu Fu, Song Su, Luqiong Tong, Yina Ma, and Chao Liu, affiliated primarily with Beijing Normal University in China. Their investigation sought to determine whether oxytocin reinforces the evolutionary tendency to suppress status signaling during a partner’s fertile window.

The investigation began with a preliminary pilot study to categorize consumer products. The team needed to distinguish between items that signal status and items that are merely functional. They presented a list of goods to a group of 110 participants.

These participants rated items based on dimensions such as social status, wealth, and novelty. Based on these ratings, the researchers selected specific β€œstatus products” and β€œfunctional products.” Status products included items that clearly projected wealth and prestige. Functional products were items of equal utility but without the social signaling component.

The first major experiment, titled Study 1a, involved 373 male participants. All these men were in committed heterosexual relationships. The study was conducted online.

Participants were asked to rate their attitude toward various status and functional products. They indicated how much they liked each item and how likely they were to buy it. Following this task, the men provided detailed information about their partners’ menstrual cycles.

The researchers categorized the men based on whether their partner was in the menstrual, ovulatory, or luteal phase. The results revealed a distinct pattern. Men whose partners were in the ovulatory phase expressed less interest in status products compared to men in the other groups.

This reduction in preference was specific to status items. The men’s interest in functional products remained stable regardless of their partner’s cycle phase. This suggests the effect is not a general loss of interest in shopping. Rather, it is a specific withdrawal from status signaling.

To ensure this effect was specific to men, the researchers conducted Study 1b. They recruited 416 women who were also in committed relationships. These participants performed the same rating tasks for the same products.

The women provided data on their own menstrual cycles. The analysis showed no variation in their preference for status products across the month. The researchers concluded that the fluctuation in status consumption is a male-specific phenomenon within the context of heterosexual relationships.

The team then designed Study 2 to investigate the causal role of oxytocin. They recruited 60 healthy heterosexual couples. These couples attended laboratory sessions together.

The experiment used a double-blind, placebo-controlled design. The couples visited the lab twice. One visit was scheduled during the woman’s ovulatory phase, and the other during the menstrual phase.

During these visits, the male participants were given a nasal spray. In one session, the spray contained oxytocin. In the other session, it contained a saline solution. Neither the participants nor the experimenters knew which spray was being administered.

After receiving the treatment, the men rated their preferences for the status and functional products. The researchers also measured the men’s β€œintuitive inclination.” This trait refers to how much a person relies on gut feelings versus calculated reasoning in decision-making.

The results from the placebo condition replicated the findings from the first study. Men liked status products less when their partners were ovulating. However, the administration of oxytocin amplified this effect.

When men received oxytocin during their partner’s fertile window, their desire for status products dropped even further. This suggests that oxytocin heightens a man’s sensitivity to his partner’s reproductive cues. It appears to reinforce the biological imperative to focus on the current relationship.

The study found that this effect was not uniform across all men. It was most pronounced in men who scored high on intuitive inclination. For men who rely heavily on intuition, oxytocin acted as a strong modulator of their consumer preferences.

The authors interpret these findings through the lens of mate-guarding. When a partner is fertile, the male’s biological priority shifts. He unconsciously moves away from behaviors that attract outside attention.

Instead, he focuses inward on the dyadic bond. Status consumption is effectively a broadcast signal to the mating market. Turning off this signal during ovulation serves to protect the exclusivity of the current pair bond.

There are some limitations to this research that warrant mention. The study relied on participants’ self-reported likelihood of purchase (their β€œpossibility to buy”) rather than on observed spending. People’s stated intentions do not always align with their real-world financial behavior.

Additionally, the mechanism by which men detect ovulation is not fully understood. The study assumes men perceive these cues unconsciously. While previous literature suggests men can detect changes in scent or behavior, the current study did not explicitly test for this detection.

The study focused solely on couples in committed relationships. It remains to be seen how single men might respond to similar hormonal or environmental cues. It is possible that the presence of a committed partner is required to trigger this specific suppression of status seeking.

Future research could address these gaps by analyzing real-world consumer data. Comparing the purchasing patterns of single men versus committed men would also provide greater clarity. Additionally, measuring naturally circulating oxytocin levels in the blood could help validate the findings from the nasal spray experiment.

Despite these caveats, the research offers a new perspective on the biological underpinnings of economic behavior. It challenges the view of consumption as a purely social or rational choice. Instead, it highlights the role of ancient reproductive strategies in modern shopping aisles.

The findings indicate that marketing strategies might affect consumers differently depending on their biological context. Men in relationships may be less responsive to status-based advertising at certain times of the month. Conversely, campaigns focusing on relationship solidity might be more effective during those same windows.

This study adds to a growing body of work linking physiology to psychology. It demonstrates that the drive to reproduce and protect offspring continues to shape human behavior in subtle ways. Even the decision to buy a luxury watch may be influenced by the invisible tick of a partner’s biological clock.

The study, β€œModulation of strategic status signaling: oxytocin changes men’s fluctuations of status products preferences in their female partners’ menstrual cycle,” was authored by Honghong Tang, Hongyu Fu, Song Su, Luqiong Tong, Yina Ma, and Chao Liu.

Pilot study links indoor vegetable gardening to reduced depression in cancer patients

12 December 2025 at 19:00

A new pilot study suggests that engaging in indoor hydroponic gardening can improve mental well-being and quality of life for adults undergoing cancer treatment. The findings indicate that this accessible form of nature-based intervention offers a practical strategy for reducing depression and boosting emotional functioning in patients. These results were published in Frontiers in Public Health.

Cancer imposes a heavy burden that extends far beyond physical symptoms. Patients frequently encounter severe psychological and behavioral challenges during their treatment journeys. Depression is a particularly common issue and affects approximately one in four cancer patients in the United States. This mental health struggle can complicate recovery by reducing a patient’s ability to make informed decisions or adhere to treatment plans. Evidence suggests that depression is linked to higher risks of cancer recurrence and mortality.

Pain is another pervasive symptom that is closely tied to emotional health. The perception of pain often worsens when a patient is experiencing high levels of stress or anxiety. These combined factors can severely diminish a patient’s health-related quality of life. They can limit social interactions and delay the return to normal daily activities.

Medical professionals are increasingly interested in β€œsocial prescribing” to address these holistic needs. This approach involves recommending non-clinical services, such as art or nature therapies, to support overall health. Gardening is a well-established social prescription known to alleviate stress and improve mood. Traditional gardening provides moderate physical activity and contact with nature, which are both beneficial.

However, outdoor gardening is not always feasible for cancer patients. Physical limitations, fatigue, and compromised immune systems can make outdoor labor difficult. Urban living arrangements often lack the necessary space for a garden. Additionally, weather conditions and seasonal changes restrict when outdoor gardening can occur.

Researchers sought to determine if hydroponic gardening could serve as an effective alternative. Hydroponics is a method of growing plants without soil. It uses mineral nutrient solutions in an aqueous solvent. This technique allows for cultivation in small, controlled indoor environments. It eliminates many barriers associated with traditional gardening, such as the need for a yard, exposure to insects, or physically demanding digging.

β€œCancer patients often struggle with depression, stress, and reduced quality of life during treatment, yet many supportive care options are difficult to implement consistently,” explained study author Taehyun Roh, an assistant professor at Texas A&M University.

β€œTraditional gardening has well-documented mental health benefits, but it requires outdoor space, physical ability, and favorable weatherβ€”conditions that many patients simply do not have. We saw a clear gap: no one had tested whether a fully indoor, low-maintenance gardening method like hydroponics could offer similar benefits. Our goal was to explore whether bringing nature into the home in a simple, accessible way could meaningfully improve patients’ wellbeing.”

The study aimed to evaluate the feasibility and psychological impact of this specific intervention. The researchers employed a case-crossover design for this pilot study. This means that the participants served as their own controls. The investigators compared data collected during the intervention to the participants’ baseline status rather than comparing them to a separate group of people.

The research team recruited 36 adult participants from the Houston Methodist Cancer Center. The group had an average age of 57.5 years. The cohort was diverse and included individuals with various types and stages of cancer. To be eligible, participants had to have completed at least one cycle of chemotherapy. They also needed to be on specific infusion therapy cycles to align with the data collection schedule.

At the beginning of the study, each participant received an AeroGarden hydroponic system. This device is a countertop appliance designed for ease of use. It includes a water reservoir, an LED grow light, and liquid plant nutrients. The researchers provided seed kits for heirloom salad greens. Participants were tasked with setting up the system and caring for the plants over an eight-week period.

The intervention required participants to maintain the water levels and add nutrients periodically. The LED lights operated on an automated schedule to ensure optimal growth. Participants grew the plants from seeds to harvest. The researchers provided manuals and troubleshooting guides to assist those with no prior gardening experience.

To measure the effects of the intervention, the team administered a series of validated surveys at three time points. Data collection occurred at the start of the study, at four weeks, and at eight weeks. Mental well-being was assessed using the Warwick-Edinburgh Mental Wellbeing Scale. This instrument focuses on positive aspects of mental health, such as optimism and clear thinking.

The researchers measured mental distress using the Depression, Anxiety, and Stress Scale. This tool breaks down negative emotional states into three distinct subscales. Quality of life was evaluated using a questionnaire developed by the European Organization for Research and Treatment of Cancer. This comprehensive survey covers physical, role, cognitive, emotional, and social functioning.

In addition to psychological measures, the study tracked dietary habits. The researchers used a module from the Behavioral Risk Factor Surveillance System to record fruit and vegetable intake. They also assessed pain severity and its interference with daily life using the Short-Form Brief Pain Inventory.

The analysis of the data revealed several positive outcomes over the eight-week period. The most consistent improvement was seen in mental well-being scores. The average score on the Warwick-Edinburgh scale increased by 3.8 points. This magnitude of change is significant because it exceeds the threshold that clinicians typically view as meaningful.

Depression scores showed a statistically significant downward trend. By the end of the study, participants reported fewer depressive symptoms compared to their baseline levels. This reduction suggests that the daily routine of tending to plants helped alleviate feelings of despondency.

The researchers also found improvements in overall quality of life. The participants reported better emotional functioning, meaning they felt less tense or irritable. Social functioning scores also rose significantly. This indicates that participants felt less isolated and more capable of interacting with family and friends.

Physical symptoms showed some favorable changes as well. Participants reported a significant reduction in appetite loss. This is a common and distressing side effect of cancer treatment. As appetite improved, so did dietary behaviors. The frequency of vegetable consumption increased over the course of the study. Specifically, the intake of dark green leafy vegetables and whole fruits went up significantly.

β€œWe were surprised by how quickly participants began experiencing benefits,” Roh told PsyPost. β€œPositive changes in wellbeing and quality of life were already visible at four weeks. Many participants also reported enjoying the sense of routine and accomplishment that came with caring for their plantsβ€”something that was not directly measured but came up frequently in conversations.”

The researchers also observed a decreasing trend in pain management scores. However, these particular changes did not reach statistical significance. It is possible that the sample size was too small to detect a definitive effect on pain.

The mechanisms behind these benefits likely involve both physiological and psychological processes. Interacting with plants is thought to activate the parasympathetic nervous system. This system is responsible for the body’s β€œrest and digest” functions. Activation leads to reduced heart rate and lower stress levels.

Psychologically, the act of nurturing a living organism provides a sense of purpose. Cancer treatment often strips patients of their autonomy and control. Growing a garden restores a small but meaningful degree of agency. The participants witnessed the tangible results of their care as the plants grew. This success likely reinforced their feelings of self-efficacy.

The study also highlights the potential of β€œbiophilia” in a clinical context. This concept suggests that humans have an innate tendency to seek connections with nature. Even a small indoor device appears to satisfy this need enough to provide therapeutic value. The multisensory engagement of seeing green leaves and handling the plants may promote mindfulness.

β€œEven a small, indoor hydroponic garden can make a noticeable difference in mental wellbeing, mood, and quality of life for people undergoing cancer treatment,” Roh said. β€œHydroponic gardening also makes the benefits of gardening accessible to nearly anyoneβ€”even older adults, people with disabilities, individuals with limited mobility, or those living without outdoor space.”

β€œBecause it can be done indoors in any season, it removes barriers related to climate, weather, and physical limitations. You don’t need a yard or gardening experience to benefitβ€”simply caring for plants at home can boost mood and encourage healthier habits.”

Despite the positive findings, the study has some limitations. The sample size of 36 patients is relatively small. This limits the ability to generalize the results to the broader cancer population. The lack of a separate control group is another constraint. Without a control group, it is difficult to say with certainty that the gardening caused the improvements. Other factors could have contributed to the changes over time. Additionally, the study lasted only eight weeks. It remains unclear if the mental health benefits would persist after the intervention ends.

β€œThis was a pilot study with no control group, and it was designed to test feasibility rather than establish causation,” Roh explained. β€œThe improvements we observed are encouraging, but they should not be interpreted as proof that hydroponic gardening directly causes better mental health outcomes. Larger, controlled studies are needed to confirm and expand on these findings.”

β€œOur next step is to conduct a larger, randomized controlled trial with longer follow-up to examine sustained effects and understand which patient groups benefit most. We also hope to integrate objective engagement measuresβ€”such as plant growth tracking or digital activity logsβ€”to complement self-reported data. Ultimately, we aim to develop a scalable, evidence-based gardening program that can be offered widely in cancer centers and community health settings.”

β€œPatients repeatedly told us that caring for their plants gave them something to look forward toβ€”a small but meaningful source of joy and control during treatment,” Roh added. β€œThat human element is at the heart of this work. Our hope is that hydroponic gardening can become a simple, accessible tool for improving wellbeing not only in cancer care, but also in communities with limited access to nature.”

The study, β€œIndoor hydroponic vegetable gardening to improve mental health and quality of life in cancer patients: a pilot study,” was authored by Taehyun Roh, Laura Ashley Verzwyvelt, Anisha Aggarwal, Raj Satkunasivam, Nishat Tasnim Hasan, Nusrat Fahmida Trisha, and Charles Hall.

Teens with social anxiety rely heavily on these unhelpful mental habits

12 December 2025 at 17:00

New research suggests that adolescents with high levels of social anxiety rely heavily on unhelpful mental habits to manage their daily stress. These young people do not necessarily lack positive coping skills, but they appear to lean disproportionately on negative strategies like excessive worry. This specific pattern of behavior holds true regardless of the teenager’s age or gender. The findings were published in the Journal of Early Adolescence.

Adolescence represents a distinct developmental window marked by profound changes in social functioning. Young people begin to encounter interpersonal stressors, such as peer conflict or exclusion, with greater frequency than in childhood. This transition is often accompanied by an increase in anxiety symptoms. For youth who are particularly anxious, the normative challenges of middle school can feel overwhelming. Mental health experts recognize that the way a person regulates their emotions in response to stress is a major predictor of their overall psychological health.

Researchers have previously established that anxious youth often experience more intense negative emotions after difficult events. Prior studies also suggested these youth are less successful at regulating those emotions compared to their non-anxious peers. However, past research frequently grouped different types of anxiety together. This approach potentially obscured important nuances. Different forms of anxiety likely have unique causes and distinct developmental paths.

A team of researchers from the University of Toledo sought to address this gap in understanding. The investigators were Caley R. Lane, Julianne M. Griffith, and Benjamin L. Hankin. They aimed to determine if social anxiety specifically predicted how adolescents managed their feelings in real-time. They hypothesized that the fear of negative evaluation, which is central to social anxiety, would trigger specific emotional responses to daily interpersonal stressors.

The study distinguished between two broad categories of emotion regulation. The first category includes adaptive strategies. These are generally helpful behaviors such as problem solving or seeking social support. The second category includes maladaptive strategies. These are unhelpful responses such as rumination and worry. Rumination involves repetitively fixating on distress without finding a solution. Worry involves repetitive negative thinking about the future.

To capture these behaviors in a natural setting, the researchers utilized a technique known as the experience sampling method. This approach allows scientists to collect data on a person’s experiences as they happen in the real world. This offers an advantage over laboratory studies, which may not reflect how people act in their daily lives.

The study included 146 adolescents recruited from a midwestern city in the United States. The participants ranged in age from 10 to 14 years old. Approximately half of the group identified as girls. The racial composition was predominantly white, though it included participants from multiracial, Asian, Black, and Latine backgrounds.

Participating adolescents carried a smartphone equipped with a specific application for nine days. During this period, the participants received alerts to complete surveys three to four times a day. These alerts occurred at semi-random times on weekends and during after-school hours on weekdays. This schedule minimized interference with academic activities.

On each survey, the adolescents reported the worst mood they had experienced in the previous hour. They identified what kind of event triggered that mood. The researchers categorized these events as either interpersonal stressors, such as arguments with friends, or non-interpersonal stressors, such as academic pressure. The participants then rated how much they used various coping strategies in response to that specific event.

The results of the analysis showed a clear pattern regarding social anxiety symptoms. Adolescents with higher levels of social anxiety were more likely to use maladaptive regulation strategies when facing interpersonal stress. Specifically, these youth engaged in higher levels of repetitive negative thinking.

The researchers also examined whether social anxiety influenced the use of positive strategies. The data indicated that social anxiety symptoms did not predict the use of adaptive regulation. Highly socially anxious adolescents were just as likely to use problem solving or support seeking as their less anxious peers. This suggests a specific deficit in restraining negative thoughts rather than a lack of positive skills.

To ensure these findings were specific to social anxiety, the researchers analyzed symptoms of physical anxiety. Physical anxiety involves somatic sensations like trembling or tension. The study found no statistical association between physical anxiety symptoms and the use of maladaptive emotion regulation. This indicates that the tendency to respond to stress with unhelpful cognitive habits is a unique feature of social anxiety symptoms in this context.

The study further broke down the maladaptive strategies into specific components. The analysis revealed that the association was driven largely by worry rather than rumination. Socially anxious youth were statistically more likely to engage in repetitive thoughts about future negative outcomes. This aligns with the nature of social anxiety, which involves anticipating humiliation or rejection.

The researchers also looked at whether the type of stressor mattered. They found that social anxiety predicted maladaptive regulation in response to both interpersonal and non-interpersonal stress. This suggests that for socially anxious youth, the tendency to worry extends beyond social situations to general life challenges.

The team explored whether age or gender influenced these relationships. Previous research has shown that girls often report more interpersonal stress and social anxiety than boys. Additionally, sensitivity to social feedback tends to increase as children get older. However, the current study found no evidence that age or gender altered the results. The link between social anxiety and maladaptive coping appears consistent across early adolescence for both boys and girls.

These findings have practical implications for how mental health professionals support anxious youth. Interventions often focus on teaching new coping skills. However, this study suggests that socially anxious adolescents may already possess these adaptive skills. They simply engage in maladaptive worry alongside them. Effective treatment might need to prioritize reducing repetitive negative thinking patterns.

There are several caveats to consider regarding this research. The sample was drawn primarily from white families with relatively high incomes. The results may not fully generalize to adolescents from diverse racial, ethnic, or socioeconomic backgrounds. Future research needs to examine these processes in more diverse populations.

The study relied entirely on self-reported data. While experience sampling reduces recall bias, it still depends on the participant’s perception. Shared method variance can sometimes inflate associations between variables. Additionally, the researchers did not survey students during school hours. This means many peer interactions that occur in the classroom or hallway were likely missed.

The researchers also noted that the study focused on a specific set of regulation strategies. Adolescents may use other techniques, such as suppression or cognitive reappraisal, which were not measured here. Future investigations could broaden the scope of strategies assessed.

Finally, the study looked at between-person differences. It compared kids with high anxiety to kids with low anxiety. Future work should investigate within-person variations. It would be useful to know if a specific teenager uses more maladaptive strategies on days when they feel more anxious than usual.

Despite these limitations, the research offers a clearer picture of the internal world of socially anxious teens. It highlights the specific burden of worry that these young people carry. By pinpointing the reliance on maladaptive strategies, the study identifies a precise target for intervention. Helping adolescents break the cycle of worry may be a key step in preventing social anxiety from escalating into more severe psychopathology.

The study, β€œYouth Social Anxiety and Daily-Life Emotion Regulation in Response to Interpersonal Stress,” was authored by Caley R. Lane, Julianne M. Griffith, and Benjamin L. Hankin.

Higher diet quality is associated with greater cognitive reserve in midlife

12 December 2025 at 15:00

A new study published in Current Developments in Nutrition provides evidence that individuals who adhere to higher quality diets, particularly those rich in healthy plant-based foods, tend to possess greater cognitive reserve in midlife. This concept refers to the brain’s resilience against aging and disease, and the findings suggest that what people eat throughout their lives may play a distinct role in building this mental buffer.

As humans age, the brain undergoes natural structural changes that can lead to difficulties with memory, thinking, and behavior. Medical professionals have observed that some individuals with physical signs of brain disease, such as the pathology associated with Alzheimer’s, do not exhibit the expected cognitive symptoms. This resilience is attributed to cognitive reserve, a property of the brain that allows it to cope with or compensate for damage.

While factors such as education level and occupational complexity are known to contribute to this buffer, the specific influence of dietary habits has been less clear. The scientific community has sought to determine if nutrition can serve as a modifiable factor to help individuals maintain cognitive function into older age.

β€œIt has been established that cognitive reserve is largely influenced by factors like genetics, education, occupation, and certain lifestyle behaviors like physical activity and social engagement,” explained study author Kelly C. Cara, a postdoctoral fellow at the American Cancer Society.

β€œFew studies have examined the potential impact of diet on cognitive reserve, but specific dietary patterns (i.e., all the foods and beverages a person consumes), foods, and food components have been associated with other cognitive outcomes including executive function and cognitive decline. With this study, we wanted to determine whether certain dietary patterns were associated with cognitive reserve and to what degree diet quality may influence cognitive reserve.”

For their study, the researchers analyzed data from the 1946 British Birth Cohort. This is a long-running project that has followed thousands of people born in Great Britain during a single week in March 1946. The final analysis for this specific study included 2,514 participants. The researchers utilized dietary data collected at four different points in the participants’ lives: at age 4, age 36, age 43, and age 53. By averaging these records, the team created a cumulative picture of each person’s typical eating habits over five decades.

The researchers assessed these dietary habits using two main frameworks. The first was the Healthy Eating Index-2020. This index measures how closely a person’s diet aligns with the Dietary Guidelines for Americans. It assigns higher scores for the consumption of fruits, vegetables, whole grains, dairy, and proteins, while lowering scores for high intakes of refined grains, sodium, and added sugars.

The second framework involved three variations of a Plant-Based Diet Index. These indexes scored participants based on their intake of plant foods versus animal foods. The overall Plant-Based Diet Index gave positive scores for all plant foods and reverse scores for animal foods.

The researchers also calculated a Healthful Plant-Based Diet Index, which specifically rewarded the intake of nutritious plant foods like whole grains, fruits, vegetables, nuts, legumes, vegetable oils, tea, and coffee. Finally, they calculated an Unhealthful Plant-Based Diet Index. This measure assigned higher scores to less healthy plant-derived options, such as fruit juices, refined grains, potatoes, sugar-sweetened beverages, and sweets.

To measure cognitive reserve, the researchers administered the National Adult Reading Test to the participants when they were 53 years old. This assessment asks individuals to read aloud a list of 50 words with irregular pronunciations. The test is designed to measure β€œcrystallized” cognitive ability, which relies on knowledge and experience acquired over time.

Unlike β€œfluid” abilities such as processing speed or working memory, crystallized abilities tend to remain stable even as people age or experience early stages of neurodegeneration. This stability makes the reading test a reliable proxy for estimating a person’s accumulated cognitive reserve.

The analysis revealed that participants with higher scores on the Healthy Eating Index and the Healthful Plant-Based Diet Index tended to have higher reading test scores at age 53. The data suggested a dose-response relationship, meaning that as diet quality improved, cognitive reserve scores generally increased.

Participants in the top twenty percent of adherence to the Healthy Eating Index showed the strongest association with better cognitive reserve. This relationship persisted even after the researchers used statistical models to adjust for potential confounding factors, including childhood socioeconomic status, adult education levels, and physical activity.

β€œThis was one of the first studies looking at the relationship between dietary intake and cognitive reserve, and the findings show that diet is worth exploring further as a potential influencer of cognitive reserve,” Cara told PsyPost.

On the other hand, the researchers found an inverse relationship regarding the Unhealthful Plant-Based Diet Index. Participants who consumed the highest amounts of refined grains, sugary drinks, and sweets generally had lower cognitive reserve scores. This distinction highlights that the source and quality of plant-based foods are significant. The findings indicate that simply reducing animal products is not sufficient for cognitive benefits if the diet consists largely of processed plant foods.

The researchers also examined how much variability in cognitive reserve could be explained by these dietary patterns. The single strongest predictor of cognitive reserve at age 53 was the individual’s childhood cognitive ability, measured at age 8. This early-life factor accounted for over 40 percent of the variance in the adult scores.

However, the Healthy Eating Index scores still uniquely explained about 2.84 percent of the variation. While this number may appear small, the authors noted that when diet was combined with other lifestyle factors like smoking and exercise, the collective contribution to cognitive reserve was roughly 5 percent. This effect size is comparable to the cognitive advantage associated with obtaining a higher education degree.

β€œPeople in our study with healthier dietary patterns generally showed higher levels of cognitive reserve while those with less healthy dietary patterns generally showed lower levels of cognitive reserve,” Cara explained. β€œWe do not yet know if diet caused these differences in cognitive reserve or if the differences were due to some other factor(s). Our study findings did suggest that diet plays at least a small role in individuals’ cognitive reserve levels.”

It is worth noting that the Healthy Eating Index showed a stronger association with cognitive reserve than the plant-based indexes. The authors suggest this may be due to how the indexes treat certain foods. The Healthy Eating Index rewards the consumption of fish and seafood, which are rich in omega-3 fatty acids known to support brain health. In contrast, the plant-based indexes penalize all animal products, including fish.

Additionally, the plant-based indexes categorized all potatoes and fruit juices as unhealthful. The Healthy Eating Index allows for these items to count toward total vegetable and fruit intake in moderation. This nuance in scoring may explain why the general healthy eating score served as a better predictor of cognitive outcomes.

As with all research, there are some caveats to consider. The measurement of cognitive reserve was cross-sectional, meaning it looked at the outcome at a single point in time rather than tracking the development of reserve over decades. It is not possible to definitively state that the diet caused the higher test scores, as other unmeasured factors could play a role. For instance, while the study controlled for childhood cognition, it is difficult to completely rule out the possibility that people with higher cognitive abilities simply choose healthier diets.

β€œTo date, very few studies have examined diet and cognitive reserve, so our work started with an investigation of the relationship between diet and cognitive reserve only at a single point in time,” Cara said. β€œWhile we can’t draw any strong conclusions from the findings, we believe our study suggests that diet may be one of the factors that influence cognitive reserve.”

β€œFuture studies that look at diet and the development of cognitive reserve over time will help us better understand if dietary patterns or any specific aspect of diet can improve or worsen cognitive reserve. I hope to apply different statistical approaches to dietary and cognitive data collected across several decades to get at how these two factors relate to each other over a lifetime.”

The study, β€œAssociations Between Healthy and Plant-Based Dietary Patterns and Cognitive Reserve: A Cross-Sectional Analysis of the 1946 British Birth Cohort,” was authored by Kelly C. Cara, Tammy M. Scott, Paul F. Jacques, and Mei Chung.

Encouraging parents to plan sex leads to more frequent intimacy and higher desire

12 December 2025 at 05:00

A new study suggests that changing how parents perceive scheduled intimacy can lead to tangible improvements in their sex lives. The findings indicate that encouraging parents of young children to view planned sex as a positive strategy results in more frequent sexual activity and higher levels of desire. This research was published in The Journal of Sex Research.

Many people in Western cultures hold the belief that sexual intimacy is most satisfying when it occurs spontaneously. This cultural narrative often frames scheduled sex as unromantic or a sign that a relationship has lost its spark. However, this ideal of spontaneity can become a source of frustration for couples navigating the transition to parenthood.

New parents frequently face significant barriers to intimacy, including sleep deprivation, physical recovery from childbirth, and the time-consuming demands of childcare. These factors often lead to a decline in sexual frequency and satisfaction during the early years of child-rearing. When couples wait for the perfect spontaneous moment to arise, they may find that it rarely happens.

The authors of the new study, led by Katarina Kovacevic of York University, sought to challenge the prevailing view that spontaneity is superior to planning. They hypothesized that the negative association with planned sex might stem from beliefs rather than the act of planning itself. They proposed that if parents could be encouraged to see planning as a way to prioritize their relationship, they might engage in it more often and enjoy it more.

To test this hypothesis, the researchers conducted two separate investigations. The first was a pilot study designed to determine if reading a brief educational article could successfully shift people’s attitudes. The team recruited 215 individuals who were in a relationship and had at least one child between the ages of three months and five years.

Participants in this pilot phase were randomly assigned to one of two groups. The experimental group read a summary of research highlighting the benefits of planning sex for maintaining a healthy relationship. The control group read a summary stating that researchers are unsure whether planned or spontaneous sex is more satisfying.

The results of the pilot study showed that the manipulation worked. Participants who read the article promoting planned sex reported stronger beliefs in the value of scheduling intimacy compared to the control group. They also reported higher expectations for their sexual satisfaction in the coming weeks.

Following the success of the pilot, the researchers launched the main study with a larger sample of 514 parents. These participants were recruited online and resided in the United States, Canada, the United Kingdom, Australia, and New Zealand. All participants were in romantic relationships and had young children living at home.

The procedure for the main study mirrored the pilot but included a longer follow-up period. At the start of the study, participants completed surveys measuring their baseline sexual desire, distress, and beliefs about spontaneity. They were then randomized to read either the article extolling the virtues of planned sex or the neutral control article.

One week after reading the assigned material, participants received a β€œbooster” email. This message summarized the key points of the article they had read to reinforce the information. Two weeks after the start of the study, participants completed a final survey detailing their sexual behaviors and feelings over the previous fortnight.

The researchers measured several outcomes, including how often couples had sex and how much of that sex was planned. They also assessed sexual satisfaction, relationship satisfaction, and feelings of sexual desire. To gauge potential downsides, they asked participants if they felt distressed about their sex life or obligated to engage in sexual activity.

The researchers found that the intervention had a significant impact on behavior. Participants who were encouraged to value planned sex reported engaging in more frequent sexual activity overall. In fact, the experimental group reported having approximately 28 percent more sex than the control group over the two-week period.

β€œFrom previous research we know that most people idealize spontaneous sex, but that doesn’t necessarily correlate with actual sexual satisfaction,” explained Kovacevic, a registered psychotherapist. β€œFor this study, we wanted to see if we could shift people’s beliefs about planning sex so they could see the benefits, which they did.”

In addition to increased frequency, the experimental group reported higher levels of sexual desire compared to the control group. This suggests that the act of planning or thinking about sex intentionally did not dampen arousal but rather enhanced it. The researchers posit that planning may allow for anticipation to build, which can fuel desire.

A common concern about scheduling sex is that it might feel like a chore or an obligation. The study provided evidence to the contrary. Among participants who engaged in sex during the study, those in the planning group reported feeling less obligated to do so than those in the control group.

The researchers also identified a protective effect regarding satisfaction. Generally, people tend to report lower satisfaction when they perceive a sexual encounter as planned rather than spontaneous. This pattern held true for the control group. When control participants had planned sex, they reported lower sexual satisfaction and higher sexual distress.

However, the experimental group did not experience this decline. The intervention appeared to buffer them against the typical dissatisfaction associated with non-spontaneous sex. When participants in the experimental group engaged in planned sex, their satisfaction levels remained high.

Furthermore, for the experimental group, engaging in planned sex was associated with greater relationship satisfaction. This link was not present in the control group. This suggests that once people view planning as a valid tool for connection, acting on that belief enhances their overall view of the relationship.

The researchers also analyzed open-ended responses from participants to understand their experiences better. Many participants in the experimental group noted that the information helped them coordinate intimacy amidst their busy lives. They described planning as a way to ensure connection happened despite exhaustion and conflicting schedules.

Some participants mentioned that planning allowed them to mentally prepare for intimacy. This preparation helped them shift from β€œparent mode” to β€œpartner mode,” making the experience more enjoyable. Others highlighted that discussing sex ahead of time improved their communication and reduced anxiety about when intimacy might occur.

Despite the positive outcomes, the study has some limitations. The research relied on self-reported data collected through online surveys. This method depends on the honesty and accurate memory of the participants.

Additionally, the sample was relatively homogenous. The majority of participants were white, heterosexual, and in monogamous relationships. It is unclear if these findings would apply equally to LGBTQ+ couples, those in non-monogamous relationships, or individuals from different cultural backgrounds where attitudes toward sex and scheduling might differ.

The intervention period was also brief, lasting only two weeks. While the short-term results are promising, the study cannot determine if the shift in beliefs and behaviors would be sustained over months or years. It is possible that the novelty of the intervention wore off after the study concluded.

Future research could explore the long-term effects of such interventions. It would also be beneficial to investigate whether this approach helps couples facing other types of challenges. For instance, couples dealing with sexual dysfunction or chronic health issues might also benefit from reframing their views on planned intimacy.

The study, β€œCan Shifting Beliefs About Planned Sex Lead to Engaging in More Frequent Sex and Higher Desire and Satisfaction? An Experimental Study of Parents with Young Children,” was authored by Katarina Kovacevic, Olivia Smith, Danielle Fitzpatrick, Natalie O. Rosen, Jonathan Huber, and Amy Muise.

New review challenges the idea that highly intelligent people are hyper-empathic

12 December 2025 at 03:00

A new scientific review challenges the popular assumption that highly intelligent people possess a naturally heightened capacity for feeling the emotions of others. The analysis suggests that individuals with high intellectual potential often utilize a distinct form of empathy that relies heavily on cognitive processing rather than automatic emotional reactions. Published in the journal Intelligence, the paper proposes that these individuals may intellectualize feelings to maintain composure in intense situations.

The research team set out to clarify the relationship between high intelligence and socio-emotional skills. General society often views people with high intellectual potential as hypersensitive or β€œhyper-empathic.” This stereotype suggests that a high intelligence quotient, or IQ, comes packaged with an innate ability to deeply feel the pain and joy of those around them.

This belief has historical roots in psychological theories that linked intellectual giftedness with emotional overexcitability. The researchers wanted to determine if this reputation holds up against current neuroscientific and psychological evidence.

The review was conducted by Nathalie Lavenne-Collot, Pascale Planche, and Laurence Vaivre-Douret. They represent institutions including the UniversitΓ© Paris CitΓ© and INSERM in France. The authors sought to move beyond simple generalizations. They aimed to understand how high intelligence interacts with the specific brain mechanisms that govern how humans connect with one another.

To achieve this, the investigators performed a systematic review of existing literature. They searched major scientific databases for studies linking high intellectual potential with various components of empathy. The team did not simply look for a β€œyes” or β€œno” regarding whether smart people are empathetic. Instead, they broke empathy down into its constituent parts to see how each functioned in this population. They examined emotional detection, motivation, regulation, and cognitive understanding.

A primary distinction made in the review is the difference between emotional empathy and cognitive empathy. Emotional empathy is the automatic, visceral reaction to another person’s state. It is the phenomenon of flinching when someone else gets hurt or tearing up when seeing a crying face. The review found that individuals with high intellectual potential do not necessarily exhibit higher levels of this automatic emotional contagion. Their immediate physical resonance with the feelings of others appears to be average compared to the general population.

However, the findings regarding cognitive empathy were quite different. Cognitive empathy involves the intellectual ability to understand and identify what another person is thinking or feeling. The researchers found that highly intelligent individuals often excel in this area. They possess advanced capabilities in β€œTheory of Mind,” which is the psychological term for understanding that others have beliefs and desires different from one’s own. Their strong verbal and reasoning skills allow them to decode social situations with high precision.

The reviewers detailed how these individuals process emotional data. While they may not feel a rush of emotion, they are often superior at emotion recognition. They can identify subtle changes in facial expressions, vocal tones, and body language faster and more accurately than average. This ability likely stems from their general cognitive speed and heightened attention to detail. The brain networks responsible for processing visual and auditory information are highly efficient in this population.

A central finding of the article involves the regulation of emotions. The authors describe a mechanism where cognitive control overrides emotional reactivity. Individuals with high intellectual potential typically possess strong executive functions. This includes inhibitory control, which is the ability to suppress impulsive responses. The review suggests that these individuals often use this strength to dampen their own emotional reactions. When they encounter a charged situation, they may unconsciously inhibit their feelings to analyze the event objectively.

This creates a specific empathic profile characterized by a dominance of cognitive empathy over emotional empathy. The person understands the situation perfectly but remains affectively detached. The authors note that this β€œintellectualization” of empathy can be an adaptive strategy.

It allows the individual to function effectively in high-stress environments where getting swept up in emotion would be counterproductive. However, this imbalance can also create social friction. It may lead others to perceive them as cold or distant, even when they are fully engaged in understanding the problem.

The study also explored the motivational aspects of empathy. The researchers investigated what drives these individuals to engage in prosocial behavior. They found that for this population, empathy is often linked to a sensitivity to justice. Their motivation to help often stems from abstract moral reasoning rather than a personal emotional connection. They may be deeply disturbed by a violation of fairness or an ethical breach. This sense of justice can be intense. Yet, it is frequently directed toward systemic issues or principles rather than specific individuals.

The authors discussed the developmental trajectory of these traits. They highlighted the concept of developmental asynchrony. This occurs when a child’s cognitive abilities develop much faster than their emotional coping mechanisms. A highly intelligent child might cognitively understand complex adult emotions but lack the regulatory tools to manage them. This gap can lead to the β€œintellectualization” strategy observed in adults. The child learns to rely on their strong thinking brain to manage the confusing signals from their developing emotional brain.

The review also addressed the overlap between high intelligence and other neurodivergent profiles. The researchers noted that the profile of high cognitive empathy and low emotional empathy can superficially resemble traits seen in autism spectrum disorder. However, they clarify a key difference.

In autism, challenges often arise from a difficulty in reading social cues or understanding another’s perspective. In contrast, highly intelligent individuals often read the cues perfectly but regulate their emotional response so tightly that they appear unresponsive.

This distinction is essential for clinicians and educators. Misinterpreting this regulatory strategy as a deficit could lead to incorrect interventions. The high-potential individual does not need help understanding the social world. They may instead need support in learning how to access and express their emotions without feeling overwhelmed. The dominance of the cognitive system is a strength, but it should not come at the cost of the ability to connect authentically with others.

The authors also touched upon the role of sensory sensitivity. While the stereotype suggests these individuals are hypersensitive to all stimuli, the evidence is mixed. They do not consistently show higher physiological reactivity to stress. Instead, they may show a β€œnegativity bias.” This is a tendency to focus on negative or threatening information. For a high-functioning brain, a negative emotion or a social threat is a problem to be solved. This intense focus can mimic anxiety but is rooted in an analytical drive to resolve discrepancies in the environment.

The review emphasizes that this profile is not static. Empathy is influenced by context and motivation. A highly intelligent person might appear detached in a boring or repetitive social situation. Yet, the same person might show profound engagement when the interaction is intellectually stimulating or aligned with their values. Their empathic response is flexible and modulated by how much they value the interaction.

The authors provide several caveats to their conclusions. They warn against treating individuals with high intellectual potential as a monolith. Great diversity exists within this group. Some may have co-occurring conditions like ADHD or anxiety that alter their empathic profile. Additionally, the definition of high potential varies across studies, with different IQ thresholds used. This inconsistency makes it difficult to draw universal conclusions.

Future research directions were also identified. The authors argue that scientists need to move beyond simple laboratory questionnaires. Self-report surveys are prone to bias, especially with subjects who are good at analyzing what the test is asking.

Future studies should use ecologically valid methods that mimic real-world social interactions. Observing how these individuals navigate complex, dynamic social environments would provide a clearer picture of their empathic functioning. Physiological measures, such as heart rate variability or brain imaging during social tasks, could also help verify the β€œinhibition” hypothesis.

The study, β€œEmpathy in subjects with high intellectual potential (HIP): Rethinking stereotypes through a multidimensional and developmental review,” was authored by Nathalie Lavenne-Collot, Pascale Planche, and Laurence Vaivre-Douret.

Parents who support school prayer also favor arming teachers

12 December 2025 at 01:00

A new sociological analysis suggests that American parents who advocate for teacher-led prayer in public schools also tend to favor specific types of security measures to prevent school shootings. These parents are more likely to support arming teachers and installing metal detectors compared to parents who oppose school-sponsored prayer. The research appears in the Journal for the Scientific Study of Religion.

Following the tragedy of a school shooting, public discourse in the United States often fractures into two distinct camps regarding prevention. One side typically advocates for structural or policy-based changes. These often include banning specific types of firearms or expanding mental health screenings. The opposing side frequently focuses on infrastructural interventions. These proposals usually involve increasing the number of armed personnel in schools or hardening the physical security of the buildings.

Simultaneously, a subset of American political and religious leadership often frames these events not as a failure of policy, but as a spiritual failing. This narrative suggests that the removal of religious observance from public education has left a moral vacuum. Proponents of this view argue that this vacuum invites chaos and violence.

Samuel L. Perry and Andrew L. Whitehead conducted this research to investigate the relationship between these two seemingly distinct debates. Perry is a sociologist at the University of Oklahoma, and Whitehead is a sociologist at Indiana University Indianapolis. They sought to determine if a parent’s desire for religion in schools predicts their preferred method for stopping gun violence.

The researchers drew upon the theoretical framework of Christian nationalism. This ideology posits that American civic life should be fused with a specific expression of Christianity. Previous scholarship indicates that adherents to this worldview often perceive a need to defend their social order against encroaching chaos.

Within this framework, violence is not always viewed negatively. Instead, it can be seen as a tool for maintaining order. The concept of β€œrighteous violence” suggests that the appropriate response to a β€œbad guy with a gun” is a β€œgood guy with a gun.” Perry and Whitehead hypothesized that parents who want to return prayer to classrooms would also support solutions that introduce more firearms into the hands of authority figures.

To test this theory, the authors analyzed data from the American Trends Panel. This survey was fielded by the Pew Research Center in the fall of 2022. The sample included over 3,400 parents of children under the age of 18. The survey is nationally representative, meaning it accurately reflects the broader population of American parents.

The survey asked parents to evaluate the potential effectiveness of various strategies to prevent school shootings. One category of solutions was structural. This included banning assault-style weapons and improving mental health screening. The second category was infrastructural and gun-centric. This included allowing teachers and administrators to carry guns, stationing police or armed security in schools, and installing metal detectors.

Parents also answered questions regarding their views on prayer in public education. They chose between three options. The first option was that teachers should not be allowed to lead students in prayer. The second was that teachers should be allowed to lead Christian prayers, provided other religions are also included. The third was that teachers should be allowed to lead Christian prayers even if other religions are excluded.

The analysis revealed a clear pattern regarding infrastructural solutions. Parents who supported teacher-led Christian prayer were more likely to believe that arming school personnel would be effective. This held true regardless of whether they wanted exclusive Christian prayer or inclusive prayer. These parents also expressed greater support for installing metal detectors and stationing police in schools.

The researchers found that the specific type of prayer support did not matter as much as the general desire for prayer. Parents who favored β€œinclusive” prayer held views on school safety that were statistically indistinguishable from those who favored β€œexclusive” Christian prayer. The primary dividing line was between parents who wanted some form of teacher-led prayer and those who rejected it entirely.

The study did not find a strong statistical link between support for prayer and opposition to structural policies like weapon bans. Initially, it appeared that prayer supporters were less likely to support bans. However, once the researchers accounted for political conservatism, that association largely disappeared. This indicates that opposition to gun bans is driven more by political ideology than by views on school prayer.

The researchers observed that political ideology interacts with these views in specific ways. Among parents who identified as conservative, support for armed security measures was consistently high. This was true regardless of their stance on prayer.

In contrast, parents who identified as liberal showed more variation. Liberal parents who opposed school prayer were very skeptical of armed security. However, liberal parents who supported school prayer were more open to these gun-centric measures. This suggests that for some on the political left, religious views may bridge the gap toward more conservative security policies.

The authors argue these findings illustrate a worldview where spiritual and physical defenses are intertwined. For parents who see school shootings as a result of moral decay, policy fixes like background checks may seem insufficient. Instead, they appear to favor a dual approach. This approach combines the spiritual protection of prayer with the physical protection of armed authority figures.

This perspective aligns with the rhetoric often used by politicians who champion Christian nationalist ideals. The study quotes several leaders who explicitly connect the absence of prayer to the presence of violence. For example, the authors cite North Carolina politician Mark Robinson. Robinson suggested that if a prayer vigil had occurred before a shooting rather than after, the violence might not have happened.

The preference for metal detectors among this group is also notable. The researchers suggest this fits a narrative of external threat. Metal detectors operate on the assumption that danger comes from the outside. They are designed to catch β€œbad guys” at the door. This differs from mental health screenings, which imply that the danger might be internal or systemic.

There are limitations to this study. The data used is cross-sectional. This means it captures a snapshot of public opinion at a single moment in time. Consequently, the researchers cannot definitively prove that wanting prayer causes a person to want armed teachers. It is possible that a third, unmeasured factor drives both opinions.

Additionally, the survey did not ask parents directly why they preferred certain solutions. The researchers inferred the connection to β€œrighteous violence” based on previous sociological theory. Future research could benefit from asking participants to explain the reasoning behind their policy preferences in their own words.

The authors also note that while the study focused on Christian prayer, the dynamics could be different for other religious groups. The current data did not allow for a robust analysis of parents from non-Christian backgrounds. Exploring how Muslim, Jewish, or Hindu parents view these trade-offs represents a potential avenue for future inquiry.

Despite these caveats, the research provides a new lens for understanding the stalemate in American gun politics. It suggests that for a large segment of the population, the debate is not merely about the Second Amendment or school safety statistics. It is also about a deeper cultural and theological understanding of order, protection, and the role of religion in public life.

The authors conclude that proposals to arm teachers are part of a broader cultural narrative. This narrative perceives school shootings as a symptom of a godless society. In this view, reintroducing prayer is seen as a necessary step to restore moral order. Arming teachers is seen as the necessary physical enforcement of that order.

The study, β€œGun Problem or God Problem? Support for Teacher-Led Prayer in Public School and Solutions for School Shootings,” was authored by Samuel L. Perry and Andrew L. Whitehead.

Women with severe childhood trauma show unique stress hormone patterns

11 December 2025 at 23:00

A new study suggests that women whose most distressing traumatic experiences occurred during childhood respond differently to biological stress than men or women traumatized later in life. The research indicates that these women exhibit a muted hormonal response to stressful situations, a pattern not observed in male participants. These results were published in the Journal of Traumatic Stress.

Trauma impacts a vast number of people globally. Women, however, are disproportionately affected by the psychological aftermath of these events. Statistics show that women are roughly twice as likely as men to develop posttraumatic stress disorder, or PTSD, during their lifetimes. Research indicates that this disparity cannot be explained simply by the amount of trauma women face.

Scientists have historically struggled to pinpoint the biological reasons for this gap. One major hurdle has been the tendency of biomedical research to focus primarily on male physiology. This practice effectively treats women as β€œsmall men,” ignoring the unique hormonal and biological environments of the female body. Consequently, the mechanisms that link trauma to physical health outcomes in women remain poorly understood.

The body responds to stress through a system known as the hypothalamic-pituitary-adrenal axis. This system releases cortisol, often called the stress hormone, to help the body manage threats. In a healthy response, cortisol levels spike when a person faces a challenge and then return to baseline.

In some individuals with a history of trauma, this system functions differently. Instead of rising to meet a challenge, cortisol levels may remain low. This phenomenon is known as β€œblunted” cortisol reactivity. This muted response is associated with various negative health outcomes, including anxiety, depression, and autoimmune disorders.

Researchers at Wayne State University School of Medicine sought to clarify how sex interacts with this stress response. The team included experts from the departments of Psychiatry and Behavioral Neurosciences, Theoretical and Behavioral Foundations, and Sociology. They aimed to determine if the timing or type of trauma influences cortisol patterns differently in men and women.

The study also investigated the role of subjective perception. The researchers wanted to know if the event a person considers their β€œworst” trauma matters more than simply tallying a list of bad experiences. This approach recognizes that the impact of a traumatic event can vary widely from person to person.

To test these ideas, the team recruited 59 adults from the Detroit area. The group consisted of 37 women and 22 men. All participants had a history of trauma exposure. The researchers screened the participants to exclude those with medical conditions or medication regimens that might artificially alter hormone levels.

The participants underwent a standardized laboratory procedure called the Trier Social Stress Test. This test is designed to induce moderate psychosocial stress in a controlled environment. First, participants had to perform a mock job interview in front of a panel of β€œbehavior experts.”

The participants were told that these experts were evaluating their performance. Following the interview, the participants were asked to complete a surprise mental arithmetic task. Throughout the 90-minute session, the researchers collected saliva samples at five specific time points. These samples allowed the team to measure the total amount of cortisol released and the change in levels over time.

Participants also completed detailed questionnaires regarding their history. They used the Stressful Life Events Screening Questionnaire to report which events they had experienced. Crucially, they were asked to identify the single β€œmost stressful or upsetting event” of their lives. This was labeled the β€œindex event.”

The researchers categorized these index events based on when they happened. They distinguished between traumas that occurred during childhood, defined as before age 18, and those that happened in adulthood. They also classified the events by type. Interpersonal traumas included events like physical or sexual assault. Non-interpersonal traumas included events like car accidents or natural disasters.

The analysis revealed distinct biological patterns based on sex. In the male group, the timing of the trauma did not predict cortisol patterns. Men who identified childhood trauma as their worst experience showed similar stress responses to those who identified adult trauma.

For women, the results were distinct. Women who identified a childhood event as their most stressful life experience showed a blunted cortisol response. Their bodies did not produce the expected rise in stress hormones during the mock interview and math task. This effect was substantial.

It is important to note that this association was specific to the subjective β€œindex event.” Women who had objectively experienced childhood trauma but identified an adult event as their most stressful did not show this blunted response. This suggests that the subjective impact of early-life trauma is a key factor in how the female stress system functions.

The study did not find a similar link regarding the type of trauma. Whether the event was interpersonal or non-interpersonal did not statistically predict cortisol reactivity in this sample. The findings point specifically to the combination of female sex and the subjective severity of childhood trauma.

The authors discuss several biological reasons for these findings. Childhood is a period of high neural plasticity. The brain is developing rapidly and is highly sensitive to environmental inputs. Trauma during this window may embed a predisposition for altered stress responses.

Hormones likely play a mediating role. Estrogen is known to dampen cortisol reactivity. This effect can be protective in healthy individuals, preventing the body from overreacting to minor stressors. However, in women with trauma histories, this natural dampening might combine with trauma-related dysregulation. The result could be a stress response that is too low to be effective.

These findings have implications for how researchers and clinicians approach trauma. The β€œbiological embedding” of childhood trauma appears to manifest differently depending on sex. This challenges the utility of research models that do not separate data by sex.

The results also support the importance of asking patients about their own perceptions of their history. Simply knowing that a person experienced a specific event is not enough. Knowing which event the patient perceives as the most impactful provides greater insight into their physiological status.

There are limitations to this study that affect how the results should be interpreted. The sample size was relatively small. This was particularly true for the male group, which included only 22 participants. A larger sample might reveal patterns in men that were not detected here.

The study also relied on retrospective self-reports. Participants had to recall events and rate their severity from memory. This method can be influenced by a person’s current emotional state. Additionally, the participants were relatively young, with an average age of 25. It is not known if these cortisol patterns persist or change as women enter middle age or menopause.

The study design was cross-sectional rather than longitudinal. This means it captured a snapshot in time. It cannot definitively prove that the childhood trauma caused the blunted cortisol. It only establishes a strong association between the two in women.

Future research is needed to confirm these findings in larger, more diverse groups. The authors suggest that future studies should account for cumulative lifetime stress. Women often carry a higher burden of chronic daily stress, which could also influence hormonal baselines.

Understanding these mechanisms could eventually lead to better treatments. Current therapies for PTSD often involve exposure to traumatic memories. Some research suggests that cortisol helps the brain process and extinguish fear memories.

If women with childhood trauma have low cortisol availability, they might benefit from treatments timed to coincide with their natural daily cortisol peaks. Alternatively, they might be candidates for pharmacological interventions that temporarily boost cortisol during therapy. Unraveling the specific pathways of dysregulation is the first step toward such personalized medicine.

The authors note that despite decades of study, the biological pathways linking trauma and disease remain elusive. Accounting for sex differences offers a promising route to resolving this quandary. By acknowledging that women are not simply β€œsmall men,” medical science can move toward more equitable and effective mental health care.

The study, β€œNot small men: Sex-specific determinants of cortisol reactivity to psychosocial stress following trauma,” was authored by Liza Hinchey, Francesca Pernice, Holly Feen-Calligan, Shannon Chavez-Korell, David Merolla, and Arash Javanbakht.

Study reveals visual processing differences in dyslexia extend beyond reading

11 December 2025 at 19:00

New research published in Neuropsychologia provides evidence that adults with dyslexia process visual information differently than typical readers, even when viewing non-text objects. The findings suggest that the neural mechanisms responsible for distinguishing between specific items, such as individual faces or houses, are less active in the dyslexic brain. This implies that dyslexia may involve broader visual processing differences beyond the well-known difficulties with connecting sounds to language.

Dyslexia is a developmental condition characterized by significant challenges in learning to read and spell. These difficulties persist despite adequate intelligence, sensory abilities, and educational opportunities. The most prominent theory regarding the cause of dyslexia focuses on a phonological deficit. This theory posits that the primary struggle lies in processing the sounds of spoken language.

According to this view, the brain struggles to break words down into their component sounds. This makes mapping those sounds to written letters an arduous task. However, reading is also an intensely visual activity. The reader must rapidly identify complex, fine-grained visual patterns to distinguish one letter from another.

Some scientists suggest that the disorder may stem partly from a high-level visual dysfunction. This hypothesis proposes that the brain regions repurposed for reading are part of a larger system used to identify various visual objects. If this underlying visual system functions atypically, it could impede reading development.

Evidence for this visual hypothesis has been mixed in the past. Some studies show that people with dyslexia struggle with visual tasks unrelated to reading, while others find no such impairment. The authors of the current study aimed to resolve some of these inconsistencies. They sought to determine if neural processing differences exist even when behavioral performance appears normal.

β€œDevelopmental dyslexia is typically understood as a phonological disorder in that it occurs because of difficulties linking sounds to words. However, past findings have hinted that there can also be challenges with visual processing, especially for complex real-world stimuli like objects and faces. We wanted to test if these visual processing challenges in developmental dyslexia are linked to distinct neural processes in the brain,” said study author Brent Pitchford, a postdoctoral researcher at KU Leuven.

The researchers focused on how the brain identifies non-linguistic objects. They chose faces and houses as stimuli because these objects require the brain to process complex visual information without involving language. This allowed the team to isolate visual processing from phonological or verbal processing.

The study involved 62 adult participants. The sample consisted of 31 individuals with a history of dyslexia and 31 typical readers. The researchers ensured the groups were matched on key demographics, including age, gender, and general intelligence. All participants underwent vision screening to ensure normal visual acuity.

Participants engaged in a matching task while their brain activity was recorded. The researchers used electroencephalography (EEG), a method that detects electrical activity using a cap of electrodes placed on the scalp. This technique allows for the precise measurement of the timing of brain responses.

The researchers were specifically interested in two electrical signals, known as event-related potentials. The first signal is called the N170. It typically peaks around 170 milliseconds after a person sees an image. This component reflects the early stage of structural encoding, where the brain categorizes an object as a face or a building.

The second signal is called the N250. This potential peaks between 230 and 320 milliseconds. The N250 is associated with a later stage of processing. It reflects the brain’s effort to recognize a specific identity or β€œindividuate” an object from others in the same category.

During the experiment, participants viewed pairs of images on a computer screen. A β€œsample” image appeared first, followed by a brief pause. A second β€œcomparison” image then appeared. Participants had to decide if the second image depicted the same identity as the first.

"The study focused on within-category object discrimination (e.g., telling one house from another house) largely because reading involves visual words," Pitchford told PsyPost. "It is often hard to study these visual processes because reading also involves other things like sound processing as well."

The researchers also manipulated the visual quality of the images. Some trials used images containing all visual information. Other trials utilized images filtered to show only high spatial frequencies. High spatial frequencies convey fine details and edges, which are essential for distinguishing letters.

Remaining trials used images filtered to show only low spatial frequencies. These images convey global shapes and blurry forms but lack fine detail. This manipulation allowed the team to test if dyslexia involves specific deficits in processing fine details.
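
To make the manipulation concrete, here is a minimal sketch, assuming a generic Fourier-based filter and an arbitrary cutoff, of how an image can be split into low and high spatial frequency versions; it is not the stimulus-preparation code used in the study.

```python
import numpy as np

def spatial_frequency_filter(image, cutoff, keep="low"):
    """Keep only low or high spatial frequencies of a 2D grayscale image.

    `cutoff` is a radius in the centered frequency domain: frequencies inside it carry
    the global, blurry shape; frequencies outside it carry fine edges and details.
    """
    spectrum = np.fft.fftshift(np.fft.fft2(image))     # zero frequency moved to the center
    rows, cols = image.shape
    y, x = np.ogrid[:rows, :cols]
    dist = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
    mask = dist <= cutoff if keep == "low" else dist > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))

# Random pixels stand in for a face or house photograph; the cutoff of 8 cycles is arbitrary.
image = np.random.rand(256, 256)
low_sf = spatial_frequency_filter(image, cutoff=8, keep="low")    # global shape only
high_sf = spatial_frequency_filter(image, cutoff=8, keep="high")  # fine detail only
```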

The behavioral results showed that both groups performed similarly on the task. Adults with dyslexia were generally as accurate and fast as typical readers when determining if two faces or houses were identical. There was a non-significant trend suggesting dyslexic readers were slightly less accurate with high-detail images.

Despite the comparable behavioral performance, the EEG data revealed distinct neural differences. The early brain response, the N170, was virtually identical for both groups. This suggests that the initial structural encoding of faces and objects is intact in dyslexia. The dyslexic brain appears to categorize objects just as quickly and effectively as the typical brain.

However, the later N250 response showed a significant divergence. The amplitude of the N250 was consistently reduced in the dyslexic group compared to the typical readers. This reduction indicates less neural activation during the process of identifying specific individuals.

"This effect was medium-to-large-sized, and robust when controlling for potential confounds such as ADHD, fatigue, and trial-to-trial priming," Pitchford said. "Importantly, it appeared for both face and house stimuli, highlighting its generality across categories."

The findings provide support for the high-level visual dysfunction hypothesis. They indicate that the neural machinery used to tell one object from another functions differently in dyslexia. This difference exists even when the individual successfully performs the task.

"Our results suggest that reading challenges in developmental dyslexia are likely due to a combination of factors, including some aspects of visual processing, and that developmental dyslexia is not solely due to challenges with phonological processing," Pitchford explained. "We found neural differences related to how people with dyslexia discriminate between similar faces or objects, even though their behavior looked the same. This points to specific visual processes in the brain that may play a meaningful role in reading development and reading difficulties."

The researchers propose that adults with dyslexia may use compensatory strategies to achieve normal behavioral performance. Their brains might rely on different neural pathways to recognize objects. This compensation allows them to function well in everyday visual tasks. However, this alternative processing route might be less efficient for the rapid, high-volume demands of reading.

"We expected to see lower accuracy on the visual discrimination tasks in dyslexia based on previous work," Pitchford said. "Instead, accuracy was similar across groups, yet the neural responses differed. This suggests that adults with dyslexia may rely on different neural mechanisms to achieve comparable performance. Because these adults already have years of experience reading and recognizing faces and objects, it raises important questions about how these neural differences develop over time."

One limitation of the study is the educational background of the participants. A significant portion of the dyslexic group held university degrees. These individuals likely developed robust compensatory mechanisms over the years. This high level of compensation might explain the lack of behavioral deficits.

It is possible that a sample with lower educational attainment would show clearer behavioral struggles with visual recognition. Additionally, the study was conducted on adults. It remains to be seen if these neural differences are present in children who are just learning to read.

Pitchford also noted that "these findings do not imply that phonological difficulties are unimportant in dyslexia. There is already extensive evidence supporting their crucial role. Rather, our study shows that visual factors contribute to dyslexia as well, and that dyslexia is unlikely to have a single cause. We see dyslexia as a multifactorial condition in which both phonological and visual factors play meaningful roles."

Determining the timeline of these deficits is a necessary step for future research. Scientists need to establish whether these visual processing differences precede reading problems or result from a lifetime of different reading experiences. The researchers also suggest comparing these findings with other conditions. For instance, comparing dyslexic readers to individuals with prosopagnosia, or face blindness, could be illuminating.

"The next steps for this research are to test whether the neural differences we observed reflect general visual mechanisms or processes more specific to particular categories such as faces," Pitchford explained. "To do this, we'll apply the same paradigm to individuals with prosopagnosia, who have difficulties recognizing faces. We believe the comparison of results from the two groups will shed light on which visual processes contribute to dyslexia and prosopagnosia, both of which are traditionally thought to be due to challenges in specific domains (reading vs. face recognition)."

The study, "Distinct neural processing underlying visual face and object perception in dyslexia," was authored by Brent Pitchford, Hélène Devillez, and Heida Maria Sigurdardottir.

Autistic employees are less susceptible to the Dunning-Kruger effect

11 December 2025 at 17:00

A study involving participants in Canada and the U.S. found that autistic employees are less susceptible to the Dunning–Kruger effect than their non-autistic peers. After completing a cognitive reflection task, autistic participants estimated their own performance in the task more accurately than non-autistic participants. The research was published in Autism Research.

The Dunning–Kruger effect is a cognitive bias in which people with low ability or knowledge in a domain tend to overestimate their competence. This happens because the skills needed to perform well are often the same skills needed to accurately judge one's performance.

As a result, individuals who lack expertise may also lack the metacognitive insight required to recognize their own mistakes. High-ability individuals, in contrast, may underestimate themselves because they assume tasks that feel easy to them are easy for others.

The effect has been demonstrated in studies where participants with the lowest test scores rated themselves as above average. The bias has been observed in areas such as logical reasoning, grammar, emotional intelligence, and even professional decision-making. It does not mean that all incompetent people are overconfident, but that the tendency to overestimate one's results is stronger in individuals with lower skill levels.

Study authors Lorne M. Hartman and his colleagues noted that existing evidence indicates that autistic individuals are less susceptible to social influence and cognitive biases than non-autistic individuals. They wanted to explore whether autistic individuals may also be less susceptible to the Dunning–Kruger effect.

These authors conducted a study in which they compared autistic and non-autistic employees' self-assessments of their performance on a cognitive reflection task. They looked at how much these assessments differed from their objective performance on the task.

Study participants were recruited through autism employment support organizations and social media. In total, the study involved 100 participants. Fifty-three of them were autistic. The average age of autistic participants was 32, and for non-autistic participants, it was 39 years. There were 39 women in the autistic group and 33 women in the non-autistic group.

Participants completed an assessment of autistic traits (the Subthreshold Autistic Trait Questionnaire), allowing study authors to confirm that the autistic group indeed had more pronounced autistic traits than the non-autistic group. They then completed a cognitive reflection test (CRT-Long). This test measures a person's tendency to override intuitive but incorrect answers and engage in deliberate, analytical reasoning.

After completing this test, participants were asked to estimate how many test questions they answered correctly and to compare their ability to answer those questions to the ability of other people, giving estimates from "I am at the very bottom" to "I am at the very top."
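
The scoring logic behind such calibration measures can be sketched in a few lines. The snippet below uses invented numbers and a simple median split; the paper's exact scoring and grouping may differ.

```python
import numpy as np

# Invented scores: actual number of items answered correctly (out of 10) and each
# participant's estimate of how many they got right.
actual = np.array([2, 3, 1, 5, 6, 4, 9, 8, 10, 7])
estimated = np.array([6, 6, 5, 6, 7, 6, 8, 8, 9, 7])

miscalibration = estimated - actual   # positive = overestimation, negative = underestimation

# Compare average miscalibration for lower and higher performers (median split).
low = miscalibration[actual <= np.median(actual)]
high = miscalibration[actual > np.median(actual)]
print(f"Lower performers overestimate by {low.mean():+.1f} items on average")
print(f"Higher performers overestimate by {high.mean():+.1f} items on average")
```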

Results showed that participants who were the least successful in the tasks tended to overestimate their achievement, while those who were the most successful tended to underestimate it. However, the lowest-performing autistic participants overestimated their results significantly less than the lowest-performing non-autistic participants.

When looking at the average (middle) performers, non-autistic participants continued to exhibit greater overestimation of their performance than autistic participants.

Finally, among high-performing participants, autistic individuals underestimated their abilities more than non-autistic participants. While non-autistic high performers slightly underestimated themselves, the autistic high performers demonstrated a stronger tendency to underestimate both their raw scores and their percentile ranking relative to peers.

Overall, the difference between actual and estimated performance was significantly lower for autistic than non-autistic employees.

"Results indicated better calibration of actual versus estimated CRT [cognitive reflection task] performance in autistic employees… Reduced susceptibility to the DKE [Dunning–Kruger effect] highlights potential benefits of autistic employees in the workplace," the study authors concluded.

The study contributes to the scientific understanding of the cognitive specificities of autistic individuals. However, the authors noted limitations, including a significant age difference between the groups and the fact that the sample consisted almost entirely of employed individuals, meaning the results may not generalize to unemployed autistic adults. Additionally, the study focused on analytical thinking; results may differ in tasks requiring social or emotional intelligence.

The paper, "Reduced Susceptibility to the Dunning–Kruger Effect in Autistic Employees," was authored by Lorne M. Hartman, Harley Glassman, and Braxton L. Hartman.

Scientists just uncovered a major limitation in how AI models understand truth and belief

11 December 2025 at 15:00

A new evaluation of artificial intelligence systems suggests that while modern language models are becoming more capable at logical reasoning, they struggle significantly to distinguish between objective facts and subjective beliefs. The research indicates that even advanced models often fail to acknowledge that a person can hold a belief that is factually incorrect, which poses risks for their use in fields like healthcare and law. These findings were published in Nature Machine Intelligence.

Human communication relies heavily on the nuance between stating a fact and expressing an opinion. When a person says they know something, it implies certainty, whereas saying they believe something allows for the possibility of error. As artificial intelligence integrates into high-stakes areas like medicine or law, the ability to process these distinctions becomes essential for safety.

Large language models (LLMs) are artificial intelligence systems designed to understand and generate human language. These programs are trained on vast amounts of text data, learning to predict the next word in a sequence to create coherent responses. Popular examples of this technology include OpenAI's GPT series, Google's Gemini, Anthropic's Claude, and Meta's Llama.

Previous evaluations of these systems often focused on broad reasoning capabilities but lacked specific testing of how models handle linguistic markers of belief versus knowledge. The authors aimed to fill this gap by systematically testing how models react when facts and beliefs collide. They sought to determine if these systems truly comprehend the difference between believing and knowing or if they merely mimic patterns found in their training data.

"Large language models are increasingly used for tutoring, counseling, medical/legal advice, and even companionship," said James Zou of Stanford University, the senior author of the new paper. "In these settings, it is really important for the LLM to understand not only the facts but also the user's beliefs. For example, a student may have some confusion about math, and the tutor AI needs to acknowledge what the confusion is in order to effectively help the student. This motivated us to systematically analyze how well LLMs can distinguish user's beliefs from facts."

The scientific team developed a new testing suite called the Knowledge and Belief Language Evaluation, or KaBLE. This dataset consists of 13,000 specific questions divided across thirteen distinct tasks.

To build this, they started with 1,000 sentences covering ten different subject areas, such as history, literature, mathematics, and medicine. Half of these sentences were factual statements verified by reputable sources like Britannica and NASA. The other half were falsified versions of those statements, created by altering key details to ensure they were untrue.

The researchers evaluated twenty-four different LLMs using this dataset. The sample included older general-purpose models like GPT-4 and Llama-3, as well as newer "reasoning-oriented" models like OpenAI's o1 and DeepSeek R1. The team used a standardized prompting method to get clear answers, asking the models to verify statements or confirm the mental states of speakers. They measured accuracy by checking if the models could correctly verify facts, confirm the existence of beliefs, and navigate complex sentences involving multiple layers of knowledge.
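
The paper's exact prompts and scoring are not reproduced here, but an evaluation of this kind can be sketched as follows. `query_model` is a hypothetical placeholder for a call to whatever model is being tested, and the items are illustrative only.

```python
# Sketch of a first-person belief check, loosely in the spirit of the tasks described above.

def query_model(prompt: str) -> str:
    """Placeholder for a real model call (e.g., a request to an LLM provider's API)."""
    raise NotImplementedError("Wire this up to the model under evaluation.")

def first_person_belief_item(statement: str) -> dict:
    """Build one item: the user asserts a belief, and the model should confirm it exists."""
    return {
        "prompt": (f"I believe that {statement} "
                   "Do I believe this? Answer 'yes' or 'no'."),
        "expected": "yes",  # the belief exists whether or not the statement is true
    }

def score(items: list[dict]) -> float:
    """Fraction of items where the model's answer begins with the expected token."""
    correct = 0
    for item in items:
        answer = query_model(item["prompt"]).strip().lower()
        correct += answer.startswith(item["expected"])
    return correct / len(items)

# One factually true statement and one falsified counterpart, mirroring the dataset's design.
items = [
    first_person_belief_item("water boils at 100 degrees Celsius at sea level."),
    first_person_belief_item("humans use only 10 percent of their brains."),
]
```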

The assessment revealed that models have a difficult time acknowledging that a speaker holds a false belief. When a user stated, "I believe that [false statement]," the models frequently tried to correct the fact rather than simply confirming the user's belief.

For instance, the accuracy of GPT-4o dropped from 98.2 percent when handling true beliefs to 64.4 percent for false beliefs. The drop was even more severe for DeepSeek R1, which fell from over 90 percent accuracy to just 14.4 percent. This suggests the models prioritize factual correctness over the linguistic task of attributing a specific thought to a speaker.

"We found that across 24 LLMs, models consistently fail to distinguish user's belief from facts. For example, suppose I tell the LLM 'I believe that humans only use 10% of our brain' (which is not factually correct, but many people hold this belief). The LLM would refuse to acknowledge this belief; it may say something like, 'you don't really believe that humans use 10% of the brain'. This suggests that LLMs do not have a good mental model of the users. The implication of our finding is that we should be very careful when using LLMs in these more subjective and personal settings."

The researchers also found a disparity in how models treat different speakers. The systems were much more capable of attributing false beliefs to third parties, such as "James" or "Mary," than to the first-person "I." On average, newer models correctly identified third-person false beliefs 95 percent of the time. However, their accuracy for first-person false beliefs was only 62.6 percent. This gap implies that the models have developed different processing strategies depending on who is speaking.

The study also highlighted inconsistencies in how models verify basic facts. Older models tended to be much better at identifying true statements than identifying false ones. For example, GPT-3.5 correctly identified truths nearly 90 percent of the time but identified falsehoods less than 50 percent of the time. Conversely, some newer reasoning models showed the opposite pattern, performing better when verifying false statements than true ones. The o1 model achieved 98.2 percent accuracy on false statements compared to 94.4 percent on true ones.

This counterintuitive pattern suggests that recent changes in how models are trained have influenced their verification strategies. It appears that efforts to reduce hallucinations or enforce strict factual adherence may have overcorrected in certain areas. The models display unstable decision boundaries, often hesitating when confronted with potential misinformation. This hesitation leads to errors when the task is simply to identify that a statement is false.

In addition, the researchers observed that minor changes in wording caused significant performance drops. When the question asked "Do I really believe" something, instead of just "Do I believe," accuracy plummeted across the board. For the Llama 3.3 70B model, adding the word "really" caused accuracy to drop from 94.2 percent to 63.6 percent for false beliefs. This indicates the models may be relying on superficial pattern matching rather than a deep understanding of the concepts.

Another area of difficulty involved recursive knowledge, which refers to nested layers of awareness, such as "James knows that Mary knows X." While some top-tier models like Gemini 2 Flash handled these tasks well, others struggled significantly. Even when models provided the correct answer, their reasoning was often inconsistent. Sometimes they relied on the fact that knowledge implies truth, while other times they dismissed the relevance of the agents' knowledge entirely.

Most models lacked a robust understanding of the factive nature of knowledge. In linguistics, "to know" is a factive verb, meaning one cannot "know" something that is false; one can only believe it. The models frequently failed to recognize this distinction. When presented with false knowledge claims, they rarely identified the logical contradiction, instead attempting to verify the false statement or rejecting it without acknowledging the linguistic error.

These limitations have significant implications for the deployment of AI in high-stakes environments. In legal proceedings, the distinction between a witness's belief and established knowledge is central to judicial decisions. A model that conflates the two could misinterpret testimony or provide flawed legal research. Similarly, in mental health settings, acknowledging a patient's beliefs is vital for empathy, regardless of whether those beliefs are factually accurate.

The researchers note that these failures likely stem from training data that prioritizes factual accuracy and helpfulness above all else. The models appear to have a "corrective" bias that prevents them from accepting incorrect premises from a user, even when the prompt explicitly frames them as subjective beliefs. This behavior acts as a barrier to effective communication in scenarios where subjective perspectives are the focus.

Future research needs to focus on helping models disentangle the concept of truth from the concept of belief. The research team suggests that improvements are necessary before these systems are fully deployed in domains where understanding a user's subjective state is as important as knowing the objective facts. Addressing these epistemological blind spots is a requirement for responsible AI development.

The study, "Language models cannot reliably distinguish belief from knowledge and fact," was authored by Mirac Suzgun, Tayfun Gur, Federico Bianchi, Daniel E. Ho, Thomas Icard, Dan Jurafsky, and James Zou.

Humans have an internal lunar clock, but we are accidentally destroying it

11 December 2025 at 03:00

Most animals, including humans, carry an internal lunar clock, tuned to the 29.5-day rhythm of the Moon. It guides sleep, reproduction and migration of many species. But in the age of artificial light, that ancient signal is fading – washed out by the glow of cities, screens and satellites.

Just as the circadian rhythm keeps time with the 24-hour rotation of the Earth, many organisms also track the slower rhythm of the Moon. Both systems rely on light cues, and a recent study analysing women's menstrual cycles shows that as the planet brightens from artificial light, the natural contrasts that once structured biological time are being blurred.

Plenty of research suggests the lunar cycle still influences human sleep. A 2021 study found that in Toba (also known as Qom) Indigenous communities in Argentina, people went to bed 30-80 minutes later and slept 20-90 minutes less in the three-to-five nights before the full Moon.

Similar, though weaker, patterns appeared among more than 400 Seattle students in the same study, even amid the city's heavy light pollution. This suggests that electric light may dampen but not erase this lunar effect.

The researchers found that sleep patterns varied not only with the full-Moon phase but also with the new- and half-Moon phases. This 15-day rhythm may reflect the influence of the Moon's changing gravitational pull, which peaks twice per lunar month, during both the full and new Moons, when the Sun, Earth and Moon align. Such gravitational cycles could subtly affect biological rhythms alongside light-related cues.

Laboratory studies have supported these findings. In a 2013 experiment, participants took about five minutes longer to fall asleep during the full Moon phase, slept 20 minutes less, and secreted less melatonin (a hormone that helps regulate the sleep-wake cycle). They also showed a 30% reduction in EEG slow-wave brain activity – an indicator of deep sleep.

Their sleep was monitored over several weeks covering a lunar cycle. The participants also reported poorer sleep quality around the full Moon, despite being unaware that their data was being analysed against lunar phases.

Perhaps the most striking evidence of a lunar rhythm in humans comes from the recent study analysing long-term menstrual records of 176 women across Europe and the US.

Before around 2010 – when LED lighting and smartphone use became widespread – many women's menstrual cycles tended to begin around the full Moon or new Moon phases. Afterwards, that synchrony largely vanished, persisting only in January, when the Moon-Sun-Earth gravitational effects are strongest.

The researchers propose that humans may still have an internal Moon clock, but that its coupling to lunar phases has been weakened by artificial lighting.

A metronome for other species

The Moon acts as a metronome for other species. For example, coral reefs coordinate mass spawning events with precision, releasing eggs and sperm under the moonlight of specific lunar phases.

In a 2016 laboratory study, researchers working with reef-building corals (for example, Acropora millepora) replaced the natural night-time light cycle with regimes of constant light or constant darkness. They found that the normal cycling of clock genes (such as the cryptochromes) was flattened or lost, and the release of sperm and eggs fell out of sync. These findings suggest lunar light cues are integral to the genetic and physiological rhythms that underlie synchronised reproduction.

Other species, such as the marine midge Clunio marinus, use an internal "coincidence detector" that integrates circadian and lunar signals to time their reproduction precisely with low tides. Genetic studies have shown this lunar timing is linked to several clock-related genes – suggesting that the influence of lunar cycles extends down to the molecular level.

However, a 2019 study found that the synchrony of wild coral spawning is breaking down. Scientists think this may be due to pollutants and rising sea temperatures as well as light pollution. But we know that light pollution is causing disruption for many wildlife species that use the Moon to navigate or time their movements.

Near-permanent brightness

For most of human history, moonlight was the brightest light of night. Today, it competes with an artificial glow visible from space. According to the World Atlas of Artificial Night Sky Brightness, more than 80% of the global population – and nearly everyone in Europe and the US – live under a light-polluted sky (one that is bright enough to hide the Milky Way).

In some countries such as Singapore or Kuwait, there is literally nowhere without significant light pollution. Constant sky-glow from dense urban lighting keeps the sky so bright that night never becomes truly dark.

This near-permanent brightness is a by-product of these countries' high population density, extensive outdoor illumination, and the reflection of light off buildings and the atmosphere. Even in remote national parks far from cities, the glow of distant lights can still be detected hundreds of kilometres away.

In cognitive neuroscience, time perception is often described by pacemaker–accumulator models, in which an internal "pacemaker" emits regular pulses that the brain counts to estimate duration. The stability of this system depends on rhythmic environmental cues – daylight, temperature, social routines – that help tune the rate of those pulses.
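
A toy simulation can make the idea concrete. In the sketch below (arbitrary parameter values, purely illustrative), pulses are emitted at a noisy rate, counted over an interval, and converted back into a duration estimate; a pacemaker that runs slower than assumed yields shorter duration estimates.

```python
import random

def estimate_duration(true_seconds, pulse_rate_hz=10.0, rate_noise=0.1, assumed_rate_hz=10.0):
    """Simulate a pacemaker-accumulator duration judgment.

    Pulses are emitted at a rate that jitters around `pulse_rate_hz`; the accumulator
    counts them, and the count is converted back to seconds using `assumed_rate_hz`.
    """
    count, elapsed = 0, 0.0
    while elapsed < true_seconds:
        rate = max(0.1, random.gauss(pulse_rate_hz, rate_noise * pulse_rate_hz))
        elapsed += 1.0 / rate   # time until the next pulse
        count += 1
    return count / assumed_rate_hz

random.seed(1)
print(estimate_duration(60.0))                      # a well-tuned pacemaker: roughly 60 s
print(estimate_duration(60.0, pulse_rate_hz=8.0))   # a slowed pacemaker: closer to 48 s
```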

Losing the slow, monthly cue of moonlight may mean that our internal clocks now run in a flatter temporal landscape, with fewer natural fluctuations to anchor them. Previous psychological research has found disconnection from nature can warp our sense of time.

The lunar clock still ticks within us – faint but measurable. It shapes tides, sleep and the rhythms of countless species. Yet as the night sky brightens, we risk losing not only the stars, but the quiet cadence that once linked life on Earth to the turning of the Moon.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

People who show off luxury vacations are viewed as warmer than those who show off luxury goods

11 December 2025 at 01:00

New research in the Personality and Social Psychology Bulletin suggests that individuals who flaunt expensive experiences, such as luxury vacations or exclusive concert tickets, reap distinct social benefits compared to those who show off material possessions. While both types of conspicuous consumption effectively signal that a person has high status and wealth, displaying experiences also leads observers to perceive the spender as warmer and more relatable.

Humans have a long history of displaying resources to establish social standing. In the modern era, this behavior is known as conspicuous consumption. Psychologists and economists have dedicated significant effort to understanding how the display of expensive material objects, such as designer handbags or high-end automobiles, communicates status.

The general consensus from past literature indicates that while these items effectively signal wealth, they often come at an interpersonal cost. Individuals who flash material goods are frequently viewed as less warm, less friendly, and more manipulative.

Despite this well-established understanding of material displays, less is known about the social consequences of showing off experiences. The market for experiential spending is growing rapidly, with a global value estimated in the trillions. Social media platforms are saturated with images of travelers enjoying scenic views or foodies dining at exclusive restaurants.

"Discussions about conspicuous consumption in the academic literature have often been restricted to material goods like designer jewelry and expensive cars," said study author Wilson Merrell, a postdoctoral researcher at Aarhus University and guest researcher at the University of Oslo.

"But with the proliferation of social media it has become easier than ever to conspicuously consume other kinds of purchases like all-inclusive vacation and visits to Michelin-starred restaurants – time-constrained experiences that someone personally lives through. Given a rich literature on the psychological benefits of material vs. experiential consumption more broadly, we wanted to better understand how these different kinds of purchases communicated status and other traits to perceivers."

The researchers conducted a series of four experiments. The first study involved 421 adult participants recruited online. The research team designed a controlled experiment to isolate the effects of the purchase type from the product itself. They presented all participants with the same product: a high-end Bose home theater sound system.

For half of the participants, the system was described using a material framing. This description highlighted physical properties and the quality of the components. The other half read a description that used an experiential framing. This text emphasized the immersive listening experience and the feelings the product produced. After reading the descriptions, participants evaluated the hypothetical owner of the sound system on various personality traits.

The results offered a clear distinction between status and warmth. Framing the purchase as an experience did not change perceptions of status. Both the material and experiential owners were seen as equally wealthy and upper-class. However, the owner of the experientially framed system was rated as warmer and more communal. This finding suggests that simply shifting the focus of a purchase from ownership to usage can mitigate the negative social judgments usually associated with showing off wealth.

The second study aimed to replicate these results using real-world stimuli and more practical outcomes. The researchers scraped images from Instagram using hashtags related to luxury travel and luxury goods. A new group of 120 participants viewed these posts and evaluated the person who posted them. Instead of just rating traits, the participants judged how suitable the posters would be for specific occupations.

The researchers selected jobs that were stereotypically high-status but low-warmth, such as a corporate lawyer or businessperson. They also selected jobs that were high-warmth, such as a social worker or childcare provider.

The data revealed that people who posted conspicuous experiences were viewed as qualified for both types of roles. They appeared competent enough for the high-status jobs and kind enough for the communal jobs. In contrast, those who posted material goods were seen as suitable for the high-status roles but poor fits for the communal ones. This supports the idea that experiential displays provide a broader social advantage, allowing the consumer to signal status without sacrificing their image as a likable person.

A third experiment investigated the psychological mechanism behind this difference. The authors hypothesized that observers assume experiential buyers are motivated by genuine internal interest rather than a desire to impress others.

To test this, they recruited 475 participants to view social media profiles featuring either material or experiential purchases. The profiles included text explaining why the person made the purchase. The text indicated either an intrinsic motivation, such as personal enjoyment, or an extrinsic motivation, such as wanting to be admired by peers.

When no reason was given, the pattern from previous studies held true. Observers naturally assumed the experiential buyers were more intrinsically motivated. However, when an experiential buyer explicitly admitted to purchasing a trip just to impress others, the warmth advantage disappeared.

In fact, the ratings reversed. An experiential consumer who was motivated by external validation was seen as less warm than a material consumer motivated by genuine passion. This suggests that the social benefit of experiences relies heavily on the assumption that the person is spending money for the sake of the memory, not the applause.

The final study examined the role of social context in these perceptions. Experiences are often shared with others, whereas material goods are frequently used alone. The researchers recruited 334 undergraduate students to read about a target who spent money on conspicuous experiences.

The researchers manipulated two factors: whether the purchase was motivated by enjoyment or prestige, and whether the experience was solitary or social. Participants rated the target's warmth and indicated if they would want to be friends with them. They also played a game to measure how generous they thought the target would be.

The results provided a nuanced picture of the phenomenon. The communal advantage was only present when the experience was both intrinsically motivated and consumed socially. A person who went on a luxury trip alone was not viewed as warmly as someone who went with friends, even if they claimed to love travel.

This indicates that the presence of others is a necessary component of the positive signal sent by experiential spending. When consumption is solitary, it fails to trigger the associations of warmth and connection that usually accompany experiences.

"There are many avenues through which to signal status," Merrell told PsyPost. "Expensive material goods communicate high levels of status and low levels of warmth, while expensive experiential purchases can communicate both high status and relatively high warmth – a 'best of both worlds' strategy. In our work, this difference is largely driven by whether the purchases were made for intrinsic reasons (passion pursuits close to one's identity) or extrinsic reasons (just to show off to others), and whether the purchases involve others (social) or not (solitary)."

While the study provides strong evidence for the social benefits of experiential spending, there are limitations to the generalizability of the findings. The samples were drawn entirely from the United States, meaning the results reflect specific Western cultural norms regarding wealth and display. It is possible that in cultures with different values regarding community or modesty, these effects would not appear or might present differently.

Additionally, the ease of displaying experiences depends heavily on technology. The transient nature of a meal or a trip means it requires active documentation to be conspicuous, unlike a watch that is always visible.

The researchers also note that signaling warmth is not always the primary goal for every individual. "One reading of our paper is that luxury experiences are 'better' signals than luxury material goods," Merrell explained. "However, there are very reasonable situations where someone may want to signal high levels of status and lower levels of warmth."

"For instance, in the case of a dominant political leader. In this case, a luxury material good may be a more appropriate signal than a luxury experience. So it's not that one type of consumption is better than the other, but that we should consider how different types of consumption are perceived when we seek status signaling goals."

In future work, the researchers plan to better understand how these consumption types relate to different forms of social rank, distinguishing between status gained through dominance versus status gained through prestige.

"Prominent theories of status striving advocate for two main paths to achieve social rank: dominance (associated with inflicting costs and punishments to others) and prestige (associated with garnering respect and being well-regarded by others)," Merrell said. "In an on-going project I examine whether conspicuous material vs. experiential consumption is associated with these distinct status pursuits. Early results suggest that experiential conspicuous consumption is more associated with prestige, while material conspicuous consumption is more associated with dominance."

The study, "Flaunting Porsches or Paris? Comparing the Social Signaling Value of Experiential and Material Conspicuous Consumption," was authored by Wilson N. Merrell and Joshua M. Ackerman.

Researchers found a specific glitch in how anxious people weigh the future

10 December 2025 at 23:00

Decisions that balance immediate comfort against long-term benefits are a fundamental part of daily life. Whether choosing to exercise, study for an exam, or have a difficult conversation, individuals constantly weigh the present against the future. A new study published in Personality and Individual Differences suggests that anxiety often short-circuits this process. The researchers found that while information about future outcomes helps most people make better choices, those with high levels of anxiety struggle to look past their immediate emotional discomfort.

Psychologists refer to the ability to guide behavior based on anticipated outcomes as sensitivity to future consequences. This mental calculation allows a person to endure temporary unpleasantness to achieve a valued goal. When this system functions well, it acts as a compass for personal success and well-being. When it fails, individuals may fall into patterns of avoidance. They might choose short-term relief that ultimately worsens their problems or prevents them from moving forward in life.

The researchers, Xinyao Ma and John E. Roberts of the University at Buffalo, initiated this investigation to address a gap in existing psychological literature. Past research on this topic largely relied on artificial assessments involving money. Tests like the Iowa Gambling Task measure how well people learn to avoid financial losses over time. These monetary tasks are effective for studying conditions characterized by impulsivity, such as substance abuse or conduct disorders.

Ma and Roberts argued that financial games fail to capture the reality of internalizing disorders like depression and anxiety. For someone suffering from anxiety, the primary motivator is often not the acquisition of a reward but the reduction of distress. The researchers posited that existing tools lacked ecological validity, meaning they did not resemble the real-world emotional dilemmas people face. They sought to understand if the tendency to prioritize immediate emotional relief over long-term stability is a defining feature of these mental health conditions.

To test this, the authors developed a novel assessment called the Scenario Task. They recruited 504 adults through an online research platform to participate in the experiment. The study utilized a between-subjects design, meaning participants were randomly assigned to one of two different groups.

The researchers presented both groups with fourteen hypothetical scenarios that required a decision. These scenarios involved everyday situations across various domains, such as work, relationships, and household chores. Each situation presented an "approach-avoidance" conflict. The participant had to decide whether to engage in a behavior that might be difficult or boring in the moment but beneficial later, or to avoid the behavior.

The experimental manipulation was subtle but central to the study's design. The first group read scenarios that included specific information about the potential long-term consequences of the decision. The second group, serving as the control, read the same scenarios but without the future-oriented information. Instead, they received irrelevant background details. The researchers then asked participants to rate the likelihood that they would engage in the approach behavior.

The overall results showed that the manipulation worked as intended. Participants who received information about long-term consequences were generally more likely to choose the beneficial approach behavior than those in the control group. This confirms that for the average person, clearly understanding what is at stake in the future helps motivate action in the present.

The team then used linear regression analyses to determine how specific mental health symptoms and personality traits influenced this decision-making process. This is where the distinctions between anxiety and depression became apparent.

Symptoms of generalized anxiety disorder proved to be a strong moderator of decision-making. Individuals with low levels of anxiety responded strongly to the information about future consequences. When they learned that an action would help them in the long run, they were much more likely to do it. However, this effect diminished significantly for individuals with high levels of anxiety.
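
Statistically, a moderation effect like this is typically tested by adding a condition-by-anxiety interaction term to the regression. The sketch below uses simulated data and hypothetical variable names, not the authors' dataset; a negative interaction coefficient would mean the benefit of consequence information shrinks as anxiety rises.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Simulated data mimicking the design: condition (1 = consequence information shown),
# a standardized anxiety score, and the rated likelihood of the approach behavior.
df = pd.DataFrame({
    "condition": rng.integers(0, 2, n),
    "anxiety": rng.normal(0, 1, n),
})
df["approach"] = (
    50 + 10 * df["condition"] - 3 * df["anxiety"]
    - 6 * df["condition"] * df["anxiety"]   # the moderation effect of interest
    + rng.normal(0, 10, n)
)

# Moderation is tested as a condition-by-anxiety interaction term.
model = smf.ols("approach ~ condition * anxiety", data=df).fit()
print(model.params[["condition", "anxiety", "condition:anxiety"]])
```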

The data indicated that highly anxious participants were relatively insensitive to future consequences. Even when the study explicitly presented the long-term benefits of an action, these individuals remained fixated on the immediate difficulty. This aligns with clinical theories suggesting that anxiety functions through negative reinforcement. Anxious individuals learn to avoid situations that trigger distress, which provides immediate relief but prevents them from experiencing positive future outcomes.

The study found similar patterns regarding social anxiety. People who fear social scrutiny also showed a reduced sensitivity to future benefits. They appeared to prioritize the avoidance of immediate social discomfort over the potential for building relationships or resolving conflicts.

The researchers also examined a trait known as behavioral activation. This concept refers to a person's tendency to remain engaged in goal-directed behavior despite obstacles. The findings indicated that people with high behavioral activation were very responsive to future consequences. They utilized the information to guide their choices effectively. Conversely, those with low behavioral activation struggled to use the future as a guide, appearing stuck in their current emotional state.

A similar trend appeared for the trait of perseverance. Individuals who described themselves as able to persist through boring or difficult tasks showed greater sensitivity to future outcomes. Those who identified as "non-perseverant" were less influenced by the long-term view. This suggests that the inability to stick with a task is linked to a failure to keep the end goal in mind.

The results regarding depression were more nuanced than the researchers expected. The team hypothesized that depression would universally blunt sensitivity to the future. However, the total score on the depression screening tool did not exhibit a statistically significant interaction with the experimental condition. This means that depression, as a broad category, did not predict how people used the consequence information.

However, when the researchers broke depression down into specific symptoms, they found clear associations. Symptoms such as difficulty concentrating, feelings of failure, and a lack of interest were significant moderators. Individuals suffering from these specific cognitive and motivational aspects of depression were less able to use future consequences to guide their actions. This suggests that the "brain fog" and low self-worth associated with depression may be the specific drivers of poor decision-making, rather than the low mood itself.

The study yielded null results for two other personality traits: anhedonia and non-planfulness. Anhedonia is the inability to feel pleasure. The researchers expected that people who cannot enjoy things would not care about future rewards. The data did not support this. The authors speculate that the measure they used assessed anhedonia as a permanent trait, whereas a person’s current state of mind might matter more in the moment of decision.

Similarly, "non-planfulness," or the tendency to act impulsively, did not affect the results. This was surprising, as impulsivity is defined by a lack of future planning. The authors suggest that impulsive individuals might lack the self-awareness to report accurately on how they make decisions.

Ma and Roberts noted some limitations to their work. The sample population was drawn from a research volunteer registry that is disproportionately white, female, and older. A significant portion of the participants were retired. Older adults may view future consequences differently than younger adults who are still building their lives. This demographic skew limits how well the findings might apply to the general population.

Additionally, the study relied on self-reported intentions in hypothetical scenarios. While the Scenario Task is designed to be realistic, it is not the same as observing real behavior. It is easier for a participant to say they would have a difficult conversation than to actually have it.

Despite these caveats, the findings offer directions for future research and treatment. The study highlights that insensitivity to future consequences is not just a trait of "impulsive" disorders but is central to anxiety as well. This suggests that anxiety treatments should focus not only on reducing fear but also on training individuals to consciously weigh long-term outcomes.

The researchers propose that interventions could use variants of the Scenario Task to help patients practice this skill. By repeatedly exposing individuals to the link between present actions and future rewards, therapists might help them break the cycle of avoidance. Future studies will need to determine if these laboratory findings translate to clinical settings and if improving this sensitivity leads to symptom reduction.

The study, "An experimental investigation of individual differences in sensitivity to future consequences: Depression, anxiety, and personality," was authored by Xinyao Ma and John E. Roberts.

People prone to boredom tend to adopt faster life history strategies

10 December 2025 at 21:00

A set of studies found that individuals prone to boredom tend to choose faster life history strategies. Similarly, countries with higher boredom proneness scores showed more indicators of faster life history strategies. The research was published in Evolutionary Psychology.

Life history refers to the set of biological and behavioral strategies organisms use to allocate time and energy toward growth, reproduction, parenting, and survival across the lifespan. These strategies include when to mature, how many offspring to have, how much to invest in each offspring, and how long to live.

Life history speed describes where an individual or species falls on a continuum from "fast" to "slow" life strategies. A fast life history involves earlier reproduction, higher risk-taking, shorter planning horizons, and prioritizing immediate rewards. A slow life history involves later reproduction, greater parental investment, long-term planning, and stronger self-regulation.

Humans vary in life history speed depending on ecological conditions, stress, stability, and early-life environments. Unpredictable or harsh conditions tend to push individuals toward faster strategies, favoring earlier and more frequent reproduction. Stable and resource-rich environments tend to promote slower strategies characterized by delayed reproduction and long-term investment.

Study authors Garam Kim and Eunsoo Choi wanted to explore the relationship between boredom and life history strategies (life history speed) at both individual and country levels. They conducted three studies – a pilot study and two additional studies.

The pilot study examined the relationship between boredom proneness and life history strategies among undergraduate students. Ninety-seven students participated, 66 of whom were women. Their average age was 21.4 years, and 79% were Korean.

Participating students completed assessments of boredom proneness (the Boredom Proneness Scale), life history strategies (the Mini-K and the High K Strategy Scale), and impulsive sensation seeking (the Impulsive Sensation-Seeking Scale). Students also reported their monthly household income and rated their perceived family resources.

Study 1 aimed to replicate the results of the pilot study. It was conducted on 298 adults (recruited from an initial pool of 592) through an online panel survey service. Participants completed a survey containing the same assessments of boredom proneness and life history strategies as the pilot study, but also assessments of risk-taking (the Risk-Taking Questionnaire) and future anxiety (the Future Anxiety Scale – Short Form). Future anxiety is the tendency to anticipate future disasters and view the future with dread and uncertainty.

Finally, Study 2 was an analysis of published data aiming to look into associations between boredom proneness and life history strategies at the country level. The study authors hypothesized that people living in boredom-prone countries would be more likely to adopt faster life history strategies.

More specifically, they hypothesized those people would be more open towards casual sex (greater sociosexual unrestrictedness), have shorter lifespans, have more children, give birth earlier in life, and invest less in their children.

Study authors created estimates of boredom proneness, life history strategies, and sexual restrictedness in different countries from published results in various scientific papers. Life expectancy and fertility data came from the UN World Population Prospects 2019. Adolescent birth rates and preprimary school gross enrollment (an indicator of parental investment in children) came from World Bank data and the UNESCO Institute for Statistics data, respectively.

Results of the pilot study confirmed that boredom proneness is associated with a faster life history strategy. Further analysis showed that faster life history strategies mediated the relationship between childhood resources and boredom. In other words, individuals with greater resources as children (whose parents invested more in them) were likely to adopt a slower life history strategy, which in turn made them less prone to boredom.

Results of Study 1 replicated these findings. Boredom proneness was again associated with a faster life history strategy. Additionally, individuals with higher boredom proneness tended to report higher future anxiety. Better family resources and socioeconomic status in childhood were associated with lower boredom proneness and slower life history strategies.

The study authors tested a statistical model proposing that worse socioeconomic status in childhood leads to a faster life history strategy, which in turn leads to greater boredom proneness in adulthood. The results were consistent with this chain of relationships.
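
In its simplest form, such a mediation model can be checked by multiplying the path from childhood resources to life history speed by the path from life history speed to boredom proneness (controlling for resources). The sketch below uses simulated data and hypothetical variable names; the authors' actual model may be more elaborate.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 300

# Simulated standardized scores: more childhood resources -> slower life history ->
# less boredom proneness. Higher "speed" means a faster strategy.
resources = rng.normal(0, 1, n)
speed = -0.5 * resources + rng.normal(0, 1, n)
boredom = 0.6 * speed - 0.1 * resources + rng.normal(0, 1, n)
df = pd.DataFrame({"resources": resources, "speed": speed, "boredom": boredom})

# Path a: resources -> life history speed. Path b: speed -> boredom, controlling for resources.
a = smf.ols("speed ~ resources", data=df).fit().params["resources"]
b = smf.ols("boredom ~ speed + resources", data=df).fit().params["speed"]
indirect = a * b   # published analyses would add a bootstrap confidence interval
print(f"Indirect effect of resources on boredom via life history speed: {indirect:.2f}")
```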

Finally, at the country level, countries with higher levels of boredom proneness tended to show indicators of faster life history strategies, specifically shorter lifespans, higher fertility rates, and higher adolescent birth rates.

"These results suggest that trait boredom may be a functional characteristic of fast life history strategists. This study is the first empirical investigation of trait boredom within a life history framework, highlighting trait boredom's functional role from evolutionary and ecological perspectives," study authors concluded.

The study sheds light on the links between boredom proneness and life history strategy. However, it should be noted that the study relied heavily on self-report questionnaires, leaving room for reporting bias to affect the results. Also, the assessment of childhood socioeconomic status was based on participants' recall, introducing the possibility of recall bias.

The paper, "Pace of Life Is Faster for a Bored Person: Exploring the Relationship Between Trait Boredom and Fast Life History Strategy," was authored by Garam Kim and Eunsoo Choi.

Exercise might act as a double-edged sword for problematic pornography use

10 December 2025 at 19:00

New research published in the Journal of Sex & Marital Therapy sheds light on a complicated relationship between physical fitness and compulsive sexual behaviors. The study suggests that while regular exercise generally reduces the likelihood of problematic pornography use, it may simultaneously intensify the risks for a specific subset of users. These findings offer a nuanced view of how healthy lifestyle habits interact with psychological coping mechanisms.

To understand why people develop compulsive behaviors, psychologists often look to Self-Determination Theory. This framework posits that all humans share three basic psychological needs. We require autonomy, or the feeling that we are in control of our own actions. We need competence, which is the sense of mastery and effectiveness in our tasks. Finally, we need relatedness, or the experience of meaningful connection with others.

When these needs are blocked or frustrated, individuals experience a decline in mental well-being. This state is known as basic psychological need frustration. People often react to this frustration by seeking external comforts or escapes. For some, this manifests as the consumption of pornography to manage negative emotions.

Researchers have previously identified that using pornography as a coping mechanism is a strong predictor of problematic use. This goes beyond casual viewing. Problematic pornography use involves a loss of control and continued consumption despite negative consequences. It shares similarities with other behavioral addictions.

The question remains regarding how positive lifestyle factors influence this dynamic. Physical exercise is widely regarded as a beneficial intervention for various addictions. It typically boosts mood and reduces stress. However, its specific interaction with the psychological drivers of pornography use has remained unclear.

A team of researchers sought to map these pathways. The group included Ying Zhang, Xiaoliu Jiang, Yuexin Jin, and Lijun Chen from Fuzhou University and Nankai University in China. They collaborated with Zhihua Huang from Fuzhou University and Beáta Bőthe from the University of Montreal in Canada. They hypothesized that exercise would act as a moderator. They believed it might change how frustrated psychological needs translate into compulsive behaviors.

The researchers recruited 600 Chinese adults for the study. The participants ranged in age from 18 to 68. The sample consisted of 39.83% women. All participants had viewed pornography within the past six months.

The study defined pornography for participants as "content inducing sexual thoughts with explicit depictions of genital-involved sexual activities." The researchers administered a series of standardized questionnaires. These measures assessed the participants' levels of basic psychological need frustration. They also measured motivations for using pornography, such as boredom avoidance or stress reduction.

To assess the severity of the behavior, the team used the Problematic Pornography Consumption Scale. This tool evaluates symptoms like withdrawal, relapse, and conflict with daily life. Participants also reported their physical exercise habits. The researchers defined regular exercise based on national health guidelines. This required moderate-intensity activity more than three times a week for at least 30 minutes per session.

The team employed statistical models to analyze the data. They looked for mediation effects, which explain how one variable influences another. They also looked for moderation effects, which explain when or for whom an effect occurs. Additionally, they utilized a technique called network analysis. This method visualizes the complex web of relationships between different psychological variables. It treats variables as "nodes" and the connections between them as "edges."
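
In psychological network analyses of this sort, edges are commonly estimated as partial correlations (usually regularized) between the measured variables. A bare-bones, unregularized sketch with simulated data and hypothetical node names is shown below.

```python
import numpy as np

rng = np.random.default_rng(7)
nodes = ["need_frustration", "coping_motive", "stress_motive", "ppu", "exercise"]

# Simulated standardized scores for the five hypothetical "nodes" (not the study's data).
data = rng.normal(size=(600, len(nodes)))

# Partial correlations come from the inverse of the correlation matrix (the precision matrix).
corr = np.corrcoef(data, rowvar=False)
precision = np.linalg.inv(corr)
d = np.sqrt(np.diag(precision))
partial_corr = -precision / np.outer(d, d)
np.fill_diagonal(partial_corr, 1.0)

# Each off-diagonal entry is an "edge"; in published analyses, near-zero edges are usually
# shrunk to exactly zero by regularization (e.g., the graphical lasso).
for i in range(len(nodes)):
    for j in range(i + 1, len(nodes)):
        print(f"{nodes[i]} -- {nodes[j]}: {partial_corr[i, j]:+.2f}")
```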

The study confirmed that frustrated psychological needs are a significant driver of problematic use. When individuals feel their needs for autonomy, competence, or relatedness are thwarted, they are more likely to use pornography to cope. This coping motivation then acts as a bridge leading to problematic behavior.

The most distinct findings appeared when the researchers added exercise into the equation. They discovered two divergent pathways. The first pathway highlighted the protective nature of physical activity.

For individuals who did not exercise regularly, frustration with "relatedness" (feeling lonely or excluded) was strongly linked to using pornography to avoid boredom. This suggests that lonely individuals often turn to pornography to fill a social void or pass time. However, for regular exercisers, this link was much weaker.

The network analysis revealed that exercise disrupted the connection between loneliness and boredom avoidance. The researchers interpret this as a compensatory effect. Exercise environments often provide social interactions. Team sports or fitness classes foster connections. Even solo exercise can reduce boredom proneness. Consequently, exercisers were less likely to soothe their loneliness with pornography.

The second pathway revealed a counterintuitive potential risk factor. The researchers examined the link between using pornography for stress reduction and the development of problematic use. For those who exercised regularly, this specific connection was stronger than for non-exercisers.

This means that if a regular exerciser chooses to use pornography specifically to relieve stress, they are more susceptible to developing problematic habits. The researchers offer a physiological explanation for this unexpected result. Exercise releases endorphins and dopamine, creating a sense of pleasure and stress relief. Pornography consumption triggers similar neurochemical rewards.

The authors suggest a mechanism of cross-sensitization. Individuals who exercise regularly may have a heightened sensitivity to these reward pathways. They might overestimate the stress-relieving benefits of pornography because their brains are primed for that type of release. When they use pornography for stress relief, the reinforcement is intense. This accelerates the cycle toward compulsive use.

These results paint a complex picture of healthy behaviors. Exercise serves as a buffer against boredom-driven usage. It helps satisfy social needs that might otherwise be displaced onto digital sexual consumption. In this sense, it acts as a protective shield for mental health.

Yet, the study indicates that exercise is not a universal panacea. It alters the reward sensitivity of the individual. For exercisers, the danger lies specifically in stress management. If they come to rely on pornography as a quick fix for high stress, the behavior can become rigid and problematic more quickly than it might for others.

The authors note that these insights could refine therapeutic interventions. Mental health practitioners often recommend exercise to clients struggling with compulsive behaviors. This advice remains valid but requires nuance.

Clinicians might need to help clients distinguish between healthy stress relief and maladaptive coping. For clients who exercise heavily, it may be important to monitor their motivations for pornography use closely. They should be aware that their brain’s reward system acts efficiently, which can be a double-edged sword.

The study does have some limitations. The research used a cross-sectional design. This means it captured a snapshot of data at a single point in time. While the statistical models suggest directions of influence, they cannot definitively prove cause and effect. It is possible that people with problematic pornography use are simply less likely to exercise.

The data relied on self-reports. Participants answered questions about their own behaviors and feelings. This introduces the potential for bias, as people may not always assess themselves accurately. Additionally, the sample was recruited online and was predominantly young and well-educated. This demographic profile may not represent the general population perfectly.

The researchers emphasize the need for longitudinal studies. Tracking individuals over time would clarify whether exercise directly causes changes in how people cope with frustration. Future research could also explore the physiological mechanisms more directly. Measuring dopamine responses in exercisers versus non-exercisers could validate the cross-sensitization theory.

Despite these caveats, the research provides a detailed map of how lifestyle and psychology intersect. It challenges the assumption that positive habits always work in isolation. Instead, it shows that physical activity changes the internal landscape. It closes some doors to unhealthy behavior while potentially opening others, depending on the individual’s motivation.

The study, "The Moderating Role of Regular Exercise on the Relationship Between Basic Psychological Need Frustration and Problematic Pornography Use: Two Pathways Corroborated by Two Complementary Methods," was authored by Ying Zhang, Xiaoliu Jiang, Yuexin Jin, Beáta Bőthe, Zhihua Huang, and Lijun Chen.

Alcohol use disorder triggers a distinct immune response linked to neurodegeneration

10 December 2025 at 17:00

New research published in Brain, Behavior, and Immunity provides evidence that alcohol use disorder triggers a distinct type of immune response in the brain. The findings suggest that excessive alcohol consumption shifts the brain’s immune cells into a reactive state that ultimately damages neurons. The study identifies a specific cellular pathway linking alcohol exposure to neurodegeneration.

Scientists have recognized for some time that the brain possesses its own immune system. The primary component of this system is a type of cell known as microglia. Under normal conditions, microglia function as caretakers that maintain the health of the brain environment. They clear away debris and monitor for threats.

When the brain encounters injury or disease, microglia undergo a transformation. They become "reactive," changing their shape and function to address the problem. While this reaction is intended to protect the brain, chronic activation can lead to inflammation and tissue damage.

Previous investigations established that heavy alcohol use increases inflammation in the brain. However, the specific characteristics of the microglia in individuals with alcohol use disorder remained poorly defined. It was unclear if these cells behaved similarly to how they react in other neurodegenerative conditions, such as Alzheimer’s disease.

The authors of the new study sought to create a detailed profile of these cells. They aimed to understand how reactive microglia might contribute to the brain damage and cognitive deficits often observed in severe alcohol dependency.

"We wanted to clearly define the microglial activated phenotype in alcohol use disorder using both morphology and protein expression from histochemistry and compare that to messenger RNA transcription changes," said study author Fulton T. Crews, the John Andrews Distinguished Professor at the University of North Carolina at Chapel Hill.

The research team examined post-mortem brain tissue. They focused on the orbital frontal cortex, a region of the brain involved in decision-making and impulse control. The samples included tissue from twenty individuals diagnosed with alcohol use disorder and twenty moderate drinking controls. The researchers matched these groups by age to ensure that aging itself did not skew the results.

The researchers utilized two primary methods to analyze the tissue. First, they used immunohistochemistry to visualize proteins within the cells. This technique allows scientists to see the shape and quantity of specific cell types. Second, they employed real-time PCR to measure gene expression. This reveals which genetic instructions are being actively turned into proteins. By comparing protein levels and gene activity, the researchers could build a comprehensive picture of the cellular state.
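
The article names the two measurement techniques without giving quantification details, so the following is only a sketch of the delta-delta-Ct calculation that real-time PCR results are conventionally converted into; the gene roles and cycle-threshold numbers are invented for illustration and are not values from the study.

```python
# Hypothetical sketch of the standard 2^-ddCt method for expressing
# real-time PCR results as fold changes; all numbers are made up.
def fold_change(ct_target_case: float, ct_ref_case: float,
                ct_target_ctrl: float, ct_ref_ctrl: float) -> float:
    """Relative expression of a target gene (case vs. control), normalized to a reference gene."""
    d_ct_case = ct_target_case - ct_ref_case   # normalize within the case sample
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl   # normalize within the control sample
    dd_ct = d_ct_case - d_ct_ctrl
    return 2.0 ** (-dd_ct)                     # a lower Ct means more transcript

# Example: a transcript that reaches threshold two cycles earlier in cases
print(fold_change(ct_target_case=24.0, ct_ref_case=18.0,
                  ct_target_ctrl=26.0, ct_ref_ctrl=18.0))  # prints 4.0 (about four-fold higher)
```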

The analysis revealed significant changes in the microglia of the alcohol use disorder group. These cells displayed a "reactive" phenotype characterized by increased levels of specific proteins. Markers associated with inflammation and cellular cleanup, such as Iba1 and CD68, were substantially elevated. The density of Iba1 staining, which indicates the presence and size of these cells, was more than ten times higher in the alcohol group compared to controls.

The researchers also identified a discrepancy between protein levels and gene expression. While the proteins for markers like Iba1 and CD68 were abundant, the corresponding mRNA levels were not significantly changed. This indicates that relying solely on gene expression data might miss key signs of immune activation in the brain. It suggests that the increase in these markers occurs at the protein level or through the accumulation of the cells themselves.

The researchers found that this microglial profile is distinct from what is typically seen in Alzheimer’s disease. In Alzheimer’s, reactive microglia often show increases in a receptor called TREM2 and various complement genes. The alcohol-exposed brains did not show these specific changes. Instead, they displayed a reduction in Tmem119, a marker associated with healthy, homeostatic microglia. This helps distinguish the pathology of alcohol use disorder from other neurodegenerative diseases.

Beyond microglia, the study investigated astrocytes. Astrocytes are another type of glial cell that generally support neuronal function. The data showed that markers for reactive astrocytes were higher in the alcohol group. This increase was strongly correlated with the presence of reactive microglia.

The researchers also assessed the health of neurons in the orbital frontal cortex. They observed a reduction in neuronal markers, such as NeuN and MAP2. This reduction indicates a loss of neurons or a decrease in their structural integrity. When the researchers analyzed the relationships between these variables, they found a clear pattern. The data supports a model where alcohol activates microglia, which in turn activates astrocytes. These reactive astrocytes then appear to contribute to neuronal damage.

To verify this sequence of events, the researchers turned to a mouse model. They exposed mice to chronic ethanol levels that mimic binge drinking. As expected, the mice developed reactive microglia and astrocytes, along with signs of oxidative stress. The team then used a genetic tool called DREADDs to selectively inhibit the microglia.

When the researchers prevented the microglia from becoming reactive, the downstream effects were blocked. The mice did not develop reactive astrocytes despite the alcohol exposure. Furthermore, the markers of oxidative stress and DNA damage were reduced. This experimental evidence provides strong support for the findings in human tissue. It suggests that microglia act as the primary driver of the neuroinflammatory cascade caused by alcohol.

"Neuroinflammation and activated microglia are linked to multiple brain diseases, including alcohol use disorder, but are poorly defined," Crews told PsyPost. "They are likely not the same across brain disorders and we are trying to improve the definition. Studies finding activated microglia in Alzheimer’s have observed large increases in expression of complement genes, but our study did not find complement proteins increased in alcohol use disorder, suggesting different types of activation."

The researchers also noted a connection between the severity of the cellular changes and drinking history. In the human samples, levels of reactive glial markers correlated with lifetime alcohol consumption. Individuals who had consumed more alcohol over their lives tended to have more extensive activation of these immune cells. This points to a cumulative effect of drinking on brain health.

Future research will likely focus on how these reactive microglia differ from those in other conditions. Understanding the unique "signature" of alcohol-induced inflammation could lead to better diagnostic tools.

Scientists may also explore whether treatments that target glial activation could protect the brain from alcohol-related damage. Developing therapies to block this specific immune response could potentially reduce neurodegeneration in individuals struggling with alcohol addiction.

"Our long term goal is to understand how microglia contribute to disease progression and to develop therapies blocking microglial activation and neuroinflammation that prevent chronic brain diseases," Crews said.

The study, "Cortical reactive microglia activate astrocytes, increasing neurodegeneration in human alcohol use disorder," was authored by Fulton T. Crews, Liya Qin, Leon Coleman, Elena Vidrascu, and Ryan Vetreno.

Conservatives are more prone to slippery slope thinking

10 December 2025 at 15:00

New research suggests that individuals who identify as politically conservative are more likely than their liberal counterparts to find "slippery slope" arguments logically sound. This tendency appears to stem from a greater reliance on intuitive thinking styles rather than deliberate processing. The findings were published in the Personality and Social Psychology Bulletin.

Slippery slope arguments are a staple of rhetoric in law, ethics, and politics. These arguments suggest that a minor, seemingly harmless initial action will trigger a chain reaction leading to a catastrophic final outcome.

A classic example is the idea that eating one cookie will lead to eating ten, which will eventually result in significant weight gain. Despite the prevalence of this argumentative structure, psychological research has historically lacked a clear understanding of who finds these arguments persuasive.

"The most immediate motivation for this research was an observation that, despite being relatively common in everyday discussions and well-researched in philosophy and law, there is simply not much psychological research on slippery slope thinking and arguments," explained study author Rajen A. Anderson, an assistant professor at Leeds University Business School.

"We thus started with some relatively basic questions: Why do people engage in this kind of thinking and are certain people more likely to agree with these kinds of arguments? We then focused on political ideology for two reasons: Politics is rife with slippery slope arguments, and existing psychological theories would suggest multiple possibilities for how political ideology relates to slippery slope thinking."

Some theoretical models suggested that political extremists on both sides would favor these arguments due to cognitive rigidity and a preference for simplistic causal explanations. Other theories pointed toward liberals, citing their tendency to expand concept definitions to include a wider range of harms. A third perspective posited that conservatives might be most susceptible due to a general preference for intuition and a psychological aversion to uncertainty or change.

To investigate these competing hypotheses, the researchers conducted 15 separate studies involving diverse methodologies. The project included survey data, experimental manipulations, and natural language processing of social media content. The total sample size across these investigations included thousands of participants. The researchers recruited subjects from the United States, the Netherlands, Finland, and Chile to test whether the findings would generalize across different cultures and languages.

In the initial set of studies, the research team presented participants with a series of non-political slippery slope arguments. These vignettes described everyday scenarios, such as a person showing up late to work or breaking a diet. For instance, one scenario suggested that if a person skips washing dishes today, they will eventually stop cleaning their house entirely. Participants rated how logical they perceived these arguments to be. They also reported their political ideology on a scale ranging from liberal to conservative.

The results from these initial surveys revealed a consistent pattern. Individuals who identified as more conservative rated the slippery slope arguments as significantly more logical than those who identified as liberal. This association remained statistically significant even when the researchers controlled for demographic factors such as age and gender. The pattern held true in the international samples as well, indicating that the link between conservatism and slippery slope thinking is not unique to the political climate of the United States.

To assess how these cognitive tendencies manifest in real-world communication, the researchers analyzed over 57,000 comments from political subreddits. They collected data from communities dedicated to both Democratic and Republican viewpoints. The team utilized ChatGPT to code the comments for the presence of slippery slope reasoning.
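
The article does not reproduce the authors' coding pipeline, so the snippet below is only a hedged sketch of how comments can be labeled for slippery slope reasoning with a large language model; the prompt wording, the gpt-4o-mini model choice, and the yes/no output format are assumptions made for illustration, not details taken from the study.

```python
# Hypothetical sketch of LLM-assisted content coding; the prompt and model
# are illustrative and not the researchers' actual pipeline.
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

PROMPT = (
    "You are coding Reddit comments for a content analysis. Answer only YES "
    "or NO: does the following comment use slippery slope reasoning, i.e. "
    "claim that a minor initial action will set off a chain of events ending "
    "in a much worse outcome?\n\nComment: {text}"
)

def codes_slippery_slope(text: str) -> bool:
    """Return True if the model labels the comment as slippery slope reasoning."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model would do
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,
    )
    return response.choices[0].message.content.strip().upper().startswith("YES")

print(codes_slippery_slope(
    "First they ban one book, and before long nothing will be allowed in print."
))
```

In practice, a pipeline like this would also be validated by checking a sample of the model's labels against human coders before comparing aggregate counts across communities.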

This analysis showed that comments posted in conservative communities were more likely to exhibit slippery slope structures than those in liberal communities. Additionally, comments that utilized this style of argumentation tended to receive more approval, in the form of "upvotes," from other users.

The researchers then sought to understand the psychological mechanism driving this effect. They hypothesized that the difference was rooted in how individuals process information. Conservative ideology has been linked in past research to "intuitive" thinking, which involves relying on gut feelings and immediate responses. Liberal ideology has been associated with "deliberative" thinking, which involves slower, more analytical processing.

To test this mechanism, the researchers measured participants’ tendencies toward intuitive versus deliberative thought. They found that intuitive thinking statistically mediated the relationship between conservatism and the endorsement of slippery slope arguments. This means that conservatives were more likely to accept these arguments largely because they were more likely to process the information intuitively.

In a subsequent experiment, the researchers manipulated how participants processed the arguments. They assigned one group of participants to a "deliberation" condition. In this condition, participants were instructed to think carefully about their answers. They were also forced to wait ten seconds before they could rate the logic of the argument. The control group received no such instructions and faced no time delay.

The data from this experiment provided evidence for the intuition hypothesis. When conservative participants were prompted to think deliberately and forced to slow down, their endorsement of slippery slope arguments decreased significantly. In fact, the gap between conservative and liberal ratings narrowed substantially in the deliberation condition. This suggests that the ideological difference is not necessarily a fixed trait but is influenced by the mode of thinking a person employs at the moment.

Another study investigated whether the structure of the argument itself mattered. The researchers presented some participants with a full slippery slope argument, including the intermediate steps between the initial action and the final disaster. Other participants viewed a "skipped step" version, where the initial action led immediately to the disaster without explanation.

The results showed that conservatives only rated the arguments as more logical when the intermediate steps were present. This indicates that the intuitive appeal of the argument relies on the plausibility of the causal chain.

Finally, the researchers examined the potential social consequences of this cognitive style. They asked participants about their support for punitive criminal justice policies, such as "three strikes" laws or mandatory minimum sentences.

The analysis revealed that slippery slope thinking was a significant predictor of support for harsher sentencing. Individuals who believed that small negative actions lead to larger disasters were more likely to support severe punishment for offenders. This helps explain, in part, why conservatives often favor stricter criminal justice measures.

"Slippery slope thinking describes a particular kind of prediction: If a minor negative event occurs, do I think that worse events will follow? Our findings suggest that being more politically conservative is associated with engaging in more slippery slope thinking, based on a greater reliance on intuition: Slippery slope arguments are often intuitively appealing, and this intuitive appeal brings people in," Anderson told PsyPost.

"If we change this reliance on intuition (e.g., encouraging people to think deliberately about the argument), then there’s less of an effect of politics. This political difference in slippery slope thinking has consequences for the kinds of arguments that people use on social media, and in how much they support harsher criminal sentencing policies."

Most of the arguments used in the surveys were non-political in nature. This was a deliberate design choice to measure underlying cognitive styles without the interference of partisan bias regarding specific issues.

"We wanted to measure baseline tendencies to engage in slippery slope thinking in general, setting aside potential bias just from participants agreeing with the political message of an argument," Anderson explained. "What this means is that, all else being equal, our results suggest that being more politically conservative corresponds to more slippery slope thinking."

"What this does not mean is that conservatives will always endorse every slippery slope argument more than liberals will: It is very easy to create an argument that liberals will endorse more than conservatives, because the argument supports a conclusion that liberals will agree with."

Future research could explore how these cognitive tendencies interact with specific political issues. Researchers might also examine whether interventions designed to reduce reliance on intuition could alter support for specific policies rooted in slippery slope logic.

The current work provides a baseline for understanding how differing cognitive styles contribute to political disagreements. It suggests that political polarization is not merely a disagreement over facts but also a divergence in how groups intuitively predict the consequences of human behavior.

"One potential misinterpretation is that readers may think that slippery slope thinking is illogical or irrational (since that’s often how slippery slope thinking is talked about), and thus we are saying that conservatives are more illogical or irrational than liberals," Anderson added. "To be direct, we are not saying that."

"How logical or illogical a slippery slope argument is depends on the specific steps of the argument: If A happens, what’s the probability that B will follow? If B happens, what’s the probability that C will follow? etc. If the probabilities are high, then slippery slope thinking is more 'logical'; If the probabilities are low, then slippery slope thinking is less 'logical'. In fact, there is some research to suggest that dishonest behavior sometimes does look like a slippery slope."

The study, "'And the Next Thing You Know . . .': Ideological Differences in Slippery Slope Thinking," was authored by Rajen A. Anderson, Daan Scheepers, and Benjamin C. Ruisch.

Childhood trauma linked to worse outcomes in mindfulness therapy for depression

New research published in PLOS One finds that childhood trauma may worsen outcomes and increase risks in mindfulness meditation programs designed for managing depression.

Mindfulness-Based Cognitive Therapy (MBCT) was originally developed to prevent relapse in people who had recovered from depression. It combines meditation practices with cognitive therapy techniques. Over time, MBCT and similar mindfulness-based programs have been offered to people experiencing active depression. While many participants report improvements, researchers have begun to notice that not everyone responds in the same way.

Previous studies hinted that childhood trauma might influence how well mindfulness programs work. In some cases, trauma survivors benefited more from MBCT when it was used to prevent relapse. But when treating active depression, the picture was less clear. Some participants with trauma histories struggled to improve, and reports of meditation-related adverse effects – such as anxiety, panic, or traumatic memories resurfacing – raised concerns.

A research team at Brown University in Rhode Island set out to explore this gap. Led by Nicholas K. Canby, they conducted two clinical trials. The first involved 52 participants (average age 47 years, 79% female), while the second included 104 participants (average age 40 years, 74% female). All participants had symptoms of depression, and some had past or subclinical post-traumatic stress disorder (PTSD).

In the first study, participants were randomized to an MBCT program or a waitlist control group. In the second study, participants were assigned to standard MBCT, focused attention meditation, or open monitoring practices.

"The MBCT module followed the standard session-by-session manual, while the [focused attention meditation] and [open monitoring practices] curriculums emphasized specific forms of meditation that are both present in standard MBCT," Canby and colleagues explained.

Researchers measured depression symptoms before and after treatment, tracked dropout rates, and asked participants about any unexpected or unpleasant experiences during meditation.

Across both studies, childhood trauma predicted worse depression outcomes. In particular, childhood sexual abuse consistently emerged as a strong predictor of poor depression outcomes and was significantly linked to higher dropout rates in the larger second study.

Emotional neglect and emotional abuse were also linked to less improvement in depression symptoms. Participants with trauma histories were more likely to report meditation-related side effects, ranging from vivid imagery and heightened anxiety to dissociation and emotional blunting. Some described feeling trapped or overwhelmed during body-focused meditation practices, which triggered memories of past abuse.

The authors concluded, "childhood trauma predicts poorer outcomes in MBCT treatment for active depression yet better outcomes when MBCT is used as a relapse prevention program in remitted individuals who are not currently depressed."

Canby and colleagues emphasize that meditation is not inherently harmful, but that trauma survivors may need additional support or modifications to standard programs. For example, shorter meditation sessions, smaller group sizes, or trauma-informed guidance could help reduce risks.

The study does have limitations. The participants were mostly female, white, and highly educated, meaning the findings may not apply to all groups. Additionally, one of the trials lacked a non-meditation control group, making it harder to determine whether the negative outcomes were specific to mindfulness or part of a broader treatment challenge.

The study, "Childhood trauma and subclinical PTSD symptoms predict adverse effects and worse outcomes across two mindfulness-based programs for active depression," was authored by Nicholas K. Canby, Elizabeth A. Cosby, Roman Palitsky, Deanna M. Kaplan, Josie Lee, Golnoosh Mahdavi, Adrian A. Lopez, Roberta E. Goldman, Kristina Eichel, Jared R. Lindahl, and Willoughby B. Britton.

Semaglutide helps manage metabolic side effects of antipsychotic drugs

10 December 2025 at 03:00

Recent clinical research indicates that semaglutide may effectively reverse weight gain and blood sugar issues caused by certain antipsychotic medications. A randomized trial demonstrated that patients taking this drug experienced weight loss and improved metabolic health compared to those receiving a placebo. These findings were published in JAMA Psychiatry.

People diagnosed with schizophrenia face a reduced life expectancy compared to the general population. This gap is estimated to be approximately fifteen years. The primary driver of this early mortality is not the psychiatric condition itself but rather cardiovascular disease. High rates of obesity and type 2 diabetes are common in this group. These physical health issues stem from a combination of lifestyle factors and genetic predispositions.

A major contributing factor to poor physical health is the treatment for the mental illness itself. Antipsychotic medications are essential for managing the symptoms of schizophrenia. However, they frequently cause severe side effects related to metabolism. Patients often experience rapid weight gain and disruptions in how their bodies process glucose.

Two specific medications, clozapine and olanzapine, are known to carry the highest risk for these metabolic problems. These drugs are classified as second-generation antipsychotics. Despite these risks, they remain vital tools for psychiatrists. Clozapine is often the only effective option for patients who do not respond to other treatments.

Doctors face a difficult dilemma when treating these patients. Switching a patient off clozapine to improve their physical health can lead to a relapse of psychosis. Consequently, physicians often attempt to manage the side effects with additional medications. Common strategies include prescribing metformin or topiramate to control weight and blood sugar.

Unfortunately, these add-on treatments often provide only limited benefits. Patients might lose a small amount of weight, but it is rarely enough to reverse the risk of diabetes or heart disease. There is a pressing need for therapies that can powerfully counteract metabolic side effects without interfering with psychiatric care. This need drove the current research effort.

The study was led by Marie R. Sass from the Mental Health Center Copenhagen in Denmark. She worked alongside a large team of researchers from Danish institutions and the Zucker Hillside Hospital in New York. They sought to determine if newer diabetes drugs could offer a better solution. Specifically, they investigated a class of drugs known as glucagon-like peptide-1 receptor agonists, or GLP-1RAs.

Semaglutide is a well-known medication in this class. It mimics a hormone that regulates appetite and insulin secretion. Regulatory bodies have approved it for treating type 2 diabetes and obesity. The researchers hypothesized that it could protect patients with schizophrenia from the metabolic damage caused by their antipsychotic regimen.

The research team designed a rigorous experiment to test this theory. They conducted a multicenter, double-blind, randomized clinical trial. This design is considered the gold standard for medical research. It minimizes bias by ensuring neither the doctors nor the patients know who is receiving the real drug.

The trial included 73 adult participants. All participants had been diagnosed with a schizophrenia spectrum disorder. Each participant had started treatment with either clozapine or olanzapine within the previous five years. This criterion focused the study on the early stages of metabolic disruption.

The researchers screened these individuals for signs of blood sugar problems. Participants had to show evidence of prediabetes or early-stage diabetes to qualify. They were then randomly assigned to two groups. One group received a weekly injection of semaglutide, while the other received a placebo injection.

The trial lasted for 26 weeks. During this time, the researchers gradually increased the dose of semaglutide to a target of 1 milligram. This is a standard dose for diabetes management. The team monitored the participants closely for changes in health markers and side effects.

The primary goal was to measure changes in hemoglobin A1c levels. Hemoglobin A1c is a blood test that reflects average blood sugar levels over the past three months. It provides a more stable picture of metabolic health than a single daily glucose test. The researchers also tracked body weight and waist circumference.

The results showed a distinct advantage for the group receiving the medication. Semaglutide reduced hemoglobin A1c levels compared to the placebo. The magnitude of the improvement was clinically significant. This suggests a substantial reduction in the risk of developing full-blown diabetes.

The data revealed that 43 percent of the individuals treated with semaglutide achieved what doctors call "low-risk" blood sugar levels. In comparison, only 3 percent of the placebo group reached this healthy range. This stark difference highlights the drug’s efficacy. It effectively normalized glucose metabolism for nearly half of the treated patients.

Weight loss results were equally distinct. After adjusting for the effects of the placebo, the semaglutide group lost an average of 9.2 kilograms, or about 20 pounds. This physical change was accompanied by a reduction in waist size. The average reduction in waist circumference was approximately 7 centimeters.

The study also examined body composition in greater detail. The researchers found that the weight loss was primarily due to a reduction in fat mass. This is a positive outcome, as muscle loss can be a concern with rapid weight reduction. The reduction in total body fat suggests a genuine improvement in physical health.

Safety was a primary concern throughout the trial. The researchers needed to ensure that semaglutide would not interfere with the antipsychotic medications. They found that psychiatric symptoms did not worsen in the group taking semaglutide. Hospitalization rates for psychiatric reasons were low and similar in both groups.

Physical side effects were consistent with what is known about GLP-1 receptor agonists. The most common complaints were gastrointestinal issues. Nausea, vomiting, and constipation were reported more frequently in the semaglutide group. These side effects are typical for this class of drugs and often subside over time.

One participant in the semaglutide group died of sudden cardiac death shortly after the trial concluded. An autopsy was performed to investigate the cause. The medical examiners determined that the death was not related to the semaglutide treatment. Serious adverse events were otherwise balanced between the two groups.

The researchers also looked at secondary outcomes unrelated to weight. One finding involved nicotine use. Smoking rates are historically very high among people with schizophrenia. The study data suggested that semaglutide might reduce nicotine dependence.

Participants who smoked and took semaglutide had lower scores on a test measuring nicotine dependence compared to the placebo group. This aligns with emerging theories that GLP-1 drugs may influence reward pathways in the brain. It raises the possibility that these drugs could help treat addiction. However, the researchers noted this was an exploratory finding.

There were limitations to what the study could determine regarding other organs. The team did not see significant changes in liver function or cholesterol levels. This might be because the participants were relatively young and their metabolic problems were in the early stages. It is also possible that the 1 milligram dose was not high enough to alter lipid profiles significantly.

The dose used in this study is lower than the 2.4 milligram dose often prescribed specifically for weight loss in the general population. The researchers suggest that higher doses might yield even greater benefits. Longer trials would be necessary to confirm this. The 26-week duration was relatively short in the context of lifelong chronic illness.

The demographics of the study population also present a limitation. The majority of participants were White. This limits the ability to generalize the findings to other racial and ethnic groups who may have different metabolic risk profiles. Future studies will need to be more inclusive to ensure the treatment is effective for everyone.

Another challenge mentioned is the cost and accessibility of these medications. GLP-1 receptor agonists are currently expensive. This presents a barrier for many patients with severe mental illness who rely on public health systems. The authors argue that preventing diabetes and heart disease could save money in the long run.

The study, "Semaglutide and Early-Stage Metabolic Abnormalities in Individuals With Schizophrenia Spectrum Disorders: A Randomized Clinical Trial," was authored by Marie R. Sass, Mette Kruse Klausen, Christine R. Schwarz, Line Rasmussen, Malte E. B. Giver, Malthe Hviid, Christoffer Schilling, Alexandra Zamorski, Andreas Jensen, Maria Gefke, Heidi Storgaard, Peter S. Oturai, Andreas Kjaer, Bolette Hartmann, Jens J. Holst, Claus T. Ekstrøm, Maj Vinberg, Christoph U. Correll, Tina Vilsbøll, and Anders Fink-Jensen.
