
Hair shine linked to perceptions of youth and health in women

2 November 2025 at 23:00

A new study provides evidence that specific hair characteristics, namely alignment and shine, play a significant part in how a woman’s age, health, and attractiveness are perceived. The research, published in the International Journal of Cosmetic Science, suggests that women with straighter and shinier hair are consistently judged as being younger, healthier, and more attractive.

Scientific investigations into female physical appearance have historically concentrated on facial features like symmetry or skin condition. In many of these studies, information about hair is intentionally removed, either by having participants wear a hairband or by digitally editing it out of images. This approach has left a gap in understanding how hair, which is easily altered, contributes to social perceptions.

“Research investigating female physical appearance mostly considered the role of facial features in assessments of, for example, attractiveness. Hair has typically been removed or covered in rating studies,” said study author Bernhard Fink, who is affiliated with the Department of Evolutionary Anthropology at the University of Vienna and is the CEO of Biosocial Science Information.

“Yet people report high concern with the appearance of their hair, and poor hair condition can impact self-perception and self-esteem. We had evidence from previous research using computer-generated (rendered) female hair that human observers are sensitive to even subtle variations of hair diameter, density, and style. Here, we extend this evidence to the study of natural hair wigs, worn by female models, and the systematic manipulation of hair alignment, shine, and volume.”

The research consisted of two experiments in which female participants rated images of a woman wearing different wigs. In the first experiment, the researchers focused specifically on the impact of hair shine. They prepared 10 pairs of natural Caucasian hair wigs that varied in color, length, and style, including straight and curly options. For each pair, one wig was treated to be high-shine, while the other was treated with a dry shampoo to appear low-shine.

A female model wore each of the 20 wigs and was photographed from a three-quarter back view, so that her facial features were not visible. These image pairs were then shown to 1,500 female participants from three countries: the United States, Germany, and Spain. Participants were asked to look at each pair and choose which image showed a woman who appeared younger, healthier, or more attractive.

The results of this first experiment were consistent across all three countries. For nearly all wig styles, the high-shine version was selected as appearing more youthful and more attractive. The preference for high-shine hair was even stronger when participants were asked to judge health, with the shiny version being chosen for all 10 hair types. This suggests that hair shine is a potent signal of health and vitality that is recognizable across different Western cultures.
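
As a rough illustration of how such paired-choice data can be analyzed, the sketch below runs a binomial test asking whether the high-shine wig in a pair is chosen as "healthier" more often than the 50/50 rate expected by chance. The counts and variable names are hypothetical and not the study's actual data or analysis.

```python
from scipy.stats import binomtest

# Hypothetical counts: how often the high-shine wig in one pair was chosen
# as "healthier" by 1,500 raters (illustrative numbers, not study data).
n_raters = 1500
high_shine_chosen = 930

# Two-sided binomial test against the 50/50 chance level expected if
# shine had no effect on the choice.
result = binomtest(high_shine_chosen, n=n_raters, p=0.5)
print(f"Proportion choosing high shine: {high_shine_chosen / n_raters:.2f}")
print(f"p-value vs. chance: {result.pvalue:.2g}")
```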

The second experiment was designed to explore a more complex picture by adding two more hair features: alignment and volume. The researchers prepared wigs in both neutral blonde and dark brown. They created eight different versions for each color by combining high and low levels of shine, alignment, and volume. For example, one wig might have high alignment (very straight), high shine, and low volume.

A model was photographed wearing each of these wigs from three different angles: front, three-quarter front, and three-quarter back. A group of 2,000 women in the United States then rated the resulting images for youth, health, and attractiveness. This design allowed the researchers to determine the relative importance of each hair feature and see if the effects changed with hair color or viewing angle.
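
The second experiment is essentially a factorial design, and a common way to estimate the contribution of each manipulated feature is a regression with main effects for alignment, shine, and volume. The sketch below illustrates the idea with made-up ratings; it is not the authors' statistical model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative data frame: one row per rating, with binary codes for the
# manipulated hair features and the attractiveness rating given.
df = pd.DataFrame({
    "alignment": [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = high (straight), 0 = low
    "shine":     [1, 0, 1, 0, 1, 0, 0, 1],   # 1 = high shine
    "volume":    [0, 1, 0, 1, 1, 0, 1, 0],   # 1 = high volume
    "rating":    [7, 5, 6, 4, 6, 5, 4, 6],   # attractiveness rating
})

# OLS with main effects of each manipulated feature; interactions with
# hair color or viewing angle could be added the same way.
model = smf.ols("rating ~ alignment + shine + volume", data=df).fit()
print(model.params)  # positive coefficients = higher ratings for that level
```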

The findings from this experiment pointed to hair alignment as the most influential factor. Hair that was straight-aligned was consistently perceived as the most youthful, healthy, and attractive, regardless of its color or the angle from which it was viewed. High shine also had a positive effect on ratings, though its impact was not as strong as that of straight alignment.

“Most participants provided their assessments of hair images on mobile devices,” Fink noted. “One would assume that subtle variations in hair condition are evident only when presented and viewed on larger screens. This was not the case. Although the hair manipulations were subtle, especially those of alignment and shine, participants were sensitive and provided systematic responses. This has practical implications, as consumers’ assessment of ‘beautiful hair,’ e.g., through viewing on the Internet, influences their wishes for their own hair.”

In contrast, high volume did not receive such positive assessments. The combination that was rated most favorably across the board was hair with high alignment, high shine, and low volume. The study also detected some minor interactions between the hair features and the viewing angle, but the main effects of alignment and shine were far more significant. These results suggest that the smooth, orderly appearance of straight, shiny hair sends powerful positive signals.

“The key message of the study is that female head hair plays a role in assessments of age, health, and attractiveness,” Fink told PsyPost. “Straight hair and shiny hair are perceived as youthful, healthy, and attractive. This observation was made by systematically manipulating hair features using natural hair wigs, thus carefully controlling hair alignment, shine, and volume. High volume did not have as positive an impact on hair assessments as alignment and shine had. The positive effect of shiny hair on assessments was observed in raters from three countries (USA, Germany, Spain).”

But as with all research, there are limitations to consider. The experiments used only natural Caucasian hair wigs, so the findings may not apply to hair from other ethnic groups, which can have different fiber characteristics. It is also worth noting that the participants in both experiments were women judging images of other women. This means the results capture a female-to-female perspective, and it remains unclear whether these preferences would be shared by male observers.

The researchers also disclosed that several authors are employees of or consultants for The Procter & Gamble Company, a leading manufacturer of hair care products. “I would like to note that this study was conducted in collaboration with partners from Procter & Gamble,” Fink said. “This is important because the systematic manipulations of hair were made by professional stylists. Likewise, the imaging setup required work, resulting in a system dedicated to capturing the subtle hair feature variations, especially those that result from light interacting with hair fibers.”

For future directions, the researchers aim to expand this research to non-Western populations, such as in Asian countries, to see if the preference for shiny, aligned hair is a more universal phenomenon. Examining how hair features interact with other variables, like skin pigmentation, is another avenue for further investigation.

The study, “Perceptions of female age, health and attractiveness vary with systematic hair manipulations,” was authored by Susanne Will, Mandy Beckmann, Kristina Kunstmann, Julia Kerschbaumer, Yu Lum Loh, Samuel Stofel, Paul J. Matts, Todd K. Shackelford, and Bernhard Fink.


Cognitive issues in ADHD and learning difficulties appear to have different roots

2 November 2025 at 17:00

A new study reports that the widespread cognitive difficulties in children with learning problems appear to be a core feature of their condition, independent of their attentional behaviors. In contrast, the more limited cognitive challenges found in children with Attention Deficit Hyperactivity Disorder (ADHD) who do not have learning difficulties may be consequences of their inattention and hyperactivity. The research was published in the Journal of Attention Disorders.

Children with ADHD and those with specific learning difficulties often exhibit overlapping challenges with attention and certain thinking skills. This has led researchers to question the nature of this relationship: Are the difficulties with memory and processing simply a side effect of being inattentive or hyperactive? A team of researchers sought to disentangle these factors to better understand the underlying cognitive profiles of these distinct but frequently co-occurring conditions.

“While there have been previous studies that examined the link between ADHD symptoms and learning or cognitive skills in groups of children with ADHD or learning difficulties, there has been no study that examined how ADHD symptoms influence cognitive skills that are key to learning in these neurodivergent groups,” said study author Yufei Cai, a PhD researcher in the Department of Psychiatry at the University of Cambridge.

“Understanding how ADHD attentional behaviors influence these cognitive skills that are essential for successful learning in these neurodivergent populations can offer suggestions for designing interventions that might improve cognitive or learning functioning in these neurodivergent groups.”

To investigate, the researchers performed a detailed analysis of existing data from the Centre for Attention, Learning, and Memory, a large cohort of children referred for concerns related to attention, memory, or learning. They selected data from 770 children, aged 5 to 18, and organized them into four distinct groups. These groups included children with a diagnosis of ADHD only, children with learning difficulties only, children with both ADHD and learning difficulties, and a comparison group of children with no known neurodevelopmental conditions.

Each child had completed a broad range of standardized tests. These assessments measured fundamental cognitive skills such as verbal and visuospatial short-term memory, working memory (the ability to hold and manipulate information temporarily), processing speed, and sustained attention. Higher-level executive functions, like the ability to flexibly shift between different tasks or rules, were also evaluated. Alongside these direct assessments, parents provided ratings of their child’s daily behaviors related to inattention and hyperactivity or impulsivity.

“Our study is the first to date that has (1) a relatively large neurodivergent sample size, (2) a comprehensive battery of cognitive and learning measures, and (3) the inclusion of a co-occurring condition group of those with both ADHD and learning difficulties to examine the extent to which elevated scores of ADHD symptoms can account for the group differences in cognitive skills that are key to learning between these neurodivergent and comparison groups,” Cai told PsyPost.

“The study aims to characterize the cognitive profiles of these three neurodivergent groups, as well as examine the associations between ADHD symptoms (i.e., inattentive and hyperactive/impulsive behaviors) and cognitive skills that are key to learning in children with ADHD, learning difficulties, and those with both conditions.”

The core of the analysis involved a two-step comparison. First, the researchers compared the performance of the four groups across all the cognitive tests to identify where differences existed. Next, they applied a statistical approach to see what would happen to these differences if they mathematically adjusted for each child’s level of parent-rated inattention and hyperactivity. If a group’s cognitive weakness disappeared after this adjustment, it would suggest the cognitive issue might be a consequence of attentional behaviors. If the weakness remained, it would point to a more fundamental cognitive deficit.
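
The adjustment step described above is, in spirit, an analysis of covariance: compare the groups on a cognitive score with and without the symptom ratings in the model. The sketch below illustrates that logic on simulated data; the column names and simple OLS models are assumptions for demonstration, not the paper's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Illustrative simulated data (not the study's data): group membership,
# a working-memory score, and parent-rated symptom scores.
df = pd.DataFrame({
    "group": rng.choice(["ADHD", "LD", "ADHD+LD", "comparison"], size=n),
    "inattention": rng.normal(60, 10, n),
    "hyperactivity": rng.normal(55, 10, n),
})
df["working_mem"] = 100 - 0.2 * df["inattention"] + rng.normal(0, 5, n)

# Step 1: unadjusted group differences in the cognitive score.
unadjusted = smf.ols("working_mem ~ C(group)", data=df).fit()

# Step 2: the same comparison after adjusting for parent-rated inattention
# and hyperactivity. A group effect that disappears here may be a byproduct
# of attentional behaviors; one that persists looks more like a core deficit.
adjusted = smf.ols("working_mem ~ C(group) + inattention + hyperactivity",
                   data=df).fit()

print(unadjusted.params)
print(adjusted.params)
```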

The results revealed a clear divergence between the groups. Children with learning difficulties, both with and without a co-occurring ADHD diagnosis, displayed a broad pattern of lower performance across many cognitive domains. They showed weaknesses in short-term memory, working memory, processing speed, sustained attention, and the ability to sequence information.

When the researchers statistically accounted for levels of inattention and hyperactivity, these cognitive deficits largely persisted. This outcome suggests that for children with learning difficulties, these cognitive challenges are likely foundational to their condition, not just a byproduct of attentional issues.

The profile for children with ADHD only was quite different. This group performed at age-appropriate levels on many of the cognitive tasks, including verbal short-term memory, working memory, processing speed, and sustained attention. They did show some specific difficulties, particularly in visuospatial short-term memory and the ability to quickly sequence numbers or letters.

However, these particular challenges were no longer apparent after the statistical analysis adjusted for their levels of inattention and hyperactivity. This finding indicates that for these children, their attentional behaviors may directly interfere with performance on certain cognitive tasks.

One specific challenge did appear to be independent of attentional behaviors for the ADHD only group. Their difficulty with set shifting, or mentally switching between different task rules, remained even after accounting for inattention and hyperactivity. This points to a more specific executive function challenge in ADHD that may not be fully explained by its primary behavioral symptoms.

Overall, the findings paint a picture of two different neurodevelopmental pathways. For children with learning difficulties, core cognitive weaknesses appear to drive their academic struggles. For many children with ADHD alone, their primary attentional challenges may be what creates more limited and specific hurdles in their cognitive performance.

“Children with learning difficulties, either with or without ADHD, had lower levels of cognitive skills than children with ADHD without co-occurring learning difficulties and those in the comparison group,” Cai explained. “Elevated levels of inattention and hyperactive/impulsive behaviors did not influence the low cognitive performance observed in these children. Instead, this lower cognitive performance may be more closely associated with their learning ability, which is central to their neurodevelopmental characteristics.”

“However, these attentional behaviors are closely linked to the more limited cognitive challenges observed in children with ADHD without co-occurring learning difficulties. Understanding whether neurodivergent children with ADHD, learning difficulties, or both experience cognitive or learning-related challenges provides a valuable framework for designing targeted intervention and support strategies.”

But as with all research, the study includes some limitations. The group with learning difficulties was identified based on low scores on academic tests rather than formal clinical diagnoses of conditions like dyslexia or dyscalculia, which might not be perfectly equivalent. The study’s design provides a snapshot at a single point in time, so it cannot capture how these relationships might evolve as children develop.

Future research could build on these findings by following children over several years to observe these developmental trajectories directly. Incorporating a wider array of cognitive measures and gathering behavioral information from multiple sources, including teachers, could also help create an even more detailed understanding. Such work could help refine support strategies, ensuring that interventions are targeted to a child’s specific profile of cognitive and behavioral needs.

The study, “Associations Between ADHD Symptom Dimensions and Cognition in Children With ADHD and Learning Difficulties,” was authored by Yufei Cai, Joni Holmes, and Susan E. Gathercole.

Men’s brains shrink faster with age, deepening an Alzheimer’s mystery

2 November 2025 at 15:00

A new large-scale brain imaging study suggests that the normal process of aging does not affect female brains more severely than male brains. In fact, the findings indicate that men tend to experience slightly greater age-related decline in brain structure, a result that challenges the idea that brain aging patterns explain the higher prevalence of Alzheimer’s disease in women. The research was published in the Proceedings of the National Academy of Sciences.

Alzheimer’s disease is a progressive neurodegenerative condition that impairs memory and other essential cognitive functions. It is the most common cause of dementia, and women account for a significant majority of cases worldwide. Because advancing age is the single greatest risk factor for developing Alzheimer’s, researchers have long wondered if sex-based differences in how the brain ages might contribute to this disparity.

Previous studies on this topic have produced mixed results, with some suggesting men’s brains decline faster and others indicating the opposite. To provide a clearer picture, an international team of researchers led by scientists at the University of Oslo set out to investigate this question using an exceptionally large and diverse dataset. They aimed to determine if structural changes in the brain during healthy aging differ between men and women, and if any such differences become more pronounced with age.

“Women are diagnosed with Alzheimer’s disease more often than men, and since aging is the main risk factor, we wanted to test whether men’s and women’s brains change differently with age. If women’s brains declined more, that could have helped explain their higher Alzheimer’s prevalence,” said study author Anne Ravndal, a PhD candidate at the University of Oslo.

To conduct their investigation, the researchers combined data from 14 separate long-term studies, creating a massive dataset of 12,638 magnetic resonance imaging (MRI) scans from 4,726 cognitively healthy participants. The individuals ranged in age from 17 to 95 years old. The longitudinal nature of the data, with each person being scanned at least twice over an average interval of about three years, allowed the team to track brain changes within individuals over time.

Using this information, they measured changes in several key brain structures, including the thickness and surface area of the cortex, which is the brain’s outer layer responsible for higher-level thought.

The analysis began by examining the raw changes in brain structure without any adjustments. In this initial step, the team found that men experienced a steeper decline than women in 17 different brain measures. These included reductions in total brain volume, gray matter, white matter, and the volume of all major brain lobes. Men also showed a faster thinning of the cortex in visual and memory-related areas and a quicker reduction in surface area in other regions.

Recognizing that men typically have larger heads and brains than women, the researchers performed a second, more nuanced analysis that corrected for differences in head size. After this adjustment, the general pattern held, though some specifics changed. Men still showed a greater rate of decline in the occipital lobe volume and in the surface area of the fusiform and postcentral regions of the cortex. In contrast, women only exhibited a faster decline in the surface area of a small region within the temporal lobe.
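
Two quantities underlie comparisons like these: an annualized rate of change computed from repeated scans, and an adjustment for head size when comparing the sexes. The sketch below illustrates both on made-up numbers; the study itself used more elaborate longitudinal models, so treat this only as a conceptual illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def annual_pct_change(vol_t1, vol_t2, years_between):
    """Annualized percent change in a brain volume between two scans."""
    return 100.0 * (vol_t2 - vol_t1) / vol_t1 / years_between

# Illustrative example: a cortical volume shrinking from 520 to 512 cm^3
# over a 3-year scan interval.
print(annual_pct_change(520.0, 512.0, 3.0))  # about -0.51% per year

# Head-size adjustment, sketched as a regression of the change rate on sex
# with estimated intracranial volume (eTIV) as a covariate (simulated data).
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "sex": rng.choice(["male", "female"], size=n),
    "etiv": rng.normal(1500, 120, n),          # cm^3, illustrative values
})
df["change_rate"] = -0.2 + rng.normal(0, 0.1, n)  # % per year, illustrative

model = smf.ols("change_rate ~ C(sex) + etiv", data=df).fit()
print(model.params)  # sex coefficient = head-size-adjusted male-female gap
```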

The findings were in line with the researchers’ expectations: “Although earlier studies have shown mixed findings, especially for cortical regions, our results align with the overall pattern that men show slightly steeper age-related brain decline,” Ravndal told PsyPost. “Still, it was important to demonstrate this clearly in a large longitudinal multi-cohort sample covering the full adult lifespan.”

The study also revealed age-dependent effects, especially in older adults over 60. In this age group, men showed a more rapid decline in several deep brain structures, including the caudate, nucleus accumbens, putamen, and pallidum, which are involved in motor control and reward. Women in this age group, on the other hand, showed a greater rate of ventricular expansion, meaning the fluid-filled cavities within the brain enlarged more quickly.

Notably, after correcting for head size, there were no significant sex differences in the rate of decline of the hippocampus, a brain structure central to memory formation that is heavily affected by Alzheimer’s disease.

The researchers also conducted additional analyses to test the robustness of their findings. When they accounted for the participants’ years of education, some of the regions showing faster decline in men were no longer statistically significant.

Another analysis adjusted for life expectancy. Since women tend to live longer than men, a man of any given age is, on average, closer to the end of his life. After accounting for this “proximity to death,” several of the cortical regions showing faster decline in men became non-significant, while some areas in women, including the hippocampus in older adults, began to show a faster rate of decline. This suggests that differences in longevity and overall biological aging may influence the observed patterns.

“Our findings add support to the idea that normal brain aging doesn’t explain why women are more often diagnosed with Alzheimer’s,” Ravndal said. “The results instead point toward other possible explanations, such as differences in longevity and survival bias, detection and diagnosis patterns, or biological factors like APOE-related vulnerability and differential susceptibility to pathological processes, though these remain speculative.”

The study, like all research, has some caveats to consider. The data were collected from many different sites, which can introduce variability. The follow-up intervals for the brain scans were also relatively short in the context of a human lifespan. A key consideration is that the participants were all cognitively healthy, so these findings on normal brain aging may not apply to the changes that occur in the pre-clinical or early stages of Alzheimer’s disease.

It is also important to note that although the study identified several statistically significant differences in brain aging between the sexes, the researchers characterized the magnitude of these effects as modest. For example, in the pericalcarine cortex, men showed an annual rate of decline of 0.24% compared to 0.14% for women, a difference of just one-tenth of a percentage point per year.

“The sex differences we found were few and small,” Ravndal told PsyPost. “Importantly, we found no evidence of greater decline in women that could help explain their higher Alzheimer’s disease prevalence. Hence, if corroborated in other studies, the practical significance is that women don’t need to think that their brain declines faster, but that other reasons underlie this difference in prevalence.”

Future research could explore factors such as differences in longevity, potential biases in how the disease is detected and diagnosed, or biological variables like the APOE gene, a known genetic risk factor that may affect men and women differently.

“We are now examining whether similar structural brain changes relate differently to memory function in men and women,” Ravndal said. “This could help reveal whether the same degree of brain change has different cognitive implications across sexes.”

The study, “Sex differences in healthy brain aging are unlikely to explain higher Alzheimer’s disease prevalence in women,” was authored by Anne Ravndal, Anders M. Fjell, Didac Vidal-Piñeiro, Øystein Sørensen, Emilie S. Falch, Julia Kropiunig, Pablo F. Garrido, James M. Roe, José-Luis Alatorre-Warren, Markus H. Sneve, David Bartrés-Faz, Alvaro Pascual-Leone, Andreas M. Brandmaier, Sandra Düzel, Simone Kühn, Ulman Lindenberger, Lars Nyberg, Leiv Otto Watne, Richard N. Henson, for the Australian Imaging Biomarkers and Lifestyle flagship study of ageing (AIBL), the Alzheimer’s Disease Neuroimaging Initiative (ADNI), Kristine B. Walhovd, and Håkon Grydeland.

Your politics are just as hot as your profile picture, according to new online dating study

1 November 2025 at 22:00

A new study has found that a person’s political affiliation is a powerful factor in online dating choices, carrying about as much weight as physical attractiveness. At the same time, the research suggests that a willingness to date someone from an opposing party, a signal of political tolerance, is an even more desirable trait. The findings, published in Political Science Research and Methods, provide a nuanced look at how political divisions are shaping our most personal relationships.

The research was conducted by a team from Queen Mary University of London and the London School of Economics and Political Science. They were motivated by the observation that political polarization has begun to influence decisions far outside the voting booth, from hiring to personal friendships.

The researchers questioned whether this bias is purely about party labels, or if those labels act as a shorthand for other assumed characteristics, such as values or lifestyle. By focusing on the complex world of online dating, they sought to disentangle the raw effect of partisanship from the many other factors that guide the search for a partner.

To investigate these questions, the scientists designed a realistic online dating simulation for 3,000 participants in the United Kingdom. Each participant was shown a series of paired dating profiles and asked to choose which person they would prefer to date. The profiles were generated with a mix of randomly assigned traits, creating a wide variety of potential partners. This method, known as a conjoint experiment, allows researchers to precisely measure the independent influence of each characteristic on a person’s choice.

The profiles included key political attributes, such as party affiliation (Labour or Conservative) and political tolerance. The tolerance attribute was presented as a statement in the profile’s bio, either expressing openness (“Open to match with anyone”) or intolerance (“No Tories/Labour!”). Profiles also featured nonpolitical traits common on dating apps, including physical appearance, race, education level, height, and even dietary habits, such as being vegetarian. The use of actual photographs, pre-rated for attractiveness, was intended to make the experience more similar to using a real dating app.
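
In a conjoint experiment of this kind, the marginal effect of each attribute is commonly estimated by regressing the binary choice on dummy-coded profile features, with coefficients read as percentage-point shifts in the probability of being chosen. The sketch below simulates data loosely echoing the reported effect sizes; the variable names and the simple linear probability model are illustrative assumptions, not the authors' exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 1000  # profile evaluations (illustrative)

# Each row is one profile shown to a participant; attributes are randomized.
df = pd.DataFrame({
    "same_party": rng.integers(0, 2, n),   # 1 = shares participant's party
    "attractive": rng.integers(0, 2, n),   # 1 = high-attractiveness photo
    "tolerant":   rng.integers(0, 2, n),   # 1 = "open to match with anyone"
    "degree":     rng.integers(0, 2, n),   # 1 = university degree
})

# Simulated choice probabilities loosely echoing the reported effect sizes.
p = (0.3 + 0.18 * df["same_party"] + 0.20 * df["tolerant"]
         + 0.18 * df["attractive"] + 0.09 * df["degree"])
df["chosen"] = rng.binomial(1, p.clip(0, 1).to_numpy())

# Linear probability model: coefficients approximate average marginal
# component effects, expressed in probability (percentage-point) terms.
model = smf.ols("chosen ~ same_party + attractive + tolerant + degree",
                data=df).fit(cov_type="HC1")
print(model.params)
```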

The results showed that political identity has a substantial effect on dating decisions. On average, a person was 18.2 percentage points more likely to be chosen if they shared the same party affiliation as the participant. This effect was similar in magnitude to the preference for a physically attractive person and was twice as strong as the preference for a potential date with a university degree. This suggests that in the modern dating market, political alignment can be just as important as conventional standards of attraction.

However, the single most influential trait was not party affiliation, but political tolerance. A profile that signaled an openness to dating people from any political background was nearly 20 percentage points more likely to be chosen than a profile expressing intolerance. This preference for open-mindedness was slightly stronger than the preference for a shared party. Participants appeared to value tolerance even when evaluating someone from their own party, indicating a genuine appreciation for the trait rather than just an aversion to being rejected themselves.

The study also uncovered a notable asymmetry in partisan behavior. While supporters of both major parties preferred to date within their own political group, this tendency was much stronger on the left. Labour supporters were approximately twice as likely to choose a fellow Labour supporter compared to the rate at which Conservatives chose other Conservatives. This finding points to different social dynamics within the two partisan groups in the UK.

Another surprising asymmetry emerged when participants encountered profiles that defied political stereotypes. Conservative participants were more likely to select a Labour supporter who broke from the typical mold, for example, by being White or holding “traditional” values.

In contrast, Labour supporters were less likely to choose a Conservative profile that broke stereotypes, such as a Black or vegetarian Conservative. The researchers suggest this could be related to a negative reaction against individuals who violate strong group expectations, making them seem unfamiliar.

The researchers acknowledge certain limitations. The study focused only on Labour and Conservative supporters, which may not capture the full complexity of the UK’s multiparty political system. While the experiment identifies these differing preferences between partisan groups and genders, it does not fully explain the underlying psychological reasons for them. Future research could explore these motivations in greater depth.

Additional work might also examine the role of geography, as dating pool size and composition in urban versus rural areas could alter how people weigh political and nonpolitical traits. The influence of other major political identities, such as a person’s stance on Brexit, could also be a productive area for investigation.

The study’s findings suggest that while partisan divides are real and affect relationship formation, they are not absolute. An expressed sense of tolerance may be one of the most effective ways to bridge these political gaps in the personal sphere.

The study, “‘Sleeping with the enemy’: partisanship and tolerance in online dating,” was authored by Yara Sleiman, Georgios Melios and Paul Dolan.


New study finds CBD worsens cannabis effects in schizophrenia

1 November 2025 at 20:00

A new study has found that, contrary to expectations, pre-treatment with cannabidiol, or CBD, exacerbated the acute memory impairment and psychotic symptoms caused by cannabis in patients with schizophrenia. This research, which offers a more complex picture of how cannabinoids interact in this clinical population, was published in Neuropsychopharmacology.

Researchers have long observed that cannabis use can worsen symptoms and increase the risk of relapse in people diagnosed with schizophrenia. The adverse effects of cannabis are largely attributed to one of its main components, delta-9-tetrahydrocannabinol, or THC. Another major component of the cannabis plant is cannabidiol, or CBD.

While structurally similar to THC, CBD acts quite differently in the body and does not produce an intoxicating “high.” Its exact mechanism of action is still an area of active investigation, but it is thought to interact with the body’s endocannabinoid system in complex ways. One leading theory suggests CBD alters the function of the brain’s primary cannabinoid receptor, known as CB1, changing how it responds to THC and the body’s own cannabinoid molecules.

Because of these properties, CBD has been investigated as a potential treatment for psychosis. Several clinical trials have suggested that high doses of CBD can help reduce psychotic symptoms in people with schizophrenia. It also appears to have a favorable safety profile and is generally well-tolerated by patients, making it a promising candidate for a new therapeutic approach.

The question remained, however, whether CBD could also protect against the acute negative effects of THC. Previous experimental studies in healthy volunteers have produced mixed results. Some found that CBD could lessen THC-induced impairment, while others reported no effect or even an increase in some adverse effects. These discrepancies could be due to variations in dosage, the timing of administration, and whether the substances were inhaled or taken orally.

The new study was designed to clarify this relationship in a clinically relevant population: individuals with schizophrenia who also regularly use cannabis. The researchers hypothesized that a high dose of CBD given before cannabis use would protect against THC-induced memory problems and psychotic symptoms.

“Cannabis addiction is fairly common in people with schizophrenia and is linked to poor outcomes. I always encourage my patients to try and reduce their use, as this should improve their quality of life and risk of relapse, but there’s a large group of people who don’t want to stop,” said study author Edward Chesney, a clinical lecturer at King’s College London.

“Since CBD is being developed as a treatment for schizophrenia, and for cannabis addiction too, we designed this laboratory study to see if CBD could be used to prevent or reduce cannabis-induced psychosis. We therefore recruited people with schizophrenia who use cannabis, randomized them to treatment with a clinical dose of CBD or a placebo, and then gave them a large dose of vaporized cannabis.”

A randomized, double-blind, placebo-controlled, crossover trial is considered a robust method for testing interventions. Thirty participants, all diagnosed with schizophrenia or schizoaffective disorder and a co-occurring cannabis use disorder, completed the main part of the study. Each participant attended two separate experimental sessions.

In one session, they received a 1000 mg oral dose of CBD. In the other session, they received an identical-looking placebo capsule. The order in which they received CBD or placebo was random, and neither the participants nor the researchers knew which treatment was given on which day. Three hours after taking the capsule, to allow the CBD to reach its peak concentration in the body, participants inhaled a controlled dose of vaporized cannabis containing THC.

The researchers measured several outcomes. The primary measure of cognitive function was a test of delayed verbal recall, which assesses the ability to remember a list of words after a short delay. To measure psychotic symptoms, they used a standardized clinical interview called the Positive and Negative Syndrome Scale, focusing on the positive symptoms subscale which includes items like paranoia and disorganized thinking. The team also collected blood samples to measure the concentrations of THC and CBD in the participants’ systems.
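
Because each participant completed both conditions, the crossover design boils down to within-person differences. The sketch below shows how such paired outcomes might be compared, using invented recall scores sized to mirror the reported 1.3-word difference; it is not the trial's actual analysis.

```python
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

# Illustrative paired outcomes for 30 participants: words recalled on the
# delayed verbal recall task after placebo vs. after CBD pre-treatment.
rng = np.random.default_rng(3)
recall_placebo = rng.normal(9.0, 2.0, 30)
recall_cbd = recall_placebo - 1.3 + rng.normal(0, 1.0, 30)  # ~1.3 fewer words

# Paired t-test on the within-person differences (a nonparametric
# alternative such as the Wilcoxon signed-rank test is also common).
t_res = ttest_rel(recall_cbd, recall_placebo)
w_res = wilcoxon(recall_cbd - recall_placebo)
print(f"mean difference: {np.mean(recall_cbd - recall_placebo):.2f} words")
print(f"paired t-test p = {t_res.pvalue:.3g}, Wilcoxon p = {w_res.pvalue:.3g}")
```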

The results of the experiment were the opposite of what the researchers had predicted. When participants were pre-treated with CBD, their performance on the memory test was worse than when they were pre-treated with the placebo. On average, they recalled about 1.3 fewer words after receiving CBD compared to the placebo condition.

Similarly, the psychotic symptoms induced by cannabis were more severe following CBD pre-treatment. The average increase in the psychosis rating scale score was 5.0 points after CBD, compared to an increase of 2.9 points after the placebo. The researchers noted that large increases in these symptoms were observed in seven participants in the CBD condition. Specifically, CBD appeared to heighten cannabis-induced conceptual disorganization and feelings of suspiciousness.

“The effects were very clear and clinically meaningful,” Chesney told PsyPost. “Almost all the large psychotic reactions we observed were in the CBD pre-treatment group. The results were completely unexpected. We thought CBD would reduce the effects of THC, but the opposite happened — CBD actually increased THC’s adverse effects.”

“Interestingly, CBD didn’t change how strong or long the high felt, nor did it affect anxiety levels. I had initially assumed that CBD had increased all the effects of the cannabis, but it seems to have specifically increased the psychotic and cognitive symptoms for reasons we don’t yet understand.”

To understand why this might be happening, the researchers examined the blood samples. They looked for a pharmacokinetic interaction, which would occur if CBD changed the way the body metabolizes THC, perhaps by increasing the levels of THC in the blood. They found no evidence for this. The plasma concentrations of THC and its main active metabolite, 11-hydroxy-THC, were not significantly different between the CBD and placebo conditions. This suggests the effect was likely pharmacodynamic, meaning it relates to how the two substances interact with receptors and systems in the brain, rather than how they are processed by the body.

The findings highlight “that cannabinoids and the endocannabinoid system are very complex,” Chesney said. “We didn’t observe a pharmacokinetic interaction between CBD and THC, so perhaps there’s something more interesting at play – perhaps there’s something different about the brains of people with schizophrenia, or heavy cannabis users, which makes them sensitive to the effects of CBD as well as THC.”

The study has some limitations. The findings apply to a specific population of patients with both schizophrenia and a cannabis use disorder, and the results may not generalize to people with schizophrenia who do not use cannabis regularly. The experiment used a single high dose of CBD, and the effects could be different at other doses. Also, the cannabis dose was fixed by the researchers, which differs from real-world scenarios where users can adjust their intake.

Future research could explore whether these effects are present in people with schizophrenia who do not have a cannabis use disorder, or in people with a cannabis use disorder who do not have schizophrenia. This would help determine if the observed interaction is specific to the combination of these two conditions.

Despite these limitations, the study provides important information about the complex interactions between cannabinoids, particularly in a vulnerable clinical population. The results suggest that for patients with schizophrenia who use cannabis, taking CBD may not be a safe strategy to mitigate the harms of THC and could potentially make them worse.

“I don’t think this makes it less likely that CBD will work as a treatment for schizophrenia,” Chesney added. “It’s just a single study, and we only used a single dose of CBD. With antidepressants, for example, you often see an initial increase in anxiety levels and restlessness before you start to see some benefit. The results of clinical trials of CBD, where patients have received treatment for several weeks, are still very encouraging. I still come across lots of people who think that CBD is just a placebo; the results of my study suggest that it is definitely doing something.”

The study, “Does cannabidiol reduce the adverse effects of cannabis in schizophrenia? A randomised, double-blind, cross-over trial,” was authored by Edward Chesney, Dominic Oliver, Ananya Sarma, Ayşe Doğa Lamper, Ikram Slimani, Millie Lloyd, Alex M. Dickens, Michael Welds, Matilda Kråkström, Irma Gasparini-Andre, Matej Orešič, Will Lawn, Natavan Babayeva, Tom P. Freeman, Amir Englund, John Strang, and Philip McGuire.

Google’s AI co-scientist just solved a biological mystery that took humans a decade

1 November 2025 at 16:00

Can artificial intelligence function as a partner in scientific discovery, capable of generating novel, testable hypotheses that rival those of human experts? Two recent studies highlight how a specialized AI developed by Google not only identified drug candidates that showed significant anti-fibrotic activity in a laboratory model of chronic liver disease but also independently deduced a complex mechanism of bacterial gene transfer that had taken human scientists years to solve.

The process of scientific discovery has traditionally relied on human ingenuity, combining deep expertise with creative insight to formulate new questions and design experiments. However, the sheer volume of published research makes it challenging for any single scientist to connect disparate ideas across different fields. A new wave of artificial intelligence tools aims to address this challenge by augmenting, and accelerating, human-led research.

One such tool is Google’s AI co-scientist, which its developers hope will significantly alter the landscape of biomedical research. Recent studies published in Advanced Science and Cell provide early evidence of this potential, showing the system’s ability to not only sift through vast datasets but also to engage in a reasoning process that can lead to high-impact discoveries.

Google’s AI Co-scientist: A Multi-Agent System for Discovery

Google’s AI co-scientist is a multi-agent system built upon the Gemini 2.0 large language model, designed to mirror the iterative process of the scientific method. It operates not as a single entity, but as a team of specialized AI agents working together. This structure is intended to help scientists generate new research ideas, create detailed proposals, and plan experiments.

The system operates through a “scientist-in-the-loop” model, where human experts can provide initial research goals, offer feedback, and guide the AI’s exploration using natural language. The specialized agents each handle a distinct part of the scientific reasoning process. The Generation Agent acts as a brainstormer, exploring scientific literature and engaging in simulated debates to produce initial ideas. The Reflection Agent serves as a peer reviewer, critically assessing these ideas for quality, novelty, and plausibility.

Other agents contribute to refining the output. The Ranking Agent runs an Elo-based tournament, similar to chess rankings, to prioritize the most promising hypotheses. The Evolution Agent works to improve top-ranked ideas by combining concepts or thinking in unconventional ways. A Meta-review Agent synthesizes all the feedback to improve the performance of the other agents over time. This collaborative, self-improving cycle is designed to produce increasingly novel and high-quality scientific insights.
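
For readers curious about the Elo mechanism mentioned above, the sketch below shows the standard Elo update applied to pairwise comparisons between hypotheses. How the Ranking Agent actually scores its "matches" is not detailed here, so the tournament loop and hypothesis names are purely illustrative.

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A beats B under the standard Elo model."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400.0))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return updated (rating_a, rating_b) after one pairwise comparison."""
    exp_a = expected_score(rating_a, rating_b)
    score_a = 1.0 if a_won else 0.0
    new_a = rating_a + k * (score_a - exp_a)
    new_b = rating_b + k * ((1.0 - score_a) - (1.0 - exp_a))
    return new_a, new_b

# Toy tournament over three hypotheses, all starting at 1200.
ratings = {"hypothesis_A": 1200.0, "hypothesis_B": 1200.0, "hypothesis_C": 1200.0}
for winner, loser in [("hypothesis_A", "hypothesis_B"),
                      ("hypothesis_A", "hypothesis_C"),
                      ("hypothesis_C", "hypothesis_B")]:
    ratings[winner], ratings[loser] = elo_update(ratings[winner], ratings[loser], True)

print(sorted(ratings.items(), key=lambda kv: -kv[1]))  # highest-ranked first
```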

AI Pinpoints New Drug Candidates for Liver Fibrosis

In the study published in Advanced Science, researchers partnered with Google to explore new ways of treating liver fibrosis, a progressive condition marked by excessive scarring in the liver. Current treatment options are extremely limited, in part because existing models for studying the disease do not accurately replicate how fibrosis develops in the human liver. These limitations have hindered drug development for years.

To address this gap, the research team asked the AI co-scientist to generate new, testable hypotheses for treating liver fibrosis. Specifically, they tasked the AI with exploring how epigenomic mechanisms—chemical changes that influence gene activity without altering the DNA sequence—might be targeted to reduce or reverse fibrosis.

“For the data used in the paper, we provided a single prompt and received a response from AI co-scientist, which are shown in supplemental data file 1,” explained Gary Peltz, a professor at Stanford University School of Medicine. “The prompt was carefully prepared, providing the area (epigenomic effects in liver fibrosis) and experimental methods (use of our hepatic organoids) to focus on. However, in most cases, it is important to iteratively engage with an AI in order to better define the question and enable it to provide a more complete answer.”

The AI system scanned the scientific literature and proposed that three classes of epigenomic regulators could be promising targets for anti-fibrotic therapy: histone deacetylases (HDACs), DNA methyltransferase 1 (DNMT1), and bromodomain protein 4 (BRD4). It also outlined experimental techniques for testing these ideas, such as single-cell RNA sequencing to track how the drugs might affect different cell populations. The researchers incorporated these suggestions into their experimental design.

To test the AI’s proposals, the team used a laboratory system based on human hepatic organoids—three-dimensional cell cultures derived from stem cells that resemble key features of the human liver. These mini-organs contain a mix of liver cell types and can model fibrosis when exposed to fibrotic triggers like TGF-beta, a molecule known to promote scarring. The organoid system allowed researchers to assess not just whether a drug could reduce fibrosis, but also whether it would be toxic or promote regeneration of liver tissue.

The findings provided evidence that two of the drug classes proposed by AI (HDAC inhibitors and BRD4 inhibitors) showed strong anti-fibrotic effects. One of the tested compounds, Vorinostat, is an FDA-approved cancer drug. In the organoid model, it not only suppressed fibrosis but also appeared to stimulate the growth of healthy liver cells.

“Since I was working on the text for a grant submission in this area, I was surprised by the AI co-scientist output,” Peltz told PsyPost.

In particular, Peltz was struck by how little prior research had explored this potential. After checking PubMed, he found over 180,000 papers on liver fibrosis in general, but only seven that mentioned Vorinostat in this context. Of those, four turned out to be unrelated to fibrosis, and another only referenced the drug in a data table without actually testing it. That left just two studies directly investigating Vorinostat for liver fibrosis.

While the HDAC and BRD4 inhibitors showed promising effects, the third AI-recommended class, DNMT1 inhibitors, did not. One compound in this category was too toxic to the organoids to be considered viable for further study.

To evaluate the AI’s performance, Peltz also selected two additional drug targets for comparison based on existing literature. These were chosen precisely because they had more published support suggesting they might work against fibrosis.

But when tested in the same organoid system, the inhibitors targeting those well-supported pathways did not reduce fibrosis. This outcome suggested that the AI was able to surface potentially effective treatments that human researchers might have missed, despite extensive literature reviews.

Looking ahead, Peltz said his team is “developing additional data with our liver organoid system to determine if Vorinostat can be effective for reducing an established fibrosis, and we are talking with some organizations and drug companies about the potential for Vorinostat being tested as an anti-fibrotic agent.”

An AI Recapitulates a Decade-Long Discovery in Days

In a separate demonstration of its reasoning power, the AI co-scientist was challenged to solve a biological mystery that had taken a team at Imperial College London over a decade to unravel. The research, published in Cell, focused on a peculiar family of mobile genetic elements in bacteria known as capsid-forming phage-inducible chromosomal islands, or cf-PICIs.

Scientists were puzzled by how identical cf-PICIs were found in many different species of bacteria. This was unexpected because these elements rely on viruses called phages to spread, and phages typically have a very narrow host range, often infecting only a single species or strain. The human research team had already solved the puzzle through years of complex experiments, but their findings were not yet public.

They had discovered a novel mechanism they termed “tail piracy,” where cf-PICIs produce their own DNA-filled “heads” (capsids) but lack tails. These tailless particles are then released and can hijack tails from a wide variety of other phages infecting different bacterial species, creating chimeric infectious particles that can inject the cf-PICI’s genetic material into a new host.

To test the AI co-scientist, the researchers provided it only with publicly available information from before their discovery was made and posed the same question: how do identical cf-PICIs spread across different bacterial species?

The AI co-scientist generated five ranked hypotheses. Its top-ranked suggestion was that cf-PICIs achieve their broad host range through “capsid-tail interactions,” proposing that the cf-PICI heads could interact with a wide range of phage tails. This hypothesis almost perfectly mirrored the “tail piracy” mechanism the human team had spent years discovering.

The AI, unburdened by the researchers’ initial assumptions and biases from existing scientific models, arrived at the core of the discovery in a matter of days. When the researchers benchmarked this result, they found that other leading AI models were not able to produce the same correct hypothesis, suggesting a more advanced reasoning capability in the AI co-scientist system.

Limitations and the Path Forward

Despite these promising results, researchers involved in the work caution that significant limitations remain. The performance of the AI co-scientist has so far been evaluated on a small number of specific biological problems. More testing is needed to determine if this capability can be generalized across other scientific domains. The AI’s reasoning is also dependent on the quality and completeness of the publicly available data it analyzes, which may contain its own biases or gaps in knowledge.

Perhaps most importantly, human expertise remains essential. While an AI can generate a large volume of plausible hypotheses, it lacks the deep contextual judgment that comes from years of hands-on experience. An experienced scientist is still needed to evaluate which ideas are truly worth pursuing and to design the precise experiments required for validation. The challenge of how to prioritize AI-generated ideas is substantial, as traditional experimental pipelines are not fast or inexpensive enough to test every promising lead.

“Generally, AI output must be evaluated by people with knowledge in the area; and AI output is most valuable to those with domain-specific expertise because they are best positioned to assess it and to make use of it,” Peltz told PsyPost.

Nevertheless, these two studies provide evidence that AI systems are evolving from helpful assistants into true collaborative partners in the scientific process. By generating novel and experimentally verifiable hypotheses, tools like the AI co-scientist have the potential to supercharge human intuition and accelerate the pace of scientific and biomedical breakthroughs.

“I believe that AI will dramatically accelerate the pace of discovery for many biomedical areas and will soon be used to improve patient care,” Peltz said. “My lab is currently using it for genetic discovery and for drug re-purposing, but there are many other areas of bioscience that will soon be impacted. At present, I believe that AI co-scientist is the best in this area, but this is a rapidly advancing field.”

The study, “AI-Assisted Drug Re-Purposing for Human Liver Fibrosis,” was authored by Yuan Guan, Lu Cui, Jakkapong Inchai, Zhuoqing Fang, Jacky Law, Alberto Alonzo Garcia Brito, Annalisa Pawlosky, Juraj Gottweis, Alexander Daryin, Artiom Myaskovsky, Lakshmi Ramakrishnan, Anil Palepu, Kavita Kulkarni, Wei-Hung Weng, Zhuanfen Cheng, Vivek Natarajan, Alan Karthikesalingam, Keran Rong, Yunhan Xu, Tao Tu, and Gary Peltz.

The study, “Chimeric infective particles expand species boundaries in phage-inducible chromosomal island mobilization,” was authored by Lingchen He, Jonasz B. Patkowski, Jinlong Wang, Laura Miguel-Romero, Christopher H.S. Aylett, Alfred Fillol-Salom, Tiago R.D. Costa, and José R. Penadés.

The study, “AI mirrors experimental science to uncover a mechanism of gene transfer crucial to bacterial evolution,” was authored by José R. Penadés, Juraj Gottweis, Lingchen He, Jonasz B. Patkowski, Alexander Daryin, Wei-Hung Weng, Tao Tu, Anil Palepu, Artiom Myaskovsky, Annalisa Pawlosky, Vivek Natarajan, Alan Karthikesalingam, and Tiago R.D. Costa.

In neuroscience breakthrough, scientists identify key component of how exercise triggers neurogenesis

1 November 2025 at 14:00

A recent study suggests that some of exercise’s brain-enhancing benefits can be transferred through tiny particles found in the blood. Researchers discovered that injecting these particles, called extracellular vesicles, from exercising mice into sedentary mice promoted the growth of new neurons in the hippocampus, a brain region important for learning and memory. The findings were published in the journal Brain Research.

Aerobic exercise can enhance cognitive functions, in part by stimulating the birth of new neurons in the hippocampus. This process, known as adult neurogenesis, is linked to improved learning, memory, and mood regulation. Understanding the specific mechanisms connecting physical activity to brain health could lead to new therapies for age-related cognitive decline and other neurological conditions.

However, the exact biological conversation between active muscles and the distant brain has remained largely a mystery. A leading hypothesis suggests that exercise releases specific factors into the bloodstream that travel to the brain and initiate these changes. Previous work has shown that blood plasma from exercising animals can be transferred to sedentary ones, resulting in cognitive benefits. This observation has narrowed the search for the responsible agents.

Among these potential messengers are extracellular vesicles, which are minuscule sacs released by cells that carry a diverse cargo of proteins, lipids, and genetic material. During exercise, tissues like muscle and liver release these vesicles into circulation at an increased rate. Because these particles are capable of crossing the protective blood-brain barrier, researchers proposed they might be a key vehicle for delivering exercise’s benefits directly to the brain.

“The hippocampus is critical for learning and memory and atrophy of the hippocampus is associated with common mental health problems like depression, anxiety, PTSD, epilepsy, Alzheimer’s disease and normal aging. So figuring out ways of increasing the integrity of the hippocampus is an avenue to pursue for addressing these problems,” explained study author Justin Rhodes, a professor at the University of Illinois, Urbana-Champaign.

“It is known that exercise increases the formation of new neurons in the hippocampus, and is a natural way to combat all the aforementioned mental health problems. It is further known that there are factors released into the blood that contribute to adult hippocampal neurogenesis. But most likely a mixture of factors rather than one magic chemical. The idea that extracellular vesicles containing lots of different kinds of molecules could communicate with complex chemical signatures from the blood to the brain and contribute to neurogenesis was not known until our study.”

To examine this, the researchers used two groups of mice. One group had unlimited access to running wheels for four weeks, while the other group was housed in cages with locked wheels, serving as a sedentary control. As expected, the running mice showed a significant increase in new brain cells in their own hippocampi, confirming the effectiveness of the exercise regimen.

After four weeks, the team collected blood from both the exercising and sedentary donor mice. From this blood, they isolated the extracellular vesicles using a filtration method that separates particles by size. This process yielded two distinct batches of vesicles: one from the exercising mice and one from the sedentary mice.

The team then administered these vesicles to a new set of sedentary recipient mice over a four-week period. These recipients were divided into three groups. One group received injections of vesicles from the exercising donors, a second group received vesicles from the sedentary donors, and a third group received a simple phosphate-buffered saline solution as a placebo.

To track the creation of new cells in the recipients’ brains, the mice were given injections of a labeling compound known as BrdU. This substance is incorporated into the DNA of dividing cells, effectively tagging them so they can be identified and counted later under a microscope. To ensure the reliability of their results, the entire experiment was conducted twice with two independent groups of mice.

The researchers found that mice that received vesicles from the exercising donors exhibited an approximately 50 percent increase in the number of new, BrdU-labeled cells in the hippocampus compared to mice that received vesicles from sedentary donors or the placebo solution. The findings were consistent across both independent cohorts, strengthening the conclusion that something within the exercise-derived vesicles was promoting cell proliferation.
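
A simple way to compare cell counts across the three treatment groups while respecting the two independent cohorts is a two-factor linear model with cohort as a blocking factor. The sketch below uses simulated counts sized to mimic a roughly 50 percent increase; it is illustrative only, not the study's analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)

# Illustrative simulated BrdU+ cell counts per mouse (not the study's data):
# the exercise-vesicle group is modeled with ~50% more labeled cells.
groups, cohorts, counts = [], [], []
for cohort in ["cohort_1", "cohort_2"]:
    for group, mean in [("exercise_EV", 1500), ("sedentary_EV", 1000), ("saline", 1000)]:
        for _ in range(8):  # mice per group
            groups.append(group)
            cohorts.append(cohort)
            counts.append(rng.normal(mean, 150))

df = pd.DataFrame({"group": groups, "cohort": cohorts, "brdu_cells": counts})

# Two-factor model: treatment group plus cohort as a blocking factor.
model = smf.ols("brdu_cells ~ C(group) + C(cohort)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))
```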

“I was surprised that the vesicles were sufficient to increase neurogenesis,” Rhodes told PsyPost. “That is because when you exercise, there is not only the contribution of blood factors, but things going on in the brain, like large amounts of neuronal activity in the hippocampus, that I thought would be necessary for neurogenesis to occur. The results suggest apparently not: the vesicles alone, without the other physiological components of actual exercise, are sufficient to increase neurogenesis to a degree, not the full degree, but to a degree.”

The researchers also examined the identity of these new cells to determine what they were becoming. Using fluorescent markers, they identified that, across all groups, about 89 percent of the new cells developed into neurons. A smaller fraction, about 6 percent, became a type of support cell called an astrocyte. This indicates that the vesicles from exercising mice increased the quantity of new neurons being formed, rather than changing what type of cells they became.

Finally, the team assessed whether the treatment affected the density of blood vessels in the hippocampus, as exercise is also known to promote changes in brain vasculature. By staining for a protein found in blood vessel walls, they measured the total vascular area. They found no significant differences in vascular coverage among the three groups, suggesting that the neurogenesis-promoting effect of the vesicles may be independent of vascular remodeling.

“One of the reasons exercise improves mental health is that it stimulates new neurons to form in an area of your brain that is important for learning and memory and for inhibiting stress, and now we know a big piece of the puzzle as to how exercise does this,” Rhodes said. “Exercise causes tissues like muscle and liver to secrete vesicles (sacs that contain lots of different kinds of chemicals) that reach the brain and stimulate neurogenesis.”

“Those vesicles can be taken from an animal that exercises and placed into an animal that is not exercising, and it can increase neurogenesis, not to the full level that exercise does, but significantly increase it. That strongly suggests the vesicles themselves are carrying critical information. One can imagine a therapy in the future where either vesicles are harvested from the blood of exercising humans and introduced into individuals, or synthetic vesicles are made that carry the unique mixture of chemicals that are identified in the exercise vesicles.”

While the findings point to extracellular vesicles as key players in exercise-induced brain plasticity, the study also highlights several areas for future inquiry. A primary question is whether the vesicles directly act on the brain or if their effects are mediated by peripheral organs. It is not yet known what fraction of the injected vesicles crossed the blood-brain barrier. The vesicles could potentially trigger signals in other parts of the body that then influence the brain.

The specific molecular cargo within the vesicles responsible for the neurogenic effect also needs to be identified. A companion study by the same research group found that vesicles from exercising mice were enriched with proteins related to brain plasticity, antioxidant defense, and cellular signaling. Future work will be needed to pinpoint which of these molecules, or combination of molecules, is responsible for the observed increase in new neurons.

“I think there are a lot of ways this could go,” Rhodes told PsyPost. “First, it is a pretty big black box between injecting the animals with vesicles and neurogenesis happening in the hippocampus. How many of the extracellular vesicles make it to the brain? Are they acting in the brain or in the periphery, i.e., maybe via peripheral nerves, mesenteric nervous system, immune activation or other ways we didn’t think of yet.”

“If they are reaching the brain, how do they merge with brain cells? Do they reach astrocytes first? How do the vesicles get taken up by the brain cells? Is it through phagocytosis? What do the chemical signals do to the brain cells that causes increased neurogenesis? Do they act directly on neural progenitor cells, or astrocytes or mature neurons? What are the signaling mechanisms involved in the communication from the extracellular vesicles to the neurons/astrocytes/progenitor cells that causes neurogenesis to occur?”

The study, “Exercise-induced plasma-derived extracellular vesicles increase adult hippocampal neurogenesis,” was authored by Meghan G. Connolly, Alexander M. Fliflet, Prithika Ravi, Dan I. Rosu, Marni D. Boppart, and Justin S. Rhodes.

Scientists question caffeine’s power to shield the brain from junk food

1 November 2025 at 02:00

A recent study provides evidence that while a diet high in fat and sugar is associated with memory impairment, habitual caffeine consumption is unlikely to offer protection against these negative effects. These findings, which come from two related experiments, help clarify the complex interplay between diet, stimulants, and cognitive health in humans. The research was published in Physiology & Behavior.

Researchers have become increasingly interested in the connection between nutrition and brain function. A growing body of scientific work, primarily from animal studies, has shown that diets rich in fat and sugar can impair memory, particularly functions related to the hippocampus, a brain region vital for learning and recall.

Human studies have started to align with these findings, linking high-fat, high-sugar consumption with poorer performance on memory tasks and with more self-reported memory failures. Given these associations, scientists are searching for potential protective factors that might lessen the cognitive impact of a poor diet.

Caffeine is one of the most widely consumed psychoactive substances in the world, and its effects on cognition have been studied extensively. While caffeine is known to improve alertness and reaction time, its impact on memory has been less clear. Some research in animal models has suggested that caffeine could have neuroprotective properties, potentially guarding against the memory deficits induced by a high-fat, high-sugar diet. These animal studies hinted that caffeine might work by reducing inflammation or through other brain-protective mechanisms. However, this potential protective effect had not been thoroughly investigated in human populations, a gap this new research aimed to address.

To explore this relationship, the researchers conducted two experiments. In the first experiment, they recruited 1,000 healthy volunteers between the ages of 18 and 45. Participants completed a series of online questionnaires designed to assess their dietary habits, memory, and caffeine intake. Their consumption of fat and sugar was measured using the Dietary Fat and Free Sugar questionnaire, which asks about the frequency of eating various foods over the past year.

To gauge memory, participants filled out the Everyday Memory Questionnaire, a self-report measure where they rated how often they experience common memory lapses, such as forgetting names or misplacing items. Finally, they reported their daily caffeine consumption from various sources like coffee, tea, and soda.

The results from this first experiment confirmed a link between diet and self-perceived memory. Individuals who reported eating a diet higher in fat and sugar also reported experiencing more frequent everyday memory failures. The researchers then analyzed whether caffeine consumption altered this relationship. The analysis suggested a potential, though not statistically strong, moderating effect.

When the researchers specifically isolated the fat component of the diet, they found that caffeine consumption did appear to weaken the association between high fat intake and self-reported memory problems. At low levels of caffeine intake, a high-fat diet was strongly linked to memory complaints, but this link was not present for those with high caffeine intake. This provided preliminary evidence that caffeine might offer some benefit.

The second experiment was designed to build upon the initial findings with a more robust assessment of memory. This study involved 699 healthy volunteers, again aged 18 to 45, who completed the same questionnaires on diet, memory failures, and caffeine use. The key addition in this experiment was an objective measure of memory called the Verbal Paired Associates task. In this task, participants were shown pairs of words and were later asked to recall the second word of a pair when shown the first. This test provides a direct measure of episodic memory, which is the ability to recall specific events and experiences.

The findings from the second experiment once again showed a clear association between diet and memory. A higher intake of fat and sugar was linked to more self-reported memory failures, replicating the results of the first experiment. The diet was also associated with poorer performance on the objective Verbal Paired Associates task, providing stronger evidence that a high-fat, high-sugar diet is connected to actual memory impairment, not just the perception of it.

When the researchers examined the role of caffeine in this second experiment, the results were different from the first. This time, caffeine consumption did not moderate the relationship between a high-fat, high-sugar diet and either of the memory measures. In other words, individuals who consumed high amounts of caffeine were just as likely to show diet-related memory deficits as those who consumed little or no caffeine.

This lack of a protective effect was consistent for both self-reported memory failures and performance on the objective word-pair task. The findings from this more comprehensive experiment did not support the initial suggestion that caffeine could shield memory from the effects of a poor diet.

The researchers acknowledge certain limitations in their study. The data on diet and caffeine consumption were based on self-reports, which can be subject to recall errors. The participants were also relatively young and generally healthy, and the effects of diet on memory might be more pronounced in older populations or those with pre-existing health conditions. Since the study was conducted online, it was not possible to control for participants’ caffeine intake right before they completed the memory tasks, which could have influenced performance.

For future research, the scientists suggest using more objective methods to track dietary intake. They also recommend studying different populations, such as older adults or individuals with obesity, where the links between diet, caffeine, and memory may be clearer. Including a wider array of cognitive tests could also help determine if caffeine has protective effects on other brain functions beyond episodic memory, such as attention or executive function. Despite the lack of a protective effect found here, the study adds to our understanding of how lifestyle factors interact to influence cognitive health.

The study, “Does habitual caffeine consumption moderate the association between a high fat and sugar diet and self-reported and episodic memory impairment in humans?,” was authored by Tatum Sevenoaks and Martin Yeomans.

Vulnerability to stress magnifies how a racing mind disrupts sleep

31 October 2025 at 20:00

A new study provides evidence that a person’s innate vulnerability to stress-induced sleep problems can intensify how much a racing mind disrupts their sleep over time. While daily stress affects everyone’s sleep to some degree, this trait appears to make some people more susceptible to fragmented sleep. The findings were published in the Journal of Sleep Research.

Scientists have long understood that stress can be detrimental to sleep. One of the primary ways this occurs is through pre-sleep arousal, a state of heightened mental or physical activity just before bedtime. Researchers have also identified a trait known as sleep reactivity, which describes how susceptible a person’s sleep is to disruption from stress. Some individuals have high sleep reactivity, meaning their sleep is easily disturbed by stressors, while others have low reactivity and can sleep soundly even under pressure.

Despite knowing these factors are related, the precise way they interact on a daily basis was not well understood. Most previous studies relied on infrequent, retrospective reports or focused on major life events rather than common, everyday stressors. The research team behind this new study sought to get a more detailed picture. They aimed to understand how sleep reactivity might alter the connection between daily stress, pre-sleep arousal, and objectively measured sleep patterns in a natural setting.

“Sleep reactivity refers to an individual’s tendency to experience heightened sleep disturbances when faced with stress. Those with high sleep reactivity tend to show increased pre-sleep arousal during stressful periods and are at greater risk of developing both acute and chronic insomnia,” explained study authors Ju Lynn Ong and Stijn Massar, who are both research assistant professors at the National University of Singapore Yong Loo Lin School of Medicine.

“However, most prior research on stress, sleep, and sleep reactivity has relied on single, retrospective assessments, which may fail to capture the immediate and dynamic effects of daily stressors on sleep. Another limitation is that previous studies often examined either the cognitive or physiological components of pre-sleep arousal in isolation. Although these two forms of arousal are related, they may differ in their predictive value and underlying mechanisms, highlighting the importance of evaluating both concurrently.”

“To address these gaps, the current study investigated how day-to-day fluctuations in stress relate to sleep among university students over a two-week period and whether pre-sleep cognitive and physiological arousal mediate this relationship—particularly in individuals with high sleep reactivity.”

The research team began by recruiting a large group of full-time university students. They had the students complete a questionnaire called the Ford Insomnia Response to Stress Test, which is designed to measure an individual’s sleep reactivity. From this initial pool, the researchers selected two distinct groups for a more intensive two-week study: 30 students with the lowest scores, indicating low sleep reactivity, and 30 students with the highest scores, representing high sleep reactivity.

Over the following 14 days, these 60 participants were monitored using several methods. They wore an actigraphy watch on their wrist, which uses motion sensors to provide objective data on sleep patterns. This device measured their total sleep time, the amount of time it took them to fall asleep, and the time they spent awake after initially drifting off. Participants also wore an ŌURA ring, which recorded their pre-sleep heart rate as an objective indicator of physiological arousal.

Alongside these objective measures, participants completed daily surveys on their personal devices. Each evening before going to bed, they rated their perceived level of stress. Upon waking the next morning, they reported on their pre-sleep arousal from the previous night. These reports distinguished between cognitive arousal, such as having racing thoughts or worries, and somatic arousal, which includes physical symptoms like a pounding heart or muscle tension.

The first part of the analysis examined within-individual changes, which looks at how a person’s sleep on a high-stress day compared to their own personal average. The results showed that on days when participants felt more stressed than usual, they also experienced a greater degree of pre-sleep cognitive arousal. This increase in racing thoughts was, in turn, associated with getting less total sleep and taking longer to fall asleep that night. This pattern was observed in both the high and low sleep reactivity groups.

This finding suggests that experiencing a more stressful day than usual is likely to disrupt anyone’s sleep to some extent, regardless of their underlying reactivity. It appears to be a common human response for stress to activate the mind at bedtime, making sleep more difficult. The trait of sleep reactivity did not seem to alter this immediate, day-to-day effect.

“We were surprised to find that at the daily level, all participants did in fact exhibit a link between higher perceived stress and poorer sleep the following night, regardless of their level of sleep reactivity,” Ong and Massar told PsyPost. “This pattern may reflect sleep disturbances as a natural—and potentially adaptive—response to stress.”

The researchers then turned to between-individual differences, comparing the overall patterns of people in the high-reactivity group to those in the low-reactivity group across the entire two-week period. In this analysis, a key distinction became clear. Sleep reactivity did in fact play a moderating role, amplifying the negative effects of stress and arousal.

Individuals with high sleep reactivity showed a much stronger connection between their average stress levels, their average pre-sleep cognitive arousal, and their sleep quality. For these highly reactive individuals, having higher average levels of cognitive arousal was specifically linked to spending more time awake after initially falling asleep. In other words, their predisposition to stress-related sleep disturbance made their racing thoughts more disruptive to maintaining sleep throughout the night.

The researchers also tested whether physiological arousal played a similar role in connecting stress to poor sleep. They examined both the participants’ self-reports of physical tension and their objectively measured pre-sleep heart rate. Neither of these measures of physiological arousal appeared to be a significant middleman in the relationship between stress and sleep, for either group. The link between stress and sleep disruption in this study seemed to operate primarily through mental, not physical, arousal.

“On a day-to-day level, both groups exhibited heightened pre-sleep cognitive arousal and greater sleep disturbances in response to elevated daily stress,” the researchers explained. “However, when considering the study period as a whole, individuals with high sleep reactivity consistently reported higher average levels of stress and pre-sleep cognitive arousal, which in turn contributed to more severe sleep disruptions compared to low-reactive sleepers. Notably, these stress → pre-sleep arousal → sleep associations emerged only for cognitive arousal, not for somatic arousal—whether assessed through self-reports or objectively measured via pre-sleep heart rate.”

The researchers acknowledged some limitations of their work. The study sample consisted of young university students who were predominantly female and of Chinese descent, so the results may not be generalizable to other demographic groups or age ranges. Additionally, the study excluded individuals with diagnosed sleep disorders, meaning the findings might differ in a clinical population. The timing of the arousal survey, completed in the morning, also means it was a retrospective report that could have been influenced by the night’s sleep. It is also important to consider the practical size of these effects.

While statistically significant, the changes were modest: a day with stress levels 10 points higher than usual was linked to about 2.5 minutes less sleep, and the amplified effect in high-reactivity individuals amounted to about 1.2 additional minutes of wakefulness during the night for every 10-point increase in average stress.

Future research could build on these findings by exploring the same dynamics in more diverse populations. The study also highlights pre-sleep cognitive arousal as a potential target for intervention, especially for those with high sleep reactivity. Investigating whether therapies like cognitive-behavioral therapy for insomnia can reduce this mental activation could offer a path to preventing temporary, stress-induced sleep problems from developing into chronic conditions.

The study, “Sleep Reactivity Amplifies the Impact of Pre-Sleep Cognitive Arousal on Sleep Disturbances,” was authored by Noof Abdullah Saad Shaif, Julian Lim, Anthony N. Reffi, Michael W. L. Chee, Stijn A. A. Massar, and Ju Lynn Ong.

Public Montessori preschool yields improved reading and cognition at a lower cost

31 October 2025 at 16:00

The debate over the most effective models for early childhood education is a longstanding one. While the benefits of preschool are widely accepted, researchers have observed that the academic advantages gained in many programs tend to diminish by the time children finish kindergarten, a phenomenon often called “fade-out.” Some studies have even pointed to potential negative long-term outcomes from certain public preschool programs, intensifying the search for approaches that provide lasting benefits.

This situation prompted researchers to rigorously examine the Montessori method, a well-established educational model that has been in practice for over a century. Their new large-scale study found that children offered a spot in a public Montessori preschool showed better outcomes in reading, memory, executive function, and social understanding by the end of kindergarten.

The research also revealed that this educational model costs public school districts substantially less over three years compared to traditional programs. The findings were published in the Proceedings of the National Academy of Sciences.

The Montessori method is an educational approach developed over a century ago by Maria Montessori. Its classrooms typically feature a mix of ages, such as three- to six-year-olds, learning together. The environment is structured around child-led discovery, where students choose their own activities from a curated set of specialized, hands-on materials. The teacher acts more as a guide for individual and small-group lessons rather than a lecturer to the entire class.

The Montessori model, which has been implemented in thousands of schools globally, had not previously been evaluated in a rigorous, national randomized controlled trial. This study was designed to provide high-quality evidence on its impact in a public school setting.

“There have been a few small randomized controlled trials of public Montessori outcomes, but they were limited to 1-2 schools, leaving open the question of whether the more positive results were due to something about those schools aside from the Montessori programming,” said study author Angeline Lillard, the Commonwealth Professor of Psychology at the University of Virginia.

“This national study gets around that by using 24 different schools, which each had 3-16 Montessori Primary (3-6) classrooms. In addition, the two prior randomized controlled trials that had trained Montessori teachers (making them more valid) compromised the randomized controlled trial in certain ways, including not using intention-to-treat designs that are preferred by some.”

To conduct the research, the research team took advantage of the admissions lotteries at 24 oversubscribed public Montessori schools across the United States. When a school has more applicants than available seats, a random lottery gives each applicant an equal chance of admission. This process creates a natural experiment, allowing for a direct comparison between the children who were offered a spot (the treatment group) and those who were not (the control group). Nearly 600 children and their families consented to participate.

The children were tracked from the start of preschool at age three through the end of their kindergarten year. Researchers administered a range of assessments at the beginning of the study and again each spring to measure academic skills, memory, and social-emotional development. The primary analysis was a conservative type called an “intention-to-treat” analysis, which measures the effect of simply being offered a spot in a Montessori program, regardless of whether the child actually attended or for how long.

The results showed no significant differences between the two groups after the first or second year of preschool. But by the end of kindergarten, a distinct pattern of advantages had emerged for the children who had been offered a Montessori spot. This group demonstrated significantly higher scores on a standardized test of early reading skills. They also performed better on a test of executive function, which involves skills like planning, self-control, and following rules.

The Montessori group also showed stronger short-term memory, as measured by their ability to recall a sequence of numbers. Their social understanding, or “theory of mind,” was also more advanced, suggesting a greater capacity to comprehend others’ thoughts, feelings, and beliefs. The estimated effects for these outcomes were considered medium to large for this type of educational research.

The study found no significant group differences in vocabulary or a math assessment, although the results for math trended in a positive direction for the Montessori group.

In a secondary analysis, the researchers estimated the effects only for the children who complied with their lottery assignment, meaning those who won and attended Montessori compared to those who lost and did not. As expected, the positive effects on reading, executive function, memory, and social understanding were even larger in this analysis.

“For example, a child who scored at the 50th percentile in reading in a traditional school would have been at the 62nd percentile had they won the lottery to attend Montessori; had they won and attended Montessori, they would have scored at the 71st percentile,” Lillard told PsyPost.

Alongside the child assessments, the researchers performed a detailed cost analysis. They followed a method known as the “ingredients approach,” which accounts for all the resources required to run a program. This included teacher salaries and training, classroom materials, and facility space for both Montessori and traditional public preschool classrooms. One-time costs, such as the specialized Montessori materials and extensive teacher training, were amortized over their expected 25-year lifespan.

The analysis produced a surprising finding. Over the three-year period from ages three to six, public Montessori programs were estimated to cost districts $13,127 less per child than traditional programs. The main source of this cost savings was the higher child-to-teacher ratio in Montessori classrooms for three- and four-year-olds. This is an intentional feature of the Montessori model, designed to encourage peer learning and independence. These savings more than offset the higher upfront costs for teacher training and materials.
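The logic behind that estimate is mostly simple arithmetic: spread each one-time cost over its useful life, divide staffing costs by the number of children per teacher, and sum the per-child pieces. The sketch below (in Python, using entirely hypothetical dollar amounts and ratios rather than the study’s actual inputs) illustrates how a program with higher upfront costs but a larger child-to-teacher ratio can still come out cheaper per child.

```python
# Minimal sketch of an "ingredients approach" cost comparison.
# All dollar amounts and ratios below are hypothetical placeholders,
# not the figures used in the study.

def per_child_annual_cost(teacher_salary, children_per_teacher,
                          one_time_costs, amortization_years,
                          other_costs_per_child):
    """Estimate the annual cost per child for one classroom model."""
    staffing = teacher_salary / children_per_teacher
    amortized = (one_time_costs / amortization_years) / children_per_teacher
    return staffing + amortized + other_costs_per_child

# Hypothetical traditional preschool classroom: lower child-to-teacher ratio,
# no specialized materials or training to amortize.
traditional = per_child_annual_cost(
    teacher_salary=60000, children_per_teacher=10,
    one_time_costs=0, amortization_years=25,
    other_costs_per_child=2000)

# Hypothetical Montessori classroom: higher ratio, with costly training and
# materials spread over a 25-year lifespan.
montessori = per_child_annual_cost(
    teacher_salary=60000, children_per_teacher=14,
    one_time_costs=40000, amortization_years=25,
    other_costs_per_child=2000)

print(f"Traditional: ${traditional:,.0f} per child per year")
print(f"Montessori:  ${montessori:,.0f} per child per year")
```

With these placeholder figures, the savings from the higher ratio outweigh the amortized training and materials costs, which mirrors the direction, though not the magnitude, of the reported result.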

“I thought Montessori would cost the same, once one amortized the cost of teacher training and materials,” Lillard said. “Instead, we calculated that (due to intentionally higher ratios at 3 and 4, which predicted higher classroom quality in Montessori) Montessori cost less.”

“Even when including a large, diverse array of schools, public Montessori had better outcomes. These findings were robust to many different approaches to the data. And the cost analysis showed these outcomes were obtained at significantly lower cost than was spent on traditional PK3 through kindergarten programs in public schools.”

But as with all research, there are limitations. The research included only families who applied to a Montessori school lottery, so the findings might not be generalizable to the broader population. The consent rate to participate in the study was relatively low, at about 21 percent of families who were contacted. Families who won a lottery spot were also more likely to consent than those who lost, which could potentially introduce bias into the results.

“Montessori is not a trademarked term, so anyone can call anything Montessori,” Lillard noted. “We required that most teachers be trained by the two organizations with the most rigorous training — AMI or the Association Montessori Internationale, which Dr. Maria Montessori founded to carry on her work, and AMS or the American Montessori Society, which has less rigorous teacher-trainer preparation and is shorter, but is still commendable. Our results might not extend to all schools that call themselves Montessori. In addition, we had low buy-in as we recruited for this study in summer 2021 when COVID-19 was still deeply concerning. We do not know if the results apply to families that did not consent to participation.”

The findings are also limited to the end of kindergarten. Whether the observed advantages for the Montessori group persist, grow, or fade in later elementary grades is a question for future research. The study authors expressed a strong interest in following these children to assess the long-term impacts of their early educational experiences.

“My collaborators at the American Institutes for Research and the University of Pennsylvania and University of Virginia are deeply appreciative of the schools, teachers, and families who participated, and to our funders, the Institute for Educational Sciences, Arnold Ventures, and the Brady Education Foundation,” Lillard added.

The study, “A national randomized controlled trial of the impact of public Montessori preschool at the end of kindergarten,” was authored by Angeline S. Lillard, David Loeb, Juliette Berg, Maya Escueta, Karen Manship, Alison Hauser, and Emily D. Daggett.

Familial link between ADHD and crime risk is partly genetic, study suggests

31 October 2025 at 14:00

A new study has found that individuals with ADHD have a higher risk of being convicted of a crime, and reveals this connection also extends to their family members. The research suggests that shared genetics are a meaningful part of the explanation for this link. Published in Biological Psychology, the findings show that the risk of a criminal conviction increases with the degree of genetic relatedness to a relative with ADHD.

The connection between ADHD and an increased likelihood of criminal activity is well-documented. Past research indicates that individuals with ADHD are two to three times more likely to be arrested or convicted of a crime. Scientists have also established that both ADHD and criminality have substantial genetic influences, with heritability estimates around 70-80% for ADHD and approximately 50% for criminal behavior. This overlap led researchers to hypothesize that shared genetic factors might partly explain the association between the two.

While some previous studies hinted at a familial connection, they were often limited to specific types of crime or a small number of relative types. The current research aimed to provide a more complete picture. The investigators sought to understand how the risk for criminal convictions co-aggregates, or clusters, within families across a wide spectrum of relationships, from identical twins to cousins. They also wanted to examine potential differences in these patterns between men and women.

“ADHD is linked to higher rates of crime, but it’s unclear why. We studied families to see whether shared genetic or environmental factors explain this connection, aiming to better understand how early support could reduce risk,” said study author Sofi Oskarsson, a researcher and senior lecturer in criminology at Örebro University.

To conduct the investigation, researchers utilized Sweden’s comprehensive national population registers. They analyzed data from a cohort of over 1.5 million individuals born in Sweden between 1987 and 2002. ADHD cases were identified through clinical diagnoses or prescriptions for ADHD medication recorded in national health registers. Information on criminal convictions for any crime, violent crime, or non-violent crime was obtained from the National Crime Register, with the analysis beginning from an individual’s 15th birthday, the age of criminal responsibility in Sweden.

The study design allowed researchers to estimate the risk of a criminal conviction for an individual based on whether a relative had ADHD. By comparing these risks across different types of relatives who share varying amounts of genetic material—identical twins (100%), fraternal twins and full siblings (average 50%), half-siblings (average 25%), and cousins (average 12.5%)—the team could infer the potential role of shared genes and environments.

The results first confirmed that individuals with an ADHD diagnosis had a substantially higher risk of being convicted of a crime compared to those without ADHD. The risk was particularly elevated for violent crimes.

The analysis also revealed a significant gender difference: while men with ADHD had higher absolute numbers of convictions, women with ADHD had a greater relative increase in risk compared to women without the disorder. For violent crime, the risk was over eight times higher for women with ADHD, while it was about five times higher for men with ADHD.

“Perhaps not a surprise given what we know today about ADHD, but the stronger associations found among women were very interesting and important,” Oskarsson told PsyPost. “ADHD is not diagnosed as often in females (or is mischaracterized), so the higher relative risk in women suggests that when ADHD is present, it may reflect a more severe or concentrated set of risk factors.”

The central finding of the study was the clear pattern of familial co-aggregation. Having a relative with ADHD was associated with an increased personal risk for a criminal conviction. This risk followed a gradient based on genetic relatedness.

The highest risk was observed in individuals whose identical twin had ADHD, followed by fraternal twins and full siblings. The risk was progressively lower for half-siblings and cousins. This pattern, where the association weakens as genetic similarity decreases, points toward the influence of shared genetic factors.

“Close relatives of people with ADHD were much more likely to have criminal convictions, especially twins, supporting a genetic contribution,” Oskarsson explained. “But the link is not deterministic, most individuals with ADHD or affected relatives are not convicted, emphasizing shared risk, not inevitability.”

The study also found that the stronger relative risk for women was not limited to individuals with ADHD. A similar pattern appeared in some familial relationships, specifically among full siblings and full cousins, where the association between a relative’s ADHD and a woman’s conviction risk was stronger than for men. This suggests that the biological and environmental mechanisms connecting ADHD and crime may operate differently depending on sex.

“People with ADHD are at a higher risk of criminality, but this risk also extends to their relatives,” Oskarsson said. “This pattern suggests that some of the link between ADHD and crime stems from shared genetic and/or environmental factors. Importantly, this does not mean that ADHD causes crime, but that the two share underlying vulnerabilities. Recognizing and addressing ADHD early, especially in families, could reduce downstream risks and improve outcomes.”

As with any study, the researchers acknowledge some limitations. The study’s reliance on official medical records may primarily capture more severe cases of ADHD, and conviction data does not account for all criminal behavior. Because the data comes from Sweden, a country with universal healthcare, the findings may not be directly generalizable to countries with different social or legal systems. The authors also note that the large number of statistical comparisons means the overall consistency of the patterns is more important than any single result.

Future research could explore these associations in different cultural and national contexts to see if the patterns hold. Further investigation is also needed to identify the specific genetic and environmental pathways that contribute to the shared risk between ADHD and criminal convictions. These findings could help inform risk assessment and prevention efforts, but the authors caution that such knowledge must be applied carefully to avoid stigmatization.

“I want to know more about why ADHD and criminality are connected, which symptoms or circumstances matter most, and whether early support for individuals and families can help break that link,” Oskarsson added. “This study underscores the importance of viewing ADHD within a broader family and societal context. Early support for ADHD doesn’t just help the individual, it can have ripple effects that extend across families and communities.”

The study, “The Familial Co-Aggregation of ADHD and Criminal Convictions: A Register-Based Cohort Study,” was authored by Sofi Oskarsson, Ralf Kuja-Halkola, Anneli Andersson, Catherine Tuvblad, Isabell Brikell, Brian D’Onofrio, Zheng Chang, and Henrik Larsson.

New study shows that a robot’s feedback can shape human relationships

31 October 2025 at 00:00

A new study has found that a robot’s feedback during a collaborative task can influence the feeling of closeness between the human participants. The research, published in Computers in Human Behavior, indicates that this effect changes depending on the robot’s appearance and how it communicates.

As robots become more integrated into workplaces and homes, they are often designed to assist with decision-making. While much research has focused on how robots affect the quality of a group’s decisions, less is known about how a robot’s presence might alter the personal relationships between the humans on the team. The researchers sought to understand this dynamic by exploring how a robot’s agreement or disagreement impacts the sense of interpersonal connection people feel.

“Given the rise of large language models in recent years, we believe robots of different forms will soon be equipped with non-scripted verbal language to help people make decisions in various contexts. We conducted our research to call for careful consideration and control over the precise behaviors robots should use to provide feedback in the future,” said study author Ting-Han Lin, a computer science PhD student at the University of Chicago.

The investigation centered on two established psychological ideas. One, known as Balance Theory, suggests that people feel more positive toward one another when they are treated similarly by a third party, even if that treatment is negative. The other concept, the Influence of Negative Affect, proposes that a negative tone or criticism can damage the general atmosphere of an interaction and harm relationships.

To test these ideas, the researchers conducted two separate experiments, each involving pairs of participants who did not know each other. In both experiments, the pairs worked together to answer a series of eight personal questions, such as “What is the most important factor contributing to a life well-lived?” For each question, participants first gave their own individual answers before discussing and agreeing on a joint response.

A robot was present to mediate the task. After each person gave their initial answer, the robot would provide feedback. This feedback varied in two ways. First was its positivity, meaning the robot would either agree or disagree with the person’s statement. Second was its treatment of the pair, meaning the robot would either treat both people equally (agreeing with both or disagreeing with both) or unequally (agreeing with one and disagreeing with the other).

The first experiment involved 172 participants interacting with a highly human-like robot named NAO. This robot could speak, use gestures like nodding or shaking its head, and employed artificial intelligence to summarize a person’s response before giving its feedback. Its verbal disagreements were designed to grow in intensity, beginning with mild phrases and ending with statements like, “I am fundamentally opposed with your viewpoint.”

The results from this experiment showed that the positivity of the robot’s feedback had a strong effect on the participants’ relationship. When the NAO robot gave positive feedback, the two human participants reported feeling closer to each other. When the robot consistently gave negative feedback, the participants felt more distant from one another.

“A robot’s feedback to two people in a decision-making task can shape their closeness,” Lin told PsyPost.

This outcome supports the theory regarding the influence of negative affect. The robot’s consistent negativity seemed to create a less pleasant social environment, which in turn reduced the feeling of connection between the two people. The robot’s treatment of the pair, whether equal or unequal, did not appear to be the primary factor shaping their closeness in this context. Participants also rated the human-like robot as warmer and more competent when it was positive, though they found it more discomforting when it treated them unequally.

The second experiment involved 150 participants and a robot with a very low degree of human-like features. This robot resembled a simple, articulated lamp and could not speak. It communicated its feedback exclusively through minimal gestures, such as nodding for agreement or shaking its head from side to side for disagreement.

With this less-human robot, the findings were quite different. The main factor influencing interpersonal closeness was the robot’s treatment of the pair. When the robot treated both participants equally, they reported feeling closer to each other, regardless of whether the feedback was positive or negative. Unequal treatment, where the robot agreed with one person and disagreed with the other, led to a greater sense of distance between them.

This result aligns well with Balance Theory. The shared experience of being treated the same by the robot, either through mutual agreement or mutual disagreement, seemed to create a bond. The researchers also noted a surprising finding. When the lamp-like robot disagreed with both participants, they felt even closer than when it agreed with both, suggesting that the robot became a “common enemy” that united them.

“Heider’s Balance Theory dominates when a low anthropomorphism robot is present,” Lin said.

The researchers propose that the different outcomes are likely due to the intensity of the feedback delivered by each robot. The human-like NAO robot’s use of personalized speech and strong verbal disagreement was potent enough to create a negative atmosphere that overshadowed other social dynamics. Its criticism was taken more seriously, and its negativity was powerful enough to harm the human-human connection.

“The influence of negative affect prevails when a high anthropomorphism robot exists,” Lin said.

In contrast, the simple, non-verbal gestures of the lamp-like robot were not as intense. Because its disagreement was less personal and less powerful, it did not poison the overall interaction. This allowed the more subtle effects of balanced versus imbalanced treatment to become the main influence on the participants’ relationship. Interviews with participants supported this idea, as people interacting with the machine-like robot often noted that they did not take its opinions as seriously.

Across both experiments, the robot’s feedback did not significantly alter how the final joint decisions were made. Participants tended to incorporate each other’s ideas fairly evenly, regardless of the robot’s expressed opinion. This suggests the robot’s influence was more on the social and emotional level than on the practical outcome of the decision-making task.

The study has some limitations, including the fact that the two experiments were conducted in different countries with different participant populations. The first experiment used a diverse group of museum visitors in the United States, while the second involved university students in Israel. Future research could explore these dynamics in more varied contexts.

The study, “The impact of a robot’s agreement (or disagreement) on human-human interpersonal closeness in a two-person decision-making task,” was authored by Ting-Han Lin, Yuval Rubin Kopelman, Madeline Busse, Sarah Sebo, and Hadas Erel.

New research explores the biopsychology of common sexual behaviors

30 October 2025 at 22:00

Recent research provides new insight into the functions of common sexual behaviors, revealing how they contribute not just to physical pleasure but also to emotional bonding. A trio of studies, two published in the WebLog Journal of Reproductive Medicine and one in the International Journal of Clinical Research and Reports, examines the physiological and psychological dimensions of why men hold their partners’ legs and stimulate their breasts, what men gain from these acts, and how women experience them.

Researchers pursued these lines of inquiry because many frequently practiced sexual behaviors remain scientifically underexplored. While practices like a man holding a woman’s legs or performing oral breast stimulation are common, the specific reasons for their prevalence and their effects on both partners were not fully understood from an integrated perspective. The scientific motivation was to create a more comprehensive picture that combines biology, psychology, and social factors to explain what happens during these intimate moments.

“Human sexual behavior is often discussed socially, but many aspects of it lack meaningful scientific exploration,” said study author Rehan Haider of the University of Karachi. “We noticed a gap connecting physiological responses, evolutionary psychology, and relationship intimacy to why certain tactile behaviors are preferred during intercourse. Our goal was to examine these mechanisms in a respectful, evidence-based manner rather than rely on anecdote or cultural assumptions.”

The first study took a broad, mixed-methods approach to understand why men often hold women’s legs and engage in breast stimulation during intercourse. The researchers combined a review of existing literature with observational studies and self-reported surveys from adult heterosexual couples aged 18 to 50. This allowed them to assemble a model that connected male behaviors with female responses and relational outcomes.

The research team reported that 68 percent of couples practiced leg holding during intercourse. This position was found to facilitate deeper vaginal penetration and improve the alignment of the bodies, which in turn enhanced stimulation of sensitive areas like the clitoris and G-spot. Women in the study correlated this act with higher levels of sexual satisfaction.

The research also affirmed the significance of breast stimulation, noting that manual stimulation occurred in 60 percent of encounters and oral stimulation in 54 percent. This contact activates sensory pathways in the nipple-areolar complex, promoting the release of the hormones oxytocin and prolactin, which are associated with increased sexual arousal and emotional bonding. From a psychological standpoint, these behaviors appeared to reinforce feelings of intimacy, trust, and connection between partners.

“We were surprised by the consistency of emotional feedback among participants, particularly how strongly feelings of closeness and security were linked to these behaviors,” Haider told PsyPost. “It suggests an underestimated psychological component beyond pure physical stimulation.”

“The core message is that sexual touch preferences are not random—many are supported by biological reward pathways, emotional bonding hormones, and evolutionary reproductive strategies. Leg-holding and breast stimulation, for example, can enhance feelings of safety, intimacy, and arousal for both partners. Healthy communication and consent around such behaviors strengthen relational satisfaction.”

A second, complementary study focused specifically on the male experience of performing oral stimulation on a partner’s nipples. The goal was to understand the pleasure and psychological satisfaction men themselves derive from this act. To do this, researchers conducted a cross-sectional survey, recruiting 500 heterosexual men between the ages of 18 and 55. Participants completed a structured and anonymous questionnaire designed to measure the frequency of the behavior, their self-rated level of arousal from it, and its association with feelings of intimacy and overall sexual satisfaction.

The analysis of this survey data revealed a strong positive association between the frequency of performing nipple stimulation and a man’s own sense of sexual fulfillment and relational closeness. The results indicated that men do not engage in this behavior solely for their partner’s benefit. They reported finding the act to be both highly erotic and emotionally gratifying. The researchers propose that the behavior serves a dual function for men, simultaneously enhancing their personal arousal while reinforcing the psychological bond with their partner, likely through mechanisms linked to the hormone oxytocin, which plays a role in social affiliation and trust.

The third study shifted the focus to the female perspective, examining women’s physical and psychological responses to breast and nipple stimulation during penetrative intercourse. This investigation used a clinical and observational design, collecting data from 120 sexually active women aged 21 to 50. The methodology involved structured interviews, clinical feedback from counseling sessions, and the use of validated questionnaires, including the well-established Female Sexual Function Index (FSFI), a self-report tool used to assess key dimensions of female sexual function.

This research confirmed that stimulation of the breasts and nipples consistently contributed to a more positive sexual experience for women. Women with higher reported nipple sensitivity showed significantly better scores across the FSFI domains of arousal, orgasm, and satisfaction. Physically, this type of stimulation was associated with enhanced vaginal lubrication and clitoral responsiveness during intercourse.

Psychologically, the researchers found a connection between a woman’s perception of her breasts and her emotional experience. Women who described their breasts as “zones of intimacy” or “trust-enhancing touchpoints” reported a greater sense of emotional connection and reduced anxiety during sex. However, the study also identified that 23 percent of participants experienced discomfort during breast stimulation.

“This research does not imply that these behaviors are necessary or universally preferred,” Haider noted. “It’s also not about objectification. Rather, it focuses on how touch patterns can reinforce mutual trust, pleasure, and bonding when consensual and respectful. Not everyone will experience the same responses, and preferences vary widely. The study highlights trends—not prescriptions—and should be interpreted as an invitation for communication rather than a standard everyone must follow.”

While these studies offer a more detailed understanding of sexual behavior, the researchers acknowledge certain limitations. All three studies relied heavily on self-reported data, which can be influenced by memory recall and social desirability biases. The research was also primarily cross-sectional, capturing a snapshot in time, which can identify associations but cannot establish cause-and-effect relationships. For instance, it is unclear if frequent breast stimulation leads to higher intimacy or if more intimate couples simply engage in the behavior more often.

For future research, scientists suggest incorporating longitudinal designs that follow couples over an extended period to better understand the development of these behavioral patterns and their long-term effects on relationship satisfaction. There is also a need for more cross-cultural comparisons, as sexual scripts and preferences can vary significantly across different societies.

“Future work will explore female perspectives more deeply, neuroendocrine changes during different types of touch, and how cultural factors shape sexual comfort and preference,” Haider said. “We’d like to compare findings across age groups and relationship durations as well. Sexual well-being is an important aspect of overall health, but it is rarely discussed scientifically. By approaching these topics with sensitivity and rigor, we hope to normalize evidence-based conversation and encourage couples to communicate openly.”

The studies, “Physiological Basis of Male Preference for Holding Women’s Legs and Breast Stimulation during Intercourse,” “Nipple Sucking and Male Sexual Response: Perceived Pleasure and Psychological Satisfaction,” and “Women’s Physical and Psychological Responses during Penetrative Sexual Intercourse: The Role of Breast and Nipple Sensitivity” were authored by Rehan Haider, Geetha Kumari Das, and Zameer Ahmed.

Scientists are discovering more and more about the spooky psychology behind our love of horror

30 October 2025 at 16:00

The human fascination with fear is a long-standing puzzle. From ghost stories told around a campfire to the latest blockbuster horror film, many people actively seek out experiences designed to frighten them. This seemingly contradictory impulse, where negative feelings like terror and anxiety produce a sense of enjoyment and thrill, has intrigued psychologists for decades. Researchers are now using a variety of tools, from brain scans to personality surveys, to understand this complex relationship.

Their work is revealing how our brains process fear, what personality traits draw us to the dark side of entertainment, and even how these experiences might offer surprising psychological benefits. Here is a look at twelve recent studies that explore the multifaceted psychology of horror, fear, and the paranormal.


Your Brain on Horror: A New Theory Suggests We’re Training for Uncertainty

A new theory proposes that horror films appeal to us because they provide a safe, controlled setting for our brains to practice managing uncertainty. This idea is based on a framework known as predictive processing, which suggests the brain operates like a prediction engine. It constantly makes forecasts about what will happen next, and when reality doesn’t match its predictions, it generates a “prediction error” that it works to resolve.

This process doesn’t mean we only seek out calm, predictable situations. Instead, our brains are wired to find ideal opportunities for learning, which often exist at the edge of our understanding. We are drawn toward a “Goldilocks zone” of manageable uncertainty that is neither too simple nor too chaotic. The rewarding feeling comes not just from being correct, but from the rate at which we reduce our uncertainty.

Horror films appear to be engineered to place us directly in this zone. They manipulate our predictive minds with a mix of the familiar and the unexpected. Suspenseful music and classic horror tropes build our anticipation, while jump scares suddenly violate our predictions. By engaging with this controlled chaos, we get to experience and resolve prediction errors in a low-stakes environment, which the brain can find inherently gratifying.

A Good Scare: Enjoying Horror May Be an Evolved Trait for Threat Simulation

Research from an evolutionary perspective suggests that our enjoyment of horror serves a practical purpose: it prepares us for real-world dangers. This “threat-simulation hypothesis” posits that engaging with scary media is an adaptive trait, allowing us to explore threatening scenarios and rehearse our responses from a position of safety. Through horror, we can learn about predators, hostile social encounters, and other dangers without facing any actual risk.

A survey of over 1,100 adults found that a majority of people consume horror media and more than half enjoy it. The study revealed that people who enjoy horror expect to experience a range of positive emotions like joy and surprise alongside fear. This supports the idea that the negative emotion of fear is balanced by positive feelings, a phenomenon some call “benign masochism.”

The findings also showed that sensation-seeking was a strong predictor of horror enjoyment, as was a personality trait related to intellect and imagination. It seems those who seek imaginative stimulation are particularly drawn to horror. By providing a vast space for emotional and cognitive play, frightening entertainment allows us to build and display mastery over situations that would be terrifying in real life.

The Thrill of the Kill: Fear and Realism Drive Horror Enjoyment

To better understand what makes a horror movie entertaining, researchers surveyed nearly 600 people about their reactions to short scenes from various horror subgenres. The study found that three key factors predicted both excitement and enjoyment: the intensity of fear the viewer felt, their curiosity about morbid topics, and how realistic they perceived the scenes to be.

The experience of fear itself was powerfully linked to both excitement and enjoyment, showing that the thrill of being scared is a central part of the appeal. Morbid curiosity also played a significant role, indicating that people with a natural interest in dark subjects are more likely to find horror entertaining. The perceived realism of a scene heightened the experience as well.

However, not all negative emotions contributed to the fun. Scenes that provoked high levels of disgust tended to decrease enjoyment, even if they were still exciting. This finding suggests that while fear can be a source of pleasure for horror fans, disgust often introduces an element that makes the experience less enjoyable overall.

Scary Fun: Nearly All Children Enjoy Playful Fear

Fear is not just for adults. A large-scale survey of 1,600 Danish parents has revealed that “recreational fear,” or the experience of activities that are both scary and fun, is a nearly universal part of childhood. An overwhelming 93% of children between the ages of 1 and 17 were reported to enjoy at least one type of scary yet fun activity, and 70% engaged in such an activity at least weekly.

The study identified clear developmental trends in how children experience recreational fear. Younger children often find it in physical and imaginative play, such as being playfully chased or engaging in rough-and-tumble games. As they grow into adolescence, their interest shifts toward media-based experiences like scary movies, video games, and frightening online content. One constant across all ages was the enjoyment of activities involving high speeds, heights, or depths, like swings and amusement park rides.

These experiences are predominantly social. Young children typically engage with parents or siblings, while adolescents turn to friends. This social context may provide a sense of security that allows children to explore fear safely. The researchers propose that this type of play is beneficial, helping children learn to regulate their emotions, test their limits, and build psychological resilience.

Decoding Your Watchlist: Film Preferences May Reflect Personality

A study involving 300 college students suggests that your favorite movie genre might offer clues about your personality. Using the well-established Big Five personality model, researchers found consistent links between film preferences and traits like extraversion, conscientiousness, and neuroticism.

Fans of horror films tended to score higher in extraversion, agreeableness, and conscientiousness, suggesting they may be outgoing, cooperative, and organized. They also scored lower in neuroticism and openness, which could indicate they are less emotionally reactive and less drawn to abstract ideas. In contrast, those who favored drama scored higher in conscientiousness and neuroticism, while adventure film fans were more extraverted and spontaneous.

While these findings point to a relationship between personality and media choice, the study has limitations. The sample was limited to a specific age group and cultural background, so the results may not apply to everyone. The research also cannot determine whether personality shapes film choice or if the films we watch might influence our personality over time.

Dark Beats: Morbid Curiosity Linked to Enjoyment of Violent Music

Morbid curiosity, a trait defined by an interest in dangerous phenomena, may help explain why some people are drawn to music with violent themes, like death metal or certain subgenres of rap. A recent study found that people with higher levels of morbid curiosity were more likely to listen to and enjoy music with violent lyrics.

In an initial survey, researchers found that fans of music with violent themes scored higher on a scale of morbid curiosity than fans of other genres. A second experiment involved having participants listen to musical excerpts. The results showed that morbid curiosity predicted enjoyment of extreme metal with violent lyrics, but not rap music with violent lyrics, suggesting different factors may be at play for different genres.

The study authors propose that morbid curiosity is not a deviant trait, but an adaptive one that helps people learn about threatening aspects of life in a safe, simulated context. Music with violent themes can act as one of these simulations, allowing listeners to explore dangerous ideas and the emotions they evoke without any real-world consequences.

Pandemic Practice: Horror Fans Showed More Resilience During COVID-19

People who enjoy horror movies may have been better equipped to handle the psychological stress of the COVID-19 pandemic. A study conducted in April 2020 surveyed 322 U.S. adults about their genre preferences, morbid curiosity, and psychological state during the early days of the pandemic.

The researchers found that fans of horror movies reported less psychological distress than non-fans. They were less likely to agree with statements about feeling more depressed or having trouble sleeping since the pandemic began. Fans of “prepper” genres, such as zombie and apocalyptic films, also reported less distress and said they felt more prepared for the pandemic.

The study’s authors speculate that horror fans may have developed better emotion-regulation skills by repeatedly exposing themselves to frightening fiction in a controlled way. This “practice” with fear in a safe setting could have translated into greater resilience when faced with a real-world crisis.

A Frightening Prescription? Scary Fun May Briefly Shift Brain Activity in Depression

Engaging with frightening entertainment might temporarily alter brain network patterns associated with depression. A study found that in individuals with mild-to-moderate depression, a controlled scary experience was linked to a brief reduction in the over-connectivity between two key brain networks: the default mode network (active during self-focused thought) and the salience network (which detects important events).

This over-connectivity is thought to contribute to rumination, a cycle of negative thoughts common in depression. By demanding a person’s full attention, the scary experience appeared to pull focus away from this internal loop and onto the external threat. The greater this reduction in connectivity, the more enjoyment participants reported.

The study also found that individuals with moderate depression needed a more intense scare to reach their peak enjoyment compared to those with minimal symptoms. While the observed brain changes were temporary, the findings raise questions about the interplay between fear, pleasure, and emotion regulation.

Believe What You Watch: Some Horror Might Bolster Paranormal Beliefs

A recent study has found a connection between the type of horror media people watch and their beliefs in the paranormal. After surveying over 600 Belgian adults, researchers discovered that consumption of horror content claiming to be based on “true events” or presented as reality was associated with stronger paranormal beliefs.

Specifically, people who frequently watched paranormal reality TV shows and horror films marketed as being based on a true story were more likely to endorse beliefs in things like ghosts, spiritualism, and psychic powers. Other fictional horror genres, such as monster movies or psychological thrillers, did not show a similar connection.

This finding aligns with media effect theories suggesting that when content is perceived as more realistic or credible, it can have a stronger impact on a viewer’s attitudes. However, the study’s design means it is also possible that people who already believe in the paranormal are simply more drawn to this type of content.

Brainwaves of Believers: Paranormal Beliefs Linked to Distinct Neural Patterns

Individuals who strongly believe in paranormal phenomena may exhibit different brain activity and cognitive patterns compared to skeptics. A study using electroencephalography (EEG) to record the brain’s electrical activity found that paranormal believers had reduced power in certain brainwave frequencies, specifically in the alpha, beta, and gamma bands, particularly in the frontal, parietal, and occipital regions of the brain.

Participants also completed a cognitive task designed to measure inhibitory control, which is the ability to suppress impulsive actions. Paranormal believers made more errors on this task than skeptics, suggesting reduced inhibitory control. They also reported experiencing more everyday cognitive failures, such as memory slips and attention lapses.

The researchers found that activity in one specific frequency band, beta2 in the frontal lobe, appeared to mediate the relationship between paranormal beliefs and inhibitory control. This suggests that differences in brain function, particularly in regions involved in high-level cognitive processes, may be connected to a person’s conviction in the paranormal.

A Sixth Sense? Unusual Experiences Tied to a Trait Called Subconscious Connectedness

Unusual events like premonitions, vivid dreams, and out-of-body sensations are surprisingly common, and people who report them often share certain psychological traits. A series of three studies involving over 2,200 adults found a strong link between anomalous experiences and a trait called “subconscious connectedness,” which describes the degree to which a person’s conscious and subconscious minds influence each other.

People who scored high in subconscious connectedness reported having anomalous experiences far more frequently than those with low scores. In one national survey, 86% of participants said they had at least one type of anomalous experience more than once. The most commonly reported was déjà vu, followed by correctly sensing they were being stared at and having premonitions that came true.

These experiences were also associated with other traits, including absorption, dissociation, vivid imagination, and a tendency to trust intuition. While people who reported more anomalous experiences also tended to report more stress and anxiety, these associations were modest, suggesting such experiences are a normal part of human psychology for many.

Someone There? How Our Brains Create a ‘Feeling of Presence’ in the Dark

The eerie sensation that someone is nearby when you are alone may be a product of your brain trying to make sense of uncertainty. A study found that this “feeling of presence” is more likely to occur when people are in darkness with their senses dulled. Under these conditions, the brain may rely more on internal cues and expectations, sometimes generating the impression of an unseen agent.

In an experiment, university students sat alone in a darkened room for 30 minutes while wearing a sleeping mask and earplugs. The results showed that participants who reported higher levels of internal uncertainty were more likely to feel that another person was with them. This suggests that when sensory information is limited, the brain may interpret ambiguous bodily sensations or anxious feelings as evidence of an outside presence.

This cognitive process might be an evolutionary holdover. From a survival standpoint, it is safer to mistakenly assume a predator is hiding in the dark than to ignore a real one. This bias toward detecting agents could help explain why ghostly encounters and beliefs in invisible beings are so common across human cultures, especially in situations of isolation and vulnerability.
