A recent study provides evidence that while a diet high in fat and sugar is associated with memory impairment, habitual caffeine consumption is unlikely to offer protection against these negative effects. These findings, which come from two related experiments, help clarify the complex interplay between diet, stimulants, and cognitive health in humans. The findings were published in Physiology & Behavior.
Researchers have become increasingly interested in the connection between nutrition and brain function. A growing body of scientific work, primarily from animal studies, has shown that diets rich in fat and sugar can impair memory, particularly functions related to the hippocampus, a brain region vital for learning and recall.
Human studies have started to align with these findings, linking high-fat, high-sugar consumption with poorer performance on memory tasks and with more self-reported memory failures. Given these associations, scientists are searching for potential protective factors that might lessen the cognitive impact of a poor diet.
Caffeine is one of the most widely consumed psychoactive substances in the world, and its effects on cognition have been studied extensively. While caffeine is known to improve alertness and reaction time, its impact on memory has been less clear. Some research in animal models has suggested that caffeine could have neuroprotective properties, potentially guarding against the memory deficits induced by a high-fat, high-sugar diet. These animal studies hinted that caffeine might work by reducing inflammation or through other brain-protective mechanisms. However, this potential protective effect had not been thoroughly investigated in human populations, a gap this new research aimed to address.
To explore this relationship, the researchers conducted two experiments. In the first experiment, they recruited 1,000 healthy volunteers between the ages of 18 and 45. Participants completed a series of online questionnaires designed to assess their dietary habits, memory, and caffeine intake. Their consumption of fat and sugar was measured using the Dietary Fat and Free Sugar Questionnaire, which asks about the frequency of eating various foods over the past year.
To gauge memory, participants filled out the Everyday Memory Questionnaire, a self-report measure where they rated how often they experience common memory lapses, such as forgetting names or misplacing items. Finally, they reported their daily caffeine consumption from various sources like coffee, tea, and soda.
The results from this first experiment confirmed a link between diet and self-perceived memory. Individuals who reported eating a diet higher in fat and sugar also reported experiencing more frequent everyday memory failures. The researchers then analyzed whether caffeine consumption altered this relationship. The analysis suggested a potential, though not statistically strong, moderating effect.
When the researchers specifically isolated the fat component of the diet, they found that caffeine consumption did appear to weaken the association between high fat intake and self-reported memory problems. At low levels of caffeine intake, a high-fat diet was strongly linked to memory complaints, but this link was not present for those with high caffeine intake. This provided preliminary evidence that caffeine might offer some benefit.
The second experiment was designed to build upon the initial findings with a more robust assessment of memory. This study involved 699 healthy volunteers, again aged 18 to 45, who completed the same questionnaires on diet, memory failures, and caffeine use. The key addition in this experiment was an objective measure of memory called the Verbal Paired Associates task. In this task, participants were shown pairs of words and were later asked to recall the second word of a pair when shown the first. This test provides a direct measure of episodic memory, which is the ability to recall specific events and experiences.
The findings from the second experiment once again showed a clear association between diet and memory. A higher intake of fat and sugar was linked to more self-reported memory failures, replicating the results of the first experiment. The diet was also associated with poorer performance on the objective Verbal Paired Associates task, providing stronger evidence that a high-fat, high-sugar diet is connected to actual memory impairment, not just the perception of it.
When the researchers examined the role of caffeine in this second experiment, the results were different from the first. This time, caffeine consumption did not moderate the relationship between a high-fat, high-sugar diet and either of the memory measures. In other words, individuals who consumed high amounts of caffeine were just as likely to show diet-related memory deficits as those who consumed little or no caffeine.
This lack of a protective effect was consistent for both self-reported memory failures and performance on the objective word-pair task. The findings from this more comprehensive experiment did not support the initial suggestion that caffeine could shield memory from the effects of a poor diet.
The researchers acknowledge certain limitations in their study. The data on diet and caffeine consumption were based on self-reports, which can be subject to recall errors. The participants were also relatively young and generally healthy, and the effects of diet on memory might be more pronounced in older populations or those with pre-existing health conditions. Since the study was conducted online, it was not possible to control for participants’ caffeine intake right before they completed the memory tasks, which could have influenced performance.
For future research, the scientists suggest using more objective methods to track dietary intake. They also recommend studying different populations, such as older adults or individuals with obesity, where the links between diet, caffeine, and memory may be clearer. Including a wider array of cognitive tests could also help determine if caffeine has protective effects on other brain functions beyond episodic memory, such as attention or executive function. Despite the lack of a protective effect found here, the study adds to our understanding of how lifestyle factors interact to influence cognitive health.
A new study provides evidence that a person’s innate vulnerability to stress-induced sleep problems can intensify how much a racing mind disrupts their sleep over time. While daily stress affects everyone’s sleep to some degree, this trait appears to make some people more susceptible to fragmented sleep. The findings were published in the Journal of Sleep Research.
Scientists have long understood that stress can be detrimental to sleep. One of the primary ways this occurs is through pre-sleep arousal, a state of heightened mental or physical activity just before bedtime. Researchers have also identified a trait known as sleep reactivity, which describes how susceptible a person’s sleep is to disruption from stress. Some individuals have high sleep reactivity, meaning their sleep is easily disturbed by stressors, while others have low reactivity and can sleep soundly even under pressure.
Despite knowing these factors are related, the precise way they interact on a daily basis was not well understood. Most previous studies relied on infrequent, retrospective reports or focused on major life events rather than common, everyday stressors. The research team behind this new study sought to get a more detailed picture. They aimed to understand how sleep reactivity might alter the connection between daily stress, pre-sleep arousal, and objectively measured sleep patterns in a natural setting.
“Sleep reactivity refers to an individual’s tendency to experience heightened sleep disturbances when faced with stress. Those with high sleep reactivity tend to show increased pre-sleep arousal during stressful periods and are at greater risk of developing both acute and chronic insomnia,” explained study authors Ju Lynn Ong and Stijn Massar, who are both research assistant professors at the National University of Singapore Yong Loo Lin School of Medicine.
“However, most prior research on stress, sleep, and sleep reactivity has relied on single, retrospective assessments, which may fail to capture the immediate and dynamic effects of daily stressors on sleep. Another limitation is that previous studies often examined either the cognitive or physiological components of pre-sleep arousal in isolation. Although these two forms of arousal are related, they may differ in their predictive value and underlying mechanisms, highlighting the importance of evaluating both concurrently.”
“To address these gaps, the current study investigated how day-to-day fluctuations in stress relate to sleep among university students over a two-week period and whether pre-sleep cognitive and physiological arousal mediate this relationship—particularly in individuals with high sleep reactivity.”
The research team began by recruiting a large group of full-time university students. They had the students complete a questionnaire called the Ford Insomnia Response to Stress Test, which is designed to measure an individual’s sleep reactivity. From this initial pool, the researchers selected two distinct groups for a more intensive two-week study: 30 students with the lowest scores, indicating low sleep reactivity, and 30 students with the highest scores, representing high sleep reactivity.
Over the following 14 days, these 60 participants were monitored using several methods. They wore an actigraphy watch on their wrist, which uses motion sensors to provide objective data on sleep patterns. This device measured their total sleep time, the amount of time it took them to fall asleep, and the time they spent awake after initially drifting off. Participants also wore an ŌURA ring, which recorded their pre-sleep heart rate as an objective indicator of physiological arousal.
Alongside these objective measures, participants completed daily surveys on their personal devices. Each evening before going to bed, they rated their perceived level of stress. Upon waking the next morning, they reported on their pre-sleep arousal from the previous night. These reports distinguished between cognitive arousal, such as having racing thoughts or worries, and somatic arousal, which includes physical symptoms like a pounding heart or muscle tension.
The first part of the analysis examined within-individual changes, looking at how a person’s sleep on a high-stress day compared to their own personal average. The results showed that on days when participants felt more stressed than usual, they also experienced a greater degree of pre-sleep cognitive arousal. This increase in racing thoughts was, in turn, associated with getting less total sleep and taking longer to fall asleep that night. This pattern was observed in both the high and low sleep reactivity groups.
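In statistical terms, this within-person question is usually handled by person-mean centering daily stress inside a multilevel model, so that a stable between-person average and a day-to-day deviation get separate coefficients. Below is a minimal sketch of that approach in Python, with synthetic data and hypothetical column names rather than the authors’ actual pipeline:

```python
# Minimal sketch of person-mean centering in a multilevel model
# (illustrative only; column names and data are made up).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_days = 60, 14
df = pd.DataFrame({
    "pid": np.repeat(np.arange(n_people), n_days),
    "stress": rng.normal(50, 10, n_people * n_days),
})
# Synthetic outcome: higher-than-usual stress costs a few minutes of sleep.
df["total_sleep_min"] = 420 - 0.25 * df["stress"] + rng.normal(0, 20, len(df))

# Split stress into a between-person average and a within-person deviation.
df["stress_between"] = df.groupby("pid")["stress"].transform("mean")
df["stress_within"] = df["stress"] - df["stress_between"]

# Random intercept per participant: stress_within captures "a more stressful
# day than usual"; stress_between captures a person's overall stress level.
fit = smf.mixedlm("total_sleep_min ~ stress_within + stress_between",
                  data=df, groups=df["pid"]).fit()
print(fit.summary())
```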
This finding suggests that experiencing a more stressful day than usual is likely to disrupt anyone’s sleep to some extent, regardless of their underlying reactivity. It appears to be a common human response for stress to activate the mind at bedtime, making sleep more difficult. The trait of sleep reactivity did not seem to alter this immediate, day-to-day effect.
“We were surprised to find that at the daily level, all participants did in fact exhibit a link between higher perceived stress and poorer sleep the following night, regardless of their level of sleep reactivity,” Ong and Massar told PsyPost. “This pattern may reflect sleep disturbances as a natural—and potentially adaptive—response to stress.”
The researchers then turned to between-individual differences, comparing the overall patterns of people in the high-reactivity group to those in the low-reactivity group across the entire two-week period. In this analysis, a key distinction became clear. Sleep reactivity did in fact play a moderating role, amplifying the negative effects of stress and arousal.
Individuals with high sleep reactivity showed a much stronger connection between their average stress levels, their average pre-sleep cognitive arousal, and their sleep quality. For these highly reactive individuals, having higher average levels of cognitive arousal was specifically linked to spending more time awake after initially falling asleep. In other words, their predisposition to stress-related sleep disturbance made their racing thoughts more disruptive to maintaining sleep throughout the night.
The researchers also tested whether physiological arousal played a similar role in connecting stress to poor sleep. They examined both the participants’ self-reports of physical tension and their objectively measured pre-sleep heart rate. Neither measure of physiological arousal emerged as a significant mediator of the relationship between stress and sleep, for either group. The link between stress and sleep disruption in this study seemed to operate primarily through mental, not physical, arousal.
“On a day-to-day level, both groups exhibited heightened pre-sleep cognitive arousal and greater sleep disturbances in response to elevated daily stress,” the researchers explained. “However, when considering the study period as a whole, individuals with high sleep reactivity consistently reported higher average levels of stress and pre-sleep cognitive arousal, which in turn contributed to more severe sleep disruptions compared to low-reactive sleepers. Notably, these stress → pre-sleep arousal → sleep associations emerged only for cognitive arousal, not for somatic arousal—whether assessed through self-reports or objectively measured via pre-sleep heart rate.”
The researchers acknowledged some limitations of their work. The study sample consisted of young university students who were predominantly female and of Chinese descent, so the results may not be generalizable to other demographic groups or age ranges. Additionally, the study excluded individuals with diagnosed sleep disorders, meaning the findings might differ in a clinical population. The timing of the arousal survey, completed in the morning, also means it was a retrospective report that could have been influenced by the night’s sleep. It is also important to consider the practical size of these effects.
While statistically significant, the changes were modest: a day with stress levels 10 points higher than usual was linked to about 2.5 minutes less sleep, and the amplified effect in high-reactivity individuals amounted to about 1.2 additional minutes of wakefulness during the night for every 10-point increase in average stress.
Future research could build on these findings by exploring the same dynamics in more diverse populations. The study also highlights pre-sleep cognitive arousal as a potential target for intervention, especially for those with high sleep reactivity. Investigating whether therapies like cognitive-behavioral therapy for insomnia can reduce this mental activation could offer a path to preventing temporary, stress-induced sleep problems from developing into chronic conditions.
The debate over the most effective models for early childhood education is a longstanding one. While the benefits of preschool are widely accepted, researchers have observed that the academic advantages gained in many programs tend to diminish by the time children finish kindergarten, a phenomenon often called “fade-out.” Some studies have even pointed to potential negative long-term outcomes from certain public preschool programs, intensifying the search for approaches that provide lasting benefits.
This situation prompted researchers to rigorously examine the Montessori method, a well-established educational model that has been in practice for over a century. Their new large-scale study found that children offered a spot in a public Montessori preschool showed better outcomes in reading, memory, executive function, and social understanding by the end of kindergarten.
The research also revealed that this educational model costs public school districts substantially less over three years compared to traditional programs. The findings were published in the Proceedings of the National Academy of Sciences.
Developed by Maria Montessori, the method’s classrooms typically feature a mix of ages, such as three- to six-year-olds, learning together. The environment is structured around child-led discovery, where students choose their own activities from a curated set of specialized, hands-on materials. The teacher acts more as a guide for individual and small-group lessons than as a lecturer to the entire class.
The Montessori model, which has been implemented in thousands of schools globally, had not previously been evaluated in a rigorous, national randomized controlled trial. This study was designed to provide high-quality evidence on its impact in a public school setting.
“There have been a few small randomized controlled trials of public Montessori outcomes, but they were limited to 1-2 schools, leaving open the question of whether the more positive results were due to something about those schools aside from the Montessori programming,” said study author Angeline Lillard, the Commonwealth Professor of Psychology at the University of Virginia.
“This national study gets around that by using 24 different schools, which each had 3-16 Montessori Primary (3-6) classrooms. In addition, the two prior randomized controlled trials that had trained Montessori teachers (making them more valid) compromised the randomized controlled trial in certain ways, including not using intention-to-treat designs that are preferred by some.”
To conduct the study, the research team took advantage of the admissions lotteries at 24 oversubscribed public Montessori schools across the United States. When a school has more applicants than available seats, a random lottery gives each applicant an equal chance of admission. This process creates a natural experiment, allowing for a direct comparison between the children who were offered a spot (the treatment group) and those who were not (the control group). Nearly 600 children and their families consented to participate.
The children were tracked from the start of preschool at age three through the end of their kindergarten year. Researchers administered a range of assessments at the beginning of the study and again each spring to measure academic skills, memory, and social-emotional development. The primary analysis was a conservative type called an “intention-to-treat” analysis, which measures the effect of simply being offered a spot in a Montessori program, regardless of whether the child actually attended or for how long.
The results showed no significant differences between the two groups after the first or second year of preschool. But by the end of kindergarten, a distinct pattern of advantages had emerged for the children who had been offered a Montessori spot. This group demonstrated significantly higher scores on a standardized test of early reading skills. They also performed better on a test of executive function, which involves skills like planning, self-control, and following rules.
The Montessori group also showed stronger short-term memory, as measured by their ability to recall a sequence of numbers. Their social understanding, or “theory of mind,” was also more advanced, suggesting a greater capacity to comprehend others’ thoughts, feelings, and beliefs. The estimated effects for these outcomes were considered medium to large for this type of educational research.
The study found no significant group differences on measures of vocabulary or math, although the math results trended in a positive direction for the Montessori group.
In a secondary analysis, the researchers estimated the effects only for the children who complied with their lottery assignment, meaning those who won and attended Montessori compared to those who lost and did not. As expected, the positive effects on reading, executive function, memory, and social understanding were even larger in this analysis.
“For example, a child who scored at the 50th percentile in reading in a traditional school would have been at the 62nd percentile had they won the lottery to attend Montessori; had they won and attended Montessori, they would have scored at the 71st percentile,” Lillard told PsyPost.
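Those percentile figures can be translated into effect sizes in standard-deviation units with the normal quantile function, assuming roughly normally distributed test scores (our simplification, not a detail reported in the paper):

```python
# Converting the quoted percentile shifts into SD-unit effect sizes,
# under an assumed normal score distribution.
from scipy.stats import norm

itt = norm.ppf(0.62) - norm.ppf(0.50)  # offered a spot: ~0.31 SD
tot = norm.ppf(0.71) - norm.ppf(0.50)  # won and attended: ~0.55 SD
print(f"ITT effect ~ {itt:.2f} SD; complier effect ~ {tot:.2f} SD")
```

Effects of roughly 0.3 to 0.55 standard deviations are consistent with the medium-to-large characterization given above.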
Alongside the child assessments, the researchers performed a detailed cost analysis. They followed a method known as the “ingredients approach,” which accounts for all the resources required to run a program. This included teacher salaries and training, classroom materials, and facility space for both Montessori and traditional public preschool classrooms. One-time costs, such as the specialized Montessori materials and extensive teacher training, were amortized over their expected 25-year lifespan.
The analysis produced a surprising finding. Over the three-year period from ages three to six, public Montessori programs were estimated to cost districts $13,127 less per child than traditional programs. The main source of this cost savings was the higher child-to-teacher ratio in Montessori classrooms for three- and four-year-olds. This is an intentional feature of the Montessori model, designed to encourage peer learning and independence. These savings more than offset the higher upfront costs for teacher training and materials.
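The arithmetic behind that saving can be illustrated with a toy version of the ingredients approach. Every figure below is hypothetical, chosen only to show why one-time costs amortized over 25 years are small next to per-child staffing costs; the study’s actual accounting is far more detailed:

```python
# Toy cost comparison (all dollar figures hypothetical).
AMORTIZATION_YEARS = 25          # horizon used in the study
startup_per_classroom = 45_000   # assumed one-time training + materials
class_size = 24                  # assumed classroom size
startup_per_child_year = startup_per_classroom / AMORTIZATION_YEARS / class_size

salary_per_teacher = 60_000      # assumed annual salary and benefits
montessori_ratio = 12            # assumed children per adult at ages 3-4
traditional_ratio = 8

saving_per_child_year = (salary_per_teacher / traditional_ratio
                         - salary_per_teacher / montessori_ratio
                         - startup_per_child_year)
print(f"Amortized startup per child-year: ${startup_per_child_year:,.0f}")
print(f"Net saving per child-year: ${saving_per_child_year:,.0f}")
print(f"Over three years: ${3 * saving_per_child_year:,.0f}")
```

With these made-up inputs the three-year saving lands in the same order of magnitude as the study’s $13,127 estimate, which is the point: the higher ratio dominates the calculation.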
“I thought Montessori would cost the same, once one amortized the cost of teacher training and materials,” Lillard said. “Instead, we calculated that (due to intentionally higher ratios at 3 and 4, which predicted higher classroom quality in Montessori) Montessori cost less.”
“Even when including a large, diverse array of schools, public Montessori had better outcomes. These findings were robust to many different approaches to the data. And, the cost analysis showed these outcomes were obtained at significantly lower cost than was spent on traditional PK3 through kindergarten programs in public schools.”
But as with all research, there are limitations. The research included only families who applied to a Montessori school lottery, so the findings might not be generalizable to the broader population. The consent rate to participate in the study was relatively low, at about 21 percent of families who were contacted. Families who won a lottery spot were also more likely to consent than those who lost, which could potentially introduce bias into the results.
“Montessori is not a trademarked term, so anyone can call anything Montessori,” Lillard noted. “We required that most teachers be trained by the two organizations with the most rigorous training — AMI or the Association Montessori Internationale, which Dr. Maria Montessori founded to carry on her work, and AMS or the American Montessori Society, which has less rigorous teacher-trainer preparation and is shorter, but is still commendable. Our results might not extend to all schools that call themselves Montessori. In addition, we had low buy-in as we recruited for this study in summer 2021 when COVID-19 was still deeply concerning. We do not know if the results apply to families that did not consent to participation.”
The findings are also limited to the end of kindergarten. Whether the observed advantages for the Montessori group persist, grow, or fade in later elementary grades is a question for future research. The study authors expressed a strong interest in following these children to assess the long-term impacts of their early educational experiences.
“My collaborators at the American Institutes for Research and the University of Pennsylvania and University of Virginia are deeply appreciative of the schools, teachers, and families who participated, and to our funders, the Institute for Educational Sciences, Arnold Ventures, and the Brady Education Foundation,” Lillard added.
A new study has found that individuals with ADHD have a higher risk of being convicted of a crime, and reveals this connection also extends to their family members. The research suggests that shared genetics are a meaningful part of the explanation for this link. Published in Biological Psychology, the findings show that the risk of a criminal conviction increases with the degree of genetic relatedness to a relative with ADHD.
The connection between ADHD and an increased likelihood of criminal activity is well-documented. Past research indicates that individuals with ADHD are two to three times more likely to be arrested or convicted of a crime. Scientists have also established that both ADHD and criminality have substantial genetic influences, with heritability estimates around 70-80% for ADHD and approximately 50% for criminal behavior. This overlap led researchers to hypothesize that shared genetic factors might partly explain the association between the two.
While some previous studies hinted at a familial connection, they were often limited to specific types of crime or a small number of relative types. The current research aimed to provide a more complete picture. The investigators sought to understand how the risk for criminal convictions co-aggregates, or clusters, within families across a wide spectrum of relationships, from identical twins to cousins. They also wanted to examine potential differences in these patterns between men and women.
“ADHD is linked to higher rates of crime, but it’s unclear why. We studied families to see whether shared genetic or environmental factors explain this connection, aiming to better understand how early support could reduce risk,” said study author Sofi Oskarsson, a researcher and senior lecturer in criminology at Örebro University.
To conduct the investigation, researchers utilized Sweden’s comprehensive national population registers. They analyzed data from a cohort of over 1.5 million individuals born in Sweden between 1987 and 2002. ADHD cases were identified through clinical diagnoses or prescriptions for ADHD medication recorded in national health registers. Information on criminal convictions for any crime, violent crime, or non-violent crime was obtained from the National Crime Register, with the analysis beginning from an individual’s 15th birthday, the age of criminal responsibility in Sweden.
The study design allowed researchers to estimate the risk of a criminal conviction for an individual based on whether a relative had ADHD. By comparing these risks across different types of relatives who share varying amounts of genetic material—identical twins (100%), fraternal twins and full siblings (average 50%), half-siblings (average 25%), and cousins (average 12.5%)—the team could infer the potential role of shared genes and environments.
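The inferential logic here is the standard quantitative-genetics one, stated below as our gloss rather than a formula from the paper. Under a simple additive model, the expected covariance in liability between two relatives is

$$\mathrm{Cov}(L_1, L_2) \approx r\,\sigma_A^2 + c\,\sigma_C^2,$$

where $r$ is the genetic relatedness coefficient (1, 0.5, 0.25, and 0.125 for the relative types above), $\sigma_A^2$ is additive genetic variance, and $\sigma_C^2$ is variance from environments shared to degree $c$. If the familial association weakens roughly in step with $r$, the genetic term is the natural explanation for the gradient.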
The results first confirmed that individuals with an ADHD diagnosis had a substantially higher risk of being convicted of a crime compared to those without ADHD. The risk was particularly elevated for violent crimes.
The analysis also revealed a significant gender difference: while men with ADHD had higher absolute numbers of convictions, women with ADHD had a greater relative increase in risk compared to women without the disorder. For violent crime, the risk was over eight times higher for women with ADHD, while it was about five times higher for men with ADHD.
“Perhaps not a surprise given what we know today about ADHD, but the stronger associations found among women were very interesting and important,” Oskarsson told PsyPost. “ADHD is not diagnosed as often in females (or is mischaracterized), so the higher relative risk in women suggests that when ADHD is present, it may reflect a more severe or concentrated set of risk factors.”
The central finding of the study was the clear pattern of familial co-aggregation. Having a relative with ADHD was associated with an increased personal risk for a criminal conviction. This risk followed a gradient based on genetic relatedness.
The highest risk was observed in individuals whose identical twin had ADHD, followed by fraternal twins and full siblings. The risk was progressively lower for half-siblings and cousins. This pattern, where the association weakens as genetic similarity decreases, points toward the influence of shared genetic factors.
“Close relatives of people with ADHD were much more likely to have criminal convictions, especially twins, supporting a genetic contribution,” Oskarsson explained. “But the link is not deterministic; most individuals with ADHD or affected relatives are not convicted, emphasizing shared risk, not inevitability.”
The study also found that the stronger relative risk for women was not limited to individuals with ADHD. A similar pattern appeared in some familial relationships, specifically among full siblings and full cousins, where the association between a relative’s ADHD and a woman’s conviction risk was stronger than for men. This suggests that the biological and environmental mechanisms connecting ADHD and crime may operate differently depending on sex.
“People with ADHD are at a higher risk of criminality, but this risk also extends to their relatives,” Oskarsson said. “This pattern suggests that some of the link between ADHD and crime stems from shared genetic and/or environmental factors. Importantly, this does not mean that ADHD causes crime, but that the two share underlying vulnerabilities. Recognizing and addressing ADHD early, especially in families, could reduce downstream risks and improve outcomes.”
As with any study, the researchers acknowledge some limitations. The study’s reliance on official medical records may primarily capture more severe cases of ADHD, and conviction data does not account for all criminal behavior. Because the data comes from Sweden, a country with universal healthcare, the findings may not be directly generalizable to countries with different social or legal systems. The authors also note that the large number of statistical comparisons means the overall consistency of the patterns is more important than any single result.
Future research could explore these associations in different cultural and national contexts to see if the patterns hold. Further investigation is also needed to identify the specific genetic and environmental pathways that contribute to the shared risk between ADHD and criminal convictions. These findings could help inform risk assessment and prevention efforts, but the authors caution that such knowledge must be applied carefully to avoid stigmatization.
“I want to know more about why ADHD and criminality are connected, which symptoms or circumstances matter most, and whether early support for individuals and families can help break that link,” Oskarsson added. “This study underscores the importance of viewing ADHD within a broader family and societal context. Early support for ADHD doesn’t just help the individual, it can have ripple effects that extend across families and communities.”
A new study has found that a robot’s feedback during a collaborative task can influence the feeling of closeness between the human participants. The research, published in Computers in Human Behavior, indicates that this effect changes depending on the robot’s appearance and how it communicates.
As robots become more integrated into workplaces and homes, they are often designed to assist with decision-making. While much research has focused on how robots affect the quality of a group’s decisions, less is known about how a robot’s presence might alter the personal relationships between the humans on the team. The researchers sought to understand this dynamic by exploring how a robot’s agreement or disagreement impacts the sense of interpersonal connection people feel.
“Given the rise of large language models in recent years, we believe robots of different forms will soon be equipped with non-scripted verbal language to help people make decisions in various contexts. We conducted our research to call for careful consideration and control over the precise behaviors robots should use to provide feedback in the future,” said study author Ting-Han Lin, a computer science PhD student at the University of Chicago.
The investigation centered on two established psychological ideas. One, known as Balance Theory, suggests that people feel more positive toward one another when they are treated similarly by a third party, even if that treatment is negative. The other concept, the Influence of Negative Affect, proposes that a negative tone or criticism can damage the general atmosphere of an interaction and harm relationships.
To test these ideas, the researchers conducted two separate experiments, each involving pairs of participants who did not know each other. In both experiments, the pairs worked together to answer a series of eight personal questions, such as “What is the most important factor contributing to a life well-lived?” For each question, participants first gave their own individual answers before discussing and agreeing on a joint response.
A robot was present to mediate the task. After each person gave their initial answer, the robot would provide feedback. This feedback varied in two ways. First was its positivity, meaning the robot would either agree or disagree with the person’s statement. Second was its treatment of the pair, meaning the robot would either treat both people equally (agreeing with both or disagreeing with both) or unequally (agreeing with one and disagreeing with the other).
The first experiment involved 172 participants interacting with a highly human-like robot named NAO. This robot could speak, use gestures like nodding or shaking its head, and employed artificial intelligence to summarize a person’s response before giving its feedback. Its verbal disagreements were designed to grow in intensity, beginning with mild phrases and ending with statements like, “I am fundamentally opposed with your viewpoint.”
The results from this experiment showed that the positivity of the robot’s feedback had a strong effect on the participants’ relationship. When the NAO robot gave positive feedback, the two human participants reported feeling closer to each other. When the robot consistently gave negative feedback, the participants felt more distant from one another.
“A robot’s feedback to two people in a decision-making task can shape their closeness,” Lin told PsyPost.
This outcome supports the theory regarding the influence of negative affect. The robot’s consistent negativity seemed to create a less pleasant social environment, which in turn reduced the feeling of connection between the two people. The robot’s treatment of the pair, whether equal or unequal, did not appear to be the primary factor shaping their closeness in this context. Participants also rated the human-like robot as warmer and more competent when it was positive, though they found it more discomforting when it treated them unequally.
The second experiment involved 150 participants and a robot with a very low degree of human-like features. This robot resembled a simple, articulated lamp and could not speak. It communicated its feedback exclusively through minimal gestures, such as nodding for agreement or shaking its head from side to side for disagreement.
With this less-human robot, the findings were quite different. The main factor influencing interpersonal closeness was the robot’s treatment of the pair. When the robot treated both participants equally, they reported feeling closer to each other, regardless of whether the feedback was positive or negative. Unequal treatment, where the robot agreed with one person and disagreed with the other, led to a greater sense of distance between them.
This result aligns well with Balance Theory. The shared experience of being treated the same by the robot, either through mutual agreement or mutual disagreement, seemed to create a bond. The researchers also noted a surprising finding. When the lamp-like robot disagreed with both participants, they felt even closer than when it agreed with both, suggesting that the robot became a “common enemy” that united them.
“Heider’s Balance Theory dominates when a low anthropomorphism robot is present,” Lin said.
The researchers propose that the different outcomes are likely due to the intensity of the feedback delivered by each robot. The human-like NAO robot’s use of personalized speech and strong verbal disagreement was potent enough to create a negative atmosphere that overshadowed other social dynamics. Its criticism was taken more seriously, and its negativity was powerful enough to harm the human-human connection.
“The influence of negative affect prevails when a high anthropomorphism robot exists,” Lin said.
In contrast, the simple, non-verbal gestures of the lamp-like robot were not as intense. Because its disagreement was less personal and less powerful, it did not poison the overall interaction. This allowed the more subtle effects of balanced versus imbalanced treatment to become the main influence on the participants’ relationship. Interviews with participants supported this idea, as people interacting with the machine-like robot often noted that they did not take its opinions as seriously.
Across both experiments, the robot’s feedback did not significantly alter how the final joint decisions were made. Participants tended to incorporate each other’s ideas fairly evenly, regardless of the robot’s expressed opinion. This suggests the robot’s influence was more on the social and emotional level than on the practical outcome of the decision-making task.
The study has some limitations, including the fact that the two experiments were conducted in different countries with different participant populations. The first experiment used a diverse group of museum visitors in the United States, while the second involved university students in Israel. Future research could explore these dynamics in more varied contexts.
Recent research provides new insight into the functions of common sexual behaviors, revealing how they contribute not just to physical pleasure but also to emotional bonding. A trio of studies, two published in the WebLog Journal of Reproductive Medicine and one in the International Journal of Clinical Research and Reports, examines the physiological and psychological dimensions of why men hold their partners’ legs and stimulate their breasts, what men gain from these acts, and how women experience them.
Researchers pursued these lines of inquiry because many frequently practiced sexual behaviors remain scientifically underexplored. While practices like a man holding a woman’s legs or performing oral breast stimulation are common, the specific reasons for their prevalence and their effects on both partners were not fully understood from an integrated perspective. The scientific motivation was to create a more comprehensive picture that combines biology, psychology, and social factors to explain what happens during these intimate moments.
“Human sexual behavior is often discussed socially, but many aspects of it lack meaningful scientific exploration,” said study author Rehan Haider of the University of Karachi. “We noticed a gap connecting physiological responses, evolutionary psychology, and relationship intimacy to why certain tactile behaviors are preferred during intercourse. Our goal was to examine these mechanisms in a respectful, evidence-based manner rather than rely on anecdote or cultural assumptions.”
The first study took a broad, mixed-methods approach to understand why men often hold women’s legs and engage in breast stimulation during intercourse. The researchers combined a review of existing literature with observational studies and self-reported surveys from adult heterosexual couples aged 18 to 50. This allowed them to assemble a model that connected male behaviors with female responses and relational outcomes.
The research team reported that 68 percent of couples practiced leg holding during intercourse. This position was found to facilitate deeper vaginal penetration and improve the alignment of the bodies, which in turn enhanced stimulation of sensitive areas like the clitoris and G-spot. Women in the study correlated this act with higher levels of sexual satisfaction.
The research also affirmed the significance of breast stimulation, noting that manual stimulation occurred in 60 percent of encounters and oral stimulation in 54 percent. This contact activates sensory pathways in the nipple-areolar complex, promoting the release of the hormones oxytocin and prolactin, which are associated with increased sexual arousal and emotional bonding. From a psychological standpoint, these behaviors appeared to reinforce feelings of intimacy, trust, and connection between partners.
“We were surprised by the consistency of emotional feedback among participants, particularly how strongly feelings of closeness and security were linked to these behaviors,” Haider told PsyPost. “It suggests an underestimated psychological component beyond pure physical stimulation.”
“The core message is that sexual touch preferences are not random—many are supported by biological reward pathways, emotional bonding hormones, and evolutionary reproductive strategies. Leg-holding and breast stimulation, for example, can enhance feelings of safety, intimacy, and arousal for both partners. Healthy communication and consent around such behaviors strengthen relational satisfaction.”
A second, complementary study focused specifically on the male experience of performing oral stimulation on a partner’s nipples. The goal was to understand the pleasure and psychological satisfaction men themselves derive from this act. To do this, researchers conducted a cross-sectional survey, recruiting 500 heterosexual men between the ages of 18 and 55. Participants completed a structured and anonymous questionnaire designed to measure the frequency of the behavior, their self-rated level of arousal from it, and its association with feelings of intimacy and overall sexual satisfaction.
The analysis of this survey data revealed a strong positive association between the frequency of performing nipple stimulation and a man’s own sense of sexual fulfillment and relational closeness. The results indicated that men do not engage in this behavior solely for their partner’s benefit. They reported finding the act to be both highly erotic and emotionally gratifying. The researchers propose that the behavior serves a dual function for men, simultaneously enhancing their personal arousal while reinforcing the psychological bond with their partner, likely through mechanisms linked to the hormone oxytocin, which plays a role in social affiliation and trust.
The third study shifted the focus to the female perspective, examining women’s physical and psychological responses to breast and nipple stimulation during penetrative intercourse. This investigation used a clinical and observational design, collecting data from 120 sexually active women aged 21 to 50. The methodology involved structured interviews, clinical feedback from counseling sessions, and the use of validated questionnaires, including the well-established Female Sexual Function Index (FSFI), a self-report tool used to assess key dimensions of female sexual function.
This research confirmed that stimulation of the breasts and nipples consistently contributed to a more positive sexual experience for women. Women with higher reported nipple sensitivity showed significantly better scores across the FSFI domains of arousal, orgasm, and satisfaction. Physically, this type of stimulation was associated with enhanced vaginal lubrication and clitoral responsiveness during intercourse.
Psychologically, the researchers found a connection between a woman’s perception of her breasts and her emotional experience. Women who described their breasts as “zones of intimacy” or “trust-enhancing touchpoints” reported a greater sense of emotional connection and reduced anxiety during sex. However, the study also identified that 23 percent of participants experienced discomfort during breast stimulation.
“This research does not imply that these behaviors are necessary or universally preferred,” Haider noted. “It’s also not about objectification. Rather, it focuses on how touch patterns can reinforce mutual trust, pleasure, and bonding when consensual and respectful. Not everyone will experience the same responses, and preferences vary widely. The study highlights trends—not prescriptions—and should be interpreted as an invitation for communication rather than a standard everyone must follow.”
While these studies offer a more detailed understanding of sexual behavior, the researchers acknowledge certain limitations. All three studies relied heavily on self-reported data, which can be influenced by memory recall and social desirability biases. The research was also primarily cross-sectional, capturing a snapshot in time, which can identify associations but cannot establish cause-and-effect relationships. For instance, it is unclear if frequent breast stimulation leads to higher intimacy or if more intimate couples simply engage in the behavior more often.
For future research, scientists suggest incorporating longitudinal designs that follow couples over an extended period to better understand the development of these behavioral patterns and their long-term effects on relationship satisfaction. There is also a need for more cross-cultural comparisons, as sexual scripts and preferences can vary significantly across different societies.
“Future work will explore female perspectives more deeply, neuroendocrine changes during different types of touch, and how cultural factors shape sexual comfort and preference,” Haider said. “We’d like to compare findings across age groups and relationship durations as well. Sexual well-being is an important aspect of overall health, but it is rarely discussed scientifically. By approaching these topics with sensitivity and rigor, we hope to normalize evidence-based conversation and encourage couples to communicate openly.”
The human fascination with fear is a long-standing puzzle. From ghost stories told around a campfire to the latest blockbuster horror film, many people actively seek out experiences designed to frighten them. This seemingly contradictory impulse, where negative feelings like terror and anxiety produce a sense of enjoyment and thrill, has intrigued psychologists for decades. Researchers are now using a variety of tools, from brain scans to personality surveys, to understand this complex relationship.
Their work is revealing how our brains process fear, what personality traits draw us to the dark side of entertainment, and even how these experiences might offer surprising psychological benefits. Here is a look at twelve recent studies that explore the multifaceted psychology of horror, fear, and the paranormal.
A new theory proposes that horror films appeal to us because they provide a safe, controlled setting for our brains to practice managing uncertainty. This idea is based on a framework known as predictive processing, which suggests the brain operates like a prediction engine. It constantly makes forecasts about what will happen next, and when reality doesn’t match its predictions, it generates a “prediction error” that it works to resolve.
This process doesn’t mean we only seek out calm, predictable situations. Instead, our brains are wired to find ideal opportunities for learning, which often exist at the edge of our understanding. We are drawn toward a “Goldilocks zone” of manageable uncertainty that is neither too simple nor too chaotic. The rewarding feeling comes not just from being correct, but from the rate at which we reduce our uncertainty.
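One common formalization of this idea in the predictive-processing literature, offered here as a gloss rather than a formula from the article, is to let the reward signal track the rate of prediction-error reduction rather than the error itself:

$$R(t) \propto -\frac{d\,\varepsilon(t)}{dt},$$

where $\varepsilon(t)$ is the current prediction error. Error that is already near zero (too predictable) or not falling at all (too chaotic) yields little reward, while error that falls quickly yields the most.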
Horror films appear to be engineered to place us directly in this zone. They manipulate our predictive minds with a mix of the familiar and the unexpected. Suspenseful music and classic horror tropes build our anticipation, while jump scares suddenly violate our predictions. By engaging with this controlled chaos, we get to experience and resolve prediction errors in a low-stakes environment, which the brain can find inherently gratifying.
Research from an evolutionary perspective suggests that our enjoyment of horror serves a practical purpose: it prepares us for real-world dangers. This “threat-simulation hypothesis” posits that engaging with scary media is an adaptive trait, allowing us to explore threatening scenarios and rehearse our responses from a position of safety. Through horror, we can learn about predators, hostile social encounters, and other dangers without facing any actual risk.
A survey of over 1,100 adults found that a majority of people consume horror media and more than half enjoy it. The study revealed that people who enjoy horror expect to experience a range of positive emotions like joy and surprise alongside fear. This supports the idea that the negative emotion of fear is balanced by positive feelings, a phenomenon some call “benign masochism.”
The findings also showed that sensation-seeking was a strong predictor of horror enjoyment, as was a personality trait related to intellect and imagination. It seems those who seek imaginative stimulation are particularly drawn to horror. By providing a vast space for emotional and cognitive play, frightening entertainment allows us to build and display mastery over situations that would be terrifying in real life.
To better understand what makes a horror movie entertaining, researchers surveyed nearly 600 people about their reactions to short scenes from various horror subgenres. The study found that three key factors predicted both excitement and enjoyment: the intensity of fear the viewer felt, their curiosity about morbid topics, and how realistic they perceived the scenes to be.
The experience of fear itself was powerfully linked to both excitement and enjoyment, showing that the thrill of being scared is a central part of the appeal. Morbid curiosity also played a significant role, indicating that people with a natural interest in dark subjects are more likely to find horror entertaining. The perceived realism of a scene heightened the experience as well.
However, not all negative emotions contributed to the fun. Scenes that provoked high levels of disgust tended to decrease enjoyment, even if they were still exciting. This finding suggests that while fear can be a source of pleasure for horror fans, disgust often introduces an element that makes the experience less enjoyable overall.
Fear is not just for adults. A large-scale survey of 1,600 Danish parents has revealed that “recreational fear,” or the experience of activities that are both scary and fun, is a nearly universal part of childhood. An overwhelming 93% of children between the ages of 1 and 17 were reported to enjoy at least one type of scary yet fun activity, with 70% engaging in one weekly.
The study identified clear developmental trends in how children experience recreational fear. Younger children often find it in physical and imaginative play, such as being playfully chased or engaging in rough-and-tumble games. As they grow into adolescence, their interest shifts toward media-based experiences like scary movies, video games, and frightening online content. One constant across all ages was the enjoyment of activities involving high speeds, heights, or depths, like swings and amusement park rides.
These experiences are predominantly social. Young children typically engage with parents or siblings, while adolescents turn to friends. This social context may provide a sense of security that allows children to explore fear safely. The researchers propose that this type of play is beneficial, helping children learn to regulate their emotions, test their limits, and build psychological resilience.
A study involving 300 college students suggests that your favorite movie genre might offer clues about your personality. Using the well-established Big Five personality model, researchers found consistent links between film preferences and traits like extraversion, conscientiousness, and neuroticism.
Fans of horror films tended to score higher in extraversion, agreeableness, and conscientiousness, suggesting they may be outgoing, cooperative, and organized. They also scored lower in neuroticism and openness, which could indicate they are less emotionally reactive and less drawn to abstract ideas. In contrast, those who favored drama scored higher in conscientiousness and neuroticism, while adventure film fans were more extraverted and spontaneous.
While these findings point to a relationship between personality and media choice, the study has limitations. The sample was limited to a specific age group and cultural background, so the results may not apply to everyone. The research also cannot determine whether personality shapes film choice or if the films we watch might influence our personality over time.
Morbid curiosity, a trait defined by an interest in dangerous phenomena, may help explain why some people are drawn to music with violent themes, like death metal or certain subgenres of rap. A recent study found that people with higher levels of morbid curiosity were more likely to listen to and enjoy music with violent lyrics.
In an initial survey, researchers found that fans of music with violent themes scored higher on a scale of morbid curiosity than fans of other genres. A second experiment involved having participants listen to musical excerpts. The results showed that morbid curiosity predicted enjoyment of extreme metal with violent lyrics, but not rap music with violent lyrics, suggesting different factors may be at play for different genres.
The study authors propose that morbid curiosity is not a deviant trait, but an adaptive one that helps people learn about threatening aspects of life in a safe, simulated context. Music with violent themes can act as one of these simulations, allowing listeners to explore dangerous ideas and the emotions they evoke without any real-world consequences.
People who enjoy horror movies may have been better equipped to handle the psychological stress of the COVID-19 pandemic. A study conducted in April 2020 surveyed 322 U.S. adults about their genre preferences, morbid curiosity, and psychological state during the early days of the pandemic.
The researchers found that fans of horror movies reported less psychological distress than non-fans. They were less likely to agree with statements about feeling more depressed or having trouble sleeping since the pandemic began. Fans of “prepper” genres, such as zombie and apocalyptic films, also reported less distress and said they felt more prepared for the pandemic.
The study’s authors speculate that horror fans may have developed better emotion-regulation skills by repeatedly exposing themselves to frightening fiction in a controlled way. This “practice” with fear in a safe setting could have translated into greater resilience when faced with a real-world crisis.
Engaging with frightening entertainment might temporarily alter brain network patterns associated with depression. A study found that in individuals with mild-to-moderate depression, a controlled scary experience was linked to a brief reduction in the over-connectivity between two key brain networks: the default mode network (active during self-focused thought) and the salience network (which detects important events).
This over-connectivity is thought to contribute to rumination, a cycle of negative thoughts common in depression. By demanding a person’s full attention, the scary experience appeared to pull focus away from this internal loop and onto the external threat. The greater this reduction in connectivity, the more enjoyment participants reported.
The study also found that individuals with moderate depression needed a more intense scare to reach their peak enjoyment compared to those with minimal symptoms. While the observed brain changes were temporary, the findings raise questions about the interplay between fear, pleasure, and emotion regulation.
A recent study has found a connection between the type of horror media people watch and their beliefs in the paranormal. After surveying over 600 Belgian adults, researchers discovered that consumption of horror content claiming to be based on “true events” or presented as reality was associated with stronger paranormal beliefs.
Specifically, people who frequently watched paranormal reality TV shows and horror films marketed as being based on a true story were more likely to endorse beliefs in things like ghosts, spiritualism, and psychic powers. Other fictional horror genres, such as monster movies or psychological thrillers, did not show a similar connection.
This finding aligns with media effect theories suggesting that when content is perceived as more realistic or credible, it can have a stronger impact on a viewer’s attitudes. However, the study’s design means it is also possible that people who already believe in the paranormal are simply more drawn to this type of content.
Individuals who strongly believe in paranormal phenomena may exhibit different brain activity and cognitive patterns compared to skeptics. A study using electroencephalography (EEG) to record the brain’s electrical activity found that paranormal believers had reduced power in certain brainwave frequencies, specifically in the alpha, beta, and gamma bands, particularly in the frontal, parietal, and occipital regions of the brain.
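For readers unfamiliar with EEG band power, the sketch below shows the standard recipe: estimate the power spectral density with Welch's method, then sum it over conventional frequency bands. The band edges, sampling rate, and signal are illustrative assumptions; the study's exact preprocessing is not reproduced here.

```python
# Minimal sketch of conventional EEG band-power extraction (assumed parameters,
# simulated signal): Welch power spectral density, summed over each band.
import numpy as np
from scipy.signal import welch

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / fs)

# Simulated single-channel EEG: a 10 Hz alpha rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.8, size=t.size)

freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
df = freqs[1] - freqs[0]

bands = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    power = psd[mask].sum() * df  # approximate integral of the PSD over the band
    print(f"{name}: {power:.3f} (arbitrary units)")
```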
Participants also completed a cognitive task designed to measure inhibitory control, which is the ability to suppress impulsive actions. Paranormal believers made more errors on this task than skeptics, suggesting reduced inhibitory control. They also reported experiencing more everyday cognitive failures, such as memory slips and attention lapses.
The researchers found that activity in one specific frequency band, beta2 in the frontal lobe, appeared to mediate the relationship between paranormal beliefs and inhibitory control. This suggests that differences in brain function, particularly in regions involved in high-level cognitive processes, may be connected to a person’s conviction in the paranormal.
Unusual events like premonitions, vivid dreams, and out-of-body sensations are surprisingly common, and people who report them often share certain psychological traits. A series of three studies involving over 2,200 adults found a strong link between anomalous experiences and a trait called “subconscious connectedness,” which describes the degree to which a person’s conscious and subconscious minds influence each other.
People who scored high in subconscious connectedness reported having anomalous experiences far more frequently than those with low scores. In one national survey, 86% of participants said they had at least one type of anomalous experience more than once. The most commonly reported was déjà vu, followed by correctly sensing they were being stared at and having premonitions that came true.
These experiences were also associated with other traits, including absorption, dissociation, vivid imagination, and a tendency to trust intuition. While people who reported more anomalous experiences also tended to report more stress and anxiety, these associations were modest, suggesting such experiences are a normal part of human psychology for many.
The eerie sensation that someone is nearby when you are alone may be a product of your brain trying to make sense of uncertainty. A study found that this “feeling of presence” is more likely to occur when people are in darkness with their senses dulled. Under these conditions, the brain may rely more on internal cues and expectations, sometimes generating the impression of an unseen agent.
In an experiment, university students sat alone in a darkened room for 30 minutes while wearing a sleeping mask and earplugs. The results showed that participants who reported higher levels of internal uncertainty were more likely to feel that another person was with them. This suggests that when sensory information is limited, the brain may interpret ambiguous bodily sensations or anxious feelings as evidence of an outside presence.
This cognitive process might be an evolutionary holdover. From a survival standpoint, it is safer to mistakenly assume a predator is hiding in the dark than to ignore a real one. This bias toward detecting agents could help explain why ghostly encounters and beliefs in invisible beings are so common across human cultures, especially in situations of isolation and vulnerability.
A new study reports that a diet rich in omega-3 fatty acids during pregnancy can prevent some of the lasting neuropsychiatric effects of prenatal THC exposure in rats. The findings, published in Molecular Psychiatry, suggest these protective effects are much more pronounced in male offspring, highlighting a significant sex-based difference in the outcomes.
The rationale behind the investigation stems from the increasing use of cannabis during pregnancy, coupled with a public perception that it is relatively safe. Scientific evidence, however, suggests that prenatal exposure to THC, the main psychoactive component in cannabis, can pose risks to a developing fetus. THC can cross the placenta and directly affect the fetal brain, interfering with the endocannabinoid system, a complex network of signals that helps guide proper brain formation.
This natural signaling system is built from fatty acids, which are lipids. The authors behind the new study hypothesized that since THC disrupts this lipid-based system, a dietary intervention focused on beneficial lipids like omega-3 fatty acids might offer a protective effect. Omega-3s are known to be fundamental for building healthy brain cells and circuits, making them a logical candidate for counteracting some of THC’s disruptive influence.
“Cannabis use during pregnancy is rising and there are misperceptions about its safety for the developing fetal brain. There is also a big knowledge gap about how prenatal cannabis use can impact critical brain developmental systems like the omega-3 fatty acid pathway, which is critical for healthy brain development and mental health outcomes,” said study author Steven R. Laviolette, a professor and director of the Addiction Research Group at the University of Western Ontario.
To explore this, the research team used a rat model. Pregnant rats were divided into four groups. Two groups received a standard control diet, and two received a diet enriched with omega-3 fatty acids. Within each dietary group, half of the dams were given daily injections of THC during gestation, while the other half received a harmless vehicle injection. This created four experimental conditions for the offspring: a control group, a group exposed only to omega-3s, a group exposed only to THC, and a group exposed to both THC and the omega-3 diet.
The researchers then followed the offspring into adulthood, conducting a comprehensive series of tests to assess their behavior, brain function, and brain chemistry. The first observation was related to birth weight. Offspring exposed to THC had significantly lower birth weights, but this effect was prevented in the pups whose mothers were on the omega-3 diet.
Behavioral testing in adulthood revealed clear, sex-specific outcomes. Males exposed to THC showed heightened anxiety-like behaviors in various tests. This anxiety was absent in the THC-exposed males that also received the omega-3 diet, suggesting the diet had a preventative effect. Females did not show the same anxiety-like behaviors from THC exposure.
The researchers also examined cognitive function through tests of social interaction, spatial working memory, and the ability to recognize objects in a specific order. In these tasks, prenatal THC exposure led to deficits in both male and female offspring. The omega-3 diet successfully prevented these cognitive problems in males. For females, the benefits were limited; the diet helped restore social motivation but did not improve their performance on the other memory tasks.
“We were surprised by 1) how severe the THC-induced abnormalities in omega-3-6 levels were in the brain and 2) how males and female offspring were differentially impacted by these effects, demonstrating that male vs. female offspring show differential sensitivity to maternal cannabis exposure,” Laviolette told PsyPost.
To understand the brain activity behind these behaviors, the team recorded electrical signals from neurons in three interconnected brain regions: the prefrontal cortex, the nucleus accumbens, and the hippocampus. They found that THC altered the normal firing patterns of brain cells differently in males and females. In the prefrontal cortex, THC caused hyperactivity in both sexes. The omega-3 diet restored normal activity in males but was less effective in females.
In the hippocampus, a region important for memory and mood, THC had opposite effects on activity in the two sexes. It made neurons in males underactive, while making neurons in females overactive. The omega-3 diet successfully corrected this imbalance in both sexes, returning neuronal activity to normal levels. The communication patterns between brain regions, which rely on coordinated rhythmic electrical waves, were also disrupted by THC. Again, the omega-3 diet helped normalize these communication rhythms more effectively in males than in females.
The deepest level of analysis looked at the molecular makeup of the brain, focusing on the lipids and proteins that are the building blocks of brain function. The results here were particularly revealing. THC exposure caused widespread disruptions in the balance of fatty acids and other lipid molecules in all three brain regions studied.
Even in the males whose behavior and brain activity appeared to be normalized by the omega-3 diet, these fundamental lipid imbalances persisted into adulthood. This suggests that while the dietary intervention could prevent outward symptoms, it did not completely fix the underlying chemical disruption caused by THC.
“While our dietary intervention prevented some of the negative impacts of fetal cannabis exposure, it did not fully restore normal fatty acid levels in the brain,” Laviolette said. “Thus, further research is needed to determine the precise balance of omega-3 (e.g. DHA vs. EPA) in order to block these negative outcomes.”
“Our findings are not to suggest that adding omega-3 supplements during pregnancy can prevent the negative effects of maternal cannabis exposure. Cannabis use during pregnancy is always dangerous and can have unintended negative effects on the developing child’s brain.”
Similarly, THC altered the levels of important proteins involved in brain cell communication and structure. The omega-3 diet helped correct many of these protein changes in males, but the effects were far less consistent in females. The findings collectively point to a scenario where the omega-3 diet provides a substantial buffering effect against THC-induced damage in the male brain, but the female brain seems to respond very differently to both the initial THC exposure and the dietary intervention.
“The major finding is that we found that exposure to THC during fetal brain development can strongly disrupt the normal balance between the omega-3 and omega-6 fatty acid pathways in the developing brain,” Laviolette explained. “These pathways need to be balanced in order to control processes like inflammation and oxidative stress, which are linked to increased risk for many cognitive and psychiatric problems in children.”
“We found that if we intervened with a high omega-3 dietary intervention during pregnancy, we were able to prevent many of the negative outcomes from maternal cannabis use. Importantly, this is not to suggest that taking omega-3 along with cannabis is a safer option; rather, it demonstrates that cannabis can strongly interfere with the developing brain’s normal balance of the omega-3-6 signaling pathways and that it would be necessary to restore healthy omega-3 fatty acid levels to block some of these dangerous side-effects of maternal cannabis use.”
“We also found that maternal cannabis use impacts three major brain areas, the hippocampus, prefrontal cortex and striatum, all of which had disruptions in normal fatty acid signaling levels and male and female offspring showed cognitive deficits in later life that were associated with pathology in these brain areas,” Laviolette said.
The study has some limitations. The research was conducted in rats, and while these models are informative for understanding basic neurobiology, the findings do not automatically translate to humans. The specific mechanisms, such as the diet’s effect on inflammation in the placenta, were not directly measured and require more investigation.
Looking ahead, researchers plan to further explore the biological reasons for the profound differences between male and female responses. They also hope to investigate whether providing omega-3 supplementation later in life, such as during childhood or adolescence, could help reverse or prevent problems that emerge long after birth.
The study, “Perinatal omega-3 sex-selectively mitigates neuropsychiatric impacts of prenatal THC in the cortico-striatal-hippocampal circuit,” was authored by Mohammed H. Sarikahya, Samantha L. Cousineau, Marta De Felice, Hanna J. Szkudlarek, Kendrick Lee, Aleksandra Doktor, Amanda Alcaide, Marieka V. DeVuono, Anubha Dembla, Karen Wong, Mathanke Balarajah, Sebastian Vanin, Miray Youssef, Kuralay Zhaksylyk, Madeline Machado, Haseeb Mahmood, Susanne Schmid, Ken K.-C. Yeung, Daniel B. Hardy, Walter Rushlow & Steven R. Laviolette.
A study published in the Journal of Marriage and Family examined the connection between fathers taking paternity leave and the developmental progress of their young children in Singapore. The researchers found that when fathers took two weeks or more of paternity leave, it was associated with increased involvement in childcare, stronger father-child bonds, and improved family dynamics. These factors, in turn, were linked to better academic performance and fewer behavioral challenges in children as they grew from preschool into early primary school.
Previous research, mostly from Western countries, has found that paternity leave is connected to fathers being more involved in childcare and to stronger family ties. However, there was less understanding of how this policy directly influenced the development of young children, especially over a longer period. This gap in knowledge was particularly notable in Asian societies, where paternity leave policies are often newer and offer shorter durations compared to European nations.
In Asia, many regions have only recently introduced paternity leave policies, or do not have them at all, and the leave available to fathers is generally short. Some countries offer only a few days, though others, like South Korea and Japan, have expanded leave to up to a year.
“Many Asian societies, including Singapore, are facing the challenges of raising fertility rates and the related issues of gender inequality within the family. Some western governments (especially Nordic countries) had introduced longer parental leave to alleviate parents’ work-life conflict and encourage fathers’ participation in childcare decades ago,” said study author Wei-Jun Jean Yeung, a professor and chair of the Family, Children, and Youth Research Cluster at the National University of Singapore.
“In Asian countries, while maternity leave has been widely provided, paternity leave is either relatively short compared to Nordic countries, or non-existing. We believe paternity leave is very important because it helps fathers build stronger bonds with their children and improve couples’ relationships, which could indirectly reduce gender inequality and potentially affect couples’ intention to have a child.”
“However, no study has comprehensively examined how paternity leave affects family relationships and early childhood development. This gap led us to start our research on the topic. This paper is our second study, following our first one published in 2022. We believe the results will be useful for Singapore and other Asian countries, particularly East Asian countries such as South Korea, Japan, and China, which also shares more prevalent patriarchal norms and ‘ultra-low’ fertility levels.”
The research was guided by two main theoretical perspectives: family systems theory and social capital theory. Family systems theory suggests that a family operates as a connected unit, where the actions and experiences of one member, such as a father’s involvement in childcare, can influence other parts of the family, including children’s development and the relationships between parents.
Social capital theory posits that strong relationships and bonds within a family, such as those between parents and children, contribute positively to a child’s development. Paternity leave is seen as a way to enhance this family social capital by giving fathers time to become more competent and involved caregivers.
The researchers analyzed data from the Singapore Longitudinal Early Development Study (SG-LEADS), which collected information from a large, representative sample of Singaporean children and their primary caregivers in two waves: 2018/2019 and 2021. The study focused on children who were born after May 1, 2013, which is when Singapore’s paternity leave policy began.
The final sample included 3,895 children who lived with two parents and whose primary caregiver was their mother. For analyses focusing on developmental outcomes, the sample was further narrowed to children aged three and above who had reported data on both behavioral problems and academic achievements in both waves.
To measure children’s development, the study used the Children’s Behavior Problems Index (BPI) for children aged three and above, which assesses externalizing behaviors like aggression and internalizing behaviors like anxiety. Academic achievements were measured using test scores for letter-word identification and applied problems from the Woodcock-Johnson Test of Achievement. The key independent variable was paternity leave-taking, categorized based on whether fathers took no leave, one week of leave, or two weeks or more of leave, as reported by the mothers.
The researchers also examined several factors as potential intermediaries. Fathers’ involvement was measured by mothers’ reports of how much fathers participated in childcare activities like bathing, changing diapers, and playing. Father-child closeness was assessed by mothers’ statements about how close their child felt to their father. Family dynamics was a broader concept encompassing family conflict, marital satisfaction, and parenting aggravation, all reported by mothers.
The results showed that taking two weeks or more of paternity leave was associated with higher scores in children’s letter-word identification when they were three to six years old, and again when they were five to eight years old. This suggests a direct and lasting benefit for verbal skills.
For children’s applied problems, which measure numeracy skills, taking two weeks or more of leave was positively related to scores when children were three to six years old. Taking one week of leave was linked to better applied problems scores when children were five to eight years old, after accounting for earlier scores. This indicates some direct benefits for numerical abilities as well.
The researchers also found positive connections between paternity leave and the intermediary factors. Specifically, taking two weeks or more of paternity leave was linked to greater fathers’ involvement in childcare activities, stronger father-child closeness, and more positive family dynamics.
Fathers’ involvement, in turn, was positively related to father-child closeness, and both of these were associated with better family dynamics. While fathers’ involvement and father-child closeness did not directly influence children’s verbal academic scores, father-child closeness was directly related to children’s applied problems scores when they were three to six years old.
For children’s behavioral outcomes, paternity leave did not have a direct effect. Instead, its impact was entirely indirect. Taking two weeks or more of paternity leave was associated with fewer behavioral problems in children when they were three to six years old, and also later when they were five to eight years old, primarily through improved family dynamics. This suggests that paternity leave helps reduce children’s behavioral challenges by fostering a more supportive and cohesive family environment.
“Paternity leave is good for family relations and for children’s development,” Yeung told PsyPost. “It has the potential to improve spousal relations and parent-child relation. Our results show that 2 weeks or longer paternity leave was linked to greater fathers’ involvement in childcare, closer father-child relationships, and enhanced family dynamics (i.e., family members have fewer conflicts, mothers have higher marital satisfaction and feel less stressed about raising children). It can also have long-term benefits for children’s cognitive development and social-emotional well-being during early childhood.
“However, paternity leave should be at least two weeks or longer. We found one-week paternity leave does not have a positive impact on family dynamics and child development. It is possible that one week is too short for fathers to build a routine, learn the many new skills needed to care for a baby, and figure out how to work together with the mother. Two weeks gives fathers and mothers more time to adjust emotionally and practically, and to enjoy time with their new baby.”
“We should encourage countries to provide government-subsidized paternity leave that is at least two weeks long, and enable fathers to take paternity leave, because of its potential benefits to family and child well-being.”
The researchers controlled for a range of other influences, such as parents’ education, income, age, children’s age and gender, and household living arrangements, including the presence of domestic helpers or grandparents.
“A common misinterpretation of the results is that fathers who are more likely to take paternity leave are of higher socioeconomic status (SES), and it is the higher SES that makes their children do better cognitively and behaviorally,” Yeung said. “In our study, we have used rigorous methodology to address this selectivity issue, including using data from a nationally representative longitudinal study and taking into account a large number of parents’ and family characteristics to ‘isolate’ the net impact of paternity leave taking on children’s developmental outcomes.”
But there are still some limitations to consider. The study did not have information on fathers’ gender attitudes or their involvement before the child’s birth, which could influence their decision to take leave and their subsequent parenting behaviors. The measures for fathers’ involvement and family relationships were based on mothers’ reports, which might introduce some bias.
Future research could benefit from including perspectives from both parents. The measure of fathers’ involvement could also be expanded to include engagement in children’s educational and social activities more broadly. The researchers also acknowledge that, while they used robust methods to account for pre-existing differences between fathers who took leave and those who did not, the study cannot definitively prove a causal link, as unmeasured factors could still play a role.
A new study has found that young adults who exhibit higher levels of manipulative, self-centered, and callous personality traits tend to report having lower-quality family interactions. The research, published in the Journal of Professional & Applied Psychology, suggests a distinct connection between these so-called “Dark Triad” traits and the health of family dynamics.
Researchers have long been interested in how personality develops, often focusing on widely recognized models of personality. Recently, attention has shifted toward understanding the less socially desirable aspects of human nature, collectively known as the Dark Triad, which includes Machiavellianism, narcissism, and psychopathy. These traits are associated with behaviors that can strain social bonds, yet their specific impact within the family unit has been a less explored area.
The study’s authors wanted to examine this connection in a specific cultural and demographic context. They focused on young adults in Pakistan, a country where a large portion of the population falls within the 18 to 25 age range. This period is a formative time when an individual’s personality and perspective are still evolving, heavily influenced by their immediate environment, especially the family. By investigating this group, the researchers aimed to add a non-Western perspective to a field of study that has predominantly been centered on European and North American populations.
“The motivation for this study stemmed from the fact that this area remains largely understudied in Pakistan, leaving a significant research gap,” said study author Quratul Ain Arshad, who is currently a Bachelor of Laws student at the University of London.
“This topic represents a real-world issue that has not received the attention it deserves. I have personally observed several families affected by these dark traits, struggling to cope due to a lack of awareness and understanding. Through this research, I aimed to shed light on this issue so that individuals can better recognize what is happening to them and those around them and seek the help and guidance they need.”
To conduct their investigation, the researchers recruited a sample of 300 young adults between the ages of 18 and 25 from various universities and corporate offices in Lahore, Pakistan. Participation was voluntary, and the confidentiality of the responses was protected. Each participant completed two self-report questionnaires designed to measure different psychological constructs.
The first questionnaire was the Short Dark Triad scale, which assesses the three core traits. Machiavellianism is characterized by a manipulative and cynical worldview, narcissism involves a sense of grandiosity and entitlement, and psychopathy is marked by impulsivity and a lack of empathy. The second questionnaire was a modified version of the Family Assessment Device, which measures the quality of family interactions across several dimensions. These dimensions include problem solving, communication, assigned roles, emotional responsiveness, emotional involvement, and behavior control.
After collecting the data, the research team performed a statistical analysis to determine if there was a relationship between the scores for Dark Triad traits and the scores for family functioning. This type of analysis reveals whether two variables tend to move together, either in the same direction or in opposite directions. The study specifically tested four hypotheses about these potential connections.
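As a concrete picture of this kind of analysis, the sketch below computes a Pearson correlation between two sets of questionnaire scores. The data are simulated stand-ins with an assumed negative relationship, not the study's responses.

```python
# Hedged illustration of a simple correlational analysis: do two sets of
# questionnaire scores move together? Simulated data, not the study's.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)
n = 300  # matches the study's sample size

dark_triad = rng.normal(50, 10, size=n)
# Simulate family functioning that tends to fall as Dark Triad scores rise.
family_functioning = 100 - 0.4 * dark_triad + rng.normal(0, 8, size=n)

r, p = pearsonr(dark_triad, family_functioning)
# A negative r means the variables move in opposite directions.
print(f"r = {r:.2f}, p = {p:.4g}")
```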
The primary finding confirmed the researchers’ main prediction. There was a clear negative relationship between overall scores on the Dark Triad scale and the overall quality of family interaction. This indicates that as an individual’s levels of Machiavellianism, narcissism, and psychopathy increased, their reported level of healthy family functioning tended to decrease. This suggests that these aversive personality traits are indeed connected to difficulties within the family environment.
When the researchers examined the traits individually, the results were more nuanced. The connection between Machiavellianism and a family’s general functioning was found to be very weak and not statistically meaningful. This suggests that a person’s tendency toward manipulation may not have a direct, measurable link to their perception of the family’s overall effectiveness.
A different pattern emerged for psychopathy. This trait was found to have a modest but statistically significant negative relationship with what is known as “affective responsiveness,” which is a family’s capacity to respond to situations with appropriate emotions. In simple terms, young adults with higher psychopathy scores were more likely to come from families they perceived as being less emotionally attuned.
The final hypothesis looked at the link between narcissism and “affective involvement,” which refers to the extent to which family members show interest and care for one another. Much like the finding for Machiavellianism, this connection was also very weak and not considered statistically significant. This outcome suggests that a person’s level of narcissism may not be directly tied to the degree of emotional investment they perceive within their family.
“The key takeaway from this study is the importance of self-awareness,” Arshad told PsyPost. “Every individual should strive to understand their own personality traits and reflect on their behaviors. By doing so, they can not only improve themselves but also better support those around them who may exhibit these traits.”
The study did have some limitations. The findings are based on self-report questionnaires, which means participants’ responses could have been influenced by a desire to present themselves or their families in a positive light. The sample was also drawn exclusively from one city in Pakistan and was limited to young adults, which means the results might not be generalizable to other age groups or cultures.
For future research, the authors suggest that longitudinal studies, which follow individuals over a long period, could provide deeper insight into how Dark Triad traits and family dynamics influence each other over time. Using multiple methods of assessment, beyond just self-reports, could also help create a more complete picture of these complex interactions. Such work could help in designing interventions aimed at improving family relationships and promoting healthier personality development.
“The size of the sample used in this study is not big enough to represent the total young adult population in Pakistan, but this study is significant in understanding how these traits shape interactions on a microlevel,” Arshad said. “The effect of this study is such that it will help researchers dig towards the developmental aspects of these traits and also conduct longitudinal studies in future to understand the implications of the Dark Triad traits in both older and younger populations than young adults.”
A new study has found that a student team’s collective emotional intelligence is a significant predictor of its success in collaborative problem-solving. Specifically, the abilities to understand and manage emotions were linked to both better teamwork processes and a higher quality final product. The findings, which also examined the role of personality, were published in the Journal of Intelligence.
While individual intelligence and personality traits like conscientiousness are known to predict individual success, much less is understood about what drives performance when students are required to work together in teams. This form of learning, known as collaborative problem solving, is increasingly common in modern education, prompting a need to identify the skills and dispositions that help groups succeed.
The study’s authors aimed to investigate how two sets of characteristics, emotional intelligence and the Big Five personality traits, might influence the performance of high school students working in small groups.
“This study was actually part of a larger project, called PEERSolvers, in which we were looking for scientifically supported ways to enhance the quality of students’ collaborative problem solving,” said study author Ana Altaras, a full professor in the Department of Psychology at the University of Belgrade.
“This naturally led us to explore the role played by emotional intelligence and personality in student collaborations. Having previously conducted two systematic reviews (Altaras et al., 2025; Jolić Marjanović et al., 2024), we knew that both emotional intelligence and the Big Five personality traits indeed act as ‘deep-level composition variables’ shaping the processes and outcomes of teamwork in higher-education and professional contexts.”
“We also knew that both variable sets contribute to the prediction of individual students’ school performance. However, we also saw an obvious research gap when it comes to exploring their joint effects on the performance of student teams in high school. Hence, we digged into this topic.”
The researchers recruited 162 tenth-grade students from twelve secondary schools. The students first completed assessments to measure their emotional intelligence and personality. Emotional intelligence was evaluated using the Mayer-Salovey-Caruso Emotional Intelligence Test, a performance-based test that measures a person’s actual ability to perceive, use, understand, and manage emotions. Personality was assessed with the Big Five Inventory, a questionnaire that measures neuroticism, extraversion, openness, agreeableness, and conscientiousness.
Following the initial assessments, the students were organized into 54 teams of three. Each team was then tasked with solving a complex social problem over a 2.5-hour session. The problems were open-ended and required creative thinking, covering topics such as regulating adolescent media use or balancing economic development with ecological protection. The entire collaborative session for each team was video-recorded, and each team submitted a final written solution.
Trained observers analyzed the video recordings to rate the quality of each team’s collaborative processes. They assessed four distinct aspects of teamwork: the exchange of ideas and information, the emotional atmosphere and level of respect, how the team managed its tasks and time, and how it managed interpersonal relationships and conflicts. In a separate analysis, a different set of evaluators rated the quality of the team’s final written solution based on criteria like realism, creativity, and the strength of its arguments.
The researchers found that emotional intelligence was a strong predictor of team performance. Teams with higher average scores in understanding and managing emotions showed superior teamwork processes. This improvement in collaboration, in turn, was associated with producing a better final solution. The ability to understand emotions also appeared to have a direct positive effect on the quality of the written solution. This suggests that knowledge about human emotions was directly applicable to solving the complex social problems presented in the task.
“Looking at the results of our study, emotional intelligence–particularly its ‘strategic branches’ or the ability to understand and manage emotions–had a lot to do with students’ performance in collaborative problem solving,” Altaras told PsyPost. “Student teams with higher team-average emotional intelligence engaged in a more constructive exchange of ideas, had a friendlier way of communicating, and were more efficient in managing both task and relationship-related challenges throughout the problem-solving process. Ultimately, these teams also came up with better solutions to the problems at hand. In sum, students’ emotional intelligence seems to contribute substantially to the quality of their collaborative problem solving.”
The role of personality traits was more nuanced and produced some unexpected results. As expected, the personality trait of openness to experience was positively associated with the quality of the final solution. This connection is likely due to the creative and open-ended nature of the problem-solving task.
But teams with a higher average level of neuroticism, a trait associated with anxiety and stress, were actually better at managing their tasks. The researchers propose that a tendency toward distress may have prompted these teams to plan their approach more diligently. In contrast, teams with higher average extraversion were less effective at relationship management, perhaps because they were less inclined to formally address group tensions.
“Contrary to our expectations, we found only few statistically significant associations between the Big Five personality traits and the quality of students’ collaboration,” Altaras said. “Moreover, the effects that did surface as significant–a positive effect of neuroticism on task management and a negative effect of extraversion on relationship management–seem counterintuitive in terms of their direction.”
When the researchers examined emotional intelligence and personality together in a combined model, emotional intelligence emerged as the more consistent and powerful predictor of overall performance. The contribution of personality was largely limited to the link between neuroticism and task management, suggesting emotional skills were more influential in this context.
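One way to read "examined together in a combined model" is as a hierarchical regression: fit team performance on personality alone, then add emotional intelligence and check the gain in explained variance. The sketch below illustrates that logic on simulated team-level data; the variable names and effect sizes are assumptions, not the paper's estimates.

```python
# Sketch of a combined-model comparison on simulated team-level data:
# does emotional intelligence add explanatory power beyond personality?
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 54  # number of teams in the study

neuroticism = rng.normal(size=n)
extraversion = rng.normal(size=n)
emotional_intelligence = rng.normal(size=n)
# Simulated performance driven mostly by emotional intelligence.
performance = (0.6 * emotional_intelligence + 0.2 * neuroticism
               + rng.normal(scale=0.8, size=n))

X_personality = sm.add_constant(np.column_stack([neuroticism, extraversion]))
X_combined = sm.add_constant(
    np.column_stack([neuroticism, extraversion, emotional_intelligence]))

r2_base = sm.OLS(performance, X_personality).fit().rsquared
r2_full = sm.OLS(performance, X_combined).fit().rsquared
print(f"R² personality only: {r2_base:.2f}")
print(f"R² after adding emotional intelligence: {r2_full:.2f}")
```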
As with all research, the study does have some limitations. The sample size was relatively small due to the intensive nature of analyzing hours of video footage. The teams were also composed of students of the same gender, which might not fully represent the dynamics of mixed-gender groups common in schools. Additionally, the study did not measure the students’ general academic intelligence, which could also be a factor in their performance.
“In our defense, emotional intelligence has already been shown to have incremental predictive value in so many instances–including the prediction of students’ individual school performance–that we would not expect it to lose much of its predictive weight when analyzed concurrently with academic abilities,” Altaras noted. “Still, the picture would be more complete had we been able to also test participants’ academic intelligence and include this variable as another potential predictor of their performance in collaborative problem solving.”
For future research, the authors suggest exploring these dynamics in larger and more diverse student groups. It would also be informative to see if these findings hold when teams are faced with different kinds of problems, such as those that are less social and more technical in nature. Examining these factors could provide a more complete picture of the interplay between ability, personality, and group success in educational settings.
“Within the PEERSolvers project, we have already developed a training that targets, among other things, students’ emotional intelligence abilities and knowledge of personality differences, hoping to enhance the quality of their collaborative problem solving in this manner,” Altaras said. “In an experimental study, the training was shown to make a difference–i.e., to have a positive effect on students’ performance in collaborative problem solving (Krstić et al., 2025)–and we are now looking forward to having it more widely implemented in schools. When it comes to further research, we will certainly continue to explore the role of emotional intelligence abilities in the educational context, considering the performance and well-being of both students and teachers.”
New research suggests that the desire for a psychologically rich life, one filled with varied and perspective-altering experiences, is a significant driver behind why people choose activities that are intentionally unpleasant or challenging. The series of studies, published in the journal Psychology & Marketing, indicates that this preference is largely fueled by a motivation for personal growth.
Researchers have long been interested in why people sometimes opt for experiences that are not traditionally pleasurable, such as watching horror movies, eating intensely sour foods, or enduring grueling physical challenges. This behavior, known as counterhedonic consumption, seems to contradict the basic human drive to seek pleasure and avoid pain. While previous explanations have pointed to factors like sensation-seeking or a desire to accumulate a diverse set of life experiences, researchers proposed a new motivational framework to explain this phenomenon.
They theorized that some individuals are driven by a search for psychological richness, a dimension of well-being distinct from happiness or a sense of meaning. A psychologically rich life is characterized by novelty, complexity, and experiences that shift one’s perspective. The researchers hypothesized that this drive could lead people to embrace discomfort, not for the discomfort itself, but for the personal transformation and growth such experiences might offer.
To investigate this idea, the researchers conducted a series of ten studies involving a total of 2,275 participants. In an initial study, participants were presented with a poster for a haunted house pass and asked how likely they would be to try it. They also completed questionnaires measuring their desire for a psychologically rich life, as well as their desire for a happy or meaningful life and their tendency toward sensation-seeking.
The results showed a positive relationship between the search for psychological richness and a preference for the haunted house experience. This connection remained even when accounting for the other factors.
To see if this finding extended beyond fear-based activities, a subsequent study presented participants with a detailed description of an intensely sour chicken dish. Again, individuals who scored higher on the scale for psychological richness expressed a greater likelihood of ordering the dish.
A third study solidified these findings in a choice-based scenario, asking participants to select between a “blissful garden” experience and a “dark maze” designed to be disorienting. Those with a stronger desire for psychological richness were more likely to choose the dark maze, a finding that held even after controlling for general risk-taking tendencies.
Having established a consistent link, the research team sought to determine causality. In another experiment, they temporarily prompted one group of participants to focus on psychological richness by having them write about what it means to make choices based on a desire for interesting and perspective-changing outcomes. A control group wrote about their daily life. Afterward, both groups were asked about their interest in a horror movie streaming service.
The group primed to think about psychological richness showed a significantly higher preference for the service, suggesting that this mindset can directly cause an increased interest in counterhedonic experiences.
The next step was to understand the psychological process behind this link. The researchers proposed that a focus on self-growth was the key mechanism. One study tested this by again presenting the sour food scenario and then asking participants to what extent their choice was motivated by a desire for self-discovery and personal development. A statistical analysis revealed that the desire for self-growth fully explained the connection between a search for psychological richness and the preference for the sour dish.
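The logic of that mediation analysis can be sketched with a few regressions: the effect of richness on preference (the "total" effect) should shrink toward zero once self-growth is added as a predictor, with the indirect path carrying the relationship. The code below demonstrates this on simulated data; the variable names and effect sizes are assumptions, not the study's estimates.

```python
# Minimal mediation sketch on simulated data: the total effect of richness on
# preference should be carried by the indirect path through self-growth.
import numpy as np

rng = np.random.default_rng(4)
n = 250

richness = rng.normal(size=n)
self_growth = 0.6 * richness + rng.normal(scale=0.8, size=n)    # "a" path
preference = 0.5 * self_growth + rng.normal(scale=0.8, size=n)  # "b" path only

def slope(x, y):
    """OLS slope of y on x (single predictor)."""
    return np.cov(x, y)[0, 1] / np.var(x, ddof=1)

def partial_slopes(x, m, y):
    """Coefficients of x and m from a two-predictor OLS fit of y."""
    X = np.column_stack([np.ones_like(x), x, m])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coefs[1], coefs[2]

total = slope(richness, preference)
a = slope(richness, self_growth)
direct, b = partial_slopes(richness, self_growth, preference)
# "Full mediation": direct is near 0 while the indirect effect a*b
# accounts for nearly all of the total effect.
print(f"total = {total:.2f}, direct = {direct:.2f}, indirect (a*b) = {a * b:.2f}")
```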
To ensure self-growth was the primary driver, another study tested it against an alternative explanation: the desire to create profound memories. While a rich life might involve creating interesting stories to tell, the results showed that self-growth was the significant factor explaining the choice for the sour dish, whereas the desire for profound memories was not.
Further strengthening the causal claim, another experiment first manipulated participants’ focus on psychological richness and then measured their self-growth motivation. The results showed that the manipulation increased a focus on self-growth, which in turn increased the preference for the counterhedonic food item.
A final, more nuanced experiment provided further support for the self-growth mechanism. In this study, the researchers manipulated self-growth motivation directly. One group was asked to write about making choices that foster personal growth, while a control group was not. In the control condition, the expected pattern emerged: people higher in the search for psychological richness were more interested in the sour dish.
However, in the group where self-growth was made salient, preferences for the sour dish increased across the board. This effectively reduced the predictive power of a person’s baseline level of psychological richness, indicating that when the need for self-growth is met, the underlying trait becomes less of a deciding factor.
The research has some limitations. Many of the studies relied on hypothetical scenarios and self-reported preferences, which may not perfectly reflect real-world consumer behavior. The researchers suggest that future work could use field experiments to observe actual choices in natural settings. They also note that cultural differences could play a role, as some cultures may place a higher value on experiences of discomfort as a pathway to wisdom or personal development. Exploring these boundary conditions could provide a more complete picture of this motivational system.
A new study finds that single adults in both the United States and Japan report lower well-being than their married peers. The research suggests that the influence of family support and strain on this health and satisfaction gap differs significantly between the two cultures. The findings were published in the journal Personal Relationships.
Researchers conducted this study to better understand the experiences of single adults outside of Western contexts. Much of the existing research has focused on places like the United States, where singlehood is becoming more common and accepted. In these individualistic cultures, some studies suggest single people may even have stronger connections with family and friends than married individuals.
However, in many Asian cultures, including Japan, marriage is often seen as a more essential part of life and family. This can create a different set of social pressures for single people. The researchers wanted to investigate whether these cultural differences would alter how family relationships, both positive and negative, are connected to the well-being of single and married people in the U.S. and Japan.
“I’ve always been curious about relationship transitions and singlehood lies in this awkward space where people are unsure if it really counts as an actual ‘relationship stage’ per se,” said study author Lester Sim, an assistant professor of psychology at Singapore Management University.
“Fortunately, the field is starting to recognize singlehood as an important period and it’s becoming more common, yet people still seem to judge singles pretty harshly. I find that kind of funny in a way, because it often reflects how we judge ourselves through others. Coming from an Asian background, I also wondered if these attitudes toward singlehood might play out differently across cultures, especially since family ties are so central in Asian contexts. That curiosity really sparked this project.”
To explore this, the research team analyzed data from two large, nationally representative studies: the Midlife in the U.S. (MIDUS) study and the Midlife in Japan (MIDJA) study. The combined sample included 4,746 participants who were 30 years of age or older. The researchers focused specifically on individuals who identified as either “married” or “never married,” and they took additional steps to exclude participants who were in a cohabiting or romantic relationship despite being unmarried.
Participants in both studies answered questions at two different points in time. The first wave of data included their marital status, their perceptions of family support, and their experiences of family strain. Family support was measured with items asking how much they felt their family cared for them or how much they could open up to family about their worries. Family strain was assessed with questions about how often family members criticized them or let them down.
At the second wave of data collection, participants reported on their well-being. This included rating their overall physical health on a scale from 0 to 10 and their satisfaction with life through a series of six questions about different life domains. The researchers then used a statistical approach to see how marital status at the first time point was related to well-being at the second time point, and whether family support and strain helped explain that relationship.
Across the board, the results showed that single adults in both the United States and Japan reported poorer physical health and lower life satisfaction compared to their married counterparts. This finding aligns with a large body of previous research suggesting that marriage is generally associated with better health outcomes.
When the researchers examined the role of family dynamics, they found distinct patterns in each country. For American participants, being married was associated with receiving more family support and experiencing less family strain. Both of these family factors were, in turn, linked to higher well-being. This suggests that for Americans, the well-being advantage of being married is partially explained by having more supportive and less tense family relationships.
The pattern observed in the Japanese sample was quite different. Single Japanese adults did report experiencing more family strain than married Japanese adults. Yet, this higher level of family strain did not have a significant connection to their physical health or life satisfaction later on.
“Family relationships matter a lot for everyone, whether you’re single or married, but in different ways across cultures,” Sim told PsyPost. “We found that singles in both the US and Japan reported lower well-being, in part because they experienced more family strain and less support (differentially across cultures). So even though singlehood is becoming more common, it still carries social and emotional costs. I think this shows how important it is to build more inclusive environments where singles feel equally supported and valued.”
Another notable finding from the Japanese sample was that there was no significant difference in the amount of family support reported by single and married individuals. While family support did predict higher life satisfaction for Japanese participants, it did not serve as a pathway explaining the well-being gap between single and married people in the way it did for Americans.
“I honestly thought the patterns would differ more across cultures,” Sim said. “I expected singles in Western countries to feel more accepted, and singles in Asia to rely more on family support and report greater strain; but neither of the latter findings turned out to be the case. It seems that, across the board, social norms around marriage still shape how people experience singlehood and well-being.”
The researchers acknowledged some limitations of their work. The definition of “single” was based on available survey questions and could be refined in future studies with more direct inquiries about relationship status.
“We focused only on familial support and strain because family is such a big part of East Asian culture,” Sim noted. “But singlehood is complex: friendships, loneliness, voluntary versus involuntary singlehood, and how satisfied people feel being single all matter too. We didn’t examine these constructs in the current study because there is existing work on this topic, so I wanted to bring more focus onto the family (especially with the cross-cultural focus). Future work should dig into those other layers and examine how they interact to shape the singlehood experience.”
It would also be beneficial to explore these dynamics across different age groups, as the pressures and supports related to marital status may change over a person’s lifespan. Such work would help create a more comprehensive picture of how singlehood is experienced around the world.
“I want to keep exploring how culture shapes the meanings people attach to relationships and singlehood,” Sim explained. “Long term, I hope this work helps shift the narrative away from the idea that marriage is the default route to happiness, and shift toward recognizing that there are many valid ways to live a good life.”
“Being single isn’t a problem to be fixed. It’s a meaningful, often intentional part of many people’s lives. The more we understand that, the closer we get to supporting well-being for everyone, not just those who are married.”
A new clinical trial has found that adding repeated intravenous ketamine infusions to standard care for hospitalized patients with serious depression did not provide a significant additional benefit. The study, which compared ketamine to a psychoactive placebo, suggests that previous estimates of the drug’s effectiveness might have been influenced by patient and clinician expectations. These findings were published in the journal JAMA Psychiatry.
Ketamine, originally developed as an anesthetic, has gained attention over the past two decades for its ability to produce rapid antidepressant effects in individuals who have not responded to conventional treatments. Unlike standard antidepressants that can take weeks to work, a single infusion of ketamine can sometimes lift mood within hours. A significant drawback, however, is that these benefits are often short-lived, typically fading within a week.
This has led to the widespread practice of administering a series of infusions to sustain the positive effects. A central challenge in studying ketamine is its distinct psychological effects, such as feelings of dissociation or detachment from reality. When compared to an inactive placebo like a saline solution, it is very easy for participants and researchers to know who received the active drug, potentially creating strong expectancy effects that can inflate the perceived benefits.
To address this, the researchers designed their study to use an “active” placebo, a drug called midazolam, which is a sedative that produces noticeable effects of its own, making it a more rigorous comparison.
“Ketamine has attracted a lot of interest as a rapidly-acting antidepressant but it has short-lived effects. Therefore, its usefulness is quite limited. Despite this major limitation, ketamine is increasingly being adopted as an off-label treatment for depression, especially in the USA,” said study author Declan McLoughlin, a professor at Trinity College Dublin.
“We hypothesized that repeated ketamine infusions may have more sustained benefit. So far this has been evaluated in only a small number of trials. Another problem is that few ketamine trials have used an adequate control condition to mask the obvious dissociative effects of ketamine, e.g. altered consciousness and perceptions of oneself and one’s environment.”
“To try to address some of these issues, we conducted an independent investigator-led randomized trial (KARMA-Dep 2) to evaluate antidepressant efficacy, safety, cost-effectiveness, and quality of life during and after serial ketamine infusions when compared to a psychoactive comparison drug midazolam. Trial participants were randomized to receive up to eight infusions of either ketamine or midazolam, given over four weeks, in addition to all other aspects of usual inpatient care.”
The trial, conducted at an academic hospital in Dublin, Ireland, aimed to see if adding twice-weekly ketamine infusions to the usual comprehensive care provided to inpatients could improve depression outcomes. Researchers enrolled adults who had been voluntarily admitted to the hospital for moderate to severe depression. These participants were already receiving a range of treatments, including medication, various forms of therapy, and psychoeducation programs.
In this randomized, double-blind study, 65 participants were assigned to one of two groups. One group received intravenous ketamine infusions twice a week for up to four weeks, while the other group received intravenous midazolam on the same schedule. The doses were calculated based on body weight. The double-blind design meant that neither the patients, the clinicians rating their symptoms, nor the main investigators knew who was receiving which substance. Only the anesthesiologist administering the infusion knew the assignment, ensuring patient safety without influencing the results.
The primary measure of success was the change in participants’ depression scores, assessed using a standard clinical tool called the Montgomery-Åsberg Depression Rating Scale. This assessment was conducted at the beginning of the study and again 24 hours after the final infusion. The researchers also tracked other outcomes, such as self-reported symptoms, rates of response and remission, cognitive function, side effects, and overall quality of life.
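A standard way to analyze a two-arm trial like this is an ANCOVA-style regression: model the post-treatment depression score on treatment group while adjusting for the baseline score, so the group coefficient estimates the adjusted between-group difference. The sketch below applies that approach to simulated data roughly matching the trial's size; it is not the trial's actual statistical model.

```python
# Illustrative ANCOVA-style analysis on simulated data sized like the trial:
# post-treatment score regressed on group, adjusting for baseline.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n_per_arm = 31  # roughly 62 completers split across two arms

baseline = rng.normal(32, 5, size=2 * n_per_arm)   # MADRS at study entry
group = np.repeat([1.0, 0.0], n_per_arm)           # 1 = ketamine, 0 = midazolam
# Both arms improve substantially; ketamine only slightly more,
# consistent with a small, statistically uncertain difference.
post = baseline - 12 - 1.5 * group + rng.normal(0, 6, size=2 * n_per_arm)

X = sm.add_constant(np.column_stack([group, baseline]))
fit = sm.OLS(post, X).fit()
# The group coefficient is the baseline-adjusted difference between arms.
print(f"adjusted difference = {fit.params[1]:.2f}, p = {fit.pvalues[1]:.3f}")
```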
After analyzing the data from 62 participants who completed the treatment phase, the study found no statistically significant difference in the main outcome between the two groups. Although patients in both groups showed improvement in their depressive symptoms during their hospital stay, the group receiving ketamine did not fare significantly better than the group receiving midazolam. The average reduction in depression scores was only slightly larger in the ketamine group, a difference that was small and could have been due to chance.
Similarly, there were no significant advantages for ketamine on secondary measures, including self-reported depression symptoms, cognitive performance, or long-term quality of life. While the rate of remission from depression was slightly higher in the ketamine group (about 44 percent) compared to the midazolam group (30 percent), this difference was not statistically robust. The treatments were found to be generally safe, though ketamine produced more dissociative experiences during the infusion, while midazolam produced more sedation.
“We found no significant difference between the two groups on our primary outcome measure (i.e. depression severity assessed with the commonly used Montgomery-Åsberg Depression Rating Scale (MADRS)),” McLoughlin told PsyPost. “Nor did we find any difference between the two groups on any other secondary outcome or cost-effectiveness measure. Under rigorous clinical trial conditions, adjunctive ketamine provided no additional benefit to routine inpatient care during the initial treatment phase or the six-month follow-up period.”
A key finding emerged when the researchers checked how well the “blinding” had worked. They discovered that it was not very successful. From the very first infusion, the clinicians rating patient symptoms were able to guess with high accuracy who was receiving ketamine.
Patients in the ketamine group also became quite accurate at guessing their treatment over time. This functional unblinding complicates the interpretation of the results, as the small, nonsignificant trend favoring ketamine could be explained by the psychological effect of knowing one is receiving a treatment with a powerful reputation.
“Our initial hypothesis was that repeated ketamine infusions for people hospitalised with depression would improve mood outcomes,” McLoughlin said. “However, contrary to our hypothesis, we found this not to be the case. We suspect that functional unblinding (due to its obvious dissociative effects) has amplified the placebo effects of ketamine in previous trials. This is a major, often unacknowledged, problem with many recent trials in psychiatry evaluating ketamine, psychedelic, and brain stimulation therapies. Our trial highlights the importance of reporting the success, or lack thereof, of blinding in clinical trials.”
The study’s authors acknowledged some limitations. The research was unable to recruit its planned number of participants, partly due to logistical challenges created by the COVID-19 pandemic. This smaller sample size reduced the study’s statistical power, making it harder to detect a real, but modest, difference between the treatments if one existed. The primary limitation, however, remains the challenge of blinding.
The results from this trial suggest that when tested under more rigorous conditions, the antidepressant benefit of repeated ketamine infusions may be smaller than suggested by earlier studies that used inactive placebos. The researchers propose that expectations for both patients and clinicians may play a substantial role in ketamine’s perceived effects. This highlights the need to recalibrate expectations for ketamine in clinical practice and for more robustly designed trials in psychiatry.
Looking forward, the researchers emphasize the importance of reporting negative or null trial results to provide a balanced view of a treatment’s capabilities. They also expressed concern about a separate trend in the field: the promotion of ketamine as an equally effective alternative to electroconvulsive therapy, or ECT.
“Scrutiny of the scientific literature shows that this includes methodologically flawed trials and invalid meta-analyses,” McLoughlin said. “We discuss this in some detail in a Comment piece just published in Lancet Psychiatry. Unfortunately, such errors have been accepted as scientific evidence and are already creeping into international clinical guidelines. There is thus a real risk of patients and clinicians being steered towards a less effective treatment, particularly for patients with severe, sometimes life-threatening, depression.”
A new study indicates that Donald Trump’s frequent shrugging is a deliberate communication tool used to establish common ground with his audience and express negative evaluations of his opponents and their policies. The research, published in the journal Visual Communication, suggests these gestures are a key component of his populist performance style, helping him appear both ordinary and larger-than-life.
Researchers have become increasingly interested in the communication style of right-wing populism, which extends beyond spoken words to include physical performance. While a significant amount of analysis has focused on Donald Trump’s language, particularly on social media platforms, his live performances at rallies have received less systematic attention. The body is widely recognized as being important to political performance, but the specific gestures used are not always well understood.
This new research on shrugging builds on a previous study by one of the authors that examined Trump’s use of pointing gestures. That analysis found that Trump uses different kinds of points to serve distinct functions, such as pointing outwards to single out opponents, pointing inwards to emphasize his personal commitment, and pointing downwards to connect his message to the immediate location of his audience. The current study continues this investigation into his non-verbal communication by focusing on another of his signature moves, the shrug.
Study author Hart outlined three motivations for the research. “(1) Political scientists frequently refer to the more animated bodily performance of right-wing populist politicians like Trump compared to non-populist leaders. We wanted to study one gesture – the shrug – that seemed to be implicated here. (2) Trump’s shrug gestures have been noted by the media previously and described as his ‘signature move’. We wanted to study this gesture in more detail to examine its precise forms and the way he uses it to fulfil rhetorical goals.”
“(3) To meet a gap: while a great deal has been written about Donald Trump’s speech and his use of language online, much less has been written about the gestures that accompany his speech in live settings. This is despite the known importance of gesture in political communication.”
To conduct their analysis, the researchers examined video footage of two of Trump’s campaign rallies from the 2016 primary season. The events, one in Dayton, Ohio, and the other in Buffalo, New York, amounted to approximately 110 minutes of data. The researchers adopted a conservative approach, identifying 187 clear instances of shrugging gestures across the two events.
Each shrug was coded based on its physical form and its communicative function. For the form, they classified shrugs based on the orientation of the forearms and the position of the hands relative to the body. They also noted whether the shrug was performed with one or two hands and whether it was a simple gesture or a more complex, animated movement. To understand the function, they analyzed the spoken words accompanying each shrug to determine the meaning being conveyed.
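For illustration, a single coded shrug might be represented as a record like the sketch below. The field names and values are hypothetical stand-ins for the dimensions described above, not taken from the paper’s actual codebook.

```python
# A hypothetical coding record for one shrug, mirroring the dimensions the
# researchers describe (form, hands, complexity, function). Field names and
# values are illustrative, not the paper's codebook.
shrug = {
    "event": "Dayton rally",       # which of the two rallies
    "form": "expansive",           # hands held outside shoulder width
    "hands": 2,                    # one- or two-handed
    "complex": True,               # animated, oscillating movement
    "function": "common ground",   # inferred from the accompanying speech
}
print(shrug)
```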
Hart was surprised by “just how often Trump shrugs – 1.7 times per minute in the campaign rallies analyzed. Trump is a prolific shrugger and this is one way his communication style breaks with traditional forms of political communication.” That rate squares with the 187 shrugs identified across roughly 110 minutes of footage.
The analysis of the physical forms of the shrugs provided evidence for what has been described as a strong “corporeal presence.” Trump tended to favor expansive shrugs, with his hands positioned outside his shoulder width, a form that physically occupies more space.
The second most frequent type was the “lateral” shrug, where his arms extend out to his sides, sometimes in a highly theatrical, showman-like manner. This use of large, exaggerated gestures appears to contribute to a performance style more commonly associated with live entertainment than with traditional politics.
The researchers also noted that nearly a third of his shrugs were complex, meaning they involved animated, oscillating movements. These gestures create a dynamic and sometimes caricatured performance. While these expansive and animated shrugs help create an extraordinary, entertaining persona, the very act of shrugging is an informal, everyday gesture. This combination seems to allow Trump to simultaneously signal both his ordinariness and his exceptionalism.
When examining the functions of the shrugs, the researchers found that the most common meaning was not what many people might expect. While shrugs are often associated with expressing ignorance (“I don’t know”) or indifference (“I don’t care”), these were not their primary uses in Trump’s speeches. Instead, the most frequent function, accounting for over 44 percent of instances, was to signal common ground or obviousness. Trump often uses a shrug to present a statement as a self-evident truth that he and his audience already share.
For example, he would shrug when asking rhetorical questions like “We love our police. Do we love our police?” The gesture suggests the answer is obvious and that everyone in the room is in agreement. He also used these shrugs to present his own political skills as a given fact or to frame the shortcomings of his opponents as plainly evident to all. This use of shrugging appears to be a powerful tool for building a sense of shared knowledge and values with his supporters.
“Most people think of shrugs as conveying ‘I don’t know’ or ‘I don’t care,’” Hart told PsyPost. “While Trump uses shrugs to convey these meanings, more often he uses shrugs to indicate that something is known to everyone or obviously the case. This is one of the ways he establishes common ground and aligns himself with his audience, indicating that he and they hold a shared worldview.”
The second most common function was to express what the researchers term “affective distance.” This involves conveying negative emotions like disapproval, dissatisfaction, or dismay towards a particular state of affairs. When discussing trade deals he considered terrible or military situations he found lacking, a shrug would often accompany his words. In these cases, the gesture itself, rather than the explicit language, carried the negative emotional evaluation of the topic.
Shrugs that conveyed “epistemic distance,” meaning ignorance, doubt, or disbelief, accounted for about 17 percent of the total. A notable use of this function occurred during what is known as “constructed dialogue,” where Trump would re-enact conversations. In one instance, he used a mocking shrug while impersonating a political opponent to portray them as clueless and incompetent, a performance that drew laughter from the crowd.
The least common function was indifference, or the classic “I don’t care” meaning. Though infrequent, these shrugs served a strategic purpose. When shrugging alongside a phrase like “I understand that it might not be presidential. Who cares?” Trump used the gesture to dismiss the conventions of traditional politics. This helps him position himself as an outsider who is not bound by the same rules as the political establishment.
The findings highlight that “what politicians do with their hands and other body parts is an important part of their message and their brand,” Hart told PsyPost. However, he emphasizes that “gestures are not ‘body language.’ They do not accidentally give away one’s emotional state. Gestures are built into the language system and are part of the way we communicate. They carry part of the information speakers intend to convey and that information forms part of the message audiences take away.”
The study does have some limitations. Its analysis is focused exclusively on Donald Trump, so it remains unclear whether this pattern of shrugging is unique to his style or a broader feature of right-wing populist communication. Future research could compare his gestural profile to that of other populist and non-populist leaders.
Additionally, the study centered on one specific gesture, and a more complete picture would require analyzing the full range of a politician’s non-verbal repertoire. The authors also suggest that future work could examine other elements, like facial expressions and the timing of gestures, in greater detail.
Despite these limitations, the research provides a detailed look at how a seemingly simple gesture can be a sophisticated and versatile rhetorical tool. Trump’s shrugs appear to be a central part of a performance style that transgresses political norms, creates entertainment value, and forges a strong connection with his base. The findings indicate the importance of looking beyond a politician’s words to understand the full, embodied performance through which they communicate their message.
“We hope to look at other gestures of Trump to build a bigger picture of how he uses his body to distinguish himself from other politicians and to imbue his performances with entertainment value,” Hart said. “This might include, for example, his use of chopping or slicing gestures. I also hope to explore the gestural performances of other right-wing populist politicians in Europe to see how their gestures compare.”
A new study examining nine consecutive birth years in Sweden indicates that the dramatic rise in clinical diagnoses of autism spectrum disorder is not accompanied by an increase in autism-related symptoms in the population. The research, published in the journal Psychiatry Research, also found that while parent-reported symptoms of ADHD remained stable in boys, there was a small but statistically significant increase in symptoms among girls.
Autism spectrum disorder, or ASD, is a neurodevelopmental condition characterized by differences in social communication and interaction, along with restricted or repetitive patterns of behavior and interests. Attention-Deficit/Hyperactivity Disorder, or ADHD, is another neurodevelopmental condition marked by persistent patterns of inattention, hyperactivity, and impulsivity that can interfere with functioning or development. Over the past two decades, the number of clinical diagnoses for both conditions has increased substantially in many Western countries, particularly among teenagers and young adults.
This trend has raised questions about whether the underlying traits associated with these conditions are becoming more common in the general population. Researchers sought to investigate this possibility by looking beyond clinical diagnoses to the level of symptoms reported by parents.
“The frequency of clinical diagnoses of ASD and ADHD has increased substantially over the past decades across the world,” said study author Olof Arvidsson, a PhD student at the Gillberg Neuropsychiatry Centre at Gothenburg University and resident physician in Child and Adolescent Psychiatry.
“The largest prevalence increase has been among teenagers and young adults. Therefore, we wanted to investigate if symptoms of ASD and ADHD in the population had increased over time in 18-year-olds. In this study we used data from a twin study in Sweden in which parents reported on symptoms of ASD and ADHD when their children turned 18 and investigated whether symptoms had increased between 2011 and 2019.”
To conduct their analysis, the researchers utilized data from a large, ongoing project called the Child and Adolescent Twin Study in Sweden. This study follows twins born in Sweden to learn more about mental and physical health. For this specific investigation, researchers focused on information collected from the parents of nearly 10,000 twins born between 1993 and 2001. When the twins reached their 18th birthday, their parents were asked to complete a web-based questionnaire about their children’s behaviors and traits.
Parents answered a set of 12 questions designed to measure symptoms related to autism. These items correspond to the diagnostic criteria for ASD. For ADHD, parents completed a 17-item checklist covering problems associated with inattention and executive function, which are core components of ADHD.
Using this data, the researchers employed statistical methods to analyze whether the average symptom scores changed across the nine different birth years, from 1993 to 2001. They also looked at the percentage of individuals who scored in the highest percentiles, representing those with the most significant number of traits.
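As a rough illustration of that kind of cohort-trend analysis, the sketch below simulates nine birth cohorts, fits a linear trend to the cohort means, and computes the share of each cohort above the pooled 95th percentile. The data and cutoffs are invented for illustration; the study’s actual models were more involved.

```python
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(0)

# Hypothetical parent-reported symptom scores for nine birth cohorts
# (1993-2001); in the real study these come from questionnaires.
birth_years = np.arange(1993, 2002)
cohorts = {y: rng.normal(loc=10.0, scale=3.0, size=1000) for y in birth_years}

# Trend in mean symptom score across birth years.
means = np.array([cohorts[y].mean() for y in birth_years])
trend = linregress(birth_years, means)
print(f"slope = {trend.slope:.3f} points per birth year, p = {trend.pvalue:.3f}")

# Share of each cohort above the pooled 95th percentile (the "top scorers").
pooled = np.concatenate(list(cohorts.values()))
cutoff = np.percentile(pooled, 95)
for y in birth_years:
    share = (cohorts[y] > cutoff).mean()
    print(y, f"{share:.1%} above the pooled 95th percentile")
```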
The analysis showed no increase in the average level of parent-reported autism symptoms among 18-year-olds across the nine-year span. This stability was observed for both boys and girls. Similarly, when the researchers examined the proportion of individuals with the highest symptom scores, defined as those in the top five percent, they found no statistically significant change over time. This suggests that the prevalence of autism-related traits in the young adult population remained constant during this period.
The results for ADHD presented a more nuanced picture. Among boys, the data indicated that parent-reported ADHD symptoms were stable. There was no significant change in either the average symptom scores or in the percentage of boys scoring in the top 10 percent. For girls, however, the study identified a small but statistically detectable increase in ADHD symptoms over the nine birth years. This trend was apparent in both the average symptom scores and in the proportion of girls who scored in the top 10 percent for ADHD traits.
Despite being statistically significant, the researchers note that the magnitude of this increase in girls was small. The year of birth explained only a very small fraction of the variation in ADHD symptom scores. The results suggest that while there may be a slight upward trend in certain ADHD symptoms among adolescent girls, it is not nearly large enough to account for the substantial increase in clinical ADHD diagnoses reported in this group. The study provides evidence that the steep rise in both autism and ADHD diagnoses is likely influenced by factors other than a simple increase in the symptoms themselves.
“Across the nine birth years examined, there was no sign of increasing symptoms of ASD in the population, despite rising diagnoses,” Arvidsson told PsyPost. “For ADHD, there was no increase among boys. However, in 18-year-old girls we saw a very small but statistically significant increase in ADHD symptoms. The increase in absolute numbers was small in relation to the increase in clinical diagnoses.”
The researchers propose several alternative explanations for the growing number of diagnoses. Increased public and professional awareness may lead more people to seek assessments. Diagnostic criteria for both conditions have also widened over the years, potentially including individuals who would not have met the threshold in the past. Another factor may be a change in perception, where certain behaviors are now seen as more impairing than they were previously. This aligns with other research indicating that parents today tend to report higher levels of dysfunction associated with the same number of symptoms compared to a decade ago.
Changes in societal demands, particularly in educational settings that place a greater emphasis on executive functioning and complex social skills, could also contribute. In some cases, a formal diagnosis may be a prerequisite for accessing academic support and resources, creating an incentive for assessment. For the slight increase in ADHD symptoms among girls, the authors suggest it could reflect better recognition of how ADHD presents in females, or perhaps an overlap with symptoms of anxiety and depression, which have also been on the rise in this demographic.
“The takeaway is that the increases in clinical diagnoses of both ASD and ADHD need to be explained by other factors than increasing symptoms in the population, such as increased awareness and increased perceived impairment related to ASD and ADHD symptoms,” Arvidsson said. “Taken together we also hope to curb any worries about a true increase in ASD or ADHD.”
The study has some limitations. The response rate for the parental questionnaires was about 41 percent. While the researchers checked for potential biases and found that their main conclusions about the trends over time were likely unaffected, a higher participation rate would strengthen the findings. Additionally, the questionnaire for ADHD primarily measured symptoms of inattention and did not include items on hyperactivity. The results, therefore, mainly speak to the inattentive aspects of ADHD.
Future research could explore these trends with different measures and in different populations. The researchers also plan to investigate trends in clinical diagnoses more closely to better understand resource allocation for healthcare systems.
“We want to better understand trends of clinical diagnoses, such as trends of incidence of diagnoses in different groups,” Arvidsson said. “With increasing clinical diagnoses of ASD and ADHD and the resulting impact on the healthcare system as well as on the affected patients, it is important to characterize these trends in order to motivate an increased allocation of resources.”
Recent research published in the journal Evolution and Human Behavior offers new insights into how broad environmental conditions may shape “dark” personality traits on a national level. The study suggests that harsh or unpredictable ecological factors experienced during childhood, such as natural disasters or skewed sex ratios, are linked to higher average levels of traits like narcissism in adulthood. These findings indicate that forces largely outside of an individual’s control could play a key role in the development of antisocial personality profiles across different cultures.
The “Dark Triad” consists of three distinct but related personality traits: narcissism, Machiavellianism, and psychopathy. Individuals with high levels of narcissism often display grandiosity, entitlement, and a constant need for admiration. Machiavellianism is characterized by a cynical, manipulative approach to social interaction and a focus on self-interest over moral principles. Psychopathy involves high impulsivity, thrill-seeking behavior, and a lack of empathy or remorse for others.
While these traits are often viewed as undesirable, evolutionary perspectives suggest they may represent adaptive strategies in certain environments. Psychological research frequently focuses on immediate social causes for these traits, such as family upbringing or individual trauma. However, this new study aimed to broaden that lens by examining macro-level ecological factors that affect entire populations.
Study author Jonason outlined three motivations for the work. “First, there is limited understanding of how ecological factors predict personality at all, let alone the Dark Triad. That is, most research focuses on personal, familial, or sociological predictors, but these are embedded in larger ecological systems. If the Dark Triad traits are mere pathologies of defunct parenting or income inequality, one would not predict sensitivity to ecological factors in determining people’s adult Dark Triad scores, let alone sex differences therein.”
“Second, most research on the Dark Triad traits focuses on individual-level variance but here we examined what you might call a culture of each trait and what might account for it. Third, and, less interestingly perhaps, the team happened to meet, get along, have the skills needed, and had access to the data to examine this.”
The researchers employed a theoretical framework known as life history theory to guide their investigation. This theory proposes that organisms, including humans, unconsciously adjust their reproductive and survival strategies based on the harshness and predictability of their environment. In dangerous or unstable environments, “faster” life strategies (characterized by greater risk-taking, short-term mating, and higher aggression) tend to be more advantageous for evolutionary fitness.
To test this idea, the researchers utilized existing personality data from 11,504 participants across 48 different countries. The data for these national averages were collected around 2016 using the “Dirty Dozen,” a widely used twelve-item questionnaire designed to briefly measure the three Dark Triad traits. The researchers then paired these personality scores with historical ecological data from the World Bank and other international databases.
They specifically examined ecological conditions during three developmental windows: early childhood (years 2000–2004), mid-childhood (years 2005–2009), and adolescence (years 2010–2015). The ecological indicators included population density, life expectancy (survival to age 65), and the operational sex ratio, which measures the balance of men to women in society. They also included data on the frequency of natural disasters, the prevalence of major infectious disease outbreaks, and levels of income inequality.
“When considering what makes people different from around the world, it is lazy to say ‘culture,'” Jonason told PsyPost. “Culture is a system that results from higher-order conditions like access to resources and ecological threats. If you want to understand why someone differs from you, you must consider more than just her/his immediate–and obvious–circumstances.”
The analysis used statistical techniques known as spatial autoregressive models. These models allowed the researchers not only to test the direct associations within a country but also to account for “spillover” effects from neighboring nations. This approach recognizes that countries do not exist in isolation and may be influenced by the conditions and cultures of the countries with which they share borders.
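To make the spillover idea concrete, the sketch below uses a spatially lagged regressor (an SLX-style simplification, not the full spatial autoregressive models the authors fit): each country’s outcome is regressed on its own predictor plus the average of its neighbors’ values. All data here are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48  # countries, matching the study's sample of nations

# Hypothetical binary contiguity matrix: 1 if two countries share a border.
A = (rng.random((n, n)) < 0.1).astype(float)
A = np.triu(A, 1)
A = A + A.T  # symmetric, zero diagonal

# Row-standardize so each row of W averages over a country's neighbors.
row_sums = A.sum(axis=1, keepdims=True)
W = np.divide(A, row_sums, out=np.zeros_like(A), where=row_sums > 0)

x = rng.normal(size=n)  # e.g., simulated childhood disaster exposure
y = 0.5 * x + 0.3 * (W @ x) + rng.normal(scale=0.5, size=n)  # toy trait score

# OLS with the country's own predictor plus the neighborhood average (W @ x);
# the coefficient on W @ x captures the cross-border "spillover".
X = np.column_stack([np.ones(n), x, W @ x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(dict(zip(["intercept", "own effect", "spillover"], beta.round(2))))
```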
The results indicated that different ecological factors were associated with distinct Dark Triad traits. Countries that had more male-biased sex ratios during the participants’ childhoods tended to have higher average levels of adult narcissism. The researchers suggest that an excess of males may intensify intrasexual competition, prompting men to adopt grander, more self-promoting behaviors to attract mates.
Conversely, a higher prevalence of infectious diseases during childhood and adolescence was associated with lower national levels of Machiavellianism and psychopathy. In environments with a high disease burden, strict adherence to social norms and greater group cohesion are often necessary for survival. In such contexts, manipulative or antisocial behaviors that disrupt group harmony might be less adaptive and therefore less common.
The study also found that ecological conditions might influence the magnitude of personality differences between men and women. Exposure to natural disasters during developmental years was consistently linked to larger sex differences across all three Dark Triad traits in adulthood. High-threat environments may cause men and women to adopt increasingly divergent survival and reproductive strategies, thereby widening the psychological gap between the sexes.
Furthermore, the research provided evidence for regional clustering of these personality profiles. Conditions in neighboring countries frequently predicted a focal country’s personality scores. For example, higher income inequality or natural disaster impact in bordering nations was associated with higher narcissism or Machiavellianism in the country being studied.
This suggests that dark personality traits may diffuse across borders. This could happen through mechanisms such as migration, shared regional economic challenges, or cultural transmission. The findings highlight the importance of considering regional contexts when studying national character.
“Do not assume that good parenting, safe schools, and successful social experiences are all that matter in determining who goes dark,” Jonason explained. “Larger factors, well beyond our control, have influence as well. By removing the human from the equation, we can better see how people are subject to forces well beyond their will, self-reports, and even situated in larger socioecological systems.”
As with all research, the study has some limitations that should be considered when interpreting these results. The personality data were largely derived from university students, who may not be fully representative of their national populations. Additionally, because the study relied on historical aggregate data, it cannot establish a definitive causal link between these ecological factors and individual personality development. It is possible that other unmeasured variables contribute to these associations.
Future research could aim to replicate these findings using more diverse and representative samples from the general population. The researchers also express an interest in investigating the specific psychological and cognitive mechanisms that might link broad environmental conditions to individual differences in motives and morals. Understanding these mechanisms could provide a clearer picture of how macro-level forces shape the human mind.
“We hope to pursue projects that try to understand the specific conditions that allow for not just personality, but also motives, morals, and mate preferences to be calibrated to local conditions providing more robust tests of not just cross-national differences, but, also, what are the cognitive mechanisms and perceptions that drive those differences,” Jonason said. “This is assuming we get some grant money to do so!”
“This is a study attempting to understand how lived experiences in people’s milieu can correlate with their personality and sex differences therein,” Jonason added. “This is an important step forward because while manipulating the conditions in people’s lives is nearly impossible, we can get a strong glimpse of how conditions in people’s generalized past can cause adaptive responses to help them solve important tasks like securing status and mates–two motivations highly valued by those high in the Dark Triad traits.”
Researchers in Japan have documented the case of a teenager whose psychotic symptoms consistently appeared before her menstrual period and resolved immediately after. A case report published in Psychiatry and Clinical Neurosciences Reports indicates that a medication typically used to treat seizures and bipolar disorder was effective after standard antipsychotic and antidepressant drugs failed to provide relief. This account offers a detailed look at a rare and often misunderstood condition.
The condition is known as menstrual psychosis, which is characterized by the sudden onset of psychotic symptoms in an individual who is otherwise mentally well. These episodes are typically brief and occur in a cyclical pattern that aligns with the menstrual cycle. The presence of symptoms like delusions or hallucinations distinguishes menstrual psychosis from more common conditions such as premenstrual syndrome or premenstrual dysphoric disorder, which primarily involve mood-related changes. Menstrual psychosis is considered exceptionally rare, with fewer than 100 cases identified in the medical literature.
The new report, authored by Atsuo Morisaki and colleagues at the Tokyo Metropolitan Children’s Medical Center, details the experience of a 17-year-old Japanese girl who sought medical help after about two years of recurring psychological distress. Her initial symptoms included intense anxiety, a feeling of being watched, and auditory hallucinations where she heard a classmate’s voice. She also developed the belief that conversations around her were about herself. She had no prior psychiatric history or family history of mental illness.
Initially, she was diagnosed with schizophrenia and prescribed antipsychotic medication, which did not appear to alleviate her symptoms. Upon being transferred to a new medical center, her treatment was changed, but her condition persisted. While hospitalized, her medical team observed a distinct pattern. In the days leading up to her first menstrual period at the hospital, she experienced a depressive mood and restlessness. This escalated to include delusional thoughts and the feeling that “voices and sounds were entering my mind.” These symptoms disappeared completely four days later, once her period ended.
This cycle repeated itself the following month. About twelve days before her second menstruation, she again became restless. Nine days before, she reported the sensation that her thoughts were “leaking out” during phone calls. She also experienced auditory hallucinations and believed her thoughts were being broadcast to others. Her antipsychotic dosage was increased, but the symptoms continued until her menstruation ended, at which point they once again resolved completely.
A similar pattern emerged before her third period during hospitalization. Fourteen days prior, she developed a fearful, delusional mood. She reported that “gazes and voices are entering my head” and her diary entries showed signs of disorganized thinking. An increase in her medication dosage seemed to have no effect. As her period began, the symptoms started to fade, and they were gone by the time it was over. This consistent, cyclical nature of her psychosis, which did not respond to conventional treatments, led her doctors to consider an alternative diagnosis and treatment plan.
Observing this clear link between her symptoms and her menstrual cycle, the medical team initiated treatment with carbamazepine. This medication is an anticonvulsant commonly used to manage seizures and is also prescribed as a mood stabilizer for bipolar disorder. The dosage was started low and gradually increased. Following the administration of carbamazepine, her psychotic symptoms resolved entirely. She was eventually able to discontinue the antipsychotic and antidepressant medications. During follow-up appointments as an outpatient, her symptoms had not returned.
The exact biological mechanisms behind menstrual psychosis are not well understood. Some scientific theories suggest a link to the sharp drop in estrogen that occurs during the late phase of the menstrual cycle. Estrogen influences several brain chemicals, including dopamine, and a significant reduction in estrogen might lead to a state where the brain has too much dopamine activity, which has been associated with psychosis. However, since psychotic episodes can occur at various points in the menstrual cycle, fluctuating estrogen levels alone do not seem to fully explain the condition.
The choice of carbamazepine was partly guided by the patient’s age and the potential long-term side effects of other mood stabilizers. The authors of the report note that carbamazepine may work by modulating the activity of various channels and chemical messengers in the brain, helping to stabilize neuronal excitability. While there are no previous reports of carbamazepine being used specifically for menstrual psychosis, it has shown some effectiveness in other cyclical psychiatric conditions, suggesting it may influence the underlying mechanisms that produce symptoms tied to biological cycles.
It is important to understand the nature of a case report. Findings from a single patient cannot be generalized to a larger population. This report does not establish that carbamazepine is a definitive treatment for all individuals with menstrual psychosis. The positive outcome observed in this one person could be unique to her specific biology and circumstances.
However, case reports like this one serve a significant function in medical science, especially for uncommon conditions. They can highlight patterns that might otherwise be missed and introduce potential new avenues for treatment that warrant further investigation. By documenting this experience, the authors provide information that may help other clinicians recognize this rare disorder and consider a wider range of therapeutic options. This account provides a foundation for future, more systematic research into the causes of menstrual psychosis and the potential effectiveness of medications like carbamazepine.
A new study published in the Journal of Affective Disorders provides evidence that a brief but structured physical exercise program can help reduce stress levels in adolescents diagnosed with attention-deficit/hyperactivity disorder. The researchers found that after just three weeks of moderate to vigorous physical activity, participants reported lower levels of stress and showed a measurable increase in salivary cortisol, a hormone linked to the body’s stress response.
Adolescence is widely recognized as a time of dramatic psychological and biological development. For teens with ADHD, this period often comes with heightened emotional challenges. In addition to the typical symptoms of inattention and hyperactivity, many adolescents with the condition also struggle with internal feelings such as anxiety and depression. These emotional difficulties can interfere with daily functioning at school and at home, placing them at greater risk for long-term mental health problems.
Although stimulant medications are commonly used to manage symptoms, they often cause side effects such as sleep problems and mood shifts. Due to these complications, many families and young people stop using medication or seek alternative approaches. One such approach gaining traction is physical exercise. Prior research suggests that structured activity may benefit brain function and emotional regulation. However, most studies have focused on children rather than adolescents, and few have examined whether exercise influences cortisol, a stress hormone thought to be dysregulated in young people with ADHD.
Cortisol plays an important role in how the body manages stress. Low levels of cortisol in the morning have been found in children and adolescents with ADHD, and this pattern has been associated with fatigue, anxiety, and greater symptom severity. The researchers behind the new study wanted to know whether a short physical exercise intervention could influence both subjective stress levels and objective stress markers like cortisol in teens with ADHD.
“Adolescents with ADHD face stress-related challenges and appear to display atypical cortisol patterns, yet most exercise studies focus on younger children and rarely include biological stress markers,” explained study author Cindy Sit, a professor of sports science and physical education at The Chinese University of Hong Kong.
“We wanted to test a practical, low-risk intervention that schools and families could feasibly implement and to examine both perceived stress and a physiological marker (salivary cortisol) within a randomized controlled trial design. In short, we aimed to examine whether a brief, feasible program could help regulate stress in this under-researched group through non-pharmacological methods.”
The researchers recruited 82 adolescents, aged 12 to 17, who had been diagnosed with ADHD. Some of the participants also had a diagnosis of autism spectrum disorder, which often co-occurs with ADHD. The teens were randomly assigned to one of two groups. One group participated in a structured physical exercise program lasting three weeks. The other group served as a control and continued with their normal routines.
The exercise group attended two 90-minute sessions each week, totaling 540 minutes over the course of the program. These sessions included a variety of activities designed not only to improve physical fitness but also to engage cognitive functions such as memory, reaction time, and problem-solving. Exercises included circuit training as well as games that required strategic thinking and teamwork. Participants were guided to maintain moderate to vigorous intensity throughout much of the sessions, and their heart rates were monitored to ensure appropriate effort.
To measure outcomes, the researchers used both self-report questionnaires and biological samples. Stress, depression, and anxiety levels were assessed through a validated scale. Cortisol was measured using saliva samples collected in the afternoon before and after the intervention, as well as three months later.
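A generic pre/post comparison of the kind described might look like the following sketch. The cortisol values are simulated, and the study’s actual statistical models may well differ; this only illustrates the paired-comparison logic.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(4)

# Hypothetical afternoon salivary cortisol (nmol/L) for the exercise group,
# sampled before and after the three-week program (simulated, not study data).
pre = rng.normal(loc=4.0, scale=1.0, size=40)
post = pre + rng.normal(loc=0.6, scale=0.8, size=40)  # simulated increase

# Paired comparison of the same participants at the two time points.
result = ttest_rel(post, pre)
print(f"mean change = {np.mean(post - pre):.2f} nmol/L, p = {result.pvalue:.3f}")
```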
The findings showed that immediately following the exercise program, participants in the exercise group reported lower levels of stress compared to their baseline scores. At the same time, their cortisol levels increased.
The increase in cortisol following exercise was interpreted not as a sign of increased stress but as a reflection of more typical hormonal activity. The researchers noted that this pattern aligns with the idea of exercise as a “positive stressor” that helps train the body to respond more effectively to real-life challenges. Importantly, the teens felt less stressed, even as their cortisol levels rose.
“The combination of lower perceived stress alongside an immediate rise in cortisol was striking,” Sit told PsyPost. “It supports the idea that exercise can feel stress-relieving while still producing a normal physiological stress response that may help calibrate the HPA axis. We also noted a baseline positive association between anxiety and cortisol in the control group only, which warrants further investigation.”
However, by the three-month follow-up, the improvements in self-reported stress had faded, and cortisol levels had returned to their initial levels. There were no significant changes in self-reported depression or anxiety in either group at any point.
“A short, three-week exercise program (90-minute sessions twice a week at moderate to vigorous intensity) reduced perceived stress in adolescents with ADHD immediately after the program,” Sit said. “Cortisol levels increased right after the intervention, consistent with a healthy, short-term activation of the stress system during exertion (often called ‘good stress’). The positive effects on perceived stress did not last for three months without continued physical exercise, and we did not observe short-term changes in depression or anxiety. This suggests that ongoing participation is necessary to sustain these benefits.”
Although the results suggest benefits from the short-term exercise program, there are some limitations to consider. Most of the participants were male, and this gender imbalance could affect how the findings apply to a broader group of adolescents. The study also relied on self-report questionnaires to assess stress, anxiety, and depression, which can be affected by personal bias. Additionally, there was no “active” control group, meaning the control participants were not given an alternate activity that involved social interaction or structure, which might have helped isolate the effects of the exercise itself.
Future studies might benefit from longer intervention periods to examine whether extended participation can produce lasting changes. Collecting saliva samples multiple times during the day could also help map out how cortisol behaves in response to both daily routines and interventions. Incorporating interviews or observer-based assessments could provide a more complete understanding of emotional changes, especially in teens who have difficulty expressing their feelings through questionnaires.
“Our team is currently conducting a large randomized controlled trial testing physical‑activity interventions for people with intellectual disability, with co‑primary outcomes of mood and physical strength,” Sit explained. “The broader aim is to develop scalable, low‑cost programs that can be implemented in schools, day services, and community settings. Ultimately, we aim to increase access for underserved populations so that structured movement becomes a feasible part of everyday care and improves their quality of life.”
“We see exercise as a useful adjunct, not a replacement, for standard ADHD care,” she added. “In practice, that involves incorporating structured movement alongside evidence-based treatments (e.g., medication, psychoeducation, behavioural supports) and working with families, schools, and healthcare providers. Exercise is accessible and generally has low risk; it can assist with stress regulation, sleep, attention, and fitness. However, it should be individualized and monitored, especially for individuals with special needs like ADHD, to support rather than replace routine care.”
A new study in the Archives of Sexual Behavior suggests that how people react to sexual versus emotional infidelity is shaped by more than just biological sex. While heterosexual men were more distressed by sexual betrayal and women by emotional betrayal, the findings indicate that traits like masculinity, femininity, and sexual attraction also influence these responses in flexible ways.
For several decades, psychologists have observed that men and women tend to react differently to infidelity. Men are more likely to be disturbed by sexual infidelity, while women are more upset by emotional cheating. Evolutionary psychologists have suggested that this might reflect reproductive pressures. For men, the risk of raising another man’s child might have favored the development of stronger reactions to sexual betrayal. For women, the loss of a partner’s emotional commitment could mean fewer resources and support for offspring, making emotional infidelity more threatening.
But this difference is not universal. Studies have shown that it becomes much less pronounced among sexual minorities. Gay men and lesbian women often report similar levels of distress over emotional and sexual infidelity, rather than showing a clear difference based on biological sex. This has raised the question of whether the difference between men and women is really just about being male or female—or whether other psychological traits might be involved.
The researchers behind the current study wanted to examine this question in more detail. They were interested in whether traits often associated with masculinity or femininity might influence how people respond to infidelity. They also wanted to test whether sexual orientation, measured not just as a label but as a continuum of attraction to men and women, could account for some of the variation in jealousy responses.
“We have for many years found a robust sex difference in jealousy, but we have also been interested in any factors that could influence this pattern. Other researchers discovered that sexual orientation might influence that pattern. We were also influenced by David Schmitt’s ideas on sexual dials vs. switches — how masculinization/feminization might be much better described as dimensional than categorical, including sexual orientation and jealousy triggers,” said study author Leif Edward Ottesen Kennair, a professor at the Norwegian University of Science and Technology.
For their study, the researchers collected data from 4,465 adults in Norway, ranging in age from 16 to 80. The sample included people who identified as heterosexual, gay, lesbian, bisexual, and pansexual. Participants were recruited through social media advertisements and LGBTQ+ websites. Each person completed a survey about their responses to hypothetical infidelity scenarios, along with questions about their childhood behavior, personality traits, sexual attraction, and self-perceived masculinity or femininity.
To measure jealousy, the participants were asked to imagine different types of infidelity. In one example, they were asked whether it would be more upsetting if their partner had sex with someone else, or if their partner developed a deep emotional connection with another person. Their answers were used to calculate a jealousy score that reflected how much more distressing they found sexual versus emotional betrayal.
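In its simplest form, scoring such forced-choice dilemmas reduces to counting how often each group picks the sexual scenario as worse. The sketch below simulates responses whose proportions echo the figures reported below; it is illustrative, not the study’s data or scoring code.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical respondents: True = chose sexual infidelity as more upsetting
# in a forced-choice dilemma, False = chose emotional infidelity.
het_men = rng.random(500) < 0.59    # proportions echo the reported 59%...
het_women = rng.random(500) < 0.31  # ...and 31% figures

for label, group in [("heterosexual men", het_men),
                     ("heterosexual women", het_women)]:
    print(f"{label}: {group.mean():.0%} chose sexual infidelity as worse")
```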
The results supported some long-standing findings. Heterosexual men were much more likely than heterosexual women to be disturbed by sexual infidelity. In fact, nearly 59 percent of heterosexual men said sexual betrayal was more upsetting, compared to only 31 percent of heterosexual women. This pattern was consistent with past research.
But among sexual minorities, the sex difference mostly disappeared. Gay men and lesbian women responded in ways that were more alike, with both groups tending to be more upset by emotional infidelity. Bisexual men and women also reported similar responses. This suggests that sexual orientation plays a key role in how people experience jealousy.
The researchers then examined sexual attraction as a continuous variable. Rather than looking only at how people labeled themselves, they measured how strongly participants were attracted to men and to women. Among men, those who were exclusively attracted to women showed the highest levels of sexual jealousy. Men who had even a small degree of attraction to other men reported less distress about sexual infidelity.
The researchers also measured four different psychological traits related to masculinity and femininity. These included whether participants preferred system-oriented thinking or empathizing, whether they had gender-typical interests as children, whether they preferred male- or female-dominated occupations, and how masculine or feminine they saw themselves. These traits were used to create a broader measure of psychological gender.
In men, higher levels of psychological masculinity were linked to both a stronger attraction to women and a greater tendency to be disturbed by sexual infidelity. But the connection between masculinity and jealousy seemed to depend on whether the man was attracted to women. Masculinity influenced jealousy only when it was also linked to strong gynephilic attraction—that is, attraction to women.
Among women, masculinity was related to sexual orientation, but not to jealousy responses. This suggests that masculinity and femininity may play different roles in shaping sexual psychology for men and women.
Kennair told PsyPost that these findings suggest “that sexual orientation might be best measured dimensionally (as involving both gynephilia and androphilia), that sexual orientation influences sex differences (in this case, jealousy triggers), and that gendering and sex differences are not primarily categorical processes but dimensional processes that are largely influenced by biological sex, but absolutely not categorically determined in an either/or switch pattern. Rather, they function more like interconnected dimensional dials.”
A surprising finding came from a smaller group: bisexual men who were partnered with women. “In the current study, we found that bisexual men with a female partner were still more triggered by emotional than sexual infidelity,” Kennair explained. “Bisexual men should also be concerned about who the father of their partner’s children really is, from an evolutionary perspective, but it seems that only the highly gynephilic men are primarily triggered by sexual infidelity. This needs further investigation and theorizing.”
But the study, like all research, has some caveats. The participants were recruited online, which means the sample might not fully represent the broader population. In addition, the jealousy scenarios were hypothetical, and people’s real-life reactions might differ from what they imagine.
The study raises some new and unresolved questions. One puzzle is why sexual jealousy in men seems to drop off so steeply with even a small degree of androphilic attraction. From an evolutionary standpoint, any man who invested in raising a child would have faced reproductive costs if his partner had been unfaithful, regardless of his own sexual orientation. Yet the findings suggest that the mechanism for sexual jealousy may be tightly linked to sexual attraction to women, rather than simply being male or being partnered with a woman.
It also remains unclear why women’s jealousy responses are less influenced by sexual orientation or masculinity. The results suggest that emotional jealousy is a more stable pattern among women, while sexual jealousy in men appears more sensitive to individual differences in orientation and psychological traits.
“I think this is a first empirical establishment of the dials approach,” Kennair said. “I think it might be helpful to investigate this approach with other phenomena. Also, the research cannot address the developmental and biological processes underlying the psychological level we addressed in the paper. The causal pathways therefore need further investigation. And theorizing.”
He hopes that “maybe in the current polarized discussion of identity and sex/gender, people will find the dimensional and empirical approach of this paper a tool to communicate better than the categorical approaches let us do.”
Watching a powerful movie may do more than stir emotions. According to a study published in the journal Communication Research, emotionally moving films that explore political or moral issues may encourage viewers to think more deeply about those topics and even engage politically. The researchers found that German television theme nights combining fictional drama with related factual programs were associated with higher levels of information seeking, perceived knowledge, and consideration of political actions related to the issues portrayed.
There is a longstanding debate about whether entertainment harms or helps democracy. Some scholars worry that media such as movies and reality shows distract citizens from more serious political content. But recent research has begun to suggest that certain types of entertainment might actually contribute to political awareness and engagement.
“We were curious about effects of entertainment media on political interest and engagement. Can watching a movie and walking in the shoes of people affected by a political issue raise viewers’ awareness about the issue and motivate them to take action to address the issue?” explained study author Anne Bartsch, a professor at Leipzig University.
“From about a decade of experimental research, we know that moving and thought-provoking media experiences can stimulate empathy and prosocial behavior, including political engagement. In this study, we used television theme nights as an opportunity to replicate these findings ‘in the wild.’ Theme nights are a popular media format in Germany that combines entertainment and information programs about a political issue and attracts a large enough viewership to conduct representative survey research. This opportunity to study political effects of naturally occurring media use was quite unique.”
The researchers conducted three studies around two German television theme nights. The first theme night focused on the arms trade, while the second dealt with physician-assisted suicide. Each theme night included a full-length fictional film followed by an informational program. Across the three studies, more than 2,800 people took part through telephone and online surveys.
In the first study, researchers surveyed a nationally representative sample of 905 German adults by phone after the arms trade theme night. Participants were asked whether they watched the movie, the documentary, or both. They were also asked about their emotional reactions, whether they had thought deeply about the issue, and what actions they had taken afterward.
People who had seen the movie reported feeling more emotionally moved and were more likely to report having reflected on the issue. These viewers also reported greater interest in seeking more information, higher levels of both perceived and factual knowledge, and more willingness to engage in political actions related to arms trade, such as signing petitions or considering the issue when voting.
Statistical analysis indicated that the emotional experience of feeling moved led to deeper reflection, which then predicted greater knowledge and political engagement. However, there was no significant difference in how often viewers talked about the issue with others, compared to non-viewers. Surprisingly, emotional reactions did not appear to encourage discussion on social media, and may have slightly reduced it.
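The logic of that kind of mediation analysis can be sketched in a few lines: an “a” path from feeling moved to reflection, a “b” path from reflection to engagement, and their product as the indirect effect. The sketch below is deliberately simplified (a full analysis would control for the predictor when estimating the b path and bootstrap confidence intervals) and uses invented data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 900  # roughly the size of the first survey sample

# Simulated, roughly standardized survey variables (not the study's data):
moved = rng.normal(size=n)                                # felt moved by the film
reflection = 0.6 * moved + rng.normal(scale=0.8, size=n)  # thought about the issue
engagement = 0.5 * reflection + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on x (both centered)."""
    x = x - x.mean()
    y = y - y.mean()
    return (x @ y) / (x @ x)

a = slope(moved, reflection)       # path a: moved -> reflection
b = slope(reflection, engagement)  # path b: reflection -> engagement
print(f"indirect effect (a * b) = {a * b:.2f}")
```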
In the second study, the researchers repeated the survey online with a different sample of 877 participants following the same theme night. The results were largely consistent. Again, those who watched the movie felt more moved, thought more about the issue, and were more engaged. In this study, feeling moved was also linked to more frequent interpersonal discussion.
The third study examined the theme night about physician-assisted suicide. Over 1,000 people took part in the online survey. As with the earlier studies, viewers who watched the movie reported being emotionally affected and more reflective. These experiences were linked to higher interest in the topic, greater perceived knowledge, and a higher likelihood of discussing the issue or participating politically. Watching the movie also predicted stronger interest in the subsequent political talk show.
Across all three studies, the researchers found that emotional and reflective experiences were key pathways leading from entertainment to political engagement. People who felt moved by the movies were more likely to think about the issues they portrayed. These thoughts were, in turn, connected to learning more about the issue, talking with others, and taking or considering political action.
The findings suggest that serious entertainment can function as a catalyst, helping viewers process complex social issues and motivating them to become more engaged citizens.
“We found that moving and thought-provoking entertainment can have politically mobilizing effects, including issue interest, political participation, information seeking, learning, and discussing the issue with others,” Bartsch told PsyPost. “This is interesting because entertainment often gets a bad rap, as superficial, escapist pastime. Our findings suggest that it depends on the type of entertainment and the thoughts and feelings it provokes. Some forms of entertainment, it seems, can make a valuable complementary contribution to political discourse, in particular for audiences that rarely consume traditional news.”
Although the findings were consistent across different samples and topics, the authors note some limitations. Most importantly, the studies were correlational, meaning they cannot establish that the movies directly caused people to seek information or take political action. It is possible that people who are already interested in politics are more likely to watch such films and respond emotionally to them.
The researchers also caution that while theme nights seem to offer an effective combination of entertainment and information, these findings might not easily transfer to other types of media or digital platforms. Watching a movie on television with millions of others at the same time may create a shared cultural moment that is less common in today’s fragmented media landscape.
“Our findings cannot be generalized to all forms of entertainment, of course,” Bartsch noted. “Many entertainment formats are apolitical ‘feel-good’ content – which is needed for mood management as well. What is more concerning is that entertainment can also be instrumentalized to spread misinformation, hate and discrimination.”
Future studies could use experimental methods to better isolate cause and effect, and could also explore how similar effects might occur with streaming platforms or social media. Researchers might also investigate how hedonic, or lighter, forms of entertainment interact with political content, and how emotional reactions unfold over time after watching a movie.
“Our study underscores the value of ‘old school’ media formats like television theme nights that can attract large audiences and provide input for shared media experiences and discussions,” Bartsch said. “With the digital transformation of media, however, it is important to explore how entertainment changes in the digital age. For example, we are currently studying parasocial opinion leadership on social media and AI generated content.”