A recent study published in the Journal of Experimental Social Psychology suggests that the rising popularity of extreme political candidates may be driven by how voters link their personal identities to their political opinions. The research provides evidence that when people feel an issue defines who they are as individuals, they tend to adopt more radical positions and favor politicians who do the same.
The researchers conducted this series of investigations to explore the psychological reasons why voters might prefer extreme candidates over moderate ones from their own party. Previous explanations have focused on structural factors like the way primary elections are organized or changes in the pool of people running for office.
But the authors behind the new research sought to better understand whether a voter’s internal connection to an issue is a significant factor. They focused on a concept called identity relevance, which is the degree to which an attitude signals to others and to oneself the kind of person someone is or aspires to be.
“Elected officials in the United States are increasingly extreme. The ideological extremity of members of Congress from both parties has steadily grown since the 1970s, reaching a 50-year high in 2022,” said study author Mohamed Hussein of Columbia University.
“State legislatures show similar trends. A recent analysis of more than 84,000 candidates running for state office revealed that extreme candidates are winning at higher rates than at any time in the last 30 years. We were interested in understanding why extreme candidates are increasingly elected.”
“So far, research in this area has focused on structural factors (e.g., the structure of primary elections),” Hussein explained. “In our work, we wanted to pivot the conversation to more psychological factors. Specifically, we tested if the identity relevance of people’s attitudes causes them to be drawn to extreme candidates.”
The researchers conducted a series of studies to test their hypothesis. In the first study, 399 participants who identified as Democrats read about a fictional candidate named Sam Becker who was running for a seat in the House of Representatives. Some participants read that Becker held moderate views on climate change, while others read that he held extreme views. The researchers measured how much the participants felt their own attitudes on climate change were relevant to their identity.
The results suggest that as identity relevance increased, participants reported more extreme personal views on the issue. Those with high identity relevance showed a preference for the extreme version of Sam Becker and a dislike for the moderate version. This study provides initial evidence that the more someone sees an issue as a reflection of their character, the more they favor radical politicians.
The second study involved 349 participants and used a more complex choice task to see if these patterns held across different topics. Participants were shown pairs of candidates with varying ages, genders, and professional backgrounds. One candidate in each pair held a moderate position on a social issue, while the other held an extreme position.
The researchers tested five separate issues: abortion, gun control, immigration, climate change, and transgender rights. The data suggests that across all these topics, higher identity relevance predicted a greater likelihood of choosing the extreme candidate. Additionally, participants with high identity relevance reported being more receptive to hearing the views of the extreme candidate.
In the third study, the researchers aimed to see if they could change a person’s identity relevance by shifting their perception of what their political party valued. They recruited 584 Democrats and asked them to read a news article about the priorities of the Democratic National Committee. One group read that the party was prioritizing corn subsidies, a topic that is generally not a core identity issue for most voters.
The results suggest that when participants believed their party viewed corn subsidies as a priority, they began to see the issue as more relevant to their own identity. This shift in identity relevance led them to adopt more extreme personal views on the topic. Consequently, these participants showed a higher preference for candidates who supported radical changes to agricultural subsidies.
This experiment also allowed the researchers to rule out other factors that might influence candidate choice. They measured whether participants felt more certain, more moral, or more knowledgeable about the issue. The analysis provides evidence that identity relevance influences candidate choice primarily through its effect on attitude extremity rather than through these other psychological states.
The fourth study tested whether this effect can occur even when people have no factual information about a topic. The researchers presented 752 participants with a fictitious ballot initiative called Prop DW. The participants were told nothing about what the proposal would actually do.
Some participants were told their political party had taken a position on Prop DW, while others were told the party had no stance. Even without knowing the details of the policy, those who believed their party had a stance reported that Prop DW felt more identity-relevant. These individuals developed more extreme attitudes and favored candidates who took extreme positions on the made-up issue.
This finding suggests that the psychological pull toward extremity is not necessarily based on a deep understanding of policy. Instead, it seems to be a reaction to the social and personal significance assigned to the topic. It also suggests that people can form strong, radical opinions on matters they do not fully understand if they feel those matters define their social group.
Studies five and six moved away from group dynamics to see if individual reflection could trigger the same results. The researchers used a digital tool that allowed 514 participants to have a live conversation with a large language model. In one condition, the computer program was instructed to help participants reflect on how their views on corn subsidies related to their core values and sense of self.
This reflection process led to a measurable increase in identity relevance. Participants who reflected on their identity reported a higher desire for clarity, which means they wanted their opinions to be certain and distinct. This desire for clarity pushed them toward more extreme views and a higher probability of choosing an extreme candidate.
The final study, involving 807 participants, replicated this effect with a more rigorous comparison group. In this version, the control group also discussed corn subsidies with the language model but was not prompted to think about their personal identity. The results provide evidence that only the participants who specifically linked the issue to their identity showed a significant shift toward extremity.
The researchers note that this effect was symmetric across political parties. Both Democrats and Republicans showed the same pattern of moving toward extreme candidates when an issue felt relevant to their identity. This suggests that the psychological mechanism is a general feature of human behavior rather than a trait specific to one side of the political aisle.
“Across six studies with over 3,000 participants, we found that the more people see their political attitudes as tied to identity, the more likely they are to choose extreme, versus moderate, candidates,” Hussein told PsyPost. “The more central fighting climate change felt to the identity of participants, the more they liked the extreme Sam and the more they disliked the moderate Sam. Put simply, identity relevance increased liking of extreme candidates but decreased liking of moderate ones.”
“These results were remarkably robust. Across studies we tested a range of issues including climate change, abortion, immigration, transgender rights, gun control, and corn subsidies. We even created a fictitious issue (“Prop DW”) that participants had no information about. Across issues, we found that when we framed the issue as central to their identity, people formed more extreme views on it and then preferred extreme candidates who promised bolder action. Even on a made-up issue, identity relevance pushed people toward extremes.”
“These results were also robust regardless of how we talked about candidate extremity,” Hussein continued. “In addition to having candidates describe themselves as extreme, we also signaled extremity in different ways. In some studies, the candidates endorsed different policies, some that were moderate and others that were extreme.”
“In other studies, we held the policy constant but changed the level of action that candidates supported (e.g., increasing a subsidy by a small amount compared to a large amount). Lastly, in some studies, we explicitly labeled candidates as ‘moderate’ or ‘extreme’ on an issue. Regardless of how candidate extremity was described to participants, the results held.”
But there are some potential misinterpretations and limitations to consider regarding this research. One limitation is that the studies were conducted within the specific political context of the United States. The American two-party system might encourage a greater need for distinct, polarized identities compared to countries with multiple competing parties.
Future research could explore whether these findings apply to people in other nations with different electoral structures. It would also be useful to investigate whether certain personality types are more prone to linking their identity to political issues. Some individuals may naturally seek more self-definition through their opinions than others.
Another direction for future study involves finding ways to decrease political tension. If identity relevance is a primary driver of the preference for extreme candidates, it suggests that finding ways to de-emphasize the personal significance of political stances might lead to more moderate dialogue. Interventions that help people feel secure in their identity without needing to hold radical opinions could potentially reduce social polarization.
“Politics has always been personal, but it’s becoming more identity-defining than ever,” Hussein said. “And when politics becomes identity-relevant, our research suggests that extremity gains in appeal. Illuminating this psychological process helps us understand today’s political landscape and provides a roadmap for how to change it. Our results suggest that if we can loosen the grip of identity on politics, the appeal of extreme candidates might start to wane.”
The study, “Why do people choose extreme candidates? The role of identity relevance,” was authored by Mohamed A. Hussein, Zakary L. Tormala, and S. Christian Wheeler.

Experienced musicians tend to possess an advantage in short-term memory for musical patterns and a small advantage for visual information, according to a large-scale international study. The research provides evidence that the memory benefit for verbal information is much smaller than previously thought, suggesting that some earlier findings may have overrepresented this link. These results, which stem from a massive collaborative effort involving 33 laboratories, were published in the journal Advances in Methods and Practices in Psychological Science.
The study was led by Massimo Grassi and a broad team of researchers who sought to address inconsistencies in past scientific literature. For many years, scientists have used musicians as a model for understanding how intense, long-term practice changes the brain and behavior. While many smaller studies suggested that musical training boosts various types of memory, these individual projects often lacked the statistical power to provide a reliable estimate of the effect.
The researchers aimed to establish a community-driven standard for future studies by recruiting a much larger group of participants than typical experiments in this field. They also wanted to explore whether other factors, such as general intelligence or personality traits, might explain why musicians often perform better on cognitive tests. By using a shared protocol across dozens of locations, the team intended to provide a more definitive answer regarding the scope of the musical memory advantage.
To achieve this goal, the research team recruited 1,200 participants across 15 different countries. This group consisted of 600 experienced musicians and 600 nonmusicians who were matched based on their age, gender, and level of general education. The musicians in the study were required to have at least 10 years of formal training and be currently active in their practice.
The nonmusicians had no more than two years of training and had been musically inactive for at least five years. This strict selection process ensured that the two groups represented clear ends of the musical expertise spectrum. Each participant completed the same set of tasks in a laboratory setting to maintain consistency across the 33 different research units.
The primary measures included three distinct short-term memory tasks involving musical, verbal, and visuospatial stimuli. In the musical task, participants listened to a melody and then judged whether a second melody was identical or different. The verbal task required participants to view a sequence of digits on a screen and recall them in the correct order.
For the visuospatial task, participants watched dots appear in a grid and then had to click on those positions in the sequence they were shown. Additionally, the researchers measured fluid intelligence using the Raven Advanced Progressive Matrices and crystallized intelligence through a vocabulary test. They also assessed executive functions with a letter-matching task and collected data on personality and socioeconomic status.
The researchers found that musicians performed significantly better than nonmusicians on the music-related memory task. This difference was large, which suggests that musical expertise provides a substantial benefit when dealing with information within a person’s specific domain of skill. This finding aligns with the idea that long-term training makes individuals much more efficient at processing familiar types of data.
In contrast, the advantage for verbal memory was very small. This suggests that the benefits of music training do not easily transfer to the memorization of words or numbers. The researchers noted that some previous studies showing a larger verbal benefit may have used auditory tasks, where musicians could use their superior hearing skills to gain an edge.
For visuospatial memory, the study found a small but statistically significant advantage for the musicians. This provides evidence that musical training might have a slight positive association with memory for locations and patterns. While this effect was not as large as the music-specific memory gain, it suggests a broader cognitive difference between the two groups.
The statistical models used by the researchers revealed that general intelligence and executive functions were consistent predictors of memory performance across all tasks. When these factors were taken into account, the group difference for verbal memory largely disappeared. This suggests that the minor verbal advantage seen in musicians may simply reflect their slightly higher average scores on general intelligence tests.
Musicians also tended to score higher on the personality trait of open-mindedness. This trait describes a person’s curiosity and willingness to engage with new experiences or complex ideas. The study suggests that personality and family background are important variables that often distinguish those who pursue long-term musical training from those who do not.
Data from the study also indicated that musicians often come from families with a higher socioeconomic status. This factor provides evidence that access to resources and a stimulating environment may play a role in both musical achievement and cognitive development. These background variables complicate the question of whether music training directly causes better memory or if high-performing individuals are simply more likely to become musicians.
As with all research, there are some limitations. Because the study was correlational, it cannot confirm that musical training is the direct cause of the memory advantages. It remains possible that children with naturally better memory or higher intelligence are more likely to enjoy music lessons and stick with them for over a decade.
Additionally, the study focused on young adults within Western musical cultures. The results might not apply to children, elderly individuals, or musicians trained in different cultural traditions. Future research could expand on these findings by tracking individuals over many years to see how memory changes as they begin and continue their training.
The team also noted that the study only measured short-term memory. Other systems, such as long-term memory or the ability to manipulate information in the mind, were not the primary focus of this specific experiment. Future collaborative projects could use similar large-scale methods to investigate these other areas of cognition.
The multilab approach utilized here helps correct for the publication bias that often favors small studies with unusually large effects. By pooling data from many locations, the researchers provided a more realistic and nuanced view of how expertise relates to general mental abilities. This work sets a new benchmark for transparency and reliability in the field of music psychology.
Ultimately, the study suggests that while musicians do have better memory, the advantage is most prominent when they are dealing with music itself. The idea that learning an instrument provides a major boost to all types of memory appears to be an oversimplification. Instead, the relationship between music and the mind is a complex interaction of training, personality, and general cognitive traits.
The study, “Do Musicians Have Better Short-Term Memory Than Nonmusicians? A Multilab Study,” was authored by Massimo Grassi, Francesca Talamini, Gianmarco Altoè, Elvira Brattico, Anne Caclin, Barbara Carretti, Véronique Drai-Zerbib, Laura Ferreri, Filippo Gambarota, Jessica Grahn, Lucrezia Guiotto Nai Fovino, Marco Roccato, Antoni Rodriguez-Fornells, Swathi Swaminathan, Barbara Tillmann, Peter Vuust, Jonathan Wilbiks, Marcel Zentner, Karla Aguilar, Christ B. Aryanto, Frederico C. Assis Leite, Aíssa M. Baldé, Deniz Başkent, Laura Bishop, Graziela Kalsi, Fleur L. Bouwer, Axelle Calcus, Giulio Carraturo, Victor Cepero-Escribano, Antonia Čerič, Antonio Criscuolo, Léo Dairain, Simone Dalla Bella, Oscar Daniel, Anne Danielsen, Anne-Isabelle de Parcevaux, Delphine Dellacherie, Tor Endestad, Juliana L. d. B. Fialho, Caitlin Fitzpatrick, Anna Fiveash, Juliette Fortier, Noah R. Fram, Eleonora Fullone, Stefanie Gloggengießer, Lucia Gonzalez Sanchez, Reyna L. Gordon, Mathilde Groussard, Assal Habibi, Heidi M. U. Hansen, Eleanor E. Harding, Kirsty Hawkins, Steffen A. Herff, Veikka P. Holma, Kelly Jakubowski, Maria G. Jol, Aarushi Kalsi, Veronica Kandro, Rosaliina Kelo, Sonja A. Kotz, Gangothri S. Ladegam, Bruno Laeng, André Lee, Miriam Lense, César F. Lima, Simon P. Limmer, Chengran K. Liu, Paulina d. C. Martín Sánchez, Langley McEntyre, Jessica P. Michael, Daniel Mirman, Daniel Müllensiefen, Niloufar Najafi, Jaakko Nokkala, Ndassi Nzonlang, Maria Gabriela M. Oliveira, Katie Overy, Andrew J. Oxenham, Edoardo Passarotto, Marie-Elisabeth Plasse, Herve Platel, Alice Poissonnier, Neha Rajappa, Michaela Ritchie, Italo Ramon Rodrigues Menezes, Rafael Román-Caballero, Paula Roncaglia, Farrah Y. Sa’adullah, Suvi Saarikallio, Daniela Sammler, Séverine Samson, E. G. Schellenberg, Nora R. Serres, L. R. Slevc, Ragnya-Norasoa Souffiane, Florian J. Strauch, Hannah Strauss, Nicholas Tantengco, Mari Tervaniemi, Rachel Thompson, Renee Timmers, Petri Toiviainen, Laurel J. Trainor, Clara Tuske, Jed Villanueva, Claudia C. von Bastian, Kelly L. Whiteford, Emily A. Wood, Florian Worschech, and Ana Zappa.

An analysis of data from the National Health and Nutrition Examination Survey (2009–2020) found that men with higher sunlight affinity tend to have fewer depressive symptoms. They also reported sleeping problems less often; however, their sleep durations tended to be shorter. The paper was published in PLOS ONE.
In the context of this study, “sunlight affinity” is a novel metric proposed by the authors to measure an individual’s tendency to seek and enjoy natural sunlight. It combines psychological factors (preference for sun) and behavioral factors (actual time spent outdoors). Generally, the drive for sunlight is influenced by biological factors such as circadian rhythms, melatonin suppression, and serotonin regulation.
People with high sunlight affinity tend to report improved mood, energy, and alertness during bright daytime conditions. This tendency is partly genetic but is also shaped by developmental experiences, climate, and cultural habits. Conversely, low sunlight affinity may be associated with discomfort in bright light or a preference for dim environments.
While sunlight affinity is conceptually related to light sensitivity and seasonal affective patterns, it is not a clinical diagnosis. In environmental psychology, it helps explain preferences for outdoor activities and sunlit living or working spaces. In health contexts, moderate sunlight affinity can promote behaviors that support vitamin D synthesis and circadian stability.
Lead author Haifeng Liu and his colleagues investigated the associations between sunlight affinity and symptoms of depression and sleep disorders in U.S. men. They examined two dimensions of sunlight affinity: sunlight preference (how often a person chooses to stay in the sun versus the shade) and sunlight exposure duration (how much time a person spends outdoors). They noted that while recent findings indicate light therapy might help reduce depressive symptoms and improve sleep, the impact of natural sunlight exposure remains underexplored.
The authors analyzed data from 7,306 men who participated in the National Health and Nutrition Examination Survey (NHANES) between 2009 and 2020. NHANES is a continuous, nationally representative U.S. program that combines interviews, physical examinations, and laboratory tests to assess the health, nutritional status, and disease prevalence of the civilian, noninstitutionalized population.
The researchers selected males aged 20–59 years who provided all the necessary data. These individuals completed assessments for sunlight affinity, depressive symptoms (using the Patient Health Questionnaire-9), sleep disorder symptoms (answering, “Have you ever told a doctor or other health professional that you have trouble sleeping?”), and sleep duration.
Sunlight affinity was assessed by asking participants, “When you go outside on a very sunny day for more than one hour, how often do you stay in the shade?” and by asking them to report the time they spent outdoors between 9:00 AM and 5:00 PM over the previous 30 days.
The results showed that participants with a higher preference for sunlight and longer exposure durations tended to have fewer depressive symptoms. They also reported sleep problems less often, though their total sleep time tended to be shorter.
“This study revealed that sunlight affinity was inversely associated with depression and trouble sleeping and positively associated with short sleep in males. Further longitudinal studies are needed to confirm causality,” the authors concluded.
The study sheds light on the links between sunlight affinity, depression, and sleep problems. However, it should be noted that the cross-sectional study design does not allow for causal inferences. Additionally, all study data were based on self-reports, leaving room for reporting bias to have affected the results.
The paper, “Associations of sunlight affinity with depression and sleep disorders in American males: Evidence from NHANES 2009–2020,” was authored by Haifeng Liu, Jia Yang, Tiejun Liu, and Weimin Zhao.


Researchers have discovered that Alzheimer’s disease may be reversible in animal models through a treatment that restores the brain’s metabolic balance. This study, published in the journal Cell Reports Medicine, demonstrates that restoring levels of a specific energy molecule allows the brain to repair damage and recover cognitive function even in advanced stages of the illness. The results suggest that the cognitive decline associated with the condition is not an inevitable permanent state but rather a result of a loss of brain resilience.
For more than a century, Alzheimer’s disease has been considered an irreversible illness. Consequently, research has focused on preventing or slowing it rather than on recovery. Despite billions of dollars spent over decades of research, no clinical trial has ever tested a drug intended to reverse the condition and restore lost function. This new research challenges that long-held dogma.
The study was led by Kalyani Chaubey, a researcher at the Case Western Reserve University School of Medicine. She worked alongside senior author Andrew A. Pieper, who is a professor at Case Western Reserve and director of the Brain Health Medicines Center at Harrington Discovery Institute. The team included scientists from University Hospitals and the Louis Stokes Cleveland VA Medical Center.
The researchers focused on a molecule called nicotinamide adenine dinucleotide, known as NAD+. This molecule is essential for cellular energy and repair across the entire body. Scientists have observed that NAD+ levels decline naturally as people age, but this loss is much more pronounced in those with neurodegenerative conditions. Without proper levels of this metabolic currency, cells become unable to execute the processes required for proper functioning and survival.
Previous research has established a foundation for this approach. A 2018 study in the Proceedings of the National Academy of Sciences showed that supplementing with NAD+ precursors could normalize neuroinflammation and DNA damage in mice. That earlier work suggested that a depletion of this molecule sits upstream of many other symptoms like tau protein buildup and synaptic dysfunction.
In 2021, another study published in the same journal found that restoring this energy balance could reduce cell senescence, which is a state where cells stop dividing but do not die. This process is linked to the chronic inflammation seen in aging brains.
Additionally, an international team led by researchers at the University of Oslo recently identified a mechanism where NAD+ helps correct errors in how brain cells process genetic information. That study, published in Science Advances, identified a specific protein called EVA1C as a central player in helping the brain manage damaged proteins.
Despite these promising leads, many existing supplements can push NAD+ to supraphysiologic levels. High levels that exceed what is natural for the body have been linked to an increased risk of cancer in some animal models. The Case Western Reserve team wanted to find a way to restore balance without overshooting the natural range.
They utilized a compound called P7C3-A20, which was originally developed in the Pieper laboratory. This compound is a neuroprotective agent that helps cells maintain their proper balance of NAD+ under conditions of overwhelming stress. It does not elevate the molecule to levels that are unnaturally high.
To test the potential for reversal, the researchers used two distinct mouse models. The first, known as 5xFAD, is designed to develop heavy amyloid plaque buildup and human-like tau changes. The second model, PS19, carries a human mutation in the tau protein that causes toxic tangles and the death of neurons. These models allow scientists to study the major biological hallmarks of the human disease.
The researchers first confirmed that brain energy balance deteriorates as the disease progresses. In mice that were two months old and pre-symptomatic, NAD+ levels were normal. By six months, when the mice showed clear signs of cognitive trouble, their levels had dropped by 30 percent. By twelve months, when the disease was very advanced, the deficit reached 45 percent.
The core of the study involved a group of mice designated as the advanced disease stage cohort. These animals did not begin treatment until they were six months old. At this point, they already possessed established brain pathology and measurable cognitive decline. They received daily injections of the treatment until they reached one year of age.
The results showed a comprehensive recovery of function. In memory tests like the Morris water maze, where mice must remember the location of a submerged platform, the treated animals performed as well as healthy controls. Their spatial learning and memory were restored to normal levels despite their genetic mutations.
The mice also showed improvements in physical coordination. On a rotating rod test, which measures motor learning, the advanced stage mice regained their ability to balance and stay on the device. Their performance was not statistically different from healthy mice by the end of the treatment period.
The biological changes inside the brain were equally notable. The treatment repaired the blood-brain barrier, which is the protective seal around the brain’s blood vessels. In Alzheimer’s disease, this barrier often develops leaks that allow harmful substances into the brain tissue. Electron microscope images showed that the treatment had sealed these gaps and restored the health of supporting cells called pericytes.
The researchers also tracked a specific marker called p-tau217. This is a form of the tau protein that is now used as a standard clinical biomarker in human patients. The team found that levels of this marker in the blood were reduced by the treatment. This finding provides an objective way to confirm that the disease was being reversed.
Speaking about the discovery, Pieper noted the importance of the results for future medicine. “We were very excited and encouraged by our results,” he said. “Restoring the brain’s energy balance achieved pathological and functional recovery in both lines of mice with advanced Alzheimer’s. Seeing this effect in two very different animal models, each driven by different genetic causes, strengthens the new idea that recovery from advanced disease might be possible in people with AD when the brain’s NAD+ balance is restored.”
The team also performed a proteomic analysis, which is a massive screen of all the proteins in the brain. They identified 46 specific proteins that are altered in the same way in both human patients and the sick mice. These proteins are involved in tasks like waste management, protein folding, and mitochondrial function. The treatment successfully returned these protein levels to their healthy state.
To ensure the mouse findings were relevant to humans, the scientists studied a unique group of people. These individuals are known as nondemented with Alzheimer’s neuropathology. Their brains are full of amyloid plaques, yet they remained cognitively healthy throughout their lives. The researchers found that these resilient individuals naturally possessed higher levels of the enzymes that produce NAD+.
This human data suggests that the brain has an intrinsic ability to resist damage if its energy balance remains intact. The treatment appears to mimic this natural resilience. “The damaged brain can, under some conditions, repair itself and regain function,” Pieper explained. He emphasized that the takeaway from this work is a message of hope.
The study also included tests on human brain microvascular endothelial cells. These are the cells that make up the blood-brain barrier in people. When these cells were exposed to oxidative stress in the laboratory, the treatment protected them from damage. It helped their mitochondria continue to produce energy and prevented the cells from dying.
While the results are promising, there are some limitations to the study. The researchers relied on genetic mouse models, which represent the rare inherited forms of the disease. Most people suffer from the sporadic form of the condition, which may have more varied causes. Additionally, human brain samples used for comparison represent a single moment in time, which makes it difficult to establish a clear cause and effect relationship.
Future research will focus on moving this approach into human clinical trials. The scientists want to determine if the efficacy seen in mice will translate to human patients. They also hope to identify which specific aspects of the brain’s energy balance are the most important for starting the recovery process.
The technology is currently being commercialized by a company called Glengary Brain Health. The goal is to develop a therapy that could one day be used to treat patients who already show signs of cognitive loss. As Chaubey noted, “Through our study, we demonstrated one drug-based way to accomplish this in animal models, and also identified candidate proteins in the human AD brain that may relate to the ability to reverse AD.”
The study, “Pharmacologic reversal of advanced Alzheimer’s disease in mice and identification of potential therapeutic nodes in human brain,” was authored by Kalyani Chaubey, Edwin Vázquez-Rosa, Sunil Jamuna Tripathi, Min-Kyoo Shin, Youngmin Yu, Matasha Dhar, Suwarna Chakraborty, Mai Yamakawa, Xinming Wang, Preethy S. Sridharan, Emiko Miller, Zea Bud, Sofia G. Corella, Sarah Barker, Salvatore G. Caradonna, Yeojung Koh, Kathryn Franke, Coral J. Cintrón-Pérez, Sophia Rose, Hua Fang, Adrian A. Cintrón-Pérez, Taylor Tomco, Xiongwei Zhu, Hisashi Fujioka, Tamar Gefen, Margaret E. Flanagan, Noelle S. Williams, Brigid M. Wilson, Lawrence Chen, Lijun Dou, Feixiong Cheng, Jessica E. Rexach, Jung-A Woo, David E. Kang, Bindu D. Paul, and Andrew A. Pieper.

Videos promoting #testosteronemaxxing are racking up millions of views. Like “looksmaxxing” or “fibremaxxing”, this trend takes something related to body image (improving your looks) or health (eating a lot of fibre) and pushes it to extreme levels.
Testosterone or “T” maxxing encourages young men – mostly teenage boys – to increase their testosterone levels, either naturally (for example, through diet) or by taking synthetic hormones.
Podcasters popular among young men, such as Joe Rogan and Andrew Huberman, enthusiastically promote it as a way to fight ageing, enhance performance or build strength.
However, taking testosterone when there’s no medical need has serious health risks. And the trend plays into the insecurities of young men and developing boys who want to be considered masculine and strong. This can leave them vulnerable to exploitation – and seriously affect their health.
We all produce the sex hormone testosterone, but levels are naturally much higher in males. It’s produced mainly in the testes, and in much smaller amounts in the ovaries and adrenal glands.
Testosterone’s effects on the body are wide ranging, including helping you grow and repair muscle and bone, produce red blood cells and stabilise mood and libido.
During male puberty, testosterone production increases 30-fold and drives changes such as a deeper voice, developing facial hair and increasing muscle mass and sperm production.
It’s normal for testosterone levels to change across your lifetime, and even to fluctuate daily (usually at their highest in the morning).
Lifestyle factors such as diet, sleep and stress can also affect how much testosterone you produce.
Natural testosterone levels generally peak in early adulthood, around the mid-twenties. They then start to progressively decline with age.
A doctor can check hormone levels with a blood test. For males, healthy testosterone levels usually range between about 450 and 600 ng/dL (nanograms per decilitre of blood serum). Low testosterone is generally below 300 ng/dL.
In Australia, taking testosterone is only legal with a doctor’s prescription and ongoing supervision. The only way to diagnose low testosterone is via a blood test.
Testosterone may be prescribed to men diagnosed with hypogonadism, meaning the testes don’t produce enough testosterone.
This condition can lead to a range of symptoms.
Hypogonadism has even been linked to early death in men.
Hypogonadism affects around one in 200 men, although estimates vary. It is more common among older men and those with diabetes or obesity.
Yet on social media, “low T” is being framed as an epidemic among young men. Influencers warn them to look for signs, such as not developing muscle mass or strength as quickly as hoped – or simply not looking “masculine”.
Extreme self-improvement and optimisation trends spread like wildfire online. They tap into common anxieties about masculinity, status and popularity.
Conflating “manliness” with testosterone levels and a muscular physical appearance exploits an insecurity ripe for marketing.
This has fuelled a market surge for “solutions” including private clinics offering “testosterone optimisation” packages, supplements claiming to increase testosterone levels and influencers on social media promoting extreme exercise and diet programs.
There is evidence some people are undergoing testosterone replacement therapy, even when they don’t have clinically low levels of testosterone.
Taking testosterone as a medication can suppress the body’s own production, by shutting down the hypothalamic-pituitary-gonadal axis, which controls testosterone and sperm production.
While testosterone production can recover after you stop taking testosterone, this can be slow and is not guaranteed, particularly after long-term or unsupervised use. This means some men may feel a significant difference when they stop taking testosterone.
Testosterone therapy can also lead to side effects for some people, including acne and skin conditions, balding, reduced fertility and a high red blood cell count. It can also interact with some medications.
So there are added risks from using testosterone without a prescription and appropriate supervision.
On the black market, testosterone is sold in gyms, or online via encrypted messaging apps. These products can be contaminated, counterfeit or incorrectly dosed.
People taking these drugs without medical supervision face potential infection, organ damage, or even death, since contaminated or counterfeit products have been linked to toxic metal poisoning, heart attacks, strokes and fatal organ failure.
T maxxing offers young men an enticing image: raise your testosterone, be more manly.
But for healthy young men without hypogonadism, the best ways to regulate hormones and development are healthy lifestyle choices. This includes sleeping and eating well and staying active.
To fight misinformation and empower men to make informed choices, we need to meet them where they are. This means recognising their drive for self-improvement without judgement while helping them understand the real risks of non-medical hormone use.
We also need to acknowledge that for young men, chasing T maxxing often masks deeper issues, such as body image anxiety, social pressure or mental health problems.
Young men often delay seeking help until they have a medical emergency.
If you’re worried about your testosterone levels, speak to your doctor.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

People across three very different societies—Scotland, Pakistan, and Papua—show notable cultural and age-related differences in how much they prefer decorated objects, according to a new study published in Evolutionary Psychology.
Human ornamentation has existed for tens of thousands of years, appearing in archaeological sites spanning North Africa to Australia. Theories as to why ornamentation is so widespread have ranged from perceptual processing advantages, social identity, and costly signaling, to evolved aesthetic tendencies.
Many prior studies have focused on prehistoric records or Indigenous groups far removed from Western influence, leaving open the question of whether modern cultural shifts, particularly Western minimalism, have altered people’s underlying preferences for ornamentation.
To address this gap, Piotr Sorokowski and colleagues examined whether cultural environment and age shape people’s appreciation for decorated versus plain objects. Their approach was grounded in evolutionary psychology and developmental research showing that even across diverse societies, children often draw, adorn, and embellish objects spontaneously. This raises the possibility that preference for ornamentation might emerge early in life and only later be molded or suppressed by cultural norms.
The researchers recruited 215 parent-child dyads across three cultural contexts: Scotland (the most WEIRD location, where WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic), Pakistan (moderately WEIRD), and Papua (a minimally Western-influenced region). Dyads were recruited online in Scotland and Pakistan through an international research company, and in person in Papua via snowball sampling.
The final sample included 84 dyads from Scotland, 88 from Pakistan, and 43 from Papua (Dani and Yali tribes in the Baliem Valley and Yalimo Highlands). Adults, who completed the task first, and children completed the same object-choice paradigm independently.
Each participant viewed six pairs of images representing everyday items: three plates and three shirts. In every pair, one item appeared in a plain design while the other incorporated a simple ornament such as a floral motif, leaf pattern, or abstract line drawing. The researchers selected objects expected to be universally recognizable and culturally neutral to facilitate comparable interpretation across societies.
Participants indicated which object they preferred in each pair, and the presentation order and left-right positioning of ornamented items were systematically varied to reduce order effects and side biases. Demographic data, including age, sex, and place of residence, were also collected. From the six choices, the researchers derived three preference scores for each participant: one for ornamented plates, one for ornamented shirts, and one aggregated score reflecting overall ornamentation preference.
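As a concrete illustration of that scoring, here is a minimal sketch with hypothetical choice data (the paper does not publish its scoring code, so variable names and values are invented):

```python
# Hypothetical choices: 1 = ornamented item chosen, 0 = plain item chosen.
choices = {
    "plate_1": 1, "plate_2": 0, "plate_3": 1,
    "shirt_1": 1, "shirt_2": 1, "shirt_3": 0,
}

# One score per object type (range 0-3) plus an aggregate score (range 0-6).
plate_score = sum(v for k, v in choices.items() if k.startswith("plate"))
shirt_score = sum(v for k, v in choices.items() if k.startswith("shirt"))
overall_score = plate_score + shirt_score

print(plate_score, shirt_score, overall_score)  # 2 2 4
```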
Across all analyses, strong cultural differences emerged. Participants from Papua showed the highest preference for ornamented objects, followed by participants from Pakistan, while Scottish participants demonstrated the lowest enthusiasm for decoration. These differences were robust across plates, shirts, and overall scores. The results support the idea that cultural context, particularly Western minimalism, suppresses ornamentation preferences.
The researchers also observed age differences, but primarily within the Scottish sample. Children generally preferred ornamented objects more than adults. This pattern was strongest in Scotland, where it was significant for both shirts and the aggregated preference score. In Pakistan, the difference was more modest and found only for shirts; in Papua, adults and children showed similarly high enthusiasm for ornamented designs. Correlations also revealed moderate similarity between parents and children within dyads, and ornamentation preference tended to decline with age in the Scottish sample.
A key finding is that children across cultures, especially in the Western sample, favored ornamentation more than adults, suggesting that younger individuals may display a more “baseline” or biologically rooted preference before cultural norms exert their influence. The authors argue that Papuan adults may be closer than Western adults to this natural preference level, given their similarity to children’s choices.
One limitation is that only two object types (plates and shirts) were used; broader categories such as home décor, architecture, or sculpture were not tested.
Overall, the findings suggest that humans may possess an evolutionarily grounded inclination toward ornamentation, one that can be dampened, but not erased, by cultural forces such as Western minimalism.
The research, “Is Ornamentation a Universal Human Preference? Cross-Cultural and Developmental Evidence From Scotland, Pakistan, and Papua,” was authored by Piotr Sorokowski, Jerzy Luty, Wiktoria Jędryczka, and Michal Mikolaj Stefanczyk.

An analysis of data from the Survey of Health, Ageing, and Retirement in Europe found that individuals born between 1945 and 1957 (baby boomers) who were in stable marriages experienced greater well-being in old age compared to those who were single or in less stable relationships. Participants with lower education who had divorced showed even lower well-being. The study was published in the European Journal of Population.
Romantic couple relationships play a central role in adult life. A relationship with a romantic partner provides companionship, emotional security, and a sense of belonging. Through daily interactions, romantic partners influence each other’s emotions, behaviors, and life choices.
Supportive and stable relationships are associated with better mental health, including lower levels of depression, anxiety, and loneliness. They can also buffer the effects of stress by offering emotional reassurance and practical help during difficult periods.
High-quality romantic relationships are linked to better physical health, including lower risk of cardiovascular disease and improved immune functioning. Partners shape each other’s health behaviors, such as diet, physical activity, substance use, and adherence to medical advice. Conversely, conflictual or unsupportive relationships can increase stress, negatively affect mental health, and contribute to poorer physical outcomes.
Study author Miika Mäki and his colleagues note that well-being in old age reflects combined experiences over the entire life course. Previous studies indicate that marriage dissolutions have long-term negative implications on well-being and health that can persist even among those who remarry. Similarly, unstable partnerships and multiple relationship transitions or long-term singlehood are all associated with higher levels of depression and stress and lower social and emotional support.
To explore this in more detail, these authors conducted a study in which they examined the links between well-being in old age and romantic relationship history. They hypothesized that individuals with stable marital relationship histories will experience higher well-being after age 60 compared to those with less stable relationship histories.
To explore this, they analyzed data from the SHARELIFE interviews of the Survey of Health, Ageing, and Retirement in Europe (SHARE). SHARE covers households with at least one member over 50 years of age in all EU countries, Switzerland, and Israel. The SHARELIFE interviews were conducted in 2008 and 2017. Respondents were asked to report, among other things, on their childhood circumstances and their romantic relationships, including all cohabitational, marital, and dating relationships.
This analysis was based on the data of individuals born between 1945 and 1957, who were at least 60 years old in 2017 (all part of the baby boomer generation). Data from a total of 18,256 participants were included in the analyses.
Analyses identified five general patterns of partnership history: stable marriage (a brief period of dating followed by a permanent first marriage), remarriage (marrying in their 20s, divorcing within the first 10 years, and remarrying in their 30s, with later marriages often preceded by cohabitation), divorce (the same trajectory, but without a remarriage), serial cohabitation (dating and cohabiting prominently throughout the life course), and single (individuals who never lived with a partner, many of whom never dated).
Most of the participants were in the stable marriage category, while the singles and serial cohabitation patterns were the rarest. Men were more frequent in the single category, while women were more frequent in the divorce category.
Further analysis revealed that individuals in the stable marriage category enjoyed greater well-being compared to all the other categories. This difference was present across all education levels. However, those with lower education who had divorced experienced even lower well-being in old age. Overall, the results indicate that those with fewer resources tend to suffer more from losing a partner.
Study authors conclude that “…life courses characterized by stable marriages tend to be coupled with good health and high quality of life, unstable and single histories less so. Low educational attainment together with partnership trajectories characterized by divorce have pronounced adverse well-being associations. Our results hint at family formation patterns that may foster well-being and mechanisms that potentially boost or buffer the outcomes.”
The study sheds light on the links between romantic relationship patterns and well-being in old age. However, it should be noted that the study exclusively included individuals from the baby boomer generation. Given pronounced cultural differences between generations in the past century, results on people from other generations might not be identical.
The paper, “Stable Marital Histories Predict Happiness and Health Across Educational Groups,” was authored by Miika Mäki, Anna Erika Hägglund, Anna Rotkirch, Sangita Kulathinal, and Mikko Myrskylä.

A new study suggests that body shape, specifically the degree of roundness around the abdomen, may help predict the risk of developing depression. Researchers found that individuals with a higher Body Roundness Index faced a higher likelihood of being diagnosed with this mental health condition over time. These findings were published in the Journal of Affective Disorders.
Depression is a widespread mental health challenge that affects roughly 300 million people globally. It often brings severe physical health burdens and economic costs to individuals and society. Medical professionals have identified obesity as a potential risk factor for mental health issues. The standard tool for measuring obesity is the Body Mass Index, or BMI. This metric calculates a score based solely on a person’s weight and height.
However, the Body Mass Index has limitations regarding accuracy in assessing health risks. It cannot distinguish between muscle mass and fat mass. It also fails to indicate where fat is stored on the body. This distinction is vital because fat stored around the abdomen is often more metabolically harmful than fat stored elsewhere. To address these gaps, scientists developed the Body Roundness Index.
This newer metric uses waist circumference in relation to height to estimate the amount of visceral fat a person carries. Visceral fat is the fat stored deep inside the abdomen, wrapping around internal organs. This type of fat is biologically active and linked to various chronic diseases. Previous research hinted at a connection between this type of fat and mental health, but long-term data was limited.
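The article does not reproduce the formula, but the Body Roundness Index is conventionally computed from waist circumference and height using the equation proposed by Thomas and colleagues in 2013, which models the body as an ellipse. A minimal sketch in Python, assuming both measurements are taken in centimetres:

```python
import math

def body_roundness_index(waist_cm: float, height_cm: float) -> float:
    """Body Roundness Index (Thomas et al., 2013).

    The body is modeled as an ellipse whose semi-minor axis is the radius
    implied by the waist circumference and whose semi-major axis is half
    of standing height; BRI is a rescaling of that ellipse's eccentricity.
    """
    r = (waist_cm / 100.0) / (2 * math.pi)      # waist radius in metres
    half_height = (height_cm / 100.0) / 2.0     # semi-major axis in metres
    eccentricity = math.sqrt(1 - (r ** 2) / (half_height ** 2))
    return 364.2 - 365.5 * eccentricity

# Illustrative only: a 100 cm waist at 175 cm height gives a BRI of roughly 4.8.
print(round(body_roundness_index(100, 175), 2))
```

Because waist and height enter the square root only as a ratio, two people with the same waist-to-height ratio receive the same score regardless of overall size, which is what lets the index track central fat distribution rather than overall weight.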
Yinghong Zhai from the Shanghai Jiao Tong University School of Medicine served as a lead author on this new project. Zhai and colleagues sought to clarify if body roundness could predict future depression better than general weight measures. They also wanted to understand if lifestyle choices like smoking or exercise explained the connection.
To investigate this, the team utilized data from the UK Biobank. This is a massive biomedical database containing genetic, health, and lifestyle information from residents of the United Kingdom. The researchers selected records for 201,813 adults who did not have a diagnosis of depression when they joined the biobank. Participants ranged in age from 40 to 69 years old at the start of the data collection.
The researchers calculated the Body Roundness Index for each person using their waist and height measurements. They then tracked these individuals for an average of nearly 13 years. The goal was to see which participants developed new cases of depression during that decade. To ensure accuracy, the analysis accounted for various influencing factors.
These factors included age, biological sex, socioeconomic status, and ethnicity. The team also controlled for existing health conditions like type 2 diabetes and high blood pressure. They further adjusted for lifestyle habits, such as alcohol consumption and sleep duration. The results showed a clear pattern linking body shape to mental health outcomes.
Participants were divided into four groups, or quartiles, based on their body roundness scores. Those in the highest quartile had the largest waist-to-height ratios. The analysis showed that these individuals had a 30 percent higher risk of developing depression compared to those in the lowest quartile. This association held true even after the researchers adjusted for traditional Body Mass Index scores.
The relationship appeared to follow a “J-shaped” curve. This means that as body roundness increased, the probability of a depression diagnosis rose progressively. The trend was consistent across different subgroups of people. It affected both men and women, as well as people older and younger than 60.
The team also investigated the role of lifestyle behaviors in this relationship. They used statistical mediation analysis to see if habits like smoking or drinking explained the link. The question was whether body roundness led to specific behaviors that then caused depression. They found that smoking status did contribute to the increased risk.
Conversely, physical activity offered a protective effect, slightly lowering the risk. Education levels also played a minor mediating role. However, these lifestyle factors only explained a small portion of the overall connection. The direct link between body roundness and depression remained robust regardless of these behaviors.
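The study's mediation models are more elaborate than this, but the product-of-coefficients logic behind "how much of the link runs through smoking or exercise" can be illustrated on simulated data. This is only a sketch with made-up numbers and simple linear models, not the authors' actual analysis:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000

# Simulated, purely illustrative data: body roundness -> smoking -> depression score.
bri = rng.normal(5.0, 1.5, n)
smoking = 0.30 * bri + rng.normal(0, 1, n)
depression = 0.20 * bri + 0.25 * smoking + rng.normal(0, 1, n)

total = sm.OLS(depression, sm.add_constant(bri)).fit()            # total effect of BRI
mediator = sm.OLS(smoking, sm.add_constant(bri)).fit()            # a-path: BRI -> mediator
outcome = sm.OLS(depression,
                 sm.add_constant(np.column_stack([bri, smoking]))).fit()  # direct effect and b-path

indirect = mediator.params[1] * outcome.params[2]  # a * b
print(f"total effect:        {total.params[1]:.3f}")
print(f"direct effect:       {outcome.params[1]:.3f}")
print(f"indirect effect:     {indirect:.3f}")
print(f"proportion mediated: {indirect / total.params[1]:.1%}")
```

In the actual study the outcome was an incident depression diagnosis over follow-up, so the real models handle binary or time-to-event outcomes; the decomposition into direct and mediated components is analogous, though the arithmetic is more involved.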
The authors discussed potential biological mechanisms that might explain why central obesity correlates with mood disorders. Abdominal fat acts somewhat like an active organ. It releases inflammatory markers, such as cytokines, into the bloodstream. These markers can cross the blood-brain barrier. Once in the brain, they may disrupt the function of neurotransmitters that regulate mood.
Another possibility involves hormonal imbalances. Obesity is often associated with resistance to leptin, a hormone that regulates energy balance. High levels of leptin can interfere with the hypothalamic-pituitary-adrenal axis. This axis is a complex system of neuroendocrine pathways that controls the body’s reaction to stress. Disruption here is a known feature of depression.
The study also considered the social and psychological aspects of body image. While the biological links are strong, the authors noted that societal stigma could play a role. However, the persistence of the link after adjusting for many social factors points toward a physiological connection.
While the study involved a large number of people, it has specific limitations. The majority of participants in the UK Biobank are of white European descent. This lack of diversity means the results might not apply directly to other ethnic groups. The authors advise caution when generalizing these findings to diverse populations.
Additionally, the study is observational rather than experimental. This design means researchers can identify a correlation but cannot definitively prove that body roundness causes depression. There is also the possibility of unmeasured factors influencing the results. For example, changes in body weight or mental health status over the 13-year period were not fully tracked day-to-day.
The researchers also noted that they did not directly compare the predictive power of the Body Roundness Index against other obesity metrics. They focused on establishing the link between this specific index and depression. Future research would need to test how this tool performs against other measures in clinical settings.
The authors suggest that future research should focus on more diverse populations to confirm these trends. They also recommend investigating the specific biological pathways that connect abdominal fat to brain function more deeply. Understanding the role of inflammation and hormones could lead to better treatments.
If confirmed, these results could help doctors use simple body measurements as a screening tool. It highlights the potential mental health benefits of managing central obesity. By monitoring body roundness, healthcare providers might identify individuals at higher risk for depression earlier. This could allow for earlier interventions regarding lifestyle or mental health support.
The study, “Body roundness index, depression, and the mediating role of lifestyle: Insights from the UK biobank cohort,” was authored by Yinghong Zhai, Fangyuan Hu, Yang Cao, Run Du, Chao Xue, and Feng Xu.

New research provides evidence that men who are concerned about maintaining a traditional masculine image may be less likely to express concern about climate change. The findings suggest that acknowledging environmental problems is psychologically linked to traits such as warmth and compassion. These traits are stereotypically associated with femininity in many cultures. Consequently, men who feel pressure to prove their manhood may avoid environmentalist attitudes to protect their gender identity. The study was published in the Journal of Environmental Psychology.
Scientific consensus indicates that climate change is occurring and poses significant risks to global stability. Despite this evidence, public opinion remains divided. Surveys consistently reveal a gender gap regarding environmental attitudes. Men typically express less concern about climate change than women do. Michael P. Haselhuhn, a researcher at the University of California, Riverside, sought to understand the psychological drivers behind this disparity.
Haselhuhn conducted this research to investigate why within-gender differences exist regarding climate views. Past studies have often focused on political ideology or a lack of scientific knowledge as primary explanations. Haselhuhn proposed that the motivation to adhere to gender norms plays a significant but overlooked role. He based his hypothesis on the theory of precarious manhood.
Precarious manhood theory posits that manhood is viewed socially as a status that is difficult to earn and easy to lose. Unlike womanhood, which is often treated as a biological inevitability, manhood must be proven through action. This psychological framework suggests that men experience anxiety about failing to meet societal standards of masculinity. They must constantly reinforce their status and avoid behaviors that appear feminine.
Socialization often expects women to be communal, caring, and warm. In contrast, men are often expected to be agentic, tough, and emotionally reserved. Haselhuhn theorized that because caring for the environment involves communal concern, it signals warmth. Men who are anxious about their social status might perceive this signal as a threat. They may reject climate science not because they misunderstand the data, but because they wish to avoid seeming “soft.”
The researcher began with a preliminary test to establish whether environmental concern is indeed viewed as a feminine trait. He recruited 450 participants from the United States through an online platform. These participants read a short scenario about a male university student named Adam. Adam was described as an undergraduate majoring in Economics who enjoyed running.
In the control condition, Adam was described as active in general student issues. In the experimental condition, Adam was described as concerned about climate change and active in a “Save the Planet” group. After reading the scenario, participants rated Adam on various personality traits. Haselhuhn specifically looked at ratings for warmth, caring, and compassion.
The results showed that when Adam was described as concerned about climate change, he was perceived as significantly warmer than when he was interested in general student issues. Participants viewed the environmentalist version of Adam as possessing more traditionally feminine character traits. This initial test confirmed that expressing environmental concern can alter how a man’s gender presentation is perceived by others.
Following this pretest, Haselhuhn analyzed data from the European Social Survey to test the hypothesis on a large scale. This survey included responses from 40,156 individuals across multiple European nations. The survey provided a diverse sample that allowed the researcher to look for broad patterns in the general population.
The survey asked male participants to rate how important “being a man” was to their self-concept, and asked women the same regarding “being a woman.” It also measured three specific climate attitudes. These included belief in human causation, feelings of personal responsibility, and overall worry about climate change.
Haselhuhn found a negative relationship between masculinity concerns and climate engagement. Men who placed a high importance on being a man were less likely to believe that climate change is caused by human activity. They also reported feeling less personal responsibility to reduce climate change. Furthermore, these men expressed lower levels of worry about the issue.
A similar pattern appeared for women regarding the importance of being a woman. However, statistical analysis confirmed that the effect of gender role concern on climate attitudes was significantly stronger for men. This aligns with the theory that the pressure to maintain one’s gender status is more acute for men due to the precarious nature of manhood.
To validate these findings with more precise psychological tools, Haselhuhn conducted a second study with 401 adults in the United States. The measure used in the European survey was a single question, which might have lacked nuance. In this second study, men completed the Masculine Gender Role Stress scale.
This scale assesses how much anxiety men feel in situations that challenge traditional masculinity. Items include situations such as losing in a sports competition or admitting fear. Women completed a parallel scale regarding feminine gender stress. This scale includes items about trying to excel at work while being a good parent. Climate attitudes were measured using a standard scale assessing conviction that climate change is real and concern about its impact.
The results from the second study replicated the findings from the large-scale survey. Men who scored higher on masculinity stress expressed significantly less concern about climate change. This relationship held true regardless of the participants’ political orientation. Haselhuhn found no relationship between gender role stress and climate attitudes among women in this sample. This suggests that the pressure to adhere to gender norms specifically discourages men from engaging with environmental issues.
A third study was conducted to pinpoint the underlying mechanism. Haselhuhn recruited 482 men from the United States for this final experiment. He sought to confirm that the fear of appearing “warm” or feminine was the specific driver of the effect. Participants completed the same masculinity stress scale and climate attitude measures used in the previous study.
They also completed a task where they categorized various personality traits. Participants rated whether traits such as “warm,” “tolerant,” and “sincere” were expected to be more characteristic of men or women. This allowed the researcher to see how strongly each participant associated warmth with femininity.
Haselhuhn found that men with high masculinity concerns were generally less concerned about climate change. However, this effect depended on their beliefs about warmth. The negative relationship between masculinity concerns and climate attitudes was strongest among men who viewed warmth as a distinctly feminine characteristic.
For men who did not strongly associate warmth with women, the pressure to be masculine did not strongly predict their views on climate change. This provides evidence that the avoidance of feminine stereotypes is a key reason why insecure men distance themselves from environmentalism. They appear to regulate their attitudes to avoid signaling traits that society assigns to women.
These findings have implications for how climate change communication is framed. If environmentalism is perceived as an act of caring and compassion, it may continue to alienate men who are anxious about their gender status. Haselhuhn notes that the effect sizes in the study were small but consistent. This means that while gender concerns are not the only factor driving climate denial, they are a measurable contributor.
The study has some limitations. It relied on self-reported attitudes rather than observable behaviors. It is possible that the pressure to conform to masculine norms would be even higher in public settings where men are watched by peers. Men might be willing to express concern in an anonymous survey but reject those views in a group setting to maintain status.
Future research could examine whether reframing environmental action affects these attitudes. Describing climate action in terms of protection, courage, or duty might make the issue more palatable to men with high masculinity concerns. Additionally, future work could investigate whether affirming a man’s masculinity in other ways reduces his need to reject environmental concern. The current data indicates that for many men, the desire to be seen as a “real man” conflicts with the desire to save the planet.
The study, “Man enough to save the planet? Masculinity concerns predict attitudes toward climate change,” was authored by Michael P. Haselhuhn.

New research indicates that perceiving one’s social group as possessing inner spiritual strength can drive members to extreme acts of self-sacrifice. This willingness to suffer for the group appears to be fueled by collective narcissism, a belief that the group is exceptional but underappreciated by others. The findings suggest that narratives of spiritual power may inadvertently foster dangerous forms of group entitlement. The study was published in the Personality and Social Psychology Bulletin.
History is replete with examples of smaller groups overcoming larger adversaries through sheer willpower. Social psychologists have termed this perceived inner strength “spiritual formidability.” This concept refers to the conviction in a cause and the resolve to pursue it regardless of material disadvantages. Previous observations of combatants in conflict zones have shown that spiritual formidability is often a better predictor of the willingness to fight than physical strength or weaponry.
The authors of the current study sought to understand the psychological mechanisms behind this phenomenon. They aimed to determine why a perception of spiritual strength translates into a readiness to die or suffer for a group. They hypothesized that this process is not merely a result of loyalty or love for the group. Instead, they proposed that it stems from a demand for symbolic recognition.
The researchers suspected that viewing one’s group as spiritually powerful feeds into collective narcissism. Collective narcissism differs from simple group pride or satisfaction. It involves a defensive form of attachment where members believe their group possesses an undervalued greatness that requires external validation. The study tested whether this specific type of narcissistic belief acts as the bridge between spiritual formidability and self-sacrifice.
“Previous research has shown that perceiving one’s group as spiritually strong—deeply committed to its values—predicts a willingness to fight and self-sacrifice, but the psychological mechanisms behind this link were still unclear,” said study author Juana Chinchilla, an assistant professor of social psychology at the Universidad Nacional de Educación a Distancia (UNED) in Spain.
“We were particularly interested in understanding why narratives of moral or spiritual strength can motivate extreme sacrifices, especially in real-world contexts marked by conflict and behavioral radicalization. This study addresses that gap by identifying collective narcissism as a key mechanism connecting spiritual formidability to extreme self-sacrificial intentions.”
The research team conducted a series of five investigations to test their hypothesis. They began with a preliminary online survey of 420 individuals from the general population in Spain. Participants completed measures assessing their satisfaction with their nation and their levels of national collective narcissism. They also rated their willingness to engage in extreme actions to defend the country, such as going to jail or dying.
A central component of this preliminary study was the inclusion of ingroup satisfaction as a control variable. Ingroup satisfaction represents a secure sense of pride and happiness with one’s membership in a group. It is distinct from the defensive and resentful nature of collective narcissism. By statistically controlling for this variable, the researchers aimed to isolate the specific effects of narcissism.
The data from this initial survey provided a baseline for the researchers’ theory. The results showed that collective narcissism predicted a willingness to sacrifice for the country even after accounting for the influence of ingroup satisfaction.
“One striking finding was how reliably collective narcissism explained self-sacrificial intentions even when controlling for more secure forms of group attachment, such as ingroup satisfaction,” Chinchilla told PsyPost. “This suggests that extreme sacrifice is not always driven by genuine concern for the group’s well-being, but sometimes by defensive beliefs about the group’s greatness and lack of recognition. We were also surprised by how easily these processes could be activated through shared narratives about spiritual strength.”
Following this preliminary work, the researchers gained access to high-security penitentiary centers across Spain for two field studies. Study 1a involved 70 male inmates convicted of crimes related to membership in violent street gangs. Study 1b focused on 47 male inmates imprisoned for organized property crimes and membership in delinquent bands. These populations were selected because they are known for engaging in costly actions to protect their groups.
In these prison studies, participants used a dynamic visual measure to rate their group’s spiritual formidability. They were shown an image of a human body and adjusted a slider to change its size and muscularity. This visual metaphor represented the inner strength and conviction of their specific gang or band. They also completed questionnaires measuring collective narcissism and their willingness to make sacrifices, such as enduring longer prison sentences or cutting off family contact.
The findings from the prison samples were consistent with the initial hypothesis. Inmates who perceived their gang or band as spiritually formidable reported higher levels of collective narcissism. This sense of underappreciated greatness was statistically associated with a higher willingness to make severe personal sacrifices. Mediation analysis indicated that collective narcissism helped account for the association between spiritual formidability and the willingness to self-sacrifice.
The researchers then extended their investigation to a sample of 88 inmates convicted of jihadist terrorism or proselytizing in prison. This sample included individuals involved in major attacks and thwarted plots. The procedure mirrored the previous studies but focused on the broader ideological group of Muslims rather than a specific criminal band. Participants rated the spiritual formidability of Muslims and their willingness to sacrifice for their religious ideology.
The researchers conducted additional statistical analyses to ensure the robustness of these findings. These models explicitly controlled for the gender of the participants. This step ensured that the observed effects were not simply due to differences in how men and women might approach sacrifice or group perception.
The results from the jihadist sample aligned with those from the street gangs. Perceptions of spiritual strength within the religious community were associated with higher collective narcissism regarding the faith. This defensive pride predicted a greater readiness to suffer for the ideology. The relationship remained significant even when controlling for gender. The study demonstrated that the psychological mechanism operates for large-scale ideological values just as it does for small, cohesive gangs.
Finally, the researchers conducted an experimental study with 457 Spanish citizens to establish causality. This study took place during the early stages of the COVID-19 pandemic, a time of heightened threat and social uncertainty. The researchers provided false feedback to a portion of the participants. This feedback stated that most Spaniards viewed their country as possessing high spiritual formidability.
Participants in the control group received no information regarding how other citizens viewed the nation. All participants then completed measures of collective narcissism and willingness to sacrifice to defend the country against the pandemic. The manipulation was designed to test if simply hearing about the group’s spiritual strength would trigger the proposed psychological chain reaction.
The experiment confirmed the causal role of spiritual formidability. Participants led to believe their country was spiritually formidable scored higher on measures of collective narcissism. They also expressed a greater willingness to endure extreme hardships to fight the pandemic. Statistical analysis confirmed that the manipulation influenced self-sacrifice specifically by boosting collective narcissism.
The study provides evidence that narratives of spiritual strength can have a double-edged nature. While such beliefs can foster cohesion, they can also trigger a sense of entitlement and resentment toward those who do not recognize the group’s greatness. This defensive mindset appears to be a key driver of extreme pro-group behavior.
“Our findings suggest that believing one’s group is spiritually formidable can motivate extreme self-sacrifice not only through loyalty or love, but also through a sense that the group is undervalued and deserves greater recognition,” Chinchilla explained. “This illustrates that people may engage in risky or extreme progroup actions to achieve symbolic recognition. Importantly, it also highlights how seemingly positive narratives about spiritual strength can have unintended and potentially dangerous consequences.”
However, “it would be a mistake to interpret spiritual formidability as inherently dangerous or as a direct cause of violence. On its own, perceiving the ingroup as morally committed and spiritually strong can promote loyalty, trust, and cohesion. The problematic consequences may arise only under severe threat or when perceptions of spiritual formidability become intertwined with collective narcissism.”
Future research is needed to determine when exactly these beliefs turn into narcissistic entitlement. The authors note that a key challenge is clarifying the boundary conditions under which spiritual formidability gives rise to collective narcissism. This distinction might depend on whether individuals see violence as morally acceptable.
“We plan to examine whether similar mechanisms operate in non-violent movements, such as environmental or human rights activism, where strong moral commitment is critical,” Chinchilla said. “Another important next step is identifying interventions that can decouple spiritual formidability from collective narcissism, for example by promoting narratives that frame cooperation and peace as markers of true moral strength.”
“One of the strengths of this research is the diversity of the samples, including populations that are rarely accessible in psychological research. Studying these processes in real-world, high-stakes contexts helps bridge the gap between laboratory findings and the dynamics underlying radicalization, intergroup conflict, and extreme collective behavior.”
The study, “Spiritual Formidability Predicts the Will to Self-Sacrifice Through Collective Narcissism,” was authored by Juana Chinchilla and Angel Gomez.

Recent research has identified specific patterns of brain activity that distinguish young children with autism from their typically developing peers. These patterns involve the way different regions of the brain communicate with one another over time and appear to be directly linked to the severity of autism symptoms. The findings suggest that these neural dynamics influence daily adaptive skills, which in turn affect cognitive performance. The study was published in The Journal of Neuroscience.
Diagnosing Autism Spectrum Disorder in young children currently relies heavily on observing behavior. This process can be subjective because symptoms vary widely from one child to another. Scientists have sought to find objective biological markers to improve the accuracy of early diagnosis. They also aim to understand the underlying neural mechanisms that contribute to the social and cognitive challenges associated with the condition.
Most previous research in this area has looked at the brain as a static object. These earlier studies calculated the average connection strength between brain regions over a long period. This approach assumes that brain activity remains constant during the measurement. However, the brain is highly active and constantly reorganizes its networks to process information.
A team of researchers led by Conghui Su and Yaqiong Xiao at the Shenzhen University of Advanced Technology decided to investigate these changing patterns. They focused on a concept known as dynamic functional connectivity. This method treats brain activity like a movie rather than a photograph. It allows scientists to see how functional networks configure and reconfigure themselves from moment to moment.
To measure this activity, the team used a technology called functional near-infrared spectroscopy. This technique involves placing a cap with light sensors on the child’s head. The sensors emit harmless near-infrared light that penetrates the scalp and skull. The light detects changes in blood oxygen levels in the brain, which serves as a proxy for neural activity.
This method is particularly well suited for studying young children. Unlike magnetic resonance imaging scanners, which are loud and require participants to be perfectly still, this optical system is quiet and tolerates some movement. This flexibility allows researchers to collect data in a more natural and comfortable environment.
The study included 44 children between the ages of two and six years old. Approximately half of the participants had been diagnosed with Autism Spectrum Disorder. The other half were typically developing children who served as a control group. The researchers recorded brain activity while the children sat quietly and watched a silent cartoon.
The researchers analyzed the data using a “sliding window” technique. They looked at short segments of the recording to see which brain regions were synchronized at any given second. By applying mathematical clustering algorithms, the team identified four distinct “states” of brain connectivity that recurred throughout the session.
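Conceptually, the sliding-window approach computes a connectivity matrix for each short segment of the recording and then clusters those matrices into recurring states. The sketch below illustrates the idea with simulated signals; the window length, step size, channel count, and the choice of k-means with four clusters are assumptions for demonstration rather than the authors' exact pipeline.

```python
# Illustrative sliding-window connectivity and state clustering on simulated data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
n_timepoints, n_channels = 600, 20           # simulated fNIRS time series
signals = rng.normal(size=(n_timepoints, n_channels))

window, step = 60, 10                        # samples per window, step between windows
windows = []
for start in range(0, n_timepoints - window + 1, step):
    segment = signals[start:start + window]
    corr = np.corrcoef(segment, rowvar=False)     # channel-by-channel correlation matrix
    iu = np.triu_indices(n_channels, k=1)
    windows.append(corr[iu])                      # keep the upper triangle as a feature vector

features = np.vstack(windows)
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(features)
states = kmeans.labels_

# Fraction of time spent in each recurring connectivity "state"
occupancy = np.bincount(states, minlength=4) / len(states)
print(dict(enumerate(np.round(occupancy, 2))))
```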
One specific state, referred to as State 4, emerged as a key point of difference between the two groups. This state was characterized by strong connections between the left and right hemispheres of the brain. It specifically involved robust communication between the temporal and parietal regions, which are areas often associated with language and sensory processing.
The data showed that children with autism spent considerably less time in State 4 compared to the typically developing children. They also transitioned into and out of this state less frequently. The reduction in time spent in this high-connectivity state was statistically significant.
The researchers then compared these brain patterns to clinical assessments of the children. They found a correlation between the brain data and the severity of autism symptoms. Children who spent the least amount of time in State 4 tended to have higher scores on standardized measures of autism severity.
The study also looked at adaptive behavior. This term refers to the collection of conceptual, social, and practical skills that people learn to function in their daily lives. The analysis revealed that children who maintained State 4 for longer durations exhibited better adaptive behavior scores.
In addition to watching cartoons, the children performed a visual search task to measure their cognitive abilities. They were asked to find a specific shape on a touchscreen. The researchers found that the brain patterns observed during the cartoon viewing predicted how well the children performed on this separate game.
The team conducted a statistical mediation analysis to understand the relationship between these variables. This type of analysis helps determine if a third variable explains the relationship between an independent and a dependent variable. The results suggested a specific pathway of influence.
The analysis indicated that the dynamic brain patterns directly influenced the child’s adaptive behavior. In turn, the level of adaptive behavior influenced the child’s cognitive performance on the visual search task. This implies that adaptive skills serve as a bridge connecting neural activity to cognitive outcomes.
To test the robustness of their findings, the researchers analyzed data from an independent group of 24 typically developing children. They observed the same brain states in this new group. The relationship between the duration of State 4 and cognitive response time was replicated in this validation sample.
The researchers also explored whether these brain patterns could be used for classification. They fed the connectivity data into a machine learning algorithm. The computer model was able to distinguish between children with autism and typically developing children with an accuracy of roughly 74 percent.
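An analysis of this kind typically feeds each child's connectivity features into a classifier and estimates accuracy with cross-validation, roughly as in the hypothetical sketch below. The classifier choice, feature set, and simulated labels are placeholders, not the model reported in the paper.

```python
# Hypothetical classification sketch: cross-validated accuracy of a simple
# classifier trained on connectivity features (all data here are simulated).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_children, n_features = 44, 190             # e.g., upper-triangle connectivity values
X = rng.normal(size=(n_children, n_features))
y = rng.integers(0, 2, n_children)           # 1 = autism group, 0 = control (simulated)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```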
This accuracy rate suggests that dynamic connectivity features have potential as a diagnostic biomarker. The ability to identify such markers objectively could complement traditional behavioral assessments. It may help clinicians identify the condition earlier or monitor how a child responds to treatment over time.
The study highlights the importance of interhemispheric communication. The reduced connections between the left and right temporal regions in the autism group align with the “underconnectivity” theory of autism. This theory proposes that long-range communication between brain areas is weaker in individuals on the spectrum.
There are limitations to this study that require consideration. The sample size was relatively small. A larger group of participants would be needed to confirm the results and ensure they apply to the broader population.
The demographics of the study participants may also limit generalization. The group with autism was predominantly male, which reflects the general diagnosis rates but leaves the patterns in females less explored. There were also socioeconomic differences between the autism group and the control group in terms of family income.
The technology used in the study has physical limitations. The sensors were placed over the frontal, temporal, and parietal lobes. This placement means the researchers could not analyze activity in the entire brain. Deeper brain structures or other cortical areas might play a role that this study could not detect.
The researchers suggest that future work should focus on longitudinal studies. Tracking children over several years would help scientists understand how these brain dynamics develop as the child grows. It would also clarify whether improvements in adaptive behavior lead to changes in brain connectivity.
The findings point toward potential avenues for intervention. Therapies that target adaptive behaviors might have downstream effects on cognitive performance. Understanding the specific neural deficits could also lead to more targeted treatments designed to enhance connectivity between brain hemispheres.
This research represents a step forward in linking the biology of the brain to the behavioral characteristics of autism. It moves beyond static snapshots of brain activity. Instead, it embraces the dynamic, ever-changing nature of the human mind to find clearer signals of neurodevelopmental differences.
The study, “Linking Connectivity Dynamics to Symptom Severity and Cognitive Abilities in Children with Autism Spectrum Disorder: An FNIRS Study,” was authored by Conghui Su, Yubin Hu, Yifan Liu, Ningxuan Zhang, Liming Tan, Shuiqun Zhang, Aiwen Yi, and Yaqiong Xiao.

New research suggests that specific personality traits may amplify the way childhood adversity shapes an individual’s approach to life. A study published in the journal Personality and Individual Differences provides evidence that subclinical psychopathy strengthens the link between childhood trauma and “fast” life history strategies. The findings indicate that for those who have experienced severe early difficulties, certain dark personality traits may function as adaptive mechanisms for survival.
Psychologists use a framework called Life History Theory to explain how people allocate their energy. This theory proposes that all living organisms must make trade-offs between investing in their own growth and investing in reproduction. These trade-offs create a spectrum of strategies that range from “fast” to “slow.”
A fast life history strategy typically emerges in environments that are harsh or unpredictable. Individuals with this orientation tend to prioritize immediate rewards and reproduction over long-term planning. They often engage in riskier behaviors and invest less effort in long-term relationships. This approach makes evolutionary sense when the future is uncertain.
Conversely, a slow life history strategy is favored in stable and safe environments. This approach involves delaying gratification and investing heavily in personal development and long-term goals. It also involves a focus on building deep, enduring social and family bonds.
The researchers also examined the “Dark Triad” of personality. This cluster includes three distinct traits: narcissism, Machiavellianism, and psychopathy. Narcissism involves grandiosity and a need for admiration. Machiavellianism is characterized by manipulation and strategic calculation. Psychopathy involves high impulsivity and a lack of empathy.
The research team, led by Vlad Burtaverde from the University of Bucharest, sought to understand how these dark traits interact with early life experiences. They hypothesized that these traits might help individuals adapt to traumatic environments by accelerating their life strategies. The study aimed to determine if the Dark Triad traits or childhood socioeconomic status moderate the relationship between trauma and life outcomes.
To investigate this, the researchers recruited 270 undergraduate students. The participants had an average age of approximately 20 years. The majority of the sample was female. The participants completed a series of online questionnaires designed to measure their childhood experiences and current personality traits.
The Childhood Trauma Questionnaire assessed exposure to emotional, physical, and sexual abuse, as well as neglect. The Short Dark Triad measure evaluated levels of narcissism, Machiavellianism, and psychopathy. The High-K Strategy Scale assessed life history strategies by asking about health, social capital, and future planning. Participants also answered questions regarding their family’s financial situation during their childhood.
The results showed that participants who reported higher levels of childhood trauma were more likely to exhibit fast life history strategies. These individuals also tended to report lower childhood socioeconomic status. This aligns with the expectation that adverse environments encourage a focus on the present rather than the future.
Among the Dark Triad traits, subclinical narcissism showed a unique pattern. It was the only trait that had a statistically significant direct relationship with life history strategies. Specifically, higher narcissism was associated with slower life history strategies. This suggests that narcissism may function differently than the other dark traits.
The most significant finding involved subclinical psychopathy. The analysis revealed that psychopathy moderated the relationship between childhood trauma and fast life history strategies. For individuals with low levels of psychopathy, the link between trauma and a fast strategy was weaker. However, for those with high levels of psychopathy, the link was much stronger.
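Statistically, a moderation effect like this is usually tested with an interaction term: the product of trauma and psychopathy is added to a regression predicting life history strategy, and a reliable interaction coefficient indicates that the trauma slope changes with psychopathy. The brief sketch below illustrates the setup with simulated data and assumed variable names, not the study's own dataset.

```python
# Minimal moderation sketch: does psychopathy change the slope of trauma on
# a fast life history strategy? Data and variable names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 270
trauma = rng.normal(0, 1, n)
psychopathy = rng.normal(0, 1, n)
fast_lh = 0.3 * trauma + 0.1 * psychopathy + 0.2 * trauma * psychopathy + rng.normal(0, 1, n)
df = pd.DataFrame({"trauma": trauma, "psychopathy": psychopathy, "fast_lh": fast_lh})

model = smf.ols("fast_lh ~ trauma * psychopathy", data=df).fit()
print(model.params[["trauma", "psychopathy", "trauma:psychopathy"]])
# A positive trauma:psychopathy coefficient mirrors the reported pattern:
# the trauma -> fast strategy link is stronger at higher psychopathy levels.
```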
This means that psychopathy may act as a catalyst. It appears to amplify the effect of trauma, pushing the individual more aggressively toward a fast life strategy. The authors suggest this frames psychopathy as a “survival” trait. It helps the individual pursue immediate resources in a world they perceive as dangerous.
In contrast, the researchers found that childhood socioeconomic status did not moderate this relationship. While growing up poor was linked to faster life strategies, it did not change how trauma impacted those strategies. This suggests that the psychological impact of trauma operates somewhat independently of financial resources.
These findings build upon a growing body of research linking environmental conditions to personality development. A global study by Peter Jonason and colleagues analyzed data from over 11,000 participants across 48 countries. They found that macro-level ecological factors, such as natural disasters and skewed sex ratios, predict national averages of Dark Triad traits. For instance, countries with more men than women tended to have higher levels of narcissism.
That global study suggested that these traits are not merely pathologies. They may be functional responses to broad ecological pressures. The current study by Burtaverde and colleagues zooms in from the national level to the individual level. It shows how personal history interacts with these traits to shape behavior.
Research by Lisa Bohon and colleagues provides further context regarding gender and environment. Their study of female college students found that a disordered home life predicted fast life history traits. They found that father absence and childhood trauma were strong predictors of psychopathy in women. These traits then mediated the relationship between childhood environment and mating effort.
The Bohon study highlighted that immediate family dynamics, or the “microsystem,” are powerful predictors of adult personality. This aligns with the Burtaverde study’s focus on childhood trauma. Both studies suggest that the “dark” traits serve a function in regulating reproductive effort and survival strategies.
Another study by Junwei Pu and Xiong Gan examined the social roots of these traits in adolescents. They found that social ostracism led to increased loneliness. This loneliness subsequently promoted the development of Dark Triad traits over time. Their work suggests that social isolation acts as a signal to the individual that the environment is hostile.
This hostility prompts the development of defensive personality traits. Psychopathy, in particular, was strongly connected to feelings of loneliness in their sample. This complements the Burtaverde finding that psychopathy strengthens the reaction to trauma. A person who feels rejected and traumatized may develop callousness as a protective shell.
David Pineda and his team investigated the specific role of parental discipline. They found that psychological aggression from parents was a unique predictor of psychopathy and sadism in adulthood. Severe physical assault was linked to Machiavellianism and narcissism. Their work emphasizes that specific types of mistreatment yield specific personality outcomes.
This nuance helps explain why the Burtaverde study found a link between general trauma and life strategies. The specific type of trauma likely matters. Pineda’s research suggests that psychological aggression may be particularly potent in fostering the traits that Burtaverde identified as moderators.
Finally, research by Jacob Dye and colleagues looked at the buffering effect of positive experiences. They found that positive childhood experiences could reduce psychopathic traits, but only up to a point. If a child faced severe adversity, positive experiences were no longer enough to prevent the development of dark traits.
This limitation noted by Dye supports the Burtaverde finding regarding the strength of the trauma-psychopathy link. In cases of high trauma, the “survival” mechanism of psychopathy appears to override other developmental pathways. The protective factors become less effective when the threat level is perceived as extreme.
Nevertheless, the authors of the new study note some limitations to their work. The reliance on self-reported data introduces potential bias. Participants may not accurately remember or report their childhood experiences. The sample consisted largely of female undergraduate students. This limits the ability to generalize the findings to the broader population or to men specifically.
Future research is needed to track these relationships over time. Longitudinal studies could help determine the direction of causality. It is possible that children with certain temperaments elicit different reactions from their environment. Understanding the precise timeline of these developments would require observing participants from childhood through adulthood.
The study, “Childhood trauma and life history strategies – the moderating role of childhood socio-economic status and the dark triad traits,” was authored by Vlad Burtaverde, Peter K. Jonason, Anca Minulescu, Bogdan Oprea, Șerban A. Zanfirescu, Ștefan-C. Ionescu, and Andreea M. Gheorghe.

An 18-week experimental study examining the effects of topiramate on tobacco smoking and alcohol use found no differences between the topiramate and placebo groups on the primary outcomes measured during the last 4 weeks of treatment. However, the authors report a lower average percentage of heavy drinking days and drinks per day in participants treated with the highest tested dose of the substance compared to the other groups across the assessments conducted after the target quit date. The research was published in Alcohol Clinical & Experimental Research.
Topiramate is a prescription medication originally developed as an antiepileptic drug to treat seizures. It is also commonly used for migraine prevention and, in combination with other drugs, for weight management.
Topiramate works by modulating multiple neurotransmitter systems, including enhancing the activity of the inhibitory neurotransmitter GABA and reducing the excitatory activity of the neurotransmitter glutamate. Because of these mechanisms, topiramate can reduce the heightened excitability of neurons in the brain.
The medication is sometimes used off-label for conditions such as bipolar disorder, alcohol use disorder, and binge eating disorder. However, it has substantial cognitive and neurological side effects (e.g., cognitive slowing, difficulty with word finding, tingling sensations in hands or feet), which limit its long-term use.
Study author Jason D. Robinson and his colleagues wanted to explore whether topiramate would be effective in treating individuals with alcohol use disorder and tobacco use disorder. More specifically, they wanted to see whether 250 mg and 125 mg of topiramate per day would result in reducing heavy drinking and cigarette smoking behaviors in individuals motivated to try to quit both substances. The study authors hypothesized that the higher dose would be more effective than the lower dose.
Study participants were 236 adults who met the criteria for both alcohol use disorder and tobacco use disorder. They were recruited from San Diego, Houston, and Charlottesville. Participating women had been drinking at least 8 standard drink units (of alcohol) per week in the past 30 days, while this minimum limit was 15 standard drink units for men.
One standard drink unit is the amount of alcohol that contains about 14 grams of pure ethanol, roughly equivalent to a small beer (350 ml), a glass of wine (150 ml), or a shot of spirits (45 ml). They were also smoking an average of 5 or more cigarettes per day in the 30 days before the study.
Study participants were randomly divided into three groups. One group was assigned to receive a high dose of topiramate, up to 250 mg/day; the second group would receive up to 125 mg/day of topiramate; and the third group would receive a placebo.
The placebo consisted of pills that looked like topiramate pills but contained no active ingredients. Participants did not know which treatment they were receiving, and the same was the case for the researchers directly working with them. In other words, the study was double-blind.
The treatment lasted for 18 weeks. During this period, participants took their assigned treatment and received adherence counseling, including a self-help manual for smoking cessation. In the first 5 weeks, the dose was gradually increased, and at the start of week 6, participants were expected to actively stop both alcohol drinking and tobacco smoking.
Results showed no differences between the study groups regarding the primary outcomes. In other words, the three groups did not differ significantly in the percentage of heavy drinking days and the rate of continuous smoking abstinence in the last 4 weeks of treatment.
However, study authors report that participants in the group that received 250 mg of topiramate had a lower average percentage of heavy drinking days and drinks per day compared to the other two groups when all assessments done from week 6 onward were taken into account. Similarly, participants in both topiramate groups smoked fewer cigarettes per day and reported greater cigarette abstinence than those in the placebo group during the same period.
“While the primary analyses did not find evidence that topiramate decreases drinking and smoking behavior, likely influenced by a high attrition rate and poor medication adherence, exploratory repeated measures analyses suggest that topiramate 250 mg reduces drinking behavior and that both the 125 mg and 250 mg doses reduce smoking behavior,” the study authors concluded.
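An exploratory repeated-measures analysis of this sort can be approximated with a mixed-effects model that includes a random intercept for each participant, as in the hedged sketch below. The data, group effects, and model formula are simulated assumptions for illustration, not the trial's actual analysis plan.

```python
# Sketch of a repeated-measures analysis: a mixed-effects model of weekly
# heavy-drinking percentage by treatment group, with a random intercept per
# participant. All data here are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
participants, weeks = 60, 12                  # e.g., weeks after the target quit date
rows = []
for pid in range(participants):
    group = ["placebo", "topiramate_125", "topiramate_250"][pid % 3]
    baseline = rng.normal(50, 10)
    for week in range(weeks):
        effect = {"placebo": 0, "topiramate_125": -3, "topiramate_250": -8}[group]
        rows.append({
            "pid": pid,
            "group": group,
            "week": week,
            "pct_heavy_days": baseline + effect + rng.normal(0, 8),
        })
df = pd.DataFrame(rows)

model = smf.mixedlm("pct_heavy_days ~ C(group, Treatment('placebo')) + week",
                    data=df, groups=df["pid"]).fit()
print(model.summary())
```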
The study contributes to the scientific understanding of the effects of topiramate. However, it should be noted that 236 participants started the treatments, but only 107 completed them, which is less than half. This extremely high attrition rate could have substantially altered the results.
The paper, “High- and low-dose topiramate for the treatment of persons with alcohol use disorder who smoke cigarettes: A randomized control trial,” was authored by Jason D. Robinson, Robert M. Anthenelli, Paul M. Cinciripini, Maher Karam-Hage, Yong Cui, George Kypriotakis, and Nassima Ait-Daoud Tiouririne.

“Birds of a feather flock together” is a cliche for a reason when it comes to romantic relationships. Shared religious beliefs, values, political affiliation and even music taste all influence attraction and satisfaction in a relationship. But a recent study has now identified another unexpected factor that may bring couples closer together: sharing a similar mental health diagnosis.
The concept of romantic partners sharing a psychiatric diagnosis is not new. Indeed, between 1964 and 1985 several studies that explored the reasons why people choose their romantic partners included psychiatric diagnosis as a variable. However, no large-scale, cross-cultural investigation had been conducted until recently.
Using national health insurance data from more than six million couples in total, a team of researchers recently analysed the degree to which psychiatric disorders were shared between couples. They examined data from five million couples in Taiwan, 571,534 couples in Denmark and 707,263 couples in Sweden.
They looked at nine psychiatric disorders in their analysis: depression, anxiety, substance-use disorder, bipolar disorder, anorexia nervosa, ADHD, autism, obsessive-compulsive disorder and schizophrenia. They found that people with a diagnosed psychiatric disorder had a higher likelihood of marrying someone with the same or a similar psychiatric disorder than they did of marrying someone who isn’t diagnosed with one.
While the finding is robust, the authors do acknowledge there are some limitations when interpreting the results.
The first is that the timing of relationships and diagnoses were not recorded. This means that diagnosis could have occurred after the beginning of the relationship – and thus may not be the result of active choice.
Furthermore, a care provider’s own biases may influence how likely they are to diagnose a person with a specific mental health condition. Since many couples share the same family doctor, this could influence their likelihood of being diagnosed with a psychiatric condition — and could have biased the results seen in the study.
Finally, the authors stress their results are purely observational. This means they don’t explicitly consider the contributing factors as to why people with psychiatric diagnoses might be more likely to choose romantic relationships with each other.
However, there are several psychological theories that may help to explain this phenomenon.
1. Assortative mating:
This theory assumes that we choose partners who are similar to us. Normally this is applied to personality and social factors (such as shared religious or socioeconomic background). But this recent study suggests that this choice may extend beyond these factors and into how we think.
So a person with a specific psychiatric disorder – such as anxiety or autism – may be drawn to someone with a similar psychiatric disorder because they share similar traits, values or approaches to daily life (such as prioritising structure and routine).
2. Proximity:
According to the mere exposure effect, we often choose relationships with people that we live or work in close proximity to – or otherwise spend time around.
People who share psychiatric diagnoses may be drawn to similar social situations. For example, people with substance use disorder may visit bars or other social settings where taking substances is more commonplace – and thus may be more likely to meet potential mates who are struggling with a similar disorder.
3. Attachment theory:
Attachment theory assumes that as infants, we develop a specific emotional bond to our primary caregivers. This early bond then shapes our subsequent emotional and psychological patterns of behaviour as we get older – and also influences what we’re looking for in a relationship.
So someone with an anxious attachment style (which can manifest as fear of abandonment, desire for closeness or need for reassurance) might feel drawn to a partner who has a similar attachment style or exhibits the kind of behaviour they desire – such as a partner who texts them all night when they’re apart. Even if this is not a healthy dynamic, the validation gained from a high-intensity relationship would likely make it hard to resist.
Research shows certain attachment styles are more common in people with specific psychiatric conditions. For example, anxious attachment style is more common in people who have anxiety, depression and bipolar disorder. This might help explain why the study found people with certain psychiatric conditions were more likely to be married to each other.
4. Social identity theory:
Social identity theory assumes that our self-esteem is gained through a sense of belonging within our social groups. So when you begin a relationship with somebody from within your social group, it boosts self-esteem as it brings a greater sense of belonging and feeling understood.
This might explain why people with the same psychiatric diagnosis (a social group) would be drawn to each other. Finding someone who understands and experiences the same struggles you do could help you bond and make you feel understood and validated.
What does this mean for us? Well, the results reported by this recent study can only tell us whether couples share psychiatric diagnoses. They don’t tell us the quality and duration of the relationship, nor do they account for individual differences which may also affect the relationship.
Ultimately, shared experiences promote closeness and empathetic communication for couples – and it stands to reason that this would extend to psychiatric diagnosis. Having a partner who understands and can relate to your mental illness can provide social support and validation that’s not available from someone who has never struggled with their mental health.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

A new analysis suggests that physical frailty serves as a robust warning sign for cognitive decline in later life. Researchers found that middle-aged and older adults with weaker muscles faced a much higher likelihood of developing dementia compared to their stronger peers. These findings were published in the Journal of Psychiatric Research.
Dementia rates are climbing globally as life expectancy increases. This condition places a heavy strain on families and healthcare systems. Medical experts are urgently looking for early indicators to identify people at risk before severe memory loss begins. One potential marker is sarcopenia. This is the age-related loss of muscle mass and power. Previous investigations have hinted at a link between physical frailty and brain health. However, many prior attempts to measure this connection did not account for body size differences among individuals.
Wei Jin and colleagues from Xinxiang Medical University in China sought to clarify this relationship. They wanted to see if the connection held true when adjusting for body mass and weight. They also aimed to look at both upper and lower body strength. Most previous work focused only on handgrip strength. The team believed a comprehensive approach could offer better insights into how physical decline might mirror changes in the brain.
The research team utilized data from the English Longitudinal Study of Ageing (ELSA). This is a long-running project that tracks the health and well-being of people living in England. The analysis included nearly 6,000 participants. All participants were at least 50 years old at the start of the study. The researchers followed these individuals for a median period of about nine years.
To measure upper body strength, the team used a handheld dynamometer. Participants squeezed the device as hard as they could using their dominant hand. The researchers recorded the maximum force exerted during three trials.
Absolute strength is not always the best measure of health. A heavier person typically requires more muscle mass to move their body than a lighter person. To address this, the researchers standardized the grip strength scores. They adjusted the measurements based on the person’s body mass index (BMI) and total weight. This calculation ensured that strength scores were fair comparisons between people of different sizes.
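Since the paper's exact adjustment formula is not described here, the small sketch below shows one common convention: dividing grip force by BMI and by body weight to obtain size-standardized strength scores. The numbers are invented for illustration only.

```python
# Illustrative size-standardization of grip strength (one common convention;
# the study's precise adjustment may differ). Values are made up.
import pandas as pd

df = pd.DataFrame({
    "grip_kg": [38.0, 22.0, 30.0],        # maximum grip force, simulated
    "weight_kg": [92.0, 58.0, 75.0],
    "bmi": [31.5, 21.4, 26.0],
})

df["grip_per_bmi"] = df["grip_kg"] / df["bmi"]           # BMI-standardized strength
df["grip_per_weight"] = df["grip_kg"] / df["weight_kg"]  # weight-standardized strength
print(df.round(2))
```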
The team also needed a reliable way to assess lower body function. They utilized a test involving a chair. Participants had to stand up from a sitting position five times as fast as possible. They were not allowed to use their arms for support. A stopwatch recorded the time it took to complete the five repetitions. Slower times indicated weaker leg muscles.
During the follow-up period, 197 participants developed dementia. This represented about 3.3 percent of the study population. The data revealed a clear pattern connecting muscle weakness to cognitive diagnoses.
Participants with the lowest absolute handgrip strength faced a high probability of diagnosis. Their risk was roughly 2.8 times higher than those with the strongest grip. This relationship remained consistent even after the researchers accounted for differences in body mass.
When looking at BMI-standardized strength, the trend persisted. Those in the lowest tier of strength relative to their size had more than double the risk of dementia. This suggests that low muscle quality is a danger sign regardless of a person’s weight.
The results for leg strength were similarly distinct. People who took the longest to stand up from a chair had a much higher probability of developing dementia. Their risk was approximately 2.75 times higher than those who could stand up quickly.
The researchers checked to see if these trends varied by demographic. They found the pattern was consistent for both men and women. It also held true for middle-aged adults between 50 and 64, as well as for those over 65. The connection appeared to be linear. This means that for every incremental decrease in strength, the estimated risk of dementia rose.
The team performed a sensitivity analysis to check the robustness of their data. They excluded participants who were diagnosed with dementia within the first two years of the study. This step helps rule out the possibility that the muscle weakness was caused by pre-existing, undiagnosed dementia. The results remained largely the same after this exclusion.
There are several biological theories that might explain these results. One theory involves white matter hyperintensities. These are lesions that appear on brain scans. They represent damage to the brain’s communication network. Previous research shows that declines in muscle strength often correlate with an increase in these lesions.
Another potential mechanism involves the nervous system’s interconnectivity. The systems that control movement, senses, and cognition are linked. Damage to the neural pathways that control muscles might occur alongside damage to cognitive pathways.
Inflammation may also play a specific role. Chronic inflammation is known to damage both muscle tissue and neurons. High levels of inflammatory markers in the blood are associated with both sarcopenia and dementia. This creates a cycle where inflammation degrades the body and the brain simultaneously.
The authors noted several limitations to their work. This was an observational study. It can show a relationship between two factors, but it cannot prove that muscle weakness causes dementia directly. It is possible that unmeasured lifestyle factors contribute to both conditions.
The study also relied partly on self-reported medical diagnoses. This method can sometimes lead to inaccuracies if participants do not recall their medical history perfectly. Additionally, the study did not distinguish between different types of dementia. It grouped Alzheimer’s disease and other forms of cognitive decline together.
The study population was specific to the United Kingdom. The participants were predominantly white and over age 50. The results may not apply perfectly to younger populations or different ethnic groups. Cultural and genetic differences could influence the strength-dementia relationship in other parts of the world.
Despite these caveats, the implications for public health are clear. The study highlights the value of maintaining muscle strength as we age. Grip strength and chair-rising speed are simple, non-invasive tests. Doctors could easily use them to screen patients for dementia risk.
Future research should focus on intervention strategies. Scientists need to determine if building muscle can actively delay the onset of dementia. Clinical trials involving strength training exercises would be a logical next step.
The researchers conclude that muscle strength is a key component of healthy aging. Both upper and lower limb strength appear to matter. Interventions that target total body strength could be an effective way to support brain health. Identifying physical decline early provides a window of opportunity for preventative care.
The study, “Association between muscle strength and dementia in middle-aged and older adults: A nationwide longitudinal study,” was authored by Wei Jin, Sheng Liu, Li Huang, Xi Xiong, Huajian Chen, and Zhenzhen Liang.

Prescription stimulants are among the most widely used psychiatric medications in the world. For decades, the prevailing medical consensus held that drugs like methylphenidate treat attention deficit hyperactivity disorder by targeting the brain’s executive control centers. A new study challenges this long-held dogma, revealing that these medications act primarily on neural networks responsible for wakefulness and reward rather than attention. The study was published in the journal Cell.
Medical textbooks have traditionally taught that stimulants function by enhancing activity in the prefrontal cortex. This region of the brain is often associated with voluntary control, planning, and the direction of focus. The assumption was that by boosting activity in these circuits, the drugs allowed patients to filter out distractions and maintain concentration on specific tasks. However, the precise neural mechanisms have remained a subject of debate among neuroscientists.
Earlier research into these medications often produced inconsistent results. Some studies suggested that stimulants improved motivation and reaction times rather than higher-level reasoning. Furthermore, behavioral experiments have shown that the drugs do not universally improve performance. They tend to help individuals who are performing poorly but offer little benefit to those who are already performing well.
To resolve these discrepancies, a research team led by neurologist Benjamin P. Kay at Washington University School of Medicine in St. Louis undertook a massive analysis of brain activity. Working with senior author Nico U.F. Dosenbach, Kay aimed to map the effects of stimulants across the entire brain without restricting their focus to pre-determined areas. They sought to understand which specific brain networks were most altered when a child took these medications.
The researchers utilized data from the Adolescent Brain Cognitive Development Study. This large-scale project tracks the biological and psychological development of thousands of children across the United States. The team selected functional magnetic resonance imaging scans from 5,795 children between the ages of eight and eleven.
Kay and his colleagues compared the brain scans of children who had taken prescription stimulants on the day of their MRI against those who had not. They employed a technique known as resting-state functional connectivity. This method measures how different regions of the brain communicate and synchronize with one another when the person is not performing a specific task.
The analysis did not rely on small, isolated samples. The researchers used a data-driven approach to look at the whole connectome, which is the complete map of neural connections in the brain. They controlled for various factors that could skew the results, such as head motion during the scan and socioeconomic status.
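For readers unfamiliar with this kind of analysis, the sketch below illustrates the basic logic of a resting-state functional connectivity comparison: correlate each pair of regional time series, then compare the resulting connection strengths between groups. It is a minimal illustration under assumed region counts, group sizes, and simulated data, not the authors' pipeline, which used the full connectome and additional statistical controls.

```python
# Minimal sketch of a resting-state functional connectivity comparison.
# Not the authors' pipeline: the region count, group sizes, and simulated
# data below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_timepoints = 10, 200  # assumed parcellation size and scan length


def connectivity(timeseries: np.ndarray) -> np.ndarray:
    """Pearson correlation between every pair of regional time series."""
    return np.corrcoef(timeseries)


def unique_edges(matrix: np.ndarray) -> np.ndarray:
    """Flatten the unique region-to-region connections (upper triangle)."""
    i, j = np.triu_indices_from(matrix, k=1)
    return matrix[i, j]


# Simulated scans for two groups: stimulant taken on scan day vs. not.
medicated = [connectivity(rng.standard_normal((n_regions, n_timepoints))) for _ in range(30)]
unmedicated = [connectivity(rng.standard_normal((n_regions, n_timepoints))) for _ in range(30)]

med_edges = np.array([unique_edges(m) for m in medicated])
unmed_edges = np.array([unique_edges(m) for m in unmedicated])

# Edge-wise group difference; a real analysis would also regress out covariates
# such as head motion and socioeconomic status and correct for multiple comparisons.
difference = med_edges.mean(axis=0) - unmed_edges.mean(axis=0)
print("Largest mean connectivity difference across edges:", np.abs(difference).max())
```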
The findings contradicted the traditional “attention-centric” view of stimulant medication. The researchers observed no statistical difference in the functional connectivity of the dorsal or ventral attention networks. The drugs also did not produce measurable changes in the frontoparietal control network, which is usually linked to complex problem-solving.
Instead, the most substantial changes occurred in the sensorimotor cortex and the salience network. The sensorimotor cortex is traditionally associated with physical movement and sensation. However, recent discoveries suggest this area also plays a major role in regulating the body’s overall arousal and wakefulness levels.
The salience network is responsible for determining what is important in the environment. It helps the brain calculate the value of a task and decide whether an action is worth the effort. The study found that stimulants increased connectivity between these reward-processing regions and the motor systems.
This shift in connectivity suggests that the drugs work by altering the brain’s calculation of effort and reward. By boosting activity in the salience network, the medication makes tedious activities feel more rewarding than they otherwise would. This reduces the urge to switch tasks or seek stimulation elsewhere.
“Essentially, we found that stimulants pre-reward our brains and allow us to keep working at things that wouldn’t normally hold our interest — like our least favorite class in school, for example,” Dosenbach said. This explains the paradox of why a stimulant can help a hyperactive child sit still. The drug removes the biological drive to fidget by satisfying the brain’s need for reward.
To verify that these findings were not an artifact of the pediatric data, the team conducted a separate validation study. They recruited five healthy adults who did not have attention deficits. These volunteers underwent repeated brain scans before and after taking a controlled dose of methylphenidate.
The results from the adult trial mirrored the findings in the children. The medication consistently altered the arousal and reward networks while leaving the attention networks largely unchanged. This replication in a controlled setting provides strong evidence that the drugs act on basic physiological drivers of behavior.
The study also uncovered a distinct relationship between stimulant medication and sleep. The researchers compared the brain patterns of medicated children to those of children who reported getting a full night of sleep. The functional connectivity signatures were remarkably similar.
Stimulants appeared to mimic the neurological effects of being well-rested. Children who were sleep-deprived showed specific disruptions in their sensorimotor and arousal networks. When sleep-deprived children took a stimulant, those disruptions disappeared.
This “rescue” effect extended to cognitive performance as well. The researchers analyzed school grades and test scores for the children in the study. As expected, children with attention deficits performed better when taking medication. However, the data revealed a nuance regarding sleep.
Stimulants improved the grades and test scores of children who did not get enough sleep. In fact, the medication raised the performance of sleep-deprived children to the level of their well-rested peers. Conversely, for children who did not have attention deficits and already got sufficient sleep, the drugs provided no statistical benefit to performance.
“We saw that if a participant didn’t sleep enough, but they took a stimulant, the brain signature of insufficient sleep was erased, as were the associated behavioral and cognitive decrements,” Dosenbach noted. The medication effectively masked the neural and behavioral symptoms of fatigue.
This finding raises important questions about the use of stimulants as performance enhancers. The data suggests that the drugs do not make a well-rested brain smarter or more attentive. They simply counteract the drag of fatigue and lack of motivation.
The authors of the study advise caution regarding this sleep-masking effect. While the drugs can hide the immediate signs of sleep deprivation, they do not replace the biological necessity of sleep. Chronic sleep loss is linked to cellular stress, metabolic issues, and other long-term health consequences that stimulants cannot fix.
Kay highlighted the clinical implications of these findings for doctors and parents. Symptoms of sleep deprivation often mimic the symptoms of attention deficit hyperactivity disorder, including lack of focus and irritability. Treating a sleep-deprived child with stimulants might mask the root cause of their struggles.
“Not getting enough sleep is always bad for you, and it’s especially bad for kids,” Kay said. He suggested that clinicians should screen for sleep disturbances before prescribing these medications. It is possible that some children diagnosed with attention deficits are actually suffering from chronic exhaustion.
The study also provides a new framework for understanding the brain’s motor cortex. The researchers noted that the changes in the motor system align with the recently discovered Somato-Cognitive Action Network. This network integrates body control with planning and arousal, further cementing the link between movement and alertness.
Future research will need to investigate the long-term effects of using stimulants to override sleep signals. The current study looked at a snapshot in time, but the cumulative impact of masking fatigue over years remains unknown. The researchers also hope to explore whether these arousal mechanisms differ in various subtypes of attention disorders.
By shifting the focus from attention to arousal and reward, this research fundamentally alters the understanding of how psychostimulants function. It suggests that these drugs are not “smart pills” that boost intelligence. Instead, they are endurance tools that help the brain maintain effort and wakefulness in the face of boredom or fatigue.
The study, “Stimulant medications affect arousal and reward, not attention,” was authored by Benjamin P. Kay, Muriah D. Wheelock, Joshua S. Siegel, Ryan Raut, Roselyne J. Chauvin, Athanasia Metoki, Aishwarya Rajesh, Andrew Eck, Jim Pollaro, Anxu Wang, Vahdeta Suljic, Babatunde Adeyemo, Noah J. Baden, Kristen M. Scheidter, Julia Monk, Nadeshka Ramirez-Perez, Samuel R. Krimmel, Russel T. Shinohara, Brenden Tervo-Clemmens, Robert J. M. Hermosillo, Steven M. Nelson, Timothy J. Hendrickson, Thomas Madison, Lucille A. Moore, Óscar Miranda-Domínguez, Anita Randolph, Eric Feczko, Jarod L. Roland, Ginger E. Nicol, Timothy O. Laumann, Scott Marek, Evan M. Gordon, Marcus E. Raichle, Deanna M. Barch, Damien A. Fair, and Nico U.F. Dosenbach.

A new study suggests that the average person may be far more aware of their own lack of political knowledge than previously thought. Contrary to the popular idea that people consistently overestimate their competence, this research indicates that individuals with low political information generally admit they do not know much. These findings were published in Political Research Quarterly.
Political scientists have spent years investigating the gap between what citizens know and what they think they know. This gap is often attributed to the Dunning-Kruger effect. This psychological phenomenon occurs when people with low ability in a specific area overestimate their competence.
In their new study, Alexander G. Hall and Kevin B. Smith of the University of Nebraska sought to answer several unresolved questions regarding this phenomenon. They wanted to determine if receiving objective feedback could reduce overconfidence. The researchers also intended to see if the Dunning-Kruger effect remains stable over time or changes due to major events. The study utilized a natural experiment to test these ideas in a real-world educational setting.
“Kevin and I have had an ongoing interest in this question: if you make someone’s substantive knowledge salient, will they do a more accurate job of reporting it?” explained Hall, who is now a staff statistician for Creighton University’s School of Medicine and adjunct instructor for the University of Nebraska-Omaha.
“I noticed that in his intro political science course he had been consistently collecting information that could speak to this, and that we had the makings of a neat natural experiment where participants had either taken this knowledge assessment before (presumably increasing that salience) or after being asked about their self-rated political knowledge.”
This data collection spanned eleven consecutive semesters between the fall of 2018 and the fall of 2023. The total sample included 1,985 students. The mean sample size per semester was approximately 180 participants.
The course required students to complete two specific assignments during the first week of the semester. One assignment was a forty-two-question assessment test designed to measure objective knowledge of American government and politics. The questions included items from textbook test banks and the United States citizenship test. The second assignment was a class survey that asked students to rate their own knowledge.
The researchers measured confidence using a specific question on the survey. Students rated their knowledge of American politics on a scale from zero to ten. A score of zero represented no knowledge, while a score of ten indicated the student felt capable of running a presidential campaign.
The study design took advantage of the order in which students completed these assignments. The course did not require students to finish the tasks in a specific sequence. Approximately one-third of the students chose to take the objective assessment test before completing the survey. The remaining two-thirds completed the survey before taking the test.
This natural variation allowed the researchers to treat the situation as a quasi-experiment. The students who took the test first effectively received feedback on their knowledge levels before rating their confidence. This group served as the experimental group. The students who rated their confidence before taking the test served as the control group.
The results provided a consistent pattern across the five-year period. The researchers found that students objectively knew very little about American politics. The average score on the assessment test was roughly 60 percent. This grade corresponds to a D-minus or F in academic terms.
Despite these low scores, the students did not demonstrate the expected overconfidence. When asked to rate their general political knowledge, the students gave answers that aligned with their low performance. The average response on the zero-to-ten confidence scale was modest.
The researchers compared the confidence levels of the group that took the test first against the group that took the survey first. They hypothesized that taking the test would provide a “reality check” and lower confidence scores. The analysis showed no statistically significant difference between the two groups. Providing objective feedback did not reduce confidence because the students’ self-assessments were already low.
The study also examined the stability of these findings over time. The data collection period covered significant events, including the COVID-19 pandemic and the 2020 presidential election. The researchers looked for any shifts in knowledge or confidence that might have resulted from these environmental shocks.
The analysis revealed that levels of political knowledge and confidence remained remarkably stable. The pandemic and the election cycle did not lead to meaningful changes in how much students knew or how much they thought they knew. The gap between actual knowledge and perceived knowledge remained substantively close to zero throughout the study.
“More than anything, I thought we’d see an impact of the natural experiment,” Hall told PsyPost. “I was also somewhat surprised by how flat the results appeared around 2020, when external factors like COVID-19 and the presidential election may have been impacting actual and perceived student knowledge.”
The authors utilized distinct statistical methods to verify their findings regarding overconfidence. They calculated overconfidence using quintiles, which divide the sample into five equal groups based on performance. They also used Z-scores, which measure how many standard deviations a data point lies from the average. Both methods yielded similar conclusions.
Using the quintile method, the researchers subtracted the quintile of the student’s actual score from the quintile of their self-assessment. The resulting overconfidence estimates were not statistically different from zero across all eleven semesters. This finding persisted regardless of whether the students took the assessment before or after the survey.
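To make the two measures concrete, the sketch below shows one way such overconfidence scores can be computed. It is not the authors' code; the simulated test scores, self-ratings, and sample size are illustrative assumptions.

```python
# Minimal sketch of the quintile and Z-score overconfidence measures described
# above. Not the authors' code: the simulated scores are illustrative assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "test_score": rng.integers(10, 43, size=200),  # out of 42 questions (assumed spread)
    "self_rating": rng.integers(0, 11, size=200),  # 0-10 confidence scale
})

# Quintile method: subtract the quintile of actual performance from the quintile
# of self-assessed knowledge; positive values indicate overconfidence.
df["score_quintile"] = pd.qcut(df["test_score"], 5, labels=False, duplicates="drop") + 1
df["rating_quintile"] = pd.qcut(df["self_rating"], 5, labels=False, duplicates="drop") + 1
df["overconfidence_quintile"] = df["rating_quintile"] - df["score_quintile"]


# Z-score method: standardize both measures and take the difference.
def zscore(series: pd.Series) -> pd.Series:
    return (series - series.mean()) / series.std()


df["overconfidence_z"] = zscore(df["self_rating"]) - zscore(df["test_score"])

print(df[["overconfidence_quintile", "overconfidence_z"]].mean())
```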
The Z-score analysis showed minor fluctuations but supported the main conclusion. There was a slight decrease in overconfidence in the control group between 2020 and 2023. However, the magnitude of this change was so small that it had little practical meaning. The overarching trend showed that students consistently recognized their own lack of expertise.
These results challenge the prevailing narrative in political science regarding the Dunning-Kruger effect. Hall and Smith suggest that the difference in findings may stem from how confidence is measured. Many previous studies ask participants to estimate their performance on a specific test they just took. This prompt often triggers a psychological bias where people assume they performed better than average.
In contrast, this study asked students to rate their general knowledge of a broad domain. When faced with a general question about how much they know about politics, individuals appear to be more humble. They do not default to assuming they are above average. Instead, they provide a rating that accurately reflects their limited understanding.
“The gap between what people know and what they think they know (over-or-under-confidence) may be less of a problem than we think, at least in the realm of political knowledge,” Hall said. “What we found is that if you ask someone what they know about politics they are likely to respond with ‘not much.’ You don’t have to provide them with evidence of that lack of information to get that response, they seem to be well-aware of the limitations of their knowledge regardless.”
“The short version here is that we did not find the Dunning-Kruger effect we expected to find. People with low information about politics did not overestimate their political knowledge, they seemed well-aware of its limitations.”
The authors argue that the Dunning-Kruger effect in politics might be an artifact of measurement choices. If researchers ask people how they did on a test, they find overconfidence. If researchers ask people how much they generally know, the overconfidence disappears. This distinction implies that the gap between actual and perceived knowledge may be less problematic than previously feared.
The study does have limitations that the authors acknowledge. The sample consisted entirely of undergraduate students. While the sample was diverse in terms of gender and political orientation, students are not perfectly representative of the general voting population. It is possible that being in an educational setting influences how students rate their own knowledge.
Another limitation involves the nature of the questions. The assessment relied on factual knowledge about civics and government structure. It is possible that overconfidence manifests differently when discussing controversial policy issues or specific political events. Future research could investigate whether different types of political knowledge elicit different levels of confidence.
The study also relied on a natural experiment rather than a randomized controlled trial. While the researchers found no significant differences between the groups initially, they did not control who took the test first. However, the large sample size and repeated data collection add weight to the findings.
“We should certainly be mindful of the principle that ‘absence of evidence isn’t evidence of absence,’ given the frequentist nature of null hypothesis significance testing,” Hall noted. “It’s also critical to understand the limitations of a natural experiment. There’s a lot of work on the Dunning-Kruger effect, and this is just one study, but I think it challenges us to think closely about the construct and how it generalizes.”
Future research could explore these measurement discrepancies further. The authors suggest that scholars should investigate how different ways of asking about confidence affect the results. Understanding whether overconfidence is a stable trait or a response to specific questions is vital for political psychology.
“Whether or not the Dunning-Kruger effect applies to broad domain knowledge is an important question for addressing political engagement – continuing down this line to broaden the domain coverage (something like civic reasoning, or real-world policy scenarios), and trying to move from a knowledge-based test scenario towards some closer indicator of manifest political behavior may give us a better sense of what’s likely to succeed in addressing political informedness,” Hall said.
The study, “They Know What They Know and It Ain’t Much: Revisiting the Dunning–Kruger Effect and Overconfidence in Political Knowledge,” was authored by Alexander G. Hall and Kevin B. Smith.

As artificial intelligence becomes a staple of modern life, people are increasingly turning to chatbots for companionship and comfort. A new study suggests that while users often rely on these digital entities for stability, the resulting bond is built more on habit and trust than deep emotional connection. These findings on the psychology of human-machine relationships were published in the journal Psychology of Popular Media.
The rise of sophisticated chatbots has created a unique social phenomenon where humans interact with software as if it were a living being. This dynamic draws upon a concept known as social presence theory. This theory describes the psychological sensation that another entity is physically or emotionally present during a mediated interaction.
Designers of these systems often aim to create a sense of social presence to make the user experience more engaging. The goal is for the artificial agent to appear to have a personality and the capacity for a relationship. However, the academic community has not fully reached a consensus on what constitutes intimacy in these synthetic scenarios.
Researchers wanted to understand the mechanics of this perceived intimacy. They sought to determine if personality traits influence how a user connects with a machine. The investigation was led by Yingjia Huang from the Department of Philosophy at Peking University and Jianfeng Lan from the School of Media and Communication at Shanghai Jiao Tong University.
The team recruited 103 participants who actively use AI companion applications such as Doubao and Xingye. These apps are designed to provide emotional interaction through text and voice. The participants completed detailed surveys designed to measure their personality traits and their perceived closeness to the AI.
To measure personality, the researchers utilized the “Big Five” framework. This model assesses individuals based on neuroticism, conscientiousness, agreeableness, openness, and extraversion. The survey also evaluated intimacy through five specific dimensions: trust, attachment, self-disclosure, virtual rapport, and addiction.
In addition to the quantitative survey, the researchers conducted in-depth interviews with eight selected participants. These conversations provided qualitative data regarding why users turn to digital companions. The interview subjects were chosen because they reported higher levels of intimacy in the initial survey.
The study revealed that most users do not experience a profound sense of intimacy with their chatbots. The average scores for emotional closeness were relatively low. This suggests that current technology has not yet bridged the gap required to foster deep interpersonal connections.
When they examined what the relationship was built on, the authors identified trust and addiction as the primary drivers. Users viewed the AI as a reliable outlet that is always available. The researchers interpreted the “addiction” component not necessarily as a pathology, but as a habit formed through daily routines.
The data showed that specific personality types are more prone to bonding with algorithms. Individuals scoring high in neuroticism reported stronger feelings of intimacy. Neuroticism is a trait often associated with emotional instability and anxiety.
For these users, the predictability of the computer program offers a sense of safety. Humans can be unpredictable or judgmental, but a coded companion provides consistent responses. One participant noted in an interview, “He’s always there, no matter what mood I’m in.”
People with high openness to experience also developed tighter bonds. These users tend to be imaginative and curious about new technologies. They engage with the AI as a form of exploration.
Users with high openness are willing to suspend disbelief to enjoy the interaction. They view the exchange as a form of experimental play rather than a replacement for human contact. They do not require the AI to be “real” to find value in the conversation.
The interviews highlighted that users often engage in emotional projection. They attribute feelings to the bot even while knowing it has no consciousness. This allows them to feel understood without the complexities of reciprocal human relationships.
The researchers identified three distinct ways users engaged with these systems. The first is “objectified companionship.” These users treat the AI like a digital pet, engaging in routine check-ins without deep emotional investment.
The second category is “emotional projection.” Users in this group use the AI as a safe container for their vulnerabilities. They vent their frustrations and anxieties, finding comfort in the machine’s non-judgmental nature.
The third category is “rational support.” These users do not seek emotional warmth. Instead, they value the AI for its logic and objectivity, using it as a counselor or advisor to help regulate their thoughts.
Despite these uses, participants frequently expressed frustration with technological limitations. Many described the AI’s language as too formal or repetitive. One user compared the experience to reading a customer service script.
This lack of spontaneity hinders the development of genuine immersion. Users noted that the AI lacks the warmth and fluidity of human conversation. Consequently, the relationship remains functional rather than truly affective.
The study posits that this form of intimacy relies on a “functional-affective gap.” Users maintain a high frequency of interaction for functional reasons, such as boredom relief or anxiety management. However, this does not translate into high emotional intimacy.
Trust in this context is defined by reliability rather than emotional closeness. Users trust the AI not to leak secrets or judge them. This form of trust acts as a substitute for the intuitive understanding found in human bonds.
The authors reference the philosophical concept of “I–Thou” versus “I–It” relationships. A true intimate bond is usually an “I–Thou” connection involving mutual recognition. Interactions with AI are technically “I–It” relationships because the machine lacks subjectivity.
However, the findings suggest that users psychologically approximate an “I–Thou” dynamic. They project meaning onto the AI’s output. The experience of intimacy is co-constructed by the user’s imagination and needs.
This dynamic creates a new relational paradigm. The line between simulation and reality becomes blurred. The user feels supported, which matters more to them than the ontological reality of the supporter.
The researchers argue that AI serves as a technological mediator of social affect. It functions as a mirror for the user’s emotions. The intimacy is layered and highly dependent on the context of the user’s life.
The study relies on a relatively small sample size of users from a specific cultural context. This focus on Chinese users may limit how well the results apply to other populations. Cultural attitudes toward technology and privacy could influence these results in different regions.
The cross-sectional nature of the survey also limits the ability to determine causality. It is unclear if neuroticism causes users to seek AI, or if the interaction appeals to those traits. Longitudinal studies would be needed to track how these relationships evolve over time.
Future investigations could examine how improved AI memory and emotional mimicry might alter these dynamics. As the technology becomes more lifelike, the distinction between functional and emotional intimacy may narrow. The authors imply that ethical design is essential as these bonds become more common.
The study, “Personality Meets the Machine: Traits and Attributes in Human–Artificial Intelligence Intimate Interactions,” was authored by Yingjia Huang and Jianfeng Lan.

A new study published in Applied Psychology provides evidence that the belief in free will may carry unintended negative consequences for how individuals view gay men. The findings suggest that while believing in free will often promotes moral responsibility, it is also associated with less favorable attitudes toward gay men and preferential treatment for heterosexual men. This effect appears to be driven by the perception that sexual orientation is a personal choice.
Psychological research has historically investigated the concept of free will as a positive force in social behavior. Scholars have frequently observed that when people believe they have control over their actions, they tend to act more responsibly and helpfully. The general assumption has been that a sense of agency leads to adherence to moral standards. However, the authors of the current study argued that this sense of agency might have a “dark side” when applied to social groups that are often stigmatized.
The researchers reasoned that if people believe strongly in human agency, they may incorrectly attribute complex traits like sexual orientation to personal decision-making. This attribution could lead to the conclusion that gay men are responsible for their sexual orientation.
“I’m broadly interested in how beliefs that are typically seen as morally virtuous—like believing in free will—can, in some cases, have unintended negative consequences. Free-will beliefs are generally associated with personal agency, accountability, and moral responsibility,” said study author Shahin Sharifi, a senior lecturer in Marketing at La Trobe Business School.
“But from reviewing the literature, I began to wonder whether these beliefs might also create a sense of moral licensing—where people feel they’ve met their moral obligations simply by believing in responsibility, and therefore let their guard down in other ways. In this paper, we explored one potential manifestation of that: the subtle prejudice that can emerge when people assume sexual orientation is a matter of personal choice and hold others accountable for it.”
The researchers conducted five separate studies using different methodologies. The first study involved 201 adults recruited from the United States. Participants read a workplace scenario about an employee named Jimmy who was nominated for an “employee of the month” award. The researchers manipulated Jimmy’s sexual orientation by altering a single detail in the text. In one version, Jimmy mentioned his girlfriend, while in the other, he mentioned his boyfriend.
Participants in this first study also completed a survey measuring their chronic belief in free will. They rated their agreement with statements such as “People always have the ability to do otherwise.” The researchers then measured the participants’ attitudes toward Jimmy and their willingness to support his nomination. The results showed that participants with stronger free-will beliefs reported more favorable attitudes toward the heterosexual version of Jimmy. This positive association did not exist for the gay version of Jimmy.
The second study sought to establish a causal link by manipulating the belief in free will rather than just measuring it. The researchers recruited 200 participants and assigned them to one of two conditions. One group completed a writing task designed to promote a belief in free will by recalling experiences where they had high control over their lives. The other group wrote about experiences where they lacked control, effectively promoting disbelief in free will.
Following this manipulation, participants evaluated the same “Jimmy” scenario used in the first study. The data revealed that inducing a belief in free will led to divergent outcomes depending on the target’s sexual orientation. Participants primed with free-will beliefs expressed greater intentions to help the heterosexual employee. However, this same prime resulted in reduced intentions to help the gay employee. This finding suggests that free-will beliefs can simultaneously fuel favoritism toward the cultural majority and bias against a minority group.
The third study examined these dynamics in a more formal personnel selection context. The researchers recruited 310 participants who worked in healthcare and social assistance sectors. These industries were chosen because they typically have strong policies regarding workplace discrimination. Participants reviewed a resume for a psychologist position. The qualifications were identical across conditions, but the applicant’s personal interests differed.
In one condition, the applicant was listed as an active member of an LGBTQ+ support group. In the other, he was involved in a general community support group. Participants rated how much they liked the applicant, their expectations of his performance, and his likely organizational citizenship behavior.
The results mirrored the previous studies. Stronger endorsement of free will predicted higher likability ratings for the heterosexual applicant. This “liking” then mediated higher ratings for performance and citizenship. This positive chain of evaluation was significantly weaker or absent when the applicant was identified as gay.
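For readers unfamiliar with the term, “mediated” here refers to a statistical indirect effect: the predictor influences the outcome by way of the mediator. The sketch below shows the basic product-of-coefficients logic under simulated data; it is not the authors' analysis, the effect sizes are arbitrary, and a published mediation test would typically add covariates and bootstrapped confidence intervals.

```python
# Minimal sketch of a product-of-coefficients mediation test
# (free-will belief -> liking -> performance expectation).
# Not the authors' analysis: the simulated data and effect sizes are assumptions.
import numpy as np

rng = np.random.default_rng(2)
n = 310
free_will = rng.normal(0, 1, n)                    # predictor (standardized scale assumed)
liking = 0.5 * free_will + rng.normal(0, 1, n)     # mediator
performance = 0.6 * liking + rng.normal(0, 1, n)   # outcome


def ols_coefs(y: np.ndarray, *predictors: np.ndarray) -> np.ndarray:
    """Least-squares coefficients of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]


a = ols_coefs(liking, free_will)[0]               # path a: predictor -> mediator
b = ols_coefs(performance, liking, free_will)[0]  # path b: mediator -> outcome, controlling for predictor
print("indirect effect (a * b):", a * b)
```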
“What surprised us most was how consistent the pattern was,” Sharifi told PsyPost. “We didn’t just find that free-will beliefs were linked to harsher views of gay men; we also found more favorable views of straight individuals. This suggests it’s not just about negativity toward a minority group, it’s also about a kind of favoritism toward the majority, which can be just as impactful.”
The fourth and fifth studies focused on identifying the specific psychological mechanism behind these biases. Study 4a surveyed 297 individuals to assess the relationship between free-will beliefs and perceptions of controllability. Participants rated the extent to which they believed people can freely control or shape their sexual orientation.
The analysis confirmed that belief in free will is strongly correlated with the belief that sexual orientation is controllable. This perception of control was, in turn, associated with more negative attitudes toward homosexuality.
Study 4b utilized an experimental design to verify this mechanism. The researchers recruited 241 participants and divided them into two groups. One group read a scientific passage explaining that sexual orientation is biologically determined and largely unchangeable. The other group read a neutral passage about the effects of classical music. Participants then completed measures of free-will beliefs and attitudes toward gay men.
The findings from this final experiment provided evidence for the researchers’ proposed mechanism. When participants were exposed to information that described sexual orientation as biological and uncontrollable, the link between free-will beliefs and anti-gay attitudes was significantly weakened. This suggests that the negative impact of free-will beliefs relies heavily on the assumption that being gay is a choice. When that assumption is challenged, the bias appears to diminish.
“The main takeaway is that even well-intentioned beliefs—like the idea that everyone has free will—can lead to biased or unfair attitudes, especially when applied to aspects of identity that people don’t actually choose, like sexual orientation,” Sharifi explained.
“Our findings suggest that when people strongly believe in free will, they may assume that being gay is a choice, and as a result, judge gay individuals more harshly. This isn’t always obvious or intentional—it can show up in subtle ways, like hiring preferences or gut-level reactions. The broader message is that we need to be thoughtful about how we apply our moral beliefs and recognize that not everything in life is under personal control.”
“The effects we found were small to moderate—but they matter, especially in real-world settings like job interviews or healthcare. Even subtle biases can add up and shape decisions that affect people’s lives. Our results suggest that moral beliefs like free will can quietly influence how we judge others, without us even realizing it.”
There are limitations to this research that provide directions for future inquiry. The studies focused exclusively on attitudes toward gay men. It remains unclear if similar patterns would emerge regarding lesbian women, bisexual individuals, or transgender people. The underlying mechanism of “controllability” might function differently for other identities within the LGBTQ+ community. Additionally, the samples were drawn entirely from the United States. Conceptions of free will and attitudes toward sexual orientation vary significantly across cultures.
“A key point is that we’re not saying belief in free will is bad,” Sharifi noted. “It can promote responsibility and good behavior in many contexts. But when it’s applied to parts of people’s identity they don’t control—like sexual orientation—it can backfire. Also, most people in our studies didn’t show strong anti-gay attitudes overall. The effects we found were about subtle shifts, not overt prejudice.”
Regarding direction for future research, Sharifi said that “we want to explore how other beliefs that are seen as positive might also contribute to hidden biases. We’re especially interested in workplace settings and how to design training or policies that help reduce these effects without making people feel blamed or defensive.”
“This study reminds us how complex human judgment can be,” he added. “Even our most cherished values, like fairness or responsibility, can have unintended effects. Being aware of these blind spots is the first step toward creating more inclusive and equitable environments, for everyone.”
The study, “The dark side of free will: How belief in agency fuels anti-gay attitudes,” was authored by Shahin Sharifi and Raymond Nam Cam Trau.

A new study published in the Journal of Psychiatric Research suggests that individuals with misophonia experience sensory sensitivities that extend beyond sound alone. The findings indicate that the condition may involve a broader pattern of sensory processing differences, particularly regarding touch and smell, though these additional sensitivities rarely cause the same level of impairment as auditory triggers.
The motivation for this research emerged from clinical observations during trials for misophonia treatments. Lead researcher Mercedes Woolley noted that participants frequently described irritations with sensory inputs other than sound. Patients often mentioned discomfort with the feeling of clothing on their skin or specific odors.
“The idea for this study grew out of my work conducting interviews with adults enrolled in our clinical trial on the efficacy of acceptance and commitment therapy for misophonia. Our lab at Utah State University specializes in this form of cognitive‑behavioral therapy, and because misophonia is still relatively underexplored, our team wanted to gather as much information as possible about the lived experiences of people with misophonia,” explained Woolley.
“During the interviews, I asked participants about sensory sensitivities beyond sound, and I began noticing a pattern: many of them described additional sensitivities, especially tactile ones. One participant explained that it felt like being constantly aware of the sensation of wearing clothes, something that becomes irritating when your mind can’t shift attention away from it, especially when you need to focus on something else.”
“That comment resonated with me,” Woolley said. “I’ve always been sensitive to smells; certain odors can be overwhelming or frustrating. Personally, as a child, I strongly dislike particular smells, especially the smell of fruit and would go out of my way to avoid it and become irritated when my family disregarded my requests to avoid eating it in front of me. I made significant efforts to avoid anyone eating it, and sometimes I still do.”
“Hearing participants describe their reactions to specific trigger sounds reminded me of my own experiences, just in a different sensory domain. These observations made me wonder whether misophonia might be connected to broader sensory processing challenges or sensory overstimulation.”
“When I reviewed the existing literature, I found that a few researchers had already suggested that heightened sensory sensitivity could be correlated with, or even contribute to, misophonia. That gave me enough grounding to justify developing a study focused on this idea. We still don’t fully understand the underlying mechanisms of misophonia, but sensory processing clearly plays a role. Having data that allowed us to explore this connection was exciting, and publishing this paper felt like a meaningful step toward clarifying potential mechanisms and clinical correlates.”
To explore this, the researchers recruited 60 adults who met the clinical criteria for misophonia and 60 control participants who did not possess measurable traits of the condition. The groups were matched on age and gender to ensure comparability.
Participants in the clinical group underwent a detailed interview using the Duke Misophonia Interview to assess symptom severity and impairment. Both groups completed the Misophonia Questionnaire and the Adolescent/Adult Sensory Profile. This standardized measure evaluates how individuals respond to sensory experiences across categories like taste, smell, visual input, and touch.
The Adolescent/Adult Sensory Profile assesses four distinct patterns of sensory processing. These patterns are based on a person’s neurological threshold for noticing a stimulus and their behavioral response to it. The four quadrants include low registration, sensation seeking, sensory sensitivity, and sensation avoidance.
Low registration refers to a tendency to miss sensory cues that others notice. Sensation seeking involves actively looking for sensory stimulation. Sensory sensitivity involves noticing stimuli more acutely than others. Finally, sensation avoidance involves actively trying to escape or reduce sensory input.
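These four quadrants can be thought of as the combinations of two dimensions: a high or low neurological threshold and an active or passive behavioral response. The small sketch below lays out that mapping. It follows Dunn's widely used model of sensory processing, but the code itself is only an illustration, not part of the instrument or its scoring.

```python
# Illustrative mapping of the four Sensory Profile quadrants onto the two
# dimensions described above (neurological threshold x behavioral response).
# Based on Dunn's model of sensory processing; this is a sketch, not the
# actual questionnaire or its scoring rules.
QUADRANTS = {
    ("high", "passive"): "low registration",     # misses cues that others notice
    ("high", "active"):  "sensation seeking",    # actively looks for stimulation
    ("low",  "passive"): "sensory sensitivity",  # notices stimuli more acutely
    ("low",  "active"):  "sensation avoidance",  # tries to escape or reduce input
}


def quadrant(threshold: str, response: str) -> str:
    """Return the quadrant name for a given threshold/response combination."""
    return QUADRANTS[(threshold, response)]


print(quadrant("low", "active"))  # -> sensation avoidance
```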
The researchers found distinct differences in how the two groups processed sensory information. Individuals with misophonia reported significantly higher levels of sensory sensitivity and sensation avoidance compared to the control group. They also reported lower levels of sensation seeking.
There was no statistical difference between the groups regarding low sensory registration. This indicates that people with misophonia do not lack awareness of sensory input. Instead, their systems appear to be highly reactive to the input they receive.
Within the misophonia group, 80 percent of participants endorsed sensitivity in at least one non-auditory domain. Sensitivity to touch was the most frequently reported non-auditory issue, affecting nearly 57 percent of the clinical group. Of those reporting tactile issues, close to half described their symptoms as moderate to severe. Olfactory sensitivities followed, while visual and taste sensitivities were less common.
Despite the high prevalence of these additional sensitivities, the participants reported that they caused relatively low impairment in their daily lives. This stands in contrast to the significant life disruption caused by their auditory triggers.
For example, 75 percent of participants reported no functional impairment related to their tactile sensitivities. The distress associated with misophonia appears to be tied specifically to the emotional nature of auditory triggers rather than general sensory over-responsivity.
The data indicated a positive association between the severity of misophonia and the intensity of other sensory issues. As misophonia symptoms became more severe, participants were more likely to report higher levels of sensory avoidance and sensitivity. This pattern was also observed in the control group among individuals with subthreshold symptoms. This suggests that sensory vulnerabilities may represent a general risk factor for the development of misophonia-like experiences.
“People with misophonia are most bothered by specific sounds, but many also have sensitivities in other senses, such as touch or smell,” Woolley told PsyPost. “This doesn’t mean they’re overwhelmed by everything; rather, their sensory processing system seems more reactive overall.”
“While many people with misophonia notice certain textures or smells more intensely, these sensitivities typically do not cause major life challenges in the same way misophonic sounds do. We also found that the more severe someone’s misophonia is, the more likely they are to have other sensory sensitivities as well. This doesn’t mean that sensitivities in other senses cause misophonia, but they may reflect a broader sensory processing vulnerability.”
These findings regarding sensory processing align with other recent investigations into the psychological profile of misophonia. A study published in the British Journal of Psychology indicates that the condition may reflect broader cognitive traits rather than being limited to annoyance at noises.
Researchers found that individuals with misophonia struggle with switching attention in emotionally charged situations. This suggests a pattern of mental rigidity that extends beyond the auditory system. Individuals with the condition often hyperfocus on specific sounds and find it difficult to shift their attention elsewhere.
Further evidence regarding attentional processing comes from research published in the Journal of Affective Disorders. This study examined young people and found that those with misophonia exhibit heightened attentional processing compared to those with anxiety disorders.
The data supports the hypothesis that misophonia is linked to a state of increased vigilance. The affected individuals appear to be more aware of environmental stimuli in general. They performed better on tasks requiring the detection of subtle differences in stimuli, indicating a nervous system that is highly tuned to the environment.
The heightened state of arousal observed in misophonia patients also has associations with stress levels. Research published in PLOS One examined the relationship between misophonia severity and various forms of stress. The authors found that higher symptom severity was associated with greater levels of perceived stress and hyperarousal.
This suggests that the condition involves transdiagnostic processes related to how the body manages stress and alertness. While the study did not find a direct causal link to traumatic history, the presence of hyperarousal suggests a physiological state similar to that seen in post-traumatic stress disorders.
The biological underpinnings of these traits have been explored through genetic analysis as well. A large-scale study published in Frontiers in Neuroscience utilized a Genome-Wide Association Study to identify genetic factors. The researchers found that misophonia shares significant genetic overlap with psychiatric disorders such as anxiety and post-traumatic stress disorder. The study identified a specific genetic locus associated with the rage response to chewing sounds.
Understanding misophonia as a condition involving multisensory and cognitive differences helps explain why treatments solely focused on sound often fall short. The combination of sensory avoidance, cognitive rigidity, and physiological hyperarousal points to a complex disorder. The new findings from Woolley and colleagues reinforce the idea that while sound is the primary trigger, the underlying mechanism involves a broader sensory processing vulnerability.
As with all research, the current study by Woolley and colleagues has certain limitations. The researchers did not screen participants for autism spectrum disorder, so it is possible that some reported sensory traits reflect undiagnosed autism. The study relied on a single clinician for interviews, and interrater reliability was not assessed. Additionally, the researchers were unable to compare specific sensory domains between the clinical and control groups due to data limitations in the control set.
Future research should aim to clarify the relationship between misophonia and broader sensory processing patterns using larger samples. Longitudinal designs could help determine how these sensory sensitivities develop over time. It remains to be seen whether these non-auditory sensitivities precede the onset of misophonia or develop concurrently. Further investigation into the mechanisms of sensory over-responsivity could lead to more effective, holistic treatment strategies for those suffering from this challenging condition.
The study, “Sensory processing differences in misophonia: Assessing sensory sensitivities beyond auditory triggers,” was authored by Mercedes G. Woolley, Hailey E. Johnson, Samuel J.E. Knight, Emily M. Bowers, Julie M. Petersen, Karen Muñoz, and Michael P. Twohig.

A new study published in BMC Psychology provides evidence that the way people judge a woman’s physical attractiveness differs fundamentally from how they judge her personality traits. The findings suggest that physical attractiveness is primarily evaluated based on static body features, such as body mass index, while traits like warmth and understanding are inferred largely through body motion and gestures. This research highlights the distinct roles that fixed physical attributes and dynamic movements play in social perception.
Previous psychological research has established that physical appearance substantially influences first impressions. People often attribute positive personality characteristics to individuals who are physically attractive, a phenomenon known as the halo effect. Despite this, there is limited understanding of how specific visual cues contribute to these different types of judgments. While static features like body shape are known to be important, the role of body motion is less clear.
A team from Shanghai International Studies University and McGill University conducted this research to disentangle these factors. They aimed to determine the relative contributions of unchanging body features versus dynamic movements when observers evaluate a woman’s attractiveness and her expressive character traits. They hypothesized that judgments of physical beauty would rely more on stable physical traits. On the other hand, they proposed that judgments of personality would depend more on transient movements.
To test this hypothesis, the researchers recruited fifteen female participants to serve as models, or posers. These women were photographed and filmed to create the visual stimuli for the study. The researchers took detailed physical measurements of each poser. These measurements included height, weight, waist-to-hip ratio, and limb circumference. This allowed the team to calculate body mass index and other anthropometric data points.
The researchers created two types of visual stimuli. For the static images, the posers stood in neutral positions and also adopted specific poses. Some poses were instructed, meaning the models mimicked attractive stances shown to them by the researchers. Other poses were spontaneous: the models were asked to pose in ways they personally considered attractive or unattractive, without specific guidance.
For the dynamic stimuli, the researchers recorded the models delivering a short speech introducing their hometown. The models performed this speech under two conditions. In the first condition, they spoke in a neutral and emotionless manner. In the second condition, they were asked to speak with passion. The goal was to convince an audience to visit their hometown. The researchers then edited these videos. They isolated the first five seconds and the last five seconds of the clips to examine how impressions might change over time.
The study recruited fifty-four adults to act as perceivers. This group consisted of an equal split of twenty-seven men and twenty-seven women. None of the raters knew the models. They viewed the images and silent video clips to provide ratings. The participants rated the physical attractiveness of the women in the images and videos on a seven-point scale.
The participants also evaluated the models on feminine expressive traits. These traits included characteristics such as being understanding, sympathetic, compassionate, warm, and tender. The researchers coded specific body movements in the videos. They tracked variables such as the number of hand gestures used and whether the hands were kept close to the body or moved freely.
The results indicated a clear distinction in how different judgments are formed. When rating physical attractiveness, the statistical analysis showed that static body features were the strongest predictors. This held true for both the static photographs and the video clips. The Lasso regression analysis revealed that body features accounted for a large portion of the variance in attractiveness ratings.
Among the various body measurements, body mass index emerged as the most significant predictor of attractiveness ratings. Models with lower body mass index scores generally received higher attractiveness ratings. Other features like skin color and shoulder-to-hip ratio also played a role. However, body mass index was the most consistent and robust factor.
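For readers curious how an analysis like this works, the sketch below shows a lasso regression predicting mean attractiveness ratings from a few static body features, with body mass index computed as weight divided by height squared. It is a minimal illustration: the feature set, simulated measurements, and ratings are assumptions, not the authors' data or code.

```python
# Minimal sketch of a lasso regression relating static body features to mean
# attractiveness ratings. Not the authors' code: features and values are simulated.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n_posers = 15  # matches the number of models described above

# Assumed anthropometric features; BMI = weight (kg) / height (m) squared.
height_m = rng.uniform(1.55, 1.75, n_posers)
weight_kg = rng.uniform(45, 80, n_posers)
features = np.column_stack([
    weight_kg / height_m**2,            # body mass index
    rng.uniform(0.65, 0.90, n_posers),  # waist-to-hip ratio
    rng.uniform(0.90, 1.40, n_posers),  # shoulder-to-hip ratio
])
ratings = rng.uniform(1, 7, n_posers)  # mean attractiveness on a 7-point scale

# Standardize predictors so the penalty treats them comparably, then let
# cross-validation choose the penalty strength; coefficients shrunk to zero
# mean the lasso dropped that predictor.
X = StandardScaler().fit_transform(features)
model = LassoCV(cv=5).fit(X, ratings)
for name, coef in zip(["BMI", "waist-to-hip", "shoulder-to-hip"], model.coef_):
    print(f"{name}: {coef:+.3f}")
```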
In contrast, body motion had a much smaller impact on judgments of physical attractiveness. The statistical models showed that while movement played a role, it was secondary to fixed physical attributes. For instance, in the video condition, body motions explained only a small fraction of the variance in attractiveness compared to body features.
However, the researchers did find that posture style mattered in photographs. Spontaneous attractive poses were rated higher than instructed attractive poses. This suggests that the women had an intuitive understanding of how to present themselves to appear appealing. They were more effective when allowed to pose naturally than when mimicking a standard attractive pose.
A different pattern emerged for the evaluation of feminine expressive traits. In the video condition, body motion was a much stronger predictor of traits like warmth and compassion than static body features. The frequency of hand gestures and the use of open body language were positively associated with these traits. Body features alone were poor predictors of these personality characteristics.
The study found that neither body features nor body motions effectively predicted feminine traits in static images. This suggests that perceiving these personality attributes requires the observation of movement over time. A static image does not convey enough information for an observer to reliably infer warmth or sympathy.
The researchers also compared the neutral and passionate video conditions. The passionate presentations received higher ratings for both attractiveness and feminine traits. This effect was particularly strong in the final five seconds of the passionate videos. This finding suggests that positive body language accumulates to influence perception. As the observers watched the passionate clips for longer, they perceived greater levels of feminine expressive traits.
The results support the idea that humans use different visual channels for different types of social judgments. Physical attractiveness appears to be assessed rapidly based on stable biological signals. These signals may be associated with health and reproductive potential. In contrast, traits like warmth and understanding are social signals. These are inferred from behavioral cues that unfold during an interaction.
The study has certain limitations that affect the generalizability of the results. The sample size of fifteen posers is relatively small. This restricts the range of body types and movement styles represented in the stimuli. The distribution of body mass index among the posers was not perfectly balanced. There were fewer individuals in the overweight category compared to the healthy weight category.
Future research would benefit from a larger and more diverse group of models. This would allow for a more comprehensive analysis of how different body types interact with movement. The current study focused exclusively on female targets. Cultural norms regarding body language and ideal body types vary significantly. The participants in this study were from a specific cultural background. Future studies should investigate these dynamics across different cultures to see if the patterns hold true.
Another direction for future inquiry involves the interaction of other factors. The current study focused on silent videos to isolate body motion. However, voice and facial expressions are also potent social cues. Future research could examine how body motion interacts with vocal tone and facial expressions to form a holistic impression. It would also be useful to investigate how personality traits of the observer influence these ratings.
This research contributes to the understanding of nonverbal communication. It provides evidence that while we may judge beauty largely by what we see in a snapshot, we judge character by watching how a person moves. The distinction emphasizes that social perception is a complex process integrating multiple streams of visual information.
The study, “Perceiving female physical attractiveness and expressive traits from body features and body motion,” was authored by Lin Gao, Marc D. Pell, Zhikang Peng, and Xiaoming Jiang.

A new study suggests that the way young adults process moral emotions is shaped by a combination of their own personality traits and their memories of how they were raised. The research indicates that mothers and fathers may influence a child’s moral development in distinct ways, but these effects depend heavily on the child’s individual temperament. These findings regarding the roots of shame, guilt, and moral identity were published in the journal Psychological Reports.
To understand these findings, it is necessary to first distinguish between two powerful emotions: guilt and shame. While these feelings are often grouped together, psychologists view them as having different functions and outcomes. Guilt is generally considered a helpful moral emotion. It focuses on a specific behavior, such as realizing one has made a mistake or hurt someone.
Because guilt focuses on an action, it often motivates people to apologize or repair the damage. In contrast, shame is viewed as a negative evaluation of the self. Instead of feeling that they did something bad, a person experiencing shame feels that they are bad. This emotion often leads to withdrawal, avoidance, or hiding from others rather than fixing the problem.
Researchers have previously established that family environments play a major role in which of these emotions a person tends to feel. Warm parenting, characterized by affection and structure, generally helps children internalize morality and develop healthy guilt. Conversely, cold parenting, marked by hostility or rejection, is often linked to higher levels of shame.
However, parents are not the only factor in this equation. A theory known as the bidirectional model suggests that children also influence their parents and their own development through their innate personalities. Lead author CaSandra L. Swearingen-Stanbrough and her colleagues at Missouri State University sought to examine this two-way street. They investigated whether a child’s specific personality traits might change the way parenting styles affect their moral identity.
The researchers recruited ninety-nine undergraduate students from a university in the Midwest. The participants provided demographic information and completed a series of standardized psychological questionnaires. Most participants were white and female, with an average age of roughly 20 years.
The first step for the researchers was to assess the participants’ personalities using the “Big Five” model. This model evaluates traits such as agreeableness, which involves kindness and cooperation, and conscientiousness, which involves organization and reliability. It also measures neuroticism, a trait associated with emotional instability and a tendency toward anxiety.
Next, the students reflected on their upbringing. They completed surveys regarding the parenting styles of their mother and father figures. They rated statements to determine if their parents were perceived as “warm,” meaning supportive and affectionate, or “cold,” meaning harsh or chaotic.
Finally, the researchers measured the participants’ moral tendencies. They used the Moral Identity Questionnaire to assess how central morality was to the students’ self-image. They also used the Guilt and Shame Proneness Scale. This tool presents hypothetical scenarios, such as making a mistake at work, and asks how likely the person is to feel bad about the act (guilt) or feel like a bad person (shame).
The results revealed that mothers and fathers appear to influence different aspects of moral development. The study showed that perceiving a mother as warm was strongly linked to a tendency to feel guilt rather than shame. This connection suggests that affectionate maternal figures help children focus on their behavior rather than internalizing failures as character flaws.
However, this effect was not uniform for everyone. The researchers found that the participant’s personality acted as a moderator. The link between a warm mother and the tendency to feel healthy guilt was strongest in participants who scored high on agreeableness. This means that an agreeable child might be more receptive to a warm mother’s influence in developing reparative moral emotions.
The study also examined “shame withdrawal,” which is the urge to hide or pull away from others when one has done something wrong. Generally, having a warm mother reduced this unhealthy reaction. Yet, this relationship was moderated by neuroticism. For individuals with different levels of emotional stability, the protective effect of a warm mother against shame withdrawal manifested differently.
The findings regarding father figures presented a different pattern. The researchers found that fathers had a stronger statistical connection to “moral integrity” than to the emotional processing of guilt or shame. In this specific study, moral integrity referred to behavioral consistency, such as doing the right thing even when no one is watching.
The data indicated that perceiving a father as cold—characterized by rejection or coercion—was actually associated with higher reported moral integrity. This counter-intuitive finding suggests that strict or harsh paternal environments might sometimes prompt young adults to strictly adhere to rules. However, this relationship was also dependent on personality.
Conscientiousness moderated the link between a cold father and moral integrity. While the general trend showed a link between cold fathers and higher reported integrity, this dynamic changed based on how conscientious the student was. The results imply that highly conscientious individuals process harsh parenting differently than those who are less organized or self-disciplined.
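To make the statistical idea concrete, here is a minimal sketch of how a moderation effect of this kind is commonly tested, using an interaction term in an ordinary least-squares regression. The variable names and simulated values are hypothetical and are not taken from the study.

```python
# Illustrative sketch of a moderation (interaction) analysis, not the authors' code.
# Variable names (cold_father, conscientiousness, moral_integrity) are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 99  # sample size comparable to the study
df = pd.DataFrame({
    "cold_father": rng.normal(size=n),        # perceived paternal coldness (standardized)
    "conscientiousness": rng.normal(size=n),  # Big Five conscientiousness (standardized)
})
# Simulated outcome with a main effect and an interaction, for illustration only
df["moral_integrity"] = (
    0.3 * df["cold_father"]
    + 0.2 * df["conscientiousness"]
    + 0.25 * df["cold_father"] * df["conscientiousness"]
    + rng.normal(scale=1.0, size=n)
)

# The '*' in the formula expands to both main effects plus their product term;
# a significant product term is the statistical signature of moderation.
model = smf.ols("moral_integrity ~ cold_father * conscientiousness", data=df).fit()
print(model.summary())
```

In this framing, the coefficient on the product term captures how much the parenting-outcome slope changes per unit of the personality trait, which is what "personality acted as a moderator" means in practice.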
The authors note that these distinct roles align with previous theories about family dynamics. Mothers are often viewed as the primary source of emotional warmth and acceptance. Consequently, their parenting style has a greater impact on emotional responses like guilt and shame. Fathers, who may exhibit more variable interactions or rougher play, appear to influence the behavioral enforcement of moral rules.
There are limitations to this research that affect how the results should be interpreted. The study relied entirely on self-reported data from the students. This means the results represent the participants’ perceptions of their parents, which may not match what actually occurred during their childhood.
Additionally, the sample size was relatively small and lacked diversity. The participants were primarily white, female college students. This specific demographic does not represent the broader population. Cultural differences in parenting styles and moral values could lead to different results in other groups.
The study is also correlational, meaning it cannot prove that the parenting styles caused the moral outcomes. It is possible that other unmeasured factors influenced the results. Future research would benefit from observing actual moral behavior rather than relying on hypothetical survey questions.
The researchers suggest that future studies should include the parents’ perspectives as well. Comparing what parents believe they did with what children perceived could offer a more complete picture of the family dynamic. Despite these caveats, the study highlights that moral development is not a one-size-fits-all process.
The authors conclude that children are active participants in their own upbringing. A child’s personality filters the parenting they receive. This helps explain why siblings raised in the same household can grow up to have very different emotional and moral responses to the world.
The study, “Mom, Dad, and Me: Personality Moderates the Relationships Between Parenting Traits, Shame, and Morality,” was authored by CaSandra L. Swearingen-Stanbrough, Lauren Smith, and Olive Baron.

A new analysis of millions of social media posts across seven different platforms reveals that the relationship between political content and user engagement is highly dependent on the specific digital environment. The findings suggest that while users tend to engage more with news that aligns with the dominant political orientation of their particular platform, the pattern for information quality is consistent across all of them.
Across all examined sites, users tended to engage more with lower-quality news sources compared to high-quality sources shared by the same individual. The study, which highlights the fragmented nature of the modern online landscape, was published in the Proceedings of the National Academy of Sciences.
The motivation for this research stems from a need to update the scientific understanding of social media dynamics. For many years, academic inquiry into online behavior relied heavily on data derived from a single platform, most notably Twitter (now X).
This concentration occurred largely because Twitter provided an application programming interface that made data collection relatively accessible for scholars. As a result, many assumptions about how misinformation spreads or how political biases function were based on a potentially unrepresentative sample of the internet. The research team sought to correct this by broadening the scope of analysis to include a diverse array of newer and alternative platforms.
The study was conducted by a collaborative group of researchers from several institutions. The team included Mohsen Mosleh from the University of Oxford and the Massachusetts Institute of Technology, Jennifer Allen from New York University, and David G. Rand from the Massachusetts Institute of Technology and Cornell University.
Their goal was to determine if phenomena such as the “right-wing advantage” in engagement or the rapid spread of falsehoods were universal truths or artifacts of specific platform architectures. They also aimed to understand whether the rise of alternative social media sites has led to the creation of “echo platforms,” where entire user bases segregate themselves by political ideology.
To achieve this, the researchers collected data during January 2024. They focused on seven platforms that allow for the public sharing of news links: X, BlueSky, Mastodon, LinkedIn, TruthSocial, Gab, and GETTR. This selection represents a mix of mainstream networks, professional networking sites, decentralized platforms, and sites that explicitly cater to specific political demographics.
The final dataset included nearly 11 million posts that contained links to external news domains. This large sample provided a comprehensive cross-section of online sharing behaviors.
The researchers employed a rigorous set of measures to evaluate the content within these posts. To assess the quality of the news being shared, they did not rely on their own subjective judgments. Instead, they utilized a set of reliability ratings for 11,520 news domains. These ratings were generated through a “wisdom of crowds” methodology that aggregated evaluations from professional fact-checkers, journalists, and academics. This system allowed the team to assign a quality score to the publisher of each link, serving as a proxy for the likely accuracy of the content.
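As a rough illustration of the "wisdom of crowds" idea, the toy sketch below averages trust ratings from several raters into one quality score per domain; the domains, raters, and numbers are invented for the example.

```python
# Toy illustration of a "wisdom of crowds" quality score: average many raters'
# trust ratings per news domain. Domains and ratings below are invented.
import pandas as pd

ratings = pd.DataFrame({
    "domain": ["example-news.com", "example-news.com", "example-blog.net", "example-blog.net"],
    "rater":  ["factchecker_1",    "journalist_2",     "factchecker_1",    "academic_3"],
    "trust":  [0.9,                0.8,                0.3,                0.2],  # 0 = untrustworthy, 1 = trustworthy
})

# One quality score per domain = mean rating across the crowd of raters
quality_scores = ratings.groupby("domain")["trust"].mean()
print(quality_scores)
```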
In addition to quality, the team needed to quantify the political leaning of the news sources. They utilized a sophisticated large language model to estimate the political alignment of each domain. The model was asked to rate domains on a scale ranging from strongly liberal to strongly conservative.
To ensure the validity of these AI-generated estimates, the researchers cross-referenced them with established political benchmarks and found a high degree of correlation. This allowed them to categorize content as left-leaning, right-leaning, or neutral with a high degree of confidence.
The primary statistical method used in the study was a linear regression analysis that incorporated user fixed effects. This is a statistical technique designed to control for variables that remain constant for each individual. By comparing a user’s posts only against other posts by the same user, the researchers effectively removed the influence of popularity. It did not matter if a user had ten followers or ten million. The study measured whether a specific user received more engagement than usual when they shared a specific type of content.
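A minimal sketch of a user fixed-effects regression is shown below, implemented through within-user demeaning (an equivalent formulation); the data are simulated and the variable names are illustrative rather than the authors' own.

```python
# Sketch of a user fixed-effects regression via the "within" transformation:
# demean engagement and news quality within each user, then regress.
# This removes anything constant per user (e.g., follower count / popularity).
# Data below are simulated; variable names are illustrative, not the authors'.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_users, posts_per_user = 200, 30
df = pd.DataFrame({
    "user": np.repeat(np.arange(n_users), posts_per_user),
    "low_quality": rng.integers(0, 2, n_users * posts_per_user),  # 1 = low-quality domain
})
user_popularity = rng.lognormal(mean=2, sigma=1, size=n_users)
df["engagement"] = (
    user_popularity[df["user"]]           # user-level baseline (absorbed by fixed effects)
    + 0.5 * df["low_quality"]             # within-user boost for low-quality links
    + rng.normal(scale=1.0, size=len(df))
)

# Within-user demeaning implements the user fixed effects
demeaned = df.groupby("user")[["engagement", "low_quality"]].transform(lambda x: x - x.mean())
fe_model = sm.OLS(demeaned["engagement"], sm.add_constant(demeaned["low_quality"])).fit()
print(fe_model.params)  # coefficient on low_quality = the within-user engagement difference
```

Because each user is compared only against their own baseline, a celebrity's enormous raw engagement and an ordinary user's modest engagement contribute to the estimate in the same way.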
The results regarding political polarization challenged the idea of a universal advantage for conservative content. The data indicated that the political lean of the most engaging content generally matched the political lean of the platform’s user base.
On platforms known for attracting conservative users, such as TruthSocial, Gab, and GETTR, right-leaning news sources garnered significantly more engagement. On platforms with more liberal or neutral populations, such as BlueSky, Mastodon, and LinkedIn, left-leaning news attracted more likes and shares.
This finding supports the hypothesis of “echo platforms.” In the past, researchers worried about echo chambers forming within a single site like Facebook. The current landscape suggests a migration where users choose entire platforms that align with their views.
The researchers found a strong correlation between the average political lean of a platform and the type of content that gets rewarded with engagement. This implies that the “right-wing advantage” observed in earlier studies of Twitter and Facebook may have been a product of those specific user bases rather than an inherent property of social media.
While political engagement varied by platform, the findings regarding news quality were remarkably consistent. The researchers discovered that on all seven platforms, posts containing links to lower-quality news domains received more engagement than posts linking to high-quality domains. This pattern held true regardless of whether the platform was considered left-leaning, right-leaning, or neutral. It was observed on sites with complex algorithmic feeds as well as on Mastodon, which displays posts in chronological order.
The magnitude of this effect was notable. The analysis showed that a user’s posts linking to the lowest-quality sites received approximately seven percent more engagement than their posts linking to high-quality sites. This effect was robust even when controlling for the political slant of the article. This suggests that the engaging nature of low-quality news is not solely driven by partisanship. The authors propose that factors such as novelty, negative emotional valence, and sensationalism likely contribute to this phenomenon.
The study also clarified the relationship between the volume of content and engagement rates. In terms of absolute numbers, users shared links to high-quality news sources much more frequently than they shared low-quality sources. High-quality news dominates the ecosystem in terms of prevalence. However, the engagement data indicates a discrepancy. While reputable news is shared more often, it generates less excitement or interaction per post compared to low-quality alternatives.
The inclusion of Mastodon in the dataset provided a significant control condition for the study. Because Mastodon does not use an engagement-based ranking algorithm to sort user feeds, the results from that platform suggest that algorithms are not the sole driver of the misinformation advantage. The fact that low-quality news still outperformed high-quality news on a chronological feed points to human psychology as a primary factor. Users appear to naturally prefer or react more strongly to the type of content found in lower-quality outlets.
But as with all research, there are some caveats. The data collection was restricted to a single month, which may not capture seasonal variations or behavior during major political events. The researchers were also unable to include data from Meta platforms like Facebook and Instagram, or video platforms like TikTok, due to data access restrictions. This means the findings apply primarily to text-heavy, link-sharing platforms and may not perfectly translate to video-centric environments.
Additionally, the study is observational, meaning it identifies associations but cannot definitively prove causation beyond the controls applied in the statistical models.
Future research directions could involve expanding the scope of platforms analyzed as data becomes available. Investigating the specific psychological triggers that make low-quality news more engaging remains a priority. The researchers also suggest that further work is needed to understand how the migration of users between platforms affects the spread of information. As the social media landscape continues to fracture, understanding these cross-platform dynamics will become increasingly important.
The study, “Divergent patterns of engagement with partisan and low-quality news across seven social media platforms,” was authored by Mohsen Mosleh, Jennifer Allen, and David G. Rand.

Recent psychological research has identified distinct personality profiles that shed light on why some individuals turn to sexual behavior to manage emotional distress while others do not. The findings suggest that hostility, impulsivity, and deep-seated self-criticism are key factors that distinguish hypersexual coping mechanisms from other forms of emotional insecurity. This research was published in the journal Sexual Health & Compulsivity.
Psychologists classify hypersexuality as a condition involving excessive sexual fantasies, urges, and behaviors. While high sexual desire is natural for many, hypersexuality becomes a clinical concern when it causes distress or disrupts daily life. Many experts view this behavior not merely as a drive for pleasure but as a coping strategy.
Individuals may engage in sexual activity to escape negative emotions such as anxiety, depression, boredom, or loneliness. This creates a cycle where the temporary relief provided by sexual activity reinforces the behavior. Eventually, this pattern can lead to feelings of guilt or shame, which may trigger further urges to cope through sex.
To understand this dynamic, researchers look to attachment theory. This psychological framework describes how early bonds with caregivers shape the way adults relate to others and regulate their emotions. People with secure attachment styles generally feel comfortable with intimacy and trust others.
Those with insecure attachment styles often struggle with these bonds. Anxious attachment involves a fear of abandonment and a constant need for approval. Avoidant attachment involves a discomfort with closeness and a desire for emotional distance.
Prior studies have linked insecure attachment to difficulties in regulating emotions. When individuals cannot manage their feelings effectively, they may seek external ways to soothe themselves. For some, this external method becomes sexuality.
However, not everyone with an insecure attachment style develops hypersexual behaviors. Camilla Tacchino and her colleagues at Sapienza University of Rome sought to understand what separates these groups. They aimed to identify specific psychological profiles based on attachment, self-criticism, and personality traits.
The researchers recruited 562 participants from the general population in Italy. The group was predominantly female and had an average age of roughly 31 years. The participants completed a series of detailed self-report questionnaires.
One survey measured the tendency to use sex as a coping mechanism to deal with emotional pain. Another assessed attachment styles, looking for signs of anxiety or avoidance in relationships. Additional surveys evaluated pathological personality traits and levels of self-criticism.
The team used a statistical method known as latent profile analysis. This technique allows researchers to group participants based on shared patterns across multiple variables. Instead of looking at averages for the whole group, this method identifies distinct “types” of people within the data.
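Latent profile analysis is usually run in dedicated statistical software, but a Gaussian mixture model is a close analogue and conveys the flavor of the approach. The sketch below uses simulated scores and hypothetical variable labels, not the study's data.

```python
# Illustrative analogue of latent profile analysis using a Gaussian mixture model.
# Scores are simulated; in the study, inputs were attachment, self-criticism,
# and personality measures, and fit indices guided the number of profiles.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Simulate three loose clusters on two standardized measures
# (e.g., attachment insecurity and sexual-coping scores -- names are hypothetical)
group_means = np.array([[-1.0, -1.0], [1.0, 1.5], [1.0, -0.8]])
X = np.vstack([rng.normal(loc=m, scale=0.6, size=(150, 2)) for m in group_means])

# Compare solutions with different numbers of profiles using BIC
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(X)
    print(k, "profiles, BIC =", round(gmm.bic(X), 1))

best = GaussianMixture(n_components=3, random_state=0).fit(X)
profile_labels = best.predict(X)          # which profile each participant belongs to
print("profile sizes:", np.bincount(profile_labels))
```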
The analysis revealed three specific profiles. The largest group, comprising 50% of the sample, was labeled “Secure without Sexual Coping.” These individuals showed low levels of attachment anxiety and avoidance. They also reported very low reliance on sex to manage their emotions.
Demographically, this secure group tended to be older than the other groups. They were also more likely to be in romantic relationships and to have children. Psychologically, they displayed the highest levels of emotional stability.
The second profile was labeled “Insecure with Sexual Coping.” This group made up about 13% of the sample. These participants exhibited high levels of attachment insecurity, characterized by both a fear of intimacy and a strong need for approval.
The defining feature of this second profile was their high score on using sex to cope. They frequently reported engaging in sexual acts to deal with life problems or negative feelings. This group was generally younger and less likely to be in a committed relationship.
The third profile was labeled “Insecure without Sexual Coping.” Comprising 37% of the sample, these individuals also scored high on attachment insecurity. They experienced significant worries about relationships and discomfort with closeness. However, unlike the second group, they did not use sex as a coping strategy.
The researchers then compared the personality traits and self-criticism levels of these three groups. The “Secure” group scored the lowest on all measures of pathology. They were generally less self-critical and had fewer negative personality traits.
The “Insecure with Sexual Coping” group displayed a specific set of personality markers. They scored highest in the domains of Antagonism and Disinhibition. Antagonism refers to behaviors that put an individual at odds with others, such as hostility or grandiosity.
Disinhibition involves an orientation toward immediate gratification and impulsive behavior. This suggests that for this group, the drive to use sex as a coping mechanism is linked to difficulties in impulse control. They may act on their urges without fully considering the long-term consequences.
This group also reported high levels of self-hatred. They experienced feelings of disgust and aggression toward themselves. The authors suggest that this self-loathing may be both a cause and a result of their compulsive sexual behavior.
The “Insecure without Sexual Coping” group presented a different psychological landscape. While they shared the attachment insecurities of the second group, they did not exhibit the same levels of impulsivity or hostility. Instead, they scored highest on a dimension called “Negative Affect.”
Negative Affect involves the frequent experience of unpleasant emotions like sadness, worry, and anxiety. This group also reported the highest levels of feeling “inadequate.” They viewed themselves as inferior or flawed but did not turn to impulsive behaviors to manage these feelings.
The researchers interpreted this distinction as a difference in how these groups process distress. The group that uses sex to cope appears to “externalize” their pain. They act out through impulsive and potentially risky behaviors.
In contrast, the insecure group that avoids sexual coping appears to “internalize” their distress. They may be more prone to rumination, depression, or self-blame. Their feelings of inadequacy might paralyze them or lead to withdrawal rather than active coping strategies like sex.
The study highlights that attachment insecurity is a vulnerability factor but does not guarantee hypersexuality. The presence of specific personality traits determines the direction that insecurity takes. Impulsivity and antagonism seem to steer individuals toward hypersexual coping.
Conversely, feelings of deep inadequacy and sadness may steer individuals away from sexual coping. It is possible that their low self-esteem inhibits sexual pursuit. They may fear rejection too deeply to engage with others sexually, even for coping purposes.
There are limitations to this study that contextualize the results. The participants were drawn from the general population rather than a clinical setting. This means the findings describe trends in everyday people rather than patients diagnosed with hypersexual disorders.
Additionally, the data relied entirely on self-report questionnaires. Participants may not always assess their own behaviors or feelings accurately. Social desirability bias could lead some to underreport sexual behaviors or negative traits.
The cross-sectional nature of the study is another consideration. The researchers collected data at a single point in time. This prevents them from determining causality. It is unclear if personality traits cause the sexual coping or if the behavior influences personality over time.
Future research could address these gaps by studying clinical populations. Investigating individuals who are seeking treatment for compulsive sexual behavior would provide a clearer picture of severe cases. Longitudinal studies could also track how these profiles develop over time.
The authors also suggest investigating the role of guilt and shame more deeply. While self-criticism was measured, the specific emotions following sexual acts could offer further insight. Understanding the cycle of shame is essential for treating hypersexuality.
These findings have implications for mental health treatment. They suggest that therapy for hypersexuality should not focus solely on the sexual behavior itself. Clinicians should also address the underlying attachment insecurities and personality traits.
For patients fitting the “Insecure with Sexual Coping” profile, interventions might focus on impulse control. Therapies that target emotion regulation and reduce antagonism could be beneficial. Helping patients find healthier ways to soothe themselves is a primary goal.
For those in the “Insecure without Sexual Coping” profile, treatment might differ. Although they do not present with hypersexuality, their high levels of negative affect require attention. Therapy for this group might focus on building self-esteem and combating feelings of inadequacy.
This study provides a nuanced view of the relationship between personality and sexual behavior. It challenges the idea that hypersexuality is simply a matter of high sex drive. Instead, it frames the behavior as a complex response to emotional and relational deficits.
By identifying these distinct profiles, the researchers have offered a roadmap for better assessment. Mental health professionals can use this information to tailor their approaches. Understanding the specific psychological makeup of a patient allows for more precise and effective care.
The study, “Decoding Hypersexuality: A Latent Profile Approach to Attachment, Self-Criticism, and Personality Disorders,” was authored by Camilla Tacchino, Guyonne Rogier, and Patrizia Velotti.

A new study published in JMIR Serious Games suggests that playing whimsical video games may help young adults manage the symptoms of burnout. The research indicates that titles like Super Mario Bros. can foster a sense of “childlike wonder” that boosts happiness and lowers emotional exhaustion. This effect offers a potential mental health tool for students facing high levels of stress and anxiety.
Young adults today are navigating a developmental period often referred to as “emerging adulthood.” This stage involves identity exploration but also brings specific types of instability and anxiety. Rising costs of living and competitive academic environments contribute to a high risk of burnout among this demographic. The digital world often exacerbates these pressures through constant social media comparisons and an “always-on” work culture.
These cumulative stressors can lead to a state of chronic exhaustion and cynicism. Researchers Winze Tam, Congcong Hou, and Andreas Benedikt Eisingerich sought to understand if specific digital games could offer a solution. They focused on whether the lighthearted nature of Nintendo platformers could provide a necessary mental reset. The team hypothesized that the specific design of these games might counteract the negativity associated with burnout.
The researchers employed a mixed-methods approach to explore this theory. First, they conducted detailed interviews with 41 university students. These participants had experience playing Super Mario Bros. or Yoshi games. The goal was to understand the subjective emotional experience of gameplay in a natural setting. The researchers asked students to reflect on how the games affected their daily lives and emotional states.
During these interviews, students described the bright colors and optimistic music as creating a safe atmosphere. One respondent compared the experience to being “wrapped in a cozy, warm blanket.” Others noted that the games allowed them to appreciate small details, like the animation of clouds or the sounds of jumping. This shift in perspective helped them detach from real-world cynicism. The games offered clear, achievable goals, which stood in contrast to the ambiguous challenges of adult life.
Following the interviews, the team administered a survey to 336 students. This quantitative phase measured three specific variables: burnout risk, overall happiness, and the experience of childlike wonder. The researchers defined childlike wonder as a state of openness, curiosity, and delight in discovery. They used statistical modeling to analyze the relationships between these factors.
The data revealed a positive association between game-induced wonder and general life happiness. The results indicated that happiness fully mediated the relationship between wonder and burnout: the games appear to reduce burnout by fostering wonder, which in turn raises happiness. The findings were consistent across genders.
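A compressed sketch of how a mediation of this sort can be tested, using the product-of-coefficients approach with a percentile bootstrap, appears below; the data are simulated and the variable names are hypothetical rather than drawn from the study.

```python
# Minimal sketch of a mediation analysis (wonder -> happiness -> burnout)
# using the product-of-coefficients approach with a percentile bootstrap.
# Data are simulated; this is not the authors' modeling code.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 336  # survey sample size reported in the article
wonder = rng.normal(size=n)
happiness = 0.5 * wonder + rng.normal(scale=0.8, size=n)
burnout = -0.6 * happiness + rng.normal(scale=0.8, size=n)   # effect runs through happiness

def indirect_effect(w, h, b):
    a = sm.OLS(h, sm.add_constant(w)).fit().params[1]        # wonder -> happiness
    # happiness -> burnout, controlling for wonder
    b_path = sm.OLS(b, sm.add_constant(np.column_stack([h, w]))).fit().params[1]
    return a * b_path

boot = []
idx = np.arange(n)
for _ in range(2000):
    s = rng.choice(idx, size=n, replace=True)
    boot.append(indirect_effect(wonder[s], happiness[s], burnout[s]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print("indirect effect:", round(indirect_effect(wonder, happiness, burnout), 3),
      "95% CI:", (round(lo, 3), round(hi, 3)))
```

"Full mediation" corresponds to the case where the indirect path is reliable while the direct wonder-to-burnout path, after accounting for happiness, is not.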
Eisingerich noted the implications of these results for mental health strategies. He stated, “This study suggests that the path to combating burnout in young adults may lie not just in traditional wellness but also in reclaiming joy.” The authors argue that these games act as a “vacation for the mind.”
This research adds a new dimension to existing knowledge about how video games affect the brain. Previous studies have largely focused on cognitive skills or physiological stress rather than emotional restoration. For instance, a study published in Experimental Brain Research found that 3D platformers could improve memory and focus in older adults. That work highlighted the cognitive demands of navigating virtual spaces to improve executive function.
The current study also contrasts with research that focuses solely on the physiological effects of gaming. A study in the International Journal of Psychophysiology showed that while gaming generally lowered physiological stress markers, violent sections of a game could increase self-reported aggression. The Super Mario study differs by focusing on non-violent content that promotes positive emotions. It suggests that the aesthetic and tone of the game are vital components of its psychological impact.
Recent work from the University of Oxford challenged the idea that the amount of time spent playing matters most. Published in Royal Society Open Science, that study found that the sheer number of hours played did not predict mental well-being. Instead, the player’s perception of how gaming fit into their life was the deciding factor. The current findings support this by emphasizing the quality of the experience—specifically the feeling of wonder—over the duration of play.
Additionally, a longitudinal analysis of PowerWash Simulator players published in ACM Games found that mood improves slightly within the first 15 minutes of play. This aligns with the idea that games can provide immediate emotional uplift. The Super Mario study extends this by linking that uplift to a reduction in long-term burnout symptoms. It identifies a specific emotional pathway involving wonder, rather than just general relaxation.
While the results are promising, the authors note that video games are not a cure-all for systemic issues like financial hardship or workplace inequity. The study relied on self-reported data, which depends on participants accurately assessing their own feelings. It is also possible that people who are already happier are more prone to experiencing wonder.
The researchers also point out that the benefits are likely contingent on moderate, voluntary play. Compulsive gaming used solely to avoid real-world problems could potentially have negative effects. The study focused specifically on university students, so the results may not apply to all age groups.
Future research is needed to track these effects over a longer period to see if the reduction in burnout is sustained. Scientists also need to determine if other genres of games can produce similar benefits or if this effect is unique to the whimsical style of Nintendo platformers. Exploring how these effects vary across different cultures and demographics would also be beneficial.
The study, “Super Mario Bros. and Yoshi Games’ Affordance of Childlike Wonder and Reduced Burnout Risk in Young Adults: In-Depth Mixed Methods Cross-Sectional Study,” was authored by Winze Tam, Congcong Hou, and Andreas Benedikt Eisingerich.

A study conducted in Israel found that survivors of childhood maltreatment showed impaired belief updating when interacting with strangers. Moreover, in individuals with impaired belief updating, childhood maltreatment severity was associated with the severity of PTSD symptoms, while those with better updating showed low levels of PTSD symptoms regardless of the severity of childhood maltreatment they experienced. The research was published in Behaviour Research and Therapy.
Childhood maltreatment occurs when caregivers fail to provide safety, care, or emotional support, or actively cause harm to the child. Such experiences can disrupt normal emotional, cognitive, and social development. Childhood maltreatment is strongly associated with an increased risk for mental health disorders, including depression, anxiety disorders, post-traumatic stress disorder, and substance use disorders. It can also affect stress regulation systems, leading to long-term alterations in brain development and stress hormone functioning.
Individuals exposed to maltreatment often show difficulties with emotion regulation, self-esteem, and interpersonal relationships. Cognitive effects include problems with attention, memory, and executive functioning. Childhood maltreatment is also linked to a higher risk of physical health problems later in life, such as cardiovascular disease, metabolic disorders, and chronic inflammation. The impact of maltreatment varies depending on its type, severity, duration, and the presence of protective factors like supportive relationships.
Study author Shir Porat-Butman and her colleagues note that, among other issues, survivors of childhood maltreatment show greater interpersonal distance. The authors reason that this might be because of their inability to distinguish between friends and strangers, due to a rigid interpersonal style that fails to adapt flexibly to varying levels of relational closeness.
With this in mind, the researchers conducted a study aiming to test whether adults with a history of childhood maltreatment show alterations in learning new positive and negative information about friends and strangers. They also wanted to see whether childhood maltreatment was associated with difficulties in updating social beliefs about friends and strangers when confronted with inconsistent information.
Study participants were 114 individuals recruited from the general population. The authors note that the study was conducted during an ongoing war, meaning that all participants were exposed to additional trauma. Participants’ average age was 28 years, and 70% were female.
Participants completed assessments of childhood maltreatment (the Childhood Trauma Questionnaire), PTSD symptoms (the Posttraumatic Stress Disorder Checklist), social anxiety symptoms (the Mini Social Phobia Inventory), anxiety and depression symptoms (the Hospital Anxiety and Depression Scale), and cumulative traumatic exposure (the Cumulative Traumatic Exposure Questionnaire).
They also completed the Friend-Stranger Social Updating task. In this task, participants are shown pictures of faces labeled either “friends” or “strangers.” Participants have to decide whether to approach or avoid the person.
If they decide to approach, they can either gain or lose points. If they decide to avoid, they neither lose nor gain points. Half of the faces (of both “strangers” and “friends”) are associated with gaining points after approaching them, and approaching the other half of faces loses points.
Crucially, the task includes a second phase (the updating phase). After participants learn these associations, the outcomes are reversed without warning: faces previously associated with gains result in losses, and vice versa. Participants must realize this change and adjust their choices. In this way, the assessment tests participants’ capacity to update beliefs about other people when provided with new, contradictory information.
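To make the two-phase structure concrete, here is a small, purely illustrative simulation of an approach/avoid task with a mid-task outcome reversal; the trial counts, point values, and the simple rule-following "participant" are invented and not taken from the paper.

```python
# Illustrative simulation of a two-phase approach/avoid task with outcome reversal,
# in the spirit of the Friend-Stranger Social Updating task. Faces, point values,
# and the belief-updating "participant" below are all hypothetical.

# each face: (label, id, rewarding_if_approached) -- half of each label type is rewarding
faces = [("friend", i, i % 2 == 0) for i in range(4)] + \
        [("stranger", i, i % 2 == 0) for i in range(4)]

def run_phase(beliefs, reversed_outcomes, n_blocks=10):
    score = 0
    for _ in range(n_blocks):
        for label, fid, rewarding in faces:
            outcome_good = (not rewarding) if reversed_outcomes else rewarding
            if beliefs.get((label, fid), True):          # approach if currently believed safe
                score += 1 if outcome_good else -1
                beliefs[(label, fid)] = outcome_good     # update belief from feedback
            # avoiding yields no points and no new information
    return score

beliefs = {}
print("learning phase score:", run_phase(beliefs, reversed_outcomes=False))
print("updating phase score:", run_phase(beliefs, reversed_outcomes=True))  # contingencies flipped
```

The second call flips every contingency without warning, so any points earned there depend entirely on how quickly beliefs formed in the first phase are revised.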
Results showed that, in general, participants were better at learning after receiving points than after losing them. They were also better at learning when pictures were labeled as “friends” than when they were labeled as “strangers.”
Childhood maltreatment experiences were not associated with the initial formation of beliefs (learning who to avoid or approach in the first phase). However, participants with more severe childhood maltreatment history tended to show impaired updating of beliefs when interacting with strangers. In other words, once the rules changed, they were less efficient in updating their beliefs about which of the “stranger” pictures were now safe or unsafe.
Moreover, this impaired updating of beliefs moderated the association between childhood maltreatment and PTSD symptom severity. In individuals with impaired updating of beliefs, more severe childhood maltreatment was associated with more severe PTSD symptoms. On the other hand, participants with flexible updating of beliefs reported low PTSD symptom levels regardless of childhood maltreatment severity.
“These findings suggest that CM [childhood maltreatment] may disrupt adaptive belief updating in interpersonal contexts, contributing to later vulnerability to psychopathology. The results highlight the potential value of targeting social cognitive processes, particularly belief updating, in interventions aimed at improving social functioning and psychological resilience among individuals with a history of CM,” the study authors concluded.
The study contributes to the scientific understanding of the consequences of childhood maltreatment. However, it should be noted that the assessment of childhood maltreatment used in this study was based on the recall of childhood experiences when the participants were already adults. This means that results might have been affected by recall and reporting bias.
The paper, “From maltreatment to mistrust: Impaired belief updating as a mechanism linking childhood maltreatment to interpersonal and clinical outcomes,” was authored by Shir Porat-Butman, Görkem Ayas, Stefanie Rita Balle, Julia Carranza-Neira, Natalia E. Fares-Otero, Alla Hemi, Billy Jansson, Antonia Lüönd, Tanja Michael, Dany Laure Wadji, Misari Oe, Roxanne M. Sopp, Tanya Tandon, Ulrich Schnyder, Monique Pfaltz, and Einat Levy-Gigi.

A recent study published in Psychology of Sport & Exercise has found that long-term engagement in competitive athletics is linked to reduced aggression in daily life and specific patterns of brain organization. The findings challenge the common stereotype that contact sports foster violent behavior outside of the game. By combining behavioral assessments with advanced brain imaging, the researchers identified a biological basis for the observed differences in aggression between athletes and non-athletes.
Aggression is a complex trait influenced by both biological and environmental factors. A persistent debate in psychology concerns the impact of competitive sports on an individual’s tendency toward aggressive behavior. One perspective, known as social learning theory, suggests that the aggression often required and rewarded in sports like football or rugby can spill over into non-sport contexts. This theory posits that athletes learn to solve problems with physical dominance, which might make them more prone to aggression in social situations.
An opposing perspective argues that the structured environment of competitive sports promotes discipline and emotional regulation. This view suggests that the intense physical and mental demands of high-level competition require athletes to develop superior self-control to succeed.
According to this framework, the ability to inhibit impulsive reactions during a game translates into better behavioral regulation in everyday life. Previous research attempting to settle this debate has yielded mixed results, largely relying on self-reported questionnaires without examining the underlying biological mechanisms.
“This study was motivated by inconsistent findings in previous research regarding the relationship between long-term engagement in competitive sports and aggression,” explained study author Mengkai Luan, associate professor of psychology at the Shanghai University of Sport.
“While some studies suggest that competitive sports, particularly those involving intense physical and emotional demands, may increase off-field aggression through a ‘spillover’ effect, other research indicates that athletes, due to the emotional regulation and discipline developed through long-term training, often exhibit lower levels of aggression in everyday situations compared to non-athletes. This study aims to examine how long-term engagement in competitive athletics is associated with off-field aggression, while also exploring the neural mechanisms underlying these behavioral differences using resting-state functional connectivity analysis.”
The research team recruited a total of 190 participants from a university community in China. The sample consisted of 84 competitive athletes drawn from university football and rugby teams. These athletes had an average of nearly seven years of competitive experience and engaged in rigorous weekly training. The comparison group included 106 non-athlete controls who did not participate in regular organized sports.
All participants completed the Chinese version of the Buss–Perry Aggression Questionnaire. This widely used psychological tool measures an individual’s general aggression levels as well as four specific subtypes. These subtypes include physical aggression, verbal aggression, anger, and hostility. Participants also rated their tendency toward self-directed aggression. The researchers compared the scores of the athlete group against those of the non-athlete control group to identify behavioral differences.
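The kind of between-group comparison this implies can be sketched with an independent-samples t-test on simulated questionnaire totals; the scores below are invented and serve only to illustrate the analysis.

```python
# Sketch of an independent-samples comparison of questionnaire scores between
# athletes and non-athlete controls. Scores below are simulated, not study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
athletes = rng.normal(loc=68, scale=12, size=84)    # hypothetical total aggression scores
controls = rng.normal(loc=75, scale=12, size=106)

t_stat, p_value = stats.ttest_ind(athletes, controls, equal_var=False)  # Welch's t-test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Cohen's d as a simple effect-size estimate
pooled_sd = np.sqrt((athletes.var(ddof=1) + controls.var(ddof=1)) / 2)
print("Cohen's d =", round((athletes.mean() - controls.mean()) / pooled_sd, 2))
```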
Following the behavioral assessment, participants underwent functional magnetic resonance imaging (fMRI) scans. The researchers utilized a resting-state fMRI protocol. This method involves scanning the brain while the participant is awake but not performing any specific cognitive task. It allows scientists to map the brain’s intrinsic functional architecture by observing spontaneous fluctuations in brain activity. This approach is particularly useful for identifying stable, trait-like characteristics of brain organization.
The behavioral data revealed clear differences between the two groups. Athletes reported significantly lower scores on total aggression than the non-athlete controls. When the researchers analyzed the specific subscales, they found that athletes scored lower on physical aggression, anger, hostility, and self-directed aggression.
The only dimension where no significant difference appeared was verbal aggression. These results provide behavioral evidence supporting the idea that competitive sport participation functions as a protective factor against maladaptive aggression.
The brain imaging analysis offered insights into the potential neural mechanisms behind these behavioral findings. The researchers used a method called Network-Based Statistics to compare the whole-brain connectivity matrices of athletes and non-athletes. They identified a large subnetwork where athletes exhibited significantly stronger connectivity than controls. This enhanced network comprised 105 connections linking 70 distinct brain regions.
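Network-Based Statistics, in broad strokes, thresholds edge-wise group differences and then tests the size of the resulting connected components against a permutation null. The simplified sketch below conveys that logic with random data; it is not the authors' pipeline, and the threshold is arbitrary.

```python
# Simplified sketch of Network-Based Statistics (NBS): threshold edge-wise
# t-statistics, find the largest connected component of suprathreshold edges,
# and compare its size (in edges) to a permutation null. Data are random.
import numpy as np
import networkx as nx
from scipy import stats

rng = np.random.default_rng(11)
n_nodes, n_a, n_b = 30, 84, 106
edges = [(i, j) for i in range(n_nodes) for j in range(i + 1, n_nodes)]
conn_a = rng.normal(size=(n_a, len(edges)))   # per-subject edge strengths, group A
conn_b = rng.normal(size=(n_b, len(edges)))   # group B

def largest_component_size(group_a, group_b, t_thresh=3.0):
    t_vals, _ = stats.ttest_ind(group_a, group_b, axis=0)
    g = nx.Graph()
    g.add_edges_from(e for e, t in zip(edges, t_vals) if abs(t) > t_thresh)
    if g.number_of_edges() == 0:
        return 0
    return max(g.subgraph(c).number_of_edges() for c in nx.connected_components(g))

observed = largest_component_size(conn_a, conn_b)

# Permutation null: shuffle group labels and recompute the largest component
pooled = np.vstack([conn_a, conn_b])
null = []
for _ in range(500):
    perm = rng.permutation(len(pooled))
    null.append(largest_component_size(pooled[perm[:n_a]], pooled[perm[n_a:]]))
p_value = (np.sum(np.array(null) >= observed) + 1) / (len(null) + 1)
print("largest component:", observed, "edges; p =", round(p_value, 3))
```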
The strengthened connections in athletes were not random but were concentrated within specific systems. The analysis showed increased integration between the salience network and sensorimotor networks. The salience network is responsible for detecting important stimuli and coordinating the brain’s response, while sensorimotor networks manage movement and sensory processing. This pattern suggests that the athletic brain is more efficiently wired to integrate sensory information with motor control and attentional resources.
To further understand the link between brain function and behavior, the authors employed a machine-learning technique called Connectome-Based Predictive Modeling. This analysis aimed to determine if patterns of brain connectivity could accurately predict an individual’s aggression scores, regardless of their group membership. The model successfully predicted levels of total aggression and physical aggression based on the fMRI data.
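A compressed sketch of connectome-based predictive modeling is given below: edges are selected by their correlation with behavior in the training subjects, summed per subject, and fed into a linear model that predicts the held-out subject's score. The connectivity data and the selection threshold are simulated and simplified relative to the published analysis.

```python
# Compressed sketch of connectome-based predictive modeling (CPM) with
# leave-one-out cross-validation. Connectivity matrices and aggression scores
# are simulated; thresholds and details differ from the published analysis.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_subj, n_edges = 190, 500
connectivity = rng.normal(size=(n_subj, n_edges))
aggression = connectivity[:, :20].sum(axis=1) * 0.3 + rng.normal(size=n_subj)  # toy signal

predicted = np.zeros(n_subj)
for test in range(n_subj):
    train = np.delete(np.arange(n_subj), test)
    # 1) Feature selection: correlate each edge with behavior in the training set
    r = np.array([stats.pearsonr(connectivity[train, e], aggression[train])[0]
                  for e in range(n_edges)])
    selected = np.abs(r) > 0.2                       # simple correlation threshold
    # 2) Summarize each subject as the sum of selected edge strengths
    train_sum = connectivity[train][:, selected].sum(axis=1)
    test_sum = connectivity[test, selected].sum()
    # 3) Fit a linear model on training subjects, predict the held-out subject
    slope, intercept = np.polyfit(train_sum, aggression[train], 1)
    predicted[test] = slope * test_sum + intercept

print("prediction accuracy r =", round(stats.pearsonr(predicted, aggression)[0], 2))
```

Because each prediction is made for a subject the model never saw, the final correlation between predicted and observed scores indexes how much behavior-relevant information the connectivity pattern actually carries.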
The predictive modeling revealed that lower levels of aggression were associated with specific connectivity patterns involving the prefrontal cortex. The prefrontal cortex is the brain region primarily responsible for executive functions, such as decision-making, impulse control, and planning.
The analysis showed that stronger negative connections between the prefrontal cortex and subcortical regions were predictive of reduced aggression. This implies that a well-regulated brain utilizes top-down control mechanisms to inhibit impulsive drives originating in deeper brain structures.
The researchers also found a significant overlap between the group-level differences and the individual prediction models. Four specific neural connections were identified both as distinguishing features of the athlete group and as strong predictors of lower aggression. These connections involved the orbitofrontal cortex and the cerebellum. The orbitofrontal cortex is key for emotion regulation, while the cerebellum is traditionally associated with balance and motor coordination but is increasingly recognized for its role in emotional processing.
The convergence of these findings suggests that the demands of competitive sports may induce neuroplastic changes that support better behavioral regulation. The need to execute complex motor skills while managing high levels of physiological arousal and adhering to game rules likely strengthens the neural pathways that integrate motor and emotional control. This enhanced neural efficiency appears to extend beyond the field, helping athletes manage frustration and suppress aggressive impulses in their daily lives.
“The study challenges the common stereotype that individuals who participate in competitive, contact sports are more aggressive or dangerous in everyday life,” Luan told PsyPost. “In fact, the research suggests that long-term participation in these sports may help individuals manage aggression better. Through their training, they develop emotional regulation and self-discipline, which may be linked to brain changes that help them control aggression and behavior off the field.”
There are some limitations. The research utilized a cross-sectional design, which captures data at a single point in time. This means the study cannot definitively prove that sports training caused the brain changes or the reduced aggression. It is possible that individuals with better emotional regulation and specific brain connectivity patterns are naturally drawn to and successful in competitive sports.
The sample was also limited to university-level athletes in team-based contact sports within a specific cultural setting. Cultural values regarding emotion and social harmony may influence how aggression is expressed and regulated.
“One of our long-term goals is to expand the sample to include athletes from a wider range of sports, including individual and non-contact sports, as well as participants from different cultural backgrounds,” Luan said. “This would help increase the generalizability of our findings.”
“Additionally, since our current study is cross-sectional, it cannot establish causal relationships. In future research, we plan to adopt longitudinal and intervention-based designs to better understand the causal mechanisms behind the observed effects, and to separate pre-existing individual traits from the neural adaptations resulting from sustained athletic training.”
The study, “Competitive sport experience is associated with reduced off-field aggression and distinct functional brain connectivity,” was authored by Yujing Huang, Zhuofei Lin, Chenglin Zhou, Yingying Wang, and Mengkai Luan.

Recent research published in the International Journal of Cosmetic Science provides evidence that wrinkles around the eyes are the primary physical feature driving perceptions of age and attractiveness across diverse ethnic groups. While factors such as skin color and gloss contribute to how healthy a woman appears, the depth and density of lines in the periorbital region consistently predict age assessments in women from Asia, Europe, and Africa.
The rationale behind this study stems from the fact that the skin around the eyes is structurally unique. It is significantly thinner than facial skin in other areas and contains fewer oil glands. This biological reality makes the eye area particularly susceptible to the effects of aging and environmental damage.
In addition to its delicate structure, the skin around the eyes is subjected to constant mechanical stress. Humans blink approximately 15,000 times per day, and these repeated muscle contractions eventually lead to permanent lines. Previous surveys have indicated that women worldwide consider under-eye bags, dark circles, and “crow’s feet” to be among their top aesthetic concerns.
However, most prior research on this topic has focused on specific populations or general facial aging. It has remained unclear whether specific changes in the eye region influence social perceptions in the same way across different cultures. The authors of the current study aimed to determine if the visual impact of periorbital skin features is consistent globally or if it varies significantly by ethnicity.
To investigate this, the researchers utilized a multi-center approach involving participants and assessors from five distinct locations. Data collection took place in Guangzhou, China; Tokyo, Japan; Lyon, France; New Delhi, India; and Cape Town, South Africa. The team initially recruited 526 women across these five locations to serve as the pool for the study.
From this larger group, the researchers selected a standardized subset of 180 women to serve as the subjects of the analysis. This final sample included exactly 36 women from each of the five ethnic groups. The participants ranged in age from 20 to 65 years, allowing for a comprehensive view of the aging process.
The researchers recorded high-resolution digital portraits of these women using a specialized system known as ColorFace. This equipment allowed for the standardization of lighting and angles, which is essential for accurate computer analysis. The team then defined two specific regions of interest on each face for detailed measurement.
The first region analyzed was the area directly under the eyes, which included the lower eyelid and the infraorbital hollow. The second region was the area at the outer corners of the eyes where lateral canthal lines, commonly known as crow’s feet, typically develop. The researchers used digital image analysis software to objectively quantify skin characteristics in these zones.
For the region under the eyes, the software measured skin color, gloss, skin tone evenness, and wrinkles. Skin color was broken down into specific components, including lightness, redness, and yellowness. Gloss was measured in terms of its intensity and contrast, while tone evenness was calculated based on the similarity of adjacent pixels.
For the crow’s feet region, the analysis focused exclusively on the measurement of wrinkles. The software identified wrinkles by detecting lines in the image that met specific criteria. The researchers quantified these features by calculating the total length of the wrinkles, their density within the region, and their volume.
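As a toy illustration of how such summary metrics can be derived once wrinkle segments have been detected, the sketch below computes total length and density for a region of interest; the segments, calibration values, and region size are invented.

```python
# Toy sketch of summarizing detected wrinkles in a region of interest.
# Assumes a line-detection step has already produced wrinkle segments as
# pixel-endpoint pairs; the segments and region size below are invented.
import math

region_area_mm2 = 600.0      # hypothetical crow's-feet region of interest
mm_per_pixel = 0.05          # hypothetical image calibration

# Each wrinkle segment: ((x1, y1), (x2, y2)) in pixel coordinates
segments = [((10, 12), (90, 30)), ((15, 40), (70, 52)), ((20, 70), (55, 95))]

def length_mm(seg):
    (x1, y1), (x2, y2) = seg
    return math.hypot(x2 - x1, y2 - y1) * mm_per_pixel

total_length_mm = sum(length_mm(s) for s in segments)
density = total_length_mm / region_area_mm2          # wrinkle length per unit area
print(f"total length: {total_length_mm:.2f} mm, density: {density:.4f} mm/mm^2")
```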
To determine how these objective features translated into social perceptions, the study employed a large panel of human assessors. The researchers recruited 120 assessors in each of the five study locations, resulting in a total of 600 raters. These assessors were “naïve,” meaning they were not experts in dermatology or cosmetics.
The assessors were matched to the participants by ethnicity. For example, Chinese assessors rated the images of Chinese women, and French assessors rated the images of French women. Each assessor viewed the digital portraits on color-calibrated monitors.
They were asked to rate each face for perceived age, health, and attractiveness. These ratings were given on a continuous scale ranging from 0 to 100, where 0 represented a low attribute score and 100 represented a high attribute score. The researchers then used statistical methods to identify relationships between the objective skin measurements and the subjective ratings.
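A minimal sketch of this step, relating simulated skin measurements to perceived-age ratings with a multiple regression, is shown below; the predictor names and values are illustrative rather than the study's measurements.

```python
# Sketch of a multiple regression relating objective skin measurements to
# perceived age, in the spirit of the analyses described in the article.
# All values are simulated; predictor names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 36  # per-ethnicity sample size reported in the study
df = pd.DataFrame({
    "crow_feet_wrinkles": rng.normal(size=n),   # standardized wrinkle density/volume
    "under_eye_wrinkles": rng.normal(size=n),
    "skin_lightness": rng.normal(size=n),
    "gloss": rng.normal(size=n),
})
df["perceived_age"] = (
    45 + 6 * df["crow_feet_wrinkles"] + 3 * df["under_eye_wrinkles"]
    - 2 * df["gloss"] + rng.normal(scale=4, size=n)
)

model = smf.ols(
    "perceived_age ~ crow_feet_wrinkles + under_eye_wrinkles + skin_lightness + gloss",
    data=df,
).fit()
print(model.params)       # which features carry predictive weight
print(model.pvalues)
```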
The results revealed distinct biological differences in how skin ages across the different groups. For instance, Indian and South African women tended to have lower skin lightness scores under the eyes compared to Chinese, Japanese, and French women. South African women also exhibited the highest density of wrinkles in the under-eye region among all groups.
Regarding the crow’s feet region, the analysis showed that South African, Chinese, and French women had similar levels of wrinkling. These levels were notably higher than those observed in Indian and Japanese women. This finding aligns with some previous research suggesting that wrinkle onset and progression can vary significantly based on ethnic background.
Despite these physical differences, the study found strong consistencies in how these features influenced perception. When looking at the full sample, wrinkles in both the under-eye and crow’s feet regions showed a strong positive correlation with perceived age. This means that as wrinkle density and volume increased, assessors consistently rated the faces as looking older.
On the other hand, wrinkles were negatively correlated with ratings of health and attractiveness. Faces with more pronounced lines around the eyes were perceived as less healthy and less attractive. This pattern held true regardless of the ethnic group of the woman or the assessor.
The study also highlighted the role of skin gloss, or radiance. Higher levels of specular gloss, which corresponds to the shine or glow of the skin, were associated with perceptions of better health and higher attractiveness. This suggests that skin radiance is a universal cue for vitality.
In contrast, skin tone evenness showed a more complex relationship. While generally associated with youth and health, it appeared to be a stronger cue for health judgments than for age. Uneven pigmentation and lower skin lightness were linked to lower health ratings, particularly in populations with darker skin tones.
Regression analyses allowed the researchers to determine which features were the strongest predictors of the ratings. For perceived age, wrinkles in the crow’s feet region emerged as a significant predictor for all five ethnic groups. This confirms that lines at the corners of the eyes are a primary marker used by people to estimate a woman’s age.
For Japanese and French women, wrinkles specifically under the eyes provided additional information for age judgments. This suggests that in these groups, the under-eye area may contribute more distinct visual information regarding aging than in other groups.
When predicting perceived health, the results were more varied. While wrinkles remained a negative predictor, skin color variables played a more prominent role. For Indian women, lighter skin in the under-eye region was a significant positive predictor of rated health.
Similarly, for South African women, skin yellowness was a positive predictor of both health and attractiveness ratings. This indicates that while wrinkles drive age perception, color cues are vital for judgments of well-being in these populations. The researchers posit that pigmentary issues, such as dark circles, may weigh more heavily on health perception in darker skin types.
An exception to these specific predictive patterns was observed in the French group regarding health ratings. While the overall statistical models were effective, no single skin feature stood out as a solitary predictor for health judgments in French women. This implies that French assessors might use a more holistic approach, combining multiple features rather than relying on a single cue like wrinkles or color.
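As a rough illustration of the regression step described above, the sketch below fits a single ordinary least squares model predicting mean perceived age from a handful of skin features. The file and column names are hypothetical, and the study's actual models (fitted separately per ethnic group) will differ in their details.

```python
import pandas as pd
import statsmodels.api as sm

# One row per photographed participant: the objective skin measurements and
# the mean rating that face received from the assessor panel.
# File and column names are hypothetical, not the study's actual variables.
df = pd.read_csv("skin_features_and_ratings.csv")

features = [
    "crows_feet_wrinkle_density", "under_eye_wrinkle_density",
    "under_eye_lightness", "under_eye_yellowness",
    "gloss_intensity", "tone_evenness",
]

X = sm.add_constant(df[features])            # predictors plus an intercept
model = sm.OLS(df["perceived_age"], X).fit()

# The coefficient table indicates which features predict perceived age
# once the other features are held constant.
print(model.summary())
```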
The study has certain limitations that warrant mention. The sample size for the specific sub-group analyses was relatively small, with only 36 women per ethnicity. This reduces the statistical power to detect very subtle differences within each group.
Additionally, the study relied on static digital images. In real-world interactions, facial dynamics and expressions play a major role in the visibility of crow’s feet and other lines. Future research could investigate how movement influences the perception of these features.
The study, “Effects of under-eye skin and crow’s feet on perceived facial appearance in women of five ethnic groups,” was authored by Bernhard Fink, Remo Campiche, Todd K. Shackelford, and Rainer Voegeli.

As the world’s population ages, the number of people living with dementias such as Alzheimer’s disease increases. Given the lack of curative treatments and the limited effectiveness of available medications, interest in new therapeutic approaches is growing. Among them are cannabinoids from the cannabis plant.
A small new Brazilian study published in the Journal of Alzheimer’s Disease investigated the effects of microdoses of cannabis extract on patients with mild Alzheimer’s disease. The results showed positive effects, without the “high” associated with cannabis.
The study, led by professor Francisney Nascimento and colleagues at the Federal University of Latin American Integration (UNILA), recruited 24 elderly patients (60-80 years) diagnosed with mild Alzheimer’s. It evaluated the effects of daily use of an oil prepared from Cannabis extract containing THC and CBD in similar proportions and extremely low concentrations (0.3 mg of each cannabinoid). These sub-psychoactive doses do not cause the “high” associated with recreational use of the plant.
The extract used was donated by ABRACE, Brazil’s biggest patient association, and the study received no contribution from cannabis companies or other funding sources.
“Microdosing” is a term usually associated with recreational use of psychedelics. Given the size of the dose, it would be easy to question whether it could have any effect at all.
Doses below 1 mg of a cannabinoid are rarely reported in the clinical literature. However, the researchers’ decision to use microdosing did not come out of nowhere. In 2017, the group led by Andreas Zimmer and Andras Bilkei-Gorzo had already demonstrated that very low doses of THC restored cognition in elderly mice, returning gene expression patterns and brain synapse density in the hippocampus to levels similar to those of young animals.
Subsequently, other studies in mice reinforced that the endocannabinoid system, which is important for neuroprotection and regulates normal brain activity (ranging from body temperature to memory), undergoes a natural decline during ageing.
Inspired by these findings, the group initially tested microdosing of cannabis extract in a single patient with Alzheimer’s disease for 22 months. They found cognitive improvement, assessed using the ADAS-Cog scale, a set of tasks, such as word recall, that test cognitive function. This prompted the decision to run a more robust clinical trial to verify the cognitive-enhancement effects observed in that patient. The second study was a properly controlled, randomised, double-blinded clinical trial.
Several clinical scales were used to objectively measure the impact of cannabis treatment. This time, the improvement was observed on the mini-mental state exam (MMSE), a widely used scale for assessing cognitive function in patients with dementia. It is a validated set of questions put to the patient, with the aid of an accompanying person (typically a family member or helper). After 24 weeks of treatment, the group receiving the cannabis extract showed stabilisation in their scores, while the placebo group showed cognitive deterioration (worsening of Alzheimer’s symptoms).
The impact was modest but relevant: patients using cannabis microdosing scored two to three points higher than their placebo counterparts (the maximum MMSE score is 30). In patients with preserved or moderately impaired cognitive function, it may be unrealistic to expect major changes in a few weeks.
The cannabis extract did not improve non-cognitive outcomes such as depression, general health or overall quality of life. At the same time, there was no difference in adverse side effects between the groups, likely because of the extremely low dose used.
This result echoes findings from my 2022 study, which found a reduction in endocannabinoid signalling during ageing, meaning ageing brains are more prone to cognitive degradation without the protection of cannabinoids. Among other mechanisms, cannabinoids seem to protect cognition by reducing drivers of inflammation in the brain.
The biggest obstacle to the acceptance of cannabis as a therapeutic tool in brain ageing is perhaps not scientific, but cultural. In many countries, the fear of “getting high” deters many patients and even healthcare professionals. But studies such as this show there are ways to get around this problem by using doses so low they do not cause noticeable changes in consciousness, but which can still modulate important biological systems, such as inflammation and neuroplasticity.
Microdoses of cannabis can escape the psychoactive zone and still deliver benefits. This could open the door to new formulations focused on prevention, especially in more vulnerable populations, such as elderly people with mild cognitive impairment or a family history of dementia.
Despite its potential, the study also has important limitations: the sample size is small, and the effects were restricted to one dimension of the cognition scale. Still, the work represents an unprecedented step: it is the first clinical trial to successfully test the microdose approach in patients with Alzheimer’s disease. It is a new way of looking at this plant in the treatment of important diseases.
To move forward, new studies with more participants, longer follow-up times, and biological markers (such as neuroimaging and inflammatory biomarkers) will be necessary. Only then will it be possible to answer the fundamental question: can cannabis slow down the progression of Alzheimer’s disease? We have taken an important step towards understanding this, but for now, the question remains unanswered.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

Recent research indicates that bodily inflammation may disrupt the brain’s ability to process rewards and risks in American Indian adults who have experienced depression. The study found that higher levels of specific inflammatory markers in the blood corresponded with reduced activity in brain regions essential for motivation. These findings were published in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.
Major Depressive Disorder is a complex mental health condition that goes beyond feelings of sadness. One of its hallmark symptoms is anhedonia, which is a reduced ability to experience pleasure or interest in daily activities. This symptom is often linked to dysfunctions in the brain’s reward circuitry. This system governs how the brain anticipates positive outcomes, such as winning a prize, or negative outcomes, like a financial loss.
Scientists are increasingly looking at the immune system to understand these brain changes. Physical inflammation is the body’s natural response to injury or stress. However, chronic stress can lead to persistent, low-grade inflammation that affects the entire body. Over time, the immune system releases signaling proteins called cytokines that can cross into the brain. Once there, these proteins may alter how neural circuits function.
This biological connection is particularly relevant for American Indian populations. Many Indigenous communities face unique and chronic stressors rooted in historical trauma. These stressors include the long-term psychological impacts of colonization and systemic health disparities. Previous research links symptoms of historical loss to higher risks for both depression and physical health issues.
The researchers hypothesized that this unique stress environment might elevate inflammation levels. They proposed that this inflammation could, in turn, impair the brain’s reward system. This pathway might explain why depression prevalence and severity can be higher in these communities. To test this, the study focused on American Indian individuals who had been diagnosed with Major Depressive Disorder at some point in their lives.
Leading the investigation was Lizbeth Rojas from the Department of Psychology at Oklahoma State University. She collaborated with a team of experts from the Laureate Institute for Brain Research and other academic institutions. The team aimed to move beyond simple surveys by looking at direct biological and neurological evidence. They sought to connect blood markers of inflammation with real-time brain activity.
The study included 73 adult participants who identified as American Indian. All participants had a history of clinical depression. To assess their biological state, the researchers collected blood samples from each individual. They analyzed these samples for specific biomarkers related to the immune system.
The team measured levels of proinflammatory cytokines, which promote inflammation. These included tumor necrosis factor (TNF) and interleukin-6 (IL-6). They also measured C-reactive protein (CRP), a general marker of inflammation produced by the liver. Additionally, they looked at interleukin-10 (IL-10), a cytokine that helps reduce inflammation.
To observe brain function, the researchers utilized two advanced imaging technologies simultaneously. Participants entered a functional magnetic resonance imaging (fMRI) scanner. This machine measures brain activity by tracking changes in blood oxygen levels. At the same time, participants wore caps to record electroencephalography (EEG) data. EEG measures the electrical activity of the brain with high time precision.
While inside the scanner, the participants performed a specific psychological test called the Monetary Incentive Delay task. This task is designed to activate the brain’s reward centers. Participants viewed a screen that displayed different visual cues. Some cues indicated a chance to win money, while others indicated a risk of losing money.
After seeing a cue, the participant had to press a button rapidly. If they were fast enough on a “win” trial, they gained a small amount of cash. If they were fast enough on a “loss” trial, they avoided a financial penalty. The researchers focused on the “anticipation phase” of this task. This is the brief moment after seeing the cue but before pressing the button.
During this anticipation phase, a healthy brain typically shows high activity in the basal ganglia. This is a group of structures deep in the brain that includes the striatum. The striatum is essential for processing incentives and generating the motivation to act. In people with depression, this area often shows “blunted” or reduced activity.
The study’s results revealed a clear link between the immune system and this brain activity. The researchers used statistical models to predict brain response based on inflammation levels. They found that higher concentrations of TNF were associated with reduced activation in the basal ganglia during the anticipation of a potential win.
This relationship was notably influenced by the sex of the participant. The negative association between TNF and brain activity was observed specifically in male participants. This suggests that for men in this sample, high inflammation dampened the brain’s excitement about a potential reward.
The researchers also examined how the brain reacted to the threat of losing money. In this context, they looked at the interaction between TNF and CRP. They found that elevated levels of both markers predicted reduced brain activation. The basal ganglia were less responsive even when the participant was trying to avoid a negative outcome.
Another finding involved the nucleus accumbens, a key part of the brain’s reward circuit. The study showed that medication status played a role here. Among participants taking psychotropic medication, higher TNF levels were linked to lower activity in this region during loss anticipation. This highlights the complexity of how treatments and biology interact.
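Moderation effects like these are typically tested with interaction terms in a regression model. The sketch below shows one generic way such a model can be specified in Python; the dataset, variable names, and covariates are illustrative assumptions, not the authors' actual analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per participant: blood inflammatory markers and the fMRI contrast
# for basal ganglia activation during loss anticipation. Column names are
# hypothetical, not the study's actual variables.
df = pd.read_csv("inflammation_fmri.csv")

# "TNF * CRP" expands to TNF + CRP + TNF:CRP, so the model estimates both
# main effects and their interaction; sex and medication status enter as
# categorical covariates and could likewise be crossed with TNF to probe
# the sex- and medication-specific patterns described above.
model = smf.ols(
    "bg_loss_anticipation ~ TNF * CRP + C(sex) + C(on_medication)",
    data=df,
).fit()
print(model.summary())
```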
The study also attempted to use EEG to measure a specific brain wave called the P300. The P300 is a spike in electrical activity that relates to attention and updating working memory. Previous studies have suggested that people with depression have a smaller P300 response. The researchers expected inflammation to predict the size of this brain wave.
However, the analysis did not find a statistical link between the inflammatory markers and the P300 amplitude. The electrical signals did not show the same clear pattern as the blood flow changes measured by the fMRI. This suggests that inflammation might affect the metabolic demand of brain regions more than the specific electrical timing measured by this task.
These findings support the idea that the immune system plays a role in the biology of depression. The presence of high inflammation appears to “turn down” the brain’s sensitivity to incentives. When the brain is less responsive to rewards, a person may feel less motivation. This aligns with the clinical experience of patients who feel a lack of drive or pleasure.
The authors described several limitations that provide context for these results. The study relied on a relatively small sample size of 73 people. A larger group would provide more statistical certainty. Additionally, the data came from parent studies that were not designed exclusively for this specific investigation.
Another limitation was the lack of a healthy control group. The study only looked at people with a history of depression. Without a non-depressed comparison group, it is difficult to determine if these patterns are unique to depression. They might also appear in people with high inflammation who are not depressed.
The study also could not fully account for cultural factors. While the background emphasizes the role of historical trauma, the analysis did not measure cultural connectedness. Previous research suggests that connection to one’s culture can protect against stress. It acts as a buffer that might improve mental health outcomes.
Despite these caveats, the research offers a specific biological target for understanding depression in American Indian populations. It moves away from purely psychological explanations. Instead, it frames mental health within a “biopsychosocial” model. This model considers how biological stress and social history combine to affect the brain.
The authors suggest that future research should focus on resilience. Understanding how some individuals maintain low inflammation despite stress could be key. This could lead to better prevention strategies. Interventions might focus on reducing inflammation as a way to help restore normal brain function.
Treating depression in these communities may require addressing physical health alongside mental health. If inflammation drives brain dysfunction, then reducing stress on the body is vital. This reinforces the need for holistic healthcare approaches. Such approaches would respect the unique history and challenges faced by American Indian communities.
The study, “Major Depressive Disorder and Serum Inflammatory Biomarkers as Predictors of Reward-Processing Dysfunction in an American Indian Sample,” was authored by Lizbeth Rojas, Eric Mann, Xi Ren, Danielle Bethel, Nicole Baughman, Kaiping Burrows, Rayus Kuplicki, Leandra K. Figueroa-Hall, Robin L. Aupperle, Jennifer L. Stewart, Salvador M. Guinjoan, Sahib S. Khalsa, Jonathan Savitz, Martin P. Paulus, Ricardo A. Wilhelm, Neha A. John-Henderson, Hung-Wen Yeh, and Evan J. White.

An analysis of the National Health and Nutrition Examination Survey data of older adults found no independent association between visceral adiposity and cognitive performance. While some correlations were initially found, these disappeared after the study authors controlled for sociodemographic factors and clinical conditions. The paper was published in Medicine.
Adiposity refers to the accumulation of body fat. It reflects the amount and distribution of fat tissue in the body. While some adiposity is normal and necessary for energy storage, insulation, and hormone regulation, excessive adiposity increases the risk of metabolic and cardiovascular diseases. Body fat is not uniform, and its health impact depends greatly on where it is located.
One specific type of adiposity is visceral adiposity. Visceral adiposity refers specifically to fat stored deep inside the abdominal cavity, surrounding organs such as the liver, pancreas, and intestines. This visceral fat is metabolically active and releases inflammatory molecules and hormones that disrupt glucose and lipid metabolism.
High visceral adiposity is strongly linked to insulin resistance, type 2 diabetes, hypertension, and heart disease, often more so than general obesity. In contrast, subcutaneous fat stored under the skin is less harmful and sometimes even protective when overall weight is stable. People may have a normal body weight yet still exhibit high visceral adiposity, a condition sometimes called “normal-weight obesity.”
Study author Long He and his colleagues note that previous studies indicated an association between excess adiposity and age-related cognitive decline in older individuals. They also note that visceral adiposity has been associated with a heightened risk of metabolic disorders. With this in mind, the authors investigated whether visceral adiposity is associated with cognitive performance in older adults.
The study authors analyzed data from the National Health and Nutrition Examination Survey (NHANES) 2011 to 2014. This is an epidemiological survey that uses a complex sampling system to obtain nationally representative data on the health and nutritional status of U.S. civilians.
To estimate visceral fat, the researchers used the Visceral Adiposity Index (VAI), a calculated score based on waist circumference, Body Mass Index (BMI), triglycerides, and HDL cholesterol. They analyzed data from 1,323 participants who were 60 years of age or older and for whom data on all cognitive assessments was available. These individuals completed the NHANES cognitive battery consisting of three tests: the CERAD Word List Learning Test, which measures immediate and delayed verbal memory; the Animal Fluency Test (AFT), which assesses semantic retrieval and executive functioning; and the Digit Symbol Substitution Test (DSST), which evaluates processing speed, attention, and working memory.
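For reference, the VAI is usually computed with the sex-specific equations of Amato and colleagues (2010). The sketch below implements that commonly cited formulation, assuming triglycerides and HDL in mmol/L and waist circumference in centimeters; how the present authors handled NHANES unit conversions is not stated in the article, so treat those details as assumptions.

```python
def visceral_adiposity_index(sex, waist_cm, bmi, triglycerides_mmol, hdl_mmol):
    """Visceral Adiposity Index, sex-specific formulation of Amato et al. (2010).

    Triglycerides and HDL are in mmol/L, waist circumference in cm. This is
    the commonly cited formula; the paper's exact implementation is assumed.
    """
    if sex == "male":
        return (waist_cm / (39.68 + 1.88 * bmi)) * \
               (triglycerides_mmol / 1.03) * (1.31 / hdl_mmol)
    if sex == "female":
        return (waist_cm / (36.58 + 1.89 * bmi)) * \
               (triglycerides_mmol / 0.81) * (1.52 / hdl_mmol)
    raise ValueError("sex must be 'male' or 'female'")

# Example: waist 95 cm, BMI 28, triglycerides 1.7 mmol/L, HDL 1.2 mmol/L
print(visceral_adiposity_index("female", 95, 28, 1.7, 1.2))
```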
Results showed that after controlling for demographic factors and participants’ health conditions, there was no statistically significant association between participants’ VAI scores and their performance on the cognitive tests. While some of the cognitive tests showed associations with VAI scores before all demographic and clinical factors were taken into account, these associations disappeared after full adjustment.
“Age- and lifestyle-adjusted analyses showed inverse, domain‑specific links between higher VAI and cognition (most notably processing speed), but these weakened after full sociodemographic and clinical adjustment, suggesting measured sociodemographic and cardiometabolic factors largely explain the crude associations,” the study authors concluded.
The study contributes to the scientific understanding of the links between visceral adiposity and cognitive performance. However, it should be noted that visceral adiposity contributes to several of the clinical conditions the study authors controlled for (such as dyslipidemia). In doing so, the statistical models may have removed the part of the relationship between visceral adiposity and cognitive performance that acts through those factors.
The paper, “Association between visceral adiposity index and cognitive dysfunction in US participants derived from NHANES data: A cross-sectional analysis,” was authored by Long He, Cheng Xing, Xueying Yang, Shilin Wang, Boyan Tian, Jianhao Cheng, Yushan Yao, and Bowen Sui.

New research published in Sex Roles finds that when new fathers take longer paternity leave, mothers tend to show fewer gateclosing behaviors and hold more flexible attitudes about parental roles.
Becoming a parent brings major changes, especially for dual-earner couples who have to balance the demands of infant care with work responsibilities. Even though fathers in the United States have become increasingly involved in childcare over the past several decades, mothers still take on the greater share during infancy, in part due to longstanding norms and constraints around parental roles.
Expanding parental leave, especially leave available to fathers, has been considered one way to support more equal caregiving. Research has shown that when fathers take leave, they become more engaged in childcare and may even carry those habits forward for years after their child is born. Reed Donithen and colleagues were interested in whether this increased involvement might also change how mothers encourage or restrict fathers’ participation in caregiving.
One important family dynamic in this context is maternal gatekeeping, which includes behaviors and attitudes that either facilitate (“gateopening”) or restrict (“gateclosing”) fathers’ engagement in parenting. Past work has linked higher maternal gateclosing to less father involvement, lower-quality father-child relationships, and greater strain in the romantic relationship.
Despite increasing interest in paternal leave, no prior studies have examined whether new fathers’ leave length might shift maternal gatekeeping. Because parental identities are developing in the early postpartum period, the authors proposed that fathers’ longer leave could lead both parents to adopt more egalitarian views of childcare, reducing mothers’ gateclosing tendencies.
The study drew on data from a longitudinal project that followed 182 dual-earner, different-sex couples in the Midwestern United States through their transition to parenthood. Couples were originally recruited during the pregnant mother’s third trimester through childbirth classes, advertisements, flyers, and referrals.
After applying the eligibility criteria, which required that both parents worked before and after the birth and provided leave-length information, and after excluding one extreme outlier, the final sample included 130 couples. Participants completed surveys during pregnancy and again at 3, 6, and 9 months postpartum.
Mothers’ and fathers’ leave lengths were measured in days across the postpartum follow-ups, distinguishing paid from unpaid leave. Maternal gatekeeping was assessed at nine months postpartum. Both mothers and fathers completed the Parental Regulation Inventory, which captures gateopening (e.g., asking for the father’s input) and gateclosing (e.g., criticizing or redoing fathers’ childcare efforts). Mothers also completed attitude subscales from the Allen & Hawkins (1999) measure, capturing standards/responsibilities and maternal role confirmation.
Multiple psychological and demographic factors measured during pregnancy, including parental self-efficacy, maternal psychological distress, maternal parenting perfectionism, maternal essentialism, fathers’ essentialism, relationship confidence, and socioeconomic status, were included as controls. Path analyses were then used to test whether fathers’ leave length predicted maternal gatekeeping behaviors and attitudes at nine months postpartum.
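A path analysis of this kind, with observed variables only, can be written in lavaan-style syntax. The sketch below uses the Python package semopy as one possible tool; the variable names and the abbreviated covariate list are hypothetical, and this is not the authors' actual model specification.

```python
import pandas as pd
from semopy import Model

# One row per couple: leave lengths in days, gatekeeping scores at 9 months
# postpartum, and pregnancy-wave covariates. Names are placeholders.
df = pd.read_csv("couples_postpartum.csv")

description = """
gateclosing ~ father_leave_days + mother_leave_days + perfectionism + ses
gateopening ~ father_leave_days + mother_leave_days + perfectionism + ses
"""

model = Model(description)
model.fit(df)             # estimates the regression equations jointly
print(model.inspect())    # coefficient estimates, standard errors, p-values
```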
On average, mothers took about 67 days of leave, while fathers took about 14. Across analyses, longer paternity leave predicted significantly lower maternal gateclosing behaviors, according to both mothers’ and fathers’ reports. Fathers’ longer leave was also linked to more flexible maternal attitudes, including less stringent standards/responsibilities and weaker maternal role confirmation.
These associations remained significant even after adjusting for mothers’ leave time and the wide range of psychological and demographic covariates. In contrast, fathers’ leave length was not associated with maternal gateopening behaviors, meaning that mothers did not necessarily increase their active encouragement of father involvement despite becoming less restrictive.
Maternal leave length, by comparison, did not predict any form of maternal gatekeeping. Several covariates also showed meaningful associations. For example, maternal parenting perfectionism predicted stronger gateclosing and stricter household standards, and maternal confidence in the couple’s future predicted greater gateopening.
However, these factors did not alter the central finding, that paternity leave length uniquely and consistently predicted reductions in maternal gateclosing. Exploratory analyses examining whether the effects of paternity leave depended on maternity leave length found no significant interactions.
The authors note that the study relied on a U.S. sample of largely White, highly educated, dual-earner couples from one geographic region, which may limit generalizability to more diverse families or contexts with different parental leave policies.
These findings highlight that when fathers take longer leave after the birth of a child, mothers appear less likely to restrict fathers’ involvement and hold more flexible views of parental roles, offering insight into how paternity leave may support more egalitarian coparenting.
The study, “When New Fathers Take More Leave, Does Maternal Gatekeeping Decline?” was authored by Reed Donithen, Sarah Schoppe-Sullivan, Miranda Berrigan, and Claire Kamp Dush.

A new study published in the journal Behavioral Sciences highlights generational differences in how adolescents and their parents interact with artificial intelligence. The research suggests that teens with higher emotional intelligence and supportive, authoritative parents tend to use AI less frequently and with greater skepticism. Conversely, adolescents raised in authoritarian environments appear more likely to rely on AI for advice and trust it implicitly regarding data security and accuracy.
Artificial intelligence has rapidly integrated into daily life, reshaping how information is accessed and processed. This technological shift is particularly impactful for adolescents. This demographic is at a developmental stage where they are refining their social identities and learning to navigate complex information ecosystems.
While AI offers educational support, it also presents risks related to privacy and the potential for emotional over-reliance. Previous investigations have examined digital literacy or parenting styles in isolation. However, few have examined how these factors interact with emotional traits to shape trust in AI systems.
The authors of this study sought to bridge this gap by exploring the concept of a “digital secure base.” This theoretical framework proposes that strong, supportive family relationships provide a safety net that helps young people explore the digital world responsibly.
The researchers aimed to understand if emotional skills and specific family dynamics might predict whether a teen uses AI as a helpful tool or as a substitute for human connection. They hypothesized that the quality of the parent-child relationship could influence whether an adolescent develops a critical or dependent attitude toward these emerging technologies.
To investigate these dynamics, the research team recruited 345 participants from southern Italy. The sample consisted of 170 adolescents between the ages of 13 and 17. It also included 175 parents, with an average age of roughly 49. Within this group, the researchers were able to match 47 specific parent-adolescent pairs for a more detailed analysis. The data was collected using online structured questionnaires.
Participants completed several standardized assessments. They answered questions regarding parenting styles, specifically looking for authoritative or authoritarian behaviors. They also rated their own trait emotional intelligence, which measures how people perceive and manage their own emotions. Additional surveys evaluated perceived social support from family and friends.
To measure AI engagement, the researchers developed specific questions about the frequency of use and trust. These items asked about sharing personal data, seeking behavioral advice, and using AI for schoolwork. Trust was measured by how much participants believed AI data was secure and whether AI gave better advice than humans.
The data revealed a clear generational divide regarding usage habits. Adolescents reported using AI more often than their parents for school or work-related tasks. Approximately 32 percent of teens used AI for these purposes frequently, compared to only 17 percent of parents. Adolescents were also more likely to ask AI for advice on how to behave in certain situations.
In terms of trust, the younger generation appeared much more optimistic than the adult respondents. Teens expressed higher confidence in the security of the data they provided to AI systems. They were also more likely to believe that AI could provide better advice than their family members or friends. This suggests that adolescents may perceive these systems as more competent or benevolent than their parents do.
The researchers then analyzed how personality and family environment related to these behaviors. They found that adolescents with higher levels of trait emotional intelligence tended to use AI less frequently. These teens also expressed lower levels of trust in the technology. This negative association suggests that emotionally intelligent youth may be more cautious and critical. They may rely on their own internal resources or human networks rather than turning to algorithms for guidance.
A similar pattern emerged regarding parenting styles. Adolescents who described their parents as authoritative—characterized by warmth, open dialogue, and clear boundaries—were less likely to rely heavily on AI. This parenting style was associated with what the researchers called “balanced” use. These teens engaged with the technology but maintained a level of skepticism.
A different trend appeared for those with authoritarian parents. This parenting style involves rigid control and limited communication. Adolescents in these households were more likely to share personal data with AI systems. They also tended to seek behavioral advice from AI more often. This suggests a potential link between a lack of emotional support at home and a reliance on digital alternatives.
Using the matched parent-child pairs, the study identified two distinct profiles among the adolescents. The researchers labeled the first group “Balanced Users.” This group made up about 62 percent of the matched sample. These teens had higher emotional intelligence and reported strong family support. They used AI cautiously and did not view it as superior to human advice.
The second group was labeled “At-Risk Users.” These adolescents comprised roughly 38 percent of the matched pairs. They reported lower emotional intelligence and described their parents as more authoritarian. This group engaged with AI more intensively. They were more likely to share personal data and trust the advice given by AI over that of their parents or peers. They also reported feeling less support from their families.
These findings imply that emotional intelligence acts as a buffer against uncritical technology adoption. Adolescents who can regulate their own emotions may feel less need to turn to technology for comfort or guidance. They appear to approach AI as a tool rather than a companion. This aligns with the idea that emotionally competent individuals are better at critical evaluation.
The connection between parenting style and AI use highlights the importance of the family environment. Authoritative parenting seems to foster independent thinking and digital caution. When parents provide a secure emotional foundation, teens may not feel the need to seek validation from artificial agents. In contrast, authoritarian environments might leave teens seeking support elsewhere. If they cannot get emotional regulation from their parents, they may turn to AI systems that appear competent and non-judgmental.
The study provides evidence that AI systems cannot replace the emotional containment provided by human relationships. The results suggest that rather than simply restricting access to technology, interventions should focus on strengthening family bonds.
Enhancing emotional intelligence and encouraging open communication between parents and children could serve as protective factors. This approach creates a foundation that allows teens to navigate the digital world without becoming overly dependent on it.
The study has several limitations that affect how the results should be interpreted. The design was cross-sectional, meaning it captured data at a single point in time. This prevents researchers from proving that parenting styles cause specific AI behaviors. It is possible that the relationship works in the other direction or involves other factors. The sample size for the matched parent-child pairs was relatively small. This limits the ability to generalize the specific user profiles to broader populations.
Additionally, the study relied on self-reported data. Participants may have answered in ways they felt were socially acceptable rather than entirely accurate. There is also the potential for common-method bias since the same individuals provided data on both their personality and their technology use. The research focused primarily on psychological and relational factors. It did not account for socioeconomic status or cultural differences that might also influence access to and trust in AI.
Future research should look at these dynamics over time. Longitudinal studies could track how changes in emotional intelligence influence AI trust as teens grow older. Researchers could also include objective measures of AI use, such as usage logs, rather than relying solely on surveys.
Exploring these patterns in different cultural contexts would also be beneficial to see if the findings hold true globally. Further investigation is needed to understand how specific features of AI, such as human-like conversation styles, specifically impact adolescents with lower emotional support.
The study, “Emotional Intelligence and Adolescents’ Use of Artificial Intelligence: A Parent–Adolescent Study,” was authored by Marco Andrea Piombo, Sabina La Grutta, Maria Stella Epifanio, Gaetano Di Napoli, and Cinzia Novara.

Recent research published in the Journal of Personality provides a comprehensive look at the relationship between psychopathy and sexual aggression. By aggregating data from over one hundred separate samples, the researchers determined that while psychopathy is generally associated with sexually aggressive behavior, the connection varies depending on the specific type of aggression and the specific personality traits involved. These findings help clarify which aspects of the psychopathic personality are most dangerous regarding sexual violence.
The rationale for this large-scale analysis stems from the serious societal impact of sexual aggression. This term covers a wide range of non-consensual sexual activities, including the use of physical force, coercion, and verbal manipulation. Previous scientific literature has established a link between psychopathy and antisocial behavior.
However, prior summaries of the data primarily focused on whether sexual offenders would re-offend after being released from prison. There was a gap in understanding the fundamental relationship between psychopathy and sexual aggression across different populations, such as community members or college students, rather than just convicted criminals.
Additionally, the researchers sought to understand psychopathy not as a single block of negative traits but as a nuanced personality structure. They employed the triarchic model of psychopathy to do this. This model breaks psychopathy down into three distinct components: boldness, meanness, and disinhibition.
Boldness involves social dominance, emotional resilience, and venturesomeness. Meanness encompasses a lack of empathy and cruelty. Disinhibition refers to impulsivity and a lack of restraint. The researchers wanted to see how these specific dimensions related to different forms of sexual violence, such as rape, child molestation, and sexual harassment.
To conduct this investigation, the research team performed a meta-analysis. This is a statistical method that combines the results of multiple independent studies to identify broader patterns that a single study might miss. They performed a systematic search of databases for studies published between 1980 and early 2023.
To be included, a study had to involve adult participants and measure both psychopathy and sexual aggression. The final analysis included 117 independent samples from 95 studies, representing a total of 41,009 participants. The samples were diverse, including forensic groups like prisoners, as well as college students and community members.
A major challenge the researchers faced was that not every study used the same questionnaire to measure psychopathy. Some used the well-known Psychopathy Checklist, while others used self-report surveys. To solve this, the team used a statistical technique called relative weights analysis. This allowed them to estimate the levels of boldness, meanness, and disinhibition present in various psychopathy measures.
By calculating these weights, they could infer how the three traits influenced sexual aggression even in studies that did not explicitly use the triarchic model. They then ran statistical models to see how strong the associations were and tested for potential influencing factors, such as the gender of the participants or the type of measurement tool used.
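For readers unfamiliar with the technique, here is a compact sketch of relative weights analysis in the standard formulation of Johnson (2000), which partitions a regression model's R² among correlated predictors. Whether the authors' procedure matches this exact implementation is an assumption.

```python
import numpy as np

def relative_weights(X, y):
    """Apportion a regression model's R^2 across correlated predictors,
    following Johnson's (2000) relative weights procedure.

    X: (n, p) array of predictors; y: (n,) criterion.
    Returns raw weights (one per predictor) that sum to the model R^2.
    Generic sketch of the technique, not the authors' implementation.
    """
    X = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    y = (y - y.mean()) / y.std(ddof=1)
    n = len(y)

    Rxx = np.corrcoef(X, rowvar=False)       # predictor intercorrelations
    rxy = X.T @ y / (n - 1)                  # predictor-criterion correlations

    # Rxx^(1/2): loadings of the predictors on their orthogonal counterparts
    evals, evecs = np.linalg.eigh(Rxx)
    lam = evecs @ np.diag(np.sqrt(np.clip(evals, 0, None))) @ evecs.T

    beta = np.linalg.solve(lam, rxy)         # regression of y on the orthogonal set
    return (lam ** 2) @ (beta ** 2)          # raw weights; their sum equals R^2

# Example with simulated data: three correlated-ish predictors of an outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = 0.5 * X[:, 1] + 0.4 * X[:, 2] + rng.normal(size=500)
print(relative_weights(X, y))
```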
The results of the meta-analysis showed a moderate, positive relationship between general psychopathy and general sexual aggression. This means that as psychopathic traits increase, the likelihood of committing sexually aggressive acts tends to increase as well. This pattern held true for several specific types of offending. The study found positive associations between psychopathy and sexual homicide, sexual sadism, voyeurism, exhibitionism, and online sexual harassment. The connection was particularly strong for sexual cyberbullying and harassment.
However, the findings revealed important exceptions. When the researchers looked specifically at rape and child molestation, the results were different. The analysis did not find a significant statistical link between global psychopathy scores and rape or child molestation in the aggregate data. This suggests that while psychopathy is a risk factor for many types of antisocial sexual behavior, it may not be the primary driver for these specific offenses in every case, or the relationship is more complex than a simple direct correlation.
When the researchers broke down psychopathy into its triarchic components, the picture became clearer. They found that meanness and disinhibition were positively related to sexual aggression. Individuals who scored high on traits involving cruelty, lack of empathy, and poor impulse control were more likely to engage in sexually aggressive behavior. This aligns with theories that sexual aggression often involves a failure to inhibit sexual impulses and a disregard for the suffering of others.
In contrast, the trait of boldness showed a different pattern. The researchers found that boldness was negatively related to sexual aggression. This implies that the socially dominant and emotionally resilient aspects of psychopathy might actually reduce the risk of committing sexual aggression, or at least are not the traits driving it. Boldness is often associated with adaptive social functioning, which might explain why it does not track with these maladaptive behaviors in the same way meanness and disinhibition do.
The study also identified several factors that influenced the strength of these relationships. The type of sample mattered. The link between psychopathy and sexual aggression was stronger in samples of sexual offenders compared to samples of students or the general community. This difference suggests that in forensic populations, where psychopathy scores might be higher or more severe, the trait plays a larger role in aggressive behavior.
Measurement methods also played a role. The relationship appeared stronger when sexual aggression was measured using risk assessment tools rather than self-report surveys. Risk assessment tools often include items related to criminal history and antisocial behavior, which naturally overlap with psychopathy. This could artificially inflate the apparent connection. Conversely, studies that relied on medical records or clinician ratings tended to show weaker associations than those using self-reports.
The findings regarding child molestation were particularly distinct. When child molestation was removed from the general category of sexual aggression, the overall link with psychopathy became stronger. This indicates that child molestation may be etiologically distinct from other forms of sexual violence. The researchers noted that child molesters often score lower on psychopathy measures compared to other types of sexual offenders. This group might be driven by different psychological mechanisms than the callousness and impulsivity that characterize psychopathy.
There are some limitations. The studies included in the meta-analysis varied widely in their methods, populations, and definitions. This high level of heterogeneity means that the average results might not apply perfectly to every specific situation or individual.
Additionally, the relative weights analysis relies on estimating trait levels rather than measuring them directly, which introduces a layer of abstraction. Some specific forms of aggression, like sexual homicide, had very few studies available, which makes those specific findings less robust than the general ones.
Future research could benefit from more direct measurements of the triarchic traits in relation to sexual violence. The researchers suggest that simply looking at a total psychopathy score obscures important details. Understanding that meanness and disinhibition are the primary danger signals, while boldness is not, allows for more precise risk assessment.
In terms of practical implications, these results suggest that prevention and treatment programs should focus heavily on the specific deficits associated with meanness and disinhibition. Interventions that target empathy deficits and impulse control may be more effective than broad approaches. Furthermore, the lack of a strong link with child molestation indicates that this population requires a different conceptual framework and treatment approach than other sexual offenders.
The study, “Psychopathy and sexual aggression: A meta-analysis,” was authored by Inti Brazil, Larisa McLoughlin, and colleagues.

New research suggests that a potential partner’s willingness to protect you from physical danger is a primary driver of attraction, often outweighing their actual physical strength. The findings indicate that these preferences likely stem from evolutionary adaptations to dangerous ancestral environments, persisting even in modern, relatively safe societies. This study was published in the journal Evolution and Human Behavior.
Throughout human evolutionary history, physical violence from other humans posed a significant and recurrent threat to survival. In these ancestral settings, individuals did not have access to modern institutions like police forces or judicial systems. Instead, they relied heavily on social alliances, including romantic partners and friends, for defense against aggression. Consequently, evolutionary psychology posits that humans may have evolved specific preferences for partners who demonstrate both the capacity and the motivation to provide physical protection.
Previous scientific inquiries into partner choice have frequently focused on physical strength or formidability. These studies often operated under the assumption that strength serves as a direct cue for protective capability. But physical strength and the willingness to use it are distinct traits. A physically powerful individual might not be inclined to intervene in a dangerous situation, whereas a less formidable individual might be ready to defend an ally regardless of the personal risk.
Past investigations rarely separated these two factors, making it difficult to determine whether people value the ability to fight or the commitment to do so. The authors of the current study aimed to disentangle the capacity for violence from the motivation to employ it in defense of a partner. They sought to understand if the mere willingness to face a threat is sufficient to increase a person’s desirability as a friend or mate.
“Nowadays, many of us live in societies where violence is exceedingly rare, and protection from violence is considered the responsibility of police and courts. As such, you wouldn’t really predict that people should care if their romantic partner or friends are or are not willing to step up to protect them during an altercation,” said study author Michael Barlev, a research assistant professor at Arizona State University.
“However, for almost the entire history of our species, for hundreds of thousands of years, we lived in a social world scarred by violence, multiple orders of magnitude higher than it is today, and where protection was the responsibility of romantic partners, family, friends, and coalitional allies. Our psychology, including what we look for in romantic partners and friends, evolved to survive in such a world.”
To investigate this, the research team conducted a series of seven experiments involving a total of 4,508 adults from the United States. Participants were recruited through Amazon Mechanical Turk. The study utilized a vignette-based methodology where participants read detailed scenarios asking them to imagine they were with a partner, either a date or a friend.
In the primary scenario used across the experiments, the participant and their partner are described leaving a restaurant. They are then approached by an intoxicated aggressor who attempts to strike the participant. The researchers systematically manipulated the partner’s reaction to this immediate threat.
In the “willing” condition, the partner notices the danger and physically intervenes to shield the participant. In the “unwilling” condition, the partner sees the threat but steps away, leaving the participant exposed. A control condition was also included where the partner simply does not see the threat in time to react. In addition to these behavioral variations, the researchers modified the descriptions of the partner’s physical strength, labeling them as weaker than average, average, or stronger than average.
The data revealed that discovering a person is willing to protect significantly increased their attractiveness rating as a romantic partner or friend. This effect appeared consistent regardless of the partner’s described physical strength. The findings suggest that the intent to defend an ally is a highly valued trait in itself. In contrast, partners who stepped away from the threat saw a sharp decline in their desirability ratings compared to the control condition.
“We present evidence that our partner choice preferences—what we look for in romantic partners and friends—are adapted to ancestral environments,” Barlev told PsyPost. “I think that is a very important—and generally unappreciated—fact about partner choice preferences, and psychology more generally.”
The researchers also uncovered distinct patterns based on gender, particularly regarding the penalty for unwillingness. When women evaluated male dates, a refusal to protect acted as a severe penalty to attractiveness. The ratings for unwilling men dropped precipitously, suggesting that for women seeking male partners, a lack of protective instinct is effectively a dealbreaker.
Men also valued willingness in female partners, but they were more lenient toward unwillingness. When men evaluated female dates who stepped away from the threat, the decline in attractiveness was less severe than what women reported for unwilling men. This asymmetry aligns with evolutionary theories regarding sexual dimorphism and the historical division of risk in physical conflicts.
“We found that willingness was hugely important, for raters of both sexes, and when rating both male and female friends and dates,” Barlev said. “In particular, when women rated male dates, willingness to protect was very attractive, whereas failure to do so—stepping away—was a deal-breaker (the attractiveness of unwilling-to-protect men plummeted compared to when no information about willingness or unwillingness to protect was given).”
The researchers also explored the role of physical strength. While women did express a preference for stronger men, a mediation analysis clarified the underlying psychological mechanism. The analysis suggested that women tended to infer that stronger men would be more willing to protect them.
Once this inference of willingness was statistically controlled, physical strength itself had a much smaller independent effect on attraction. This indicates that strength is attractive largely because it signals a higher probability of protective behavior.
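The mediation logic here can be illustrated with a simple product-of-coefficients sketch: one regression for the path from strength to inferred willingness, and one for attraction regressed on both. File and column names are hypothetical; the authors' actual mediation models (and any bootstrapped confidence intervals) are not reproduced here.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per rating: the date's described strength, the rater's inference of
# how willing that date would be to protect, and the attraction rating.
# Column names are illustrative, not the study's actual variables.
df = pd.read_csv("protection_ratings.csv")

# Path a: strength -> inferred willingness
a = smf.ols("inferred_willingness ~ strength", data=df).fit().params["strength"]

# Path b and direct effect c': attraction ~ willingness + strength
m = smf.ols("attraction ~ inferred_willingness + strength", data=df).fit()
b = m.params["inferred_willingness"]
c_prime = m.params["strength"]

print(f"indirect effect (a*b): {a*b:.3f}, direct effect (c'): {c_prime:.3f}")
# A small direct effect alongside a sizable indirect effect is the pattern
# described in the article: strength attracts mainly because it signals
# willingness to protect.
```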
Subsequent experiments tested the limits of this preference by manipulating the outcome of the confrontation. The researchers introduced scenarios where the partner attempts to intervene but is overpowered and pushed to the ground. Surprisingly, the data showed that a partner who tries to help but fails is still viewed as highly attractive. The attempt itself appeared to be the primary driver of the positive rating, rather than the successful neutralization of the threat.
A final experiment examined the most extreme scenario where the partner fails to stop the attack and the participant is physically harmed. In this condition, the aggressor strikes the participant after the partner’s failed intervention.
Even in cases where the participant suffered physical harm because the partner failed, the partner remained significantly more attractive than one who was unwilling to act. This suggests that the signal of commitment inherent in the act of defense carries more weight in partner evaluation than the immediate physical outcome.
The study also compared preferences for friends versus romantic partners. While willingness to protect was valued in both categories, the standards for friends were generally more relaxed. The penalty for being unwilling to protect was nearly three times more severe for romantic partners than for friends. This difference implies that while protection is a valued attribute in all close alliances, it is considered a more critical requirement for long-term mates.
“Strength—or more generally, ability to protect—mattered only little, much less than we thought it would,” Barlev explained. “In our earlier experiments, women showed a weak preference for strength in male dates, but most of this had to do with the underlying inference that stronger men would be more willing—rather than more able—to protect them. In fact, in our later experiments, women found dates attractive even if they tried to protect but failed, and such dates were not less attractive than dates who tried to protect and succeeded.”
“That’s surprising, because whether you protect someone is a function of both your willingness and ability to do so. But here’s one way to think about this: If the aggressor is a rational decision-maker, his decision of whether to fight or retreat depends not only on his strength relative to yours but also on how much each side is willing to risk. So, he should not attack you even if you are weaker if you show that you are willing to risk a lot. Meaning, potentially even more important than how strong you are is your readiness to step up and fight when it’s needed.”
As with all research, there are some limitations to keep in mind. The study relied on hypothetical vignettes rather than real-world behavioral observations. While imagined scenarios allow for precise control over variables, they may not perfectly capture how individuals react during actual violent encounters. Participants might overestimate or underestimate their emotional reactions to such visceral events when reading about them on a screen.
Additionally, the sample consisted entirely of participants from the United States. This geographic focus means the results reflect preferences in a modern Western society where rates of interpersonal violence are historically low compared to ancestral environments. It remains to be seen whether these preferences would differ in cultures with higher rates of daily violence. Preferences for physical strength might be more pronounced in environments where physical safety is less assured by external institutions.
“One big next step is to ask how preferences for physical strength and willingness to protect vary across societies,” Barlev told PsyPost. “Both preferences are likely tuned to some extent to the social and physical environment in which people live, such as how dangerous it is. Strength in particular can be an asset or a liability—strong individuals, especially men, would be better able to protect themselves and others from violence, but such men might also be more violent toward their romantic partners and friends.”
“Because most of our American participants live in relatively safe environments, their weaker preference for strength may partially reflect this down-regulation. If that’s right, we’d predict that people in more dangerous environments will value both strength and willingness to protect somewhat more.”
The study, “Willingness to protect from violence, independent of strength, guides partner choice,” was authored by Michael Barlev, Sakura Arai, John Tooby, and Leda Cosmides.

Chronic stress and social isolation are frequently cited as precursors to physical illness, yet the biological machinery driving this connection has remained partially obscured. A new scientific review proposes that mitochondria, the energy-generating structures within cells, serve as the primary translator between psychological experience and physical health. By altering their function in response to stress, these cellular components may drive conditions ranging from depression to cardiovascular disease. The paper detailing these connections was published in Current Directions in Psychological Science.
For decades, researchers have utilized the biopsychosocial model to understand how social and psychological factors influence the body. This framework links biological processes with social environments, yet it has historically lacked specific details on how feelings physically alter cells. Critics of the model note that it offers limited mechanistic specificity regarding how an experience like loneliness translates into molecular change. Without identifying the precise biological pathways, it is difficult to predict or treat stress-related diseases effectively.
To address this gap, a team of researchers synthesized evidence linking cellular biology with psychology. Christopher P. Fagundes, a professor in the Department of Psychological Sciences at Rice University, led the review. He collaborated with E. Lydia Wu-Chung from the University of Pittsburgh and Cobi J. Heijnen from Rice University. They sought to identify a cellular system sensitive enough to respond to mood but powerful enough to regulate whole-body health.
The researchers conducted their review by examining existing literature from the fields of psychoneuroimmunology and mitochondrial biology. They analyzed data from preclinical animal models and human studies to construct a clearer picture of cellular adaptation. Their analysis focused on how mitochondria function as a hub for stress physiology, immune regulation, and energy balance.
Mitochondria are often called the powerhouses of the cell because they generate adenosine triphosphate, or ATP. This molecule fuels nearly all biological activity, including brain function and muscle movement. The review highlights that these structures do much more than produce fuel.
They serve as sophisticated sensors that detect hormonal signals and environmental shifts. Mitochondria possess the ability to adjust their activity based on the body’s immediate needs. This adaptability is known as metabolic flexibility.
During moments of acute stress, the body releases hormones like cortisol and catecholamines. These hormones prompt mitochondria to increase energy production to handle the immediate challenge. This rapid adjustment supports resilience by providing the resources needed for a “fight or flight” response.
However, the authors note that chronic stress creates a vastly different outcome. Prolonged exposure to stress hormones causes mitochondrial efficiency to plummet. Instead of adapting, the machinery begins to malfunction.
When these structures become overworked, they produce excess reactive oxygen species. These are volatile by-products that function like cellular exhaust fumes. While small amounts are necessary for signaling, an accumulation leads to oxidative stress.
This damage disrupts the balance of energy and leads to cellular dysfunction. The researchers point to this breakdown as a potential root cause of fatigue and cognitive decline. The brain is particularly susceptible to these energy deficits because of its immense fuel requirements.
Even slight mitochondrial impairments can limit the energy available for neurotransmission. This can undermine the neural processes that support mood regulation and memory. Consequently, mitochondrial dysfunction is increasingly linked to psychiatric conditions such as anxiety and depression.
The review also details how mitochondria communicate with the immune system. When mitochondria sustain damage, they can release fragments of their own DNA into the bloodstream. They may also release other internal molecules that are usually contained within the cell.
The immune system perceives these fragments as danger signals. This triggers an inflammatory response similar to how the body reacts to a virus. Chronic inflammation is a well-established risk factor for heart disease, diabetes, and neurodegenerative disorders.
This pathway suggests that psychological stress creates physical inflammation through mitochondrial damage. Fagundes and his colleagues cite studies involving human subjects to illustrate this connection. One highlighted area of research involves caregivers for family members with dementia.
Caregiving is often used as a model for chronic psychological stress. Research indicates that caregivers often display lower mitochondrial health indices compared to non-caregivers. Those with lower mitochondrial efficiency reported worse physical functioning.
Conversely, caregivers with higher mitochondrial capacity appeared more resilient. They were better buffered against the negative emotional effects of their heavy burden. This suggests that cellular health may dictate how well a person withstands psychological pressure.
Social isolation also appears to leave a biological mark on these cellular structures. The review mentions that individuals reporting high levels of loneliness possess lower levels of specific mitochondrial proteins in the brain. This creates a feedback loop where social disconnection degrades physical health.
Fagundes notes the importance of this cellular perspective in understanding disease. He states, “The actual cellular machinery that links these experiences to disease really starts at the level of the mitochondria.” This insight moves the field beyond vague associations to concrete mechanisms.
The authors argue that this helps explain the overlap between mental health disorders and physical ailments. Conditions like anxiety and diabetes may share this common cellular origin. It provides a unified theory for why emotional distress so often accompanies physical illness.
The team also reviewed interventions that might restore mitochondrial health. Exercise provided the most consistent results in the analyzed literature. Endurance training boosts the number of mitochondria and improves their efficiency.
Physical activity stimulates a process called mitochondrial biogenesis. This creates new power plants within the cell to replace old or damaged ones. The authors suggest this is a primary reason why exercise supports both physical and psychological resilience.
Mindfulness and psychotherapy showed potential but lacked robust evidence in the current literature. Some studies indicated biological changes following these interventions. For example, a mindfulness program was associated with altered oxidative metabolism markers.
However, these biological shifts did not always align with reported symptom improvement. In some cases, the studies lacked necessary control groups to confirm causality. The researchers characterize these findings as promising proof of concept rather than definitive proof.
Social support is another theorized intervention. It is believed to protect mitochondrial health by reducing cortisol and dampening inflammatory activity. However, the authors note that very few studies have measured mitochondrial outcomes directly in relation to social support.
The authors acknowledge that much of the current evidence relies on correlations. It remains unclear whether mitochondrial dysfunction causes psychological distress or whether distress drives the dysfunction. There is likely a bidirectional relationship in which each reinforces the other over time.
Most human studies reviewed were cross-sectional, meaning they looked at a single point in time. This limits the ability to determine the direction of the effect. The researchers emphasize the need for longitudinal designs to clarify these pathways.
Future work must integrate mitochondrial measures with broader systems. These include the immune system, the autonomic nervous system, and the brain. Studying these systems in isolation often misses the complexity of the human stress response.
The authors also call for standardized ways to measure mitochondrial health in psychological studies. Current methods vary widely in cost and accessibility. Developing consistent biomarkers will allow for larger studies that reflect diverse populations.
Fagundes emphasizes the potential of this approach for future medicine. He says, “If we focus more at the cellular level, we’ll have a much deeper understanding of underlying processes.” This could lead to new treatments that target the cell to heal the mind.
By establishing mitochondria as a key player, this review refines the biopsychosocial model. It offers a testable biological mechanism for decades of psychological theory. Ultimately, it suggests that resilience is not just a state of mind but a state of cellular energy.
The paper, “Psychological Science at the Cellular Level: Mitochondria’s Role in Health and Behavior,” was authored by Christopher P. Fagundes, E. Lydia Wu-Chung, and Cobi J. Heijnen.

A new comprehensive analysis suggests that maternal use of antibiotics during pregnancy is associated with a slightly elevated likelihood of the child receiving a diagnosis of attention-deficit/hyperactivity disorder (ADHD). The research indicates that this statistical link is stronger when antibiotics are administered during the second or third trimesters. These findings were published recently in the Journal of Affective Disorders.
ADHD is a neurodevelopmental condition that has become increasingly common in recent years. It is characterized by symptoms such as difficulty sustaining attention, impulsive actions, and hyperactivity. While genetics play a major role in the development of the disorder, scientists believe that environmental factors also contribute. Researchers have increasingly focused on exposures that occur before birth.
Antibiotics are among the most frequently prescribed medications for pregnant women. They are essential for treating bacterial infections that could otherwise harm the mother or the fetus. However, these drugs do not only target harmful bacteria. They also affect the vast community of helpful microbes living in the human gut, known as the microbiota.
There is a growing body of evidence suggesting a connection between the gut and the brain. This concept is often referred to as the gut-brain axis. The theory posits that the composition of gut bacteria can influence brain development and function. This influence may occur through various biological pathways, such as the production of neurotransmitters or the regulation of inflammation.
Mothers pass aspects of their microbiota to their children. Additionally, the environment within the womb influences the initial development of the fetus’s own biological systems. Consequently, some scientists hypothesize that disrupting the maternal microbiome with antibiotics could have downstream effects on the child’s neurodevelopment. Previous studies on this topic have produced conflicting results, with some finding a risk and others finding none.
To address these inconsistencies, a research team led by Jiali Fan from West China Second University Hospital at Sichuan University initiated a new investigation. They sought to clarify the potential relationship by combining data from many different sources. This approach allows for a more robust statistical analysis than any single study could provide on its own.
The researchers conducted a meta-analysis. This is a scientific method that pools statistical data from multiple independent studies to identify broader trends. The team searched major medical databases for observational cohort studies published up to October 2024. They followed strict guidelines to select high-quality research.
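To make the pooling step concrete, here is a minimal sketch of fixed-effect inverse-variance pooling of log hazard ratios in Python. The numbers are hypothetical, and the published analysis may well have used a different model (for example, a random-effects model); this only illustrates how per-study estimates are weighted by their precision and combined.

```python
import numpy as np

# Hypothetical per-study estimates (not the actual studies in the meta-analysis):
# each tuple is (log hazard ratio, standard error of that log hazard ratio).
studies = [
    (np.log(1.10), 0.05),
    (np.log(1.22), 0.09),
    (np.log(1.05), 0.07),
    (np.log(1.18), 0.06),
]

log_hr = np.array([est for est, _ in studies])
se = np.array([s for _, s in studies])

# Inverse-variance weights: more precise studies contribute more to the pool.
weights = 1.0 / se**2
pooled_log_hr = np.sum(weights * log_hr) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))

# Back-transform to a hazard ratio with an approximate 95% confidence interval.
pooled_hr = np.exp(pooled_log_hr)
ci_low, ci_high = np.exp(pooled_log_hr + np.array([-1.96, 1.96]) * pooled_se)
print(f"Pooled HR = {pooled_hr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```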
The final analysis included nine major studies. These studies represented a massive combined pool of participants, totaling more than 6.1 million mother-child pairs. The data encompassed populations from several different regions. These included countries in North America, Europe, and Asia.
The researchers used a scoring system called the Newcastle-Ottawa Scale to evaluate the quality of the included research. This scale assesses how well a study selected its participants and how accurately it measured outcomes. The team found that the included studies were generally of moderate to high methodological quality.
The primary finding of the analysis identified a positive association. The overall data showed that children exposed to antibiotics in the womb had a hazard ratio of 1.15 compared to those who were not exposed. This figure represents a 15 percent increase in the relative risk of developing ADHD. Another statistical measure used in the study, the odds ratio, placed this increased likelihood at 28 percent.
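The arithmetic behind those percentages is simply the distance of each ratio from 1.0. In the sketch below, the odds ratio of about 1.28 is implied by the 28 percent figure rather than quoted directly in the article.

```python
def percent_increase(ratio: float) -> float:
    """Percent increase in relative likelihood implied by a ratio measure
    (hazard ratio or odds ratio) compared to a reference value of 1.0."""
    return (ratio - 1.0) * 100

print(percent_increase(1.15))  # hazard ratio 1.15 -> 15.0 (% higher relative risk)
print(percent_increase(1.28))  # odds ratio of about 1.28 -> 28.0 (% higher odds)
```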
The researchers then broke down the data to see if the timing of the exposure mattered. Pregnancy is divided into three distinct periods known as trimesters. The analysis found no statistical connection between antibiotic use in the first trimester and a later ADHD diagnosis. This lack of association in early pregnancy was consistent across the data.
However, a different pattern emerged for the later stages of pregnancy. The study identified a link when antibiotics were used during the mid-pregnancy period. A similar association was observed for antibiotic use during late pregnancy. This suggests that the timing of exposure may be a relevant factor in this potential relationship.
In addition to timing, the team investigated the frequency of antibiotic use. They wanted to know if taking more courses of medication changed the risk profile. The data showed that a single course of antibiotics was not statistically linked to an increased risk of ADHD. The association only became apparent with repeated use.
When mothers received two separate courses of antibiotics, the risk of their children developing ADHD rose. The risk appeared to increase further for those who received three or more courses. This finding hints at a potential cumulative effect. It suggests that more frequent disruptions to the maternal microbiome might correspond to a higher probability of the neurodevelopmental outcome.
The researchers performed sensitivity analyses to test the strength of their conclusions. This process involves removing one study at a time from the calculations to ensure no single dataset is skewing the results. The findings remained stable throughout this process. This consistency suggests that the observed link is robust across the different study populations included.
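The leave-one-out procedure the authors describe can be sketched in a few lines. This reuses the hypothetical pooling approach from the earlier sketch and is not the authors' actual code.

```python
import numpy as np

def pooled_hr(log_hrs, ses):
    """Fixed-effect inverse-variance pooled hazard ratio (as in the earlier sketch)."""
    w = 1.0 / np.asarray(ses) ** 2
    return float(np.exp(np.sum(w * np.asarray(log_hrs)) / np.sum(w)))

def leave_one_out(log_hrs, ses):
    """Re-pool the estimate with each study omitted in turn."""
    results = []
    for i in range(len(log_hrs)):
        kept = [j for j in range(len(log_hrs)) if j != i]
        results.append(pooled_hr([log_hrs[j] for j in kept], [ses[j] for j in kept]))
    return results

# Hypothetical inputs, matching the pooling sketch above.
log_hrs = [np.log(1.10), np.log(1.22), np.log(1.05), np.log(1.18)]
ses = [0.05, 0.09, 0.07, 0.06]
print(leave_one_out(log_hrs, ses))  # stable values suggest no single study drives the result
```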
Despite these findings, the authors emphasize that the results must be interpreted with caution. The study design is observational. This means it can detect a correlation between two events, but it cannot prove that one caused the other. There are other factors that could explain the association.
The most prominent alternative explanation is the underlying infection itself. Women are prescribed antibiotics because they are sick. Infections trigger immune responses and inflammation in the body. It is possible that the maternal fever or inflammation affects fetal brain development, rather than the medication.
Some of the studies included in the analysis attempted to adjust for this factor. For instance, one study accounted for maternal infection and still found a link to the medication. However, not all studies could fully separate the effects of the illness from the effects of the cure. This remains a primary challenge in this field of research.
Another limitation of the analysis is the lack of detail regarding specific drugs. Antibiotics are a diverse class of medications. Different types of antibiotics target bacteria in different ways and have varying effects on the microbiome. The current data did not allow the researchers to determine if specific classes of drugs carried higher risks than others.
The study also lacked precise information on dosages. Without knowing the exact amount of medication taken, it is difficult to determine a precise biological threshold for risk. The researchers relied on prescription records and medical files. These records confirm a prescription was filled but do not always guarantee it was taken as directed.
The biological mechanisms remain theoretical. While animal studies have shown that antibiotics can alter behavior in mice by changing gut bacteria, this has not been definitively proven in humans. The pathway from maternal gut bacteria to fetal brain development is a subject of ongoing scientific inquiry.
The authors recommend that future research should be prospective in nature. This means designing studies that recruit pregnant women and follow them forward in time. Such studies should meticulously record the specific type, dosage, and duration of antibiotic use. This would allow for a much finer-grained analysis of the risks.
The researchers also suggest using advanced study designs to rule out genetic factors. Sibling comparisons can be a powerful tool. By comparing one sibling who was exposed to antibiotics to another who was not, scientists can control for shared genetics and household environments. This would help isolate the effect of the medication.
In clinical practice, antibiotics remain vital tools. The risks of leaving a bacterial infection untreated during pregnancy are well-documented and can be severe. The authors state that their findings should not discourage necessary treatment. Instead, they suggest the results highlight the need for prudent prescribing.
Physicians should continue to weigh the benefits and risks. The study supports the idea that antibiotics should be used only when clearly indicated. Avoiding unnecessary or repeated courses of these drugs may be beneficial. This aligns with general medical guidance regarding antibiotic stewardship.
The study, “Meta-analysis of the association between prenatal antibiotic exposure and risk of childhood attention-deficit/hyperactivity disorder,” was authored by Jiali Fan, Shanshan Wu, Chengshuang Huang, Dongqiong Xiao, and Fajuan Tang.

A new series of studies published in Computers in Human Behavior has found that keeping tabs on a former romantic partner through social media hinders emotional recovery. The findings indicate that both intentional surveillance and accidental exposure to an ex-partner’s content are associated with increased distress, jealousy, and negative mood.
Tara C. Marshall, an associate professor at McMaster University, conducted this research to understand the psychological aftermath of maintaining digital connections with former partners. While social media platforms allow users to maintain contact with friends and family, they also create an archive of information about past relationships. Marshall sought to clarify whether observing an ex-partner actively or passively leads to worse recovery outcomes over time.
Previous research on this topic often relied on data collected at a single point in time, which makes it difficult to determine if social media use causes emotional distress or if distressed individuals simply use social media more often. By examining the timing of these behaviors, the study aimed to determine whether observing an ex-partner precedes a decline in well-being. The research also explored whether personality traits like attachment anxiety, characterized by a fear of rejection and a desire for extreme closeness, worsen these effects.
Marshall conducted four separate studies to address these questions using different methodologies. The first study employed a longitudinal design to assess changes over time. Marshall recruited 194 adults through Amazon Mechanical Turk who had experienced a romantic breakup within the previous three months.
To be included, participants had to be registered Facebook users who had viewed their ex-partner’s profile at least once. Participants completed an initial survey measuring their attachment style, active Facebook surveillance, and current levels of distress. Six months later, they completed the same measures.
The results from the first study showed that frequent monitoring of an ex-partner’s Facebook page was associated with higher levels of distress and jealousy at both the beginning of the study and six months later. While feelings of distress generally declined over time for most participants, active observation moderated the change in negative affect.
Specifically, individuals who engaged in high levels of surveillance saw their negative mood increase over the six-month period. The data also revealed that the link between active observation and breakup distress was stronger for people with high attachment anxiety. This suggests that for individuals who already fear abandonment, seeing reminders of an ex-partner online is particularly painful.
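As a rough illustration of how such a moderation effect is typically tested, here is a sketch of a regression with a surveillance-by-anxiety interaction term on simulated data. The variable names, scales, and model specification are assumptions for illustration, not Marshall's actual analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data standing in for the survey measures; names and coding are invented.
rng = np.random.default_rng(0)
n = 194
surveillance = rng.uniform(1, 7, n)     # frequency of checking the ex-partner's profile
anxiety = rng.uniform(1, 7, n)          # trait attachment anxiety
distress_t1 = rng.normal(4, 1, n)       # baseline breakup distress
distress_t2 = (0.4 * distress_t1 + 0.2 * surveillance
               + 0.1 * surveillance * anxiety + rng.normal(0, 1, n))

df = pd.DataFrame({"surveillance": surveillance, "anxiety": anxiety,
                   "distress_t1": distress_t1, "distress_t2": distress_t2})

# Moderation is tested with an interaction term: the effect of surveillance on
# later distress is allowed to depend on attachment anxiety, controlling for
# baseline distress.
model = smf.ols("distress_t2 ~ surveillance * anxiety + distress_t1", data=df).fit()
print(model.params)
```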
To better understand the immediate emotional impact of social media exposure, Marshall conducted a second study using an experimental design. This study involved 407 adults recruited from the United States who had experienced a breakup within the last year.
Participants were randomly assigned to one of three conditions. One group was instructed to imagine looking at their ex-partner’s Facebook profile, including photos and relationship status. A second group imagined looking at an acquaintance’s Facebook profile. The third group imagined their ex-partner in a school or workplace setting, without any social media context.
The experiment revealed that participants who visualized their ex-partner’s Facebook page reported significantly higher levels of jealousy compared to those who imagined an acquaintance or the ex-partner in a real-world setting.
This increased jealousy was statistically linked to higher levels of negative affect and breakup distress. The findings indicate that there is something uniquely triggering about social media observation. It is not simply thinking about the ex-partner that causes jealousy, but rather the specific context of social media, which often displays personal information and interactions with potential new romantic rivals.
The third study utilized a daily diary method to capture real-time fluctuations in mood and behavior. Marshall recruited 77 undergraduate students in the United Kingdom who had gone through a breakup in the last two years. For seven consecutive days, participants completed a survey every night before bed. They reported whether they had engaged in active observation, defined as deliberately searching for their ex-partner’s profile, or passive observation, defined as the ex-partner’s posts appearing in their feed without a search. They also rated their daily negative emotions and specific distress regarding the breakup.
This daily tracking provided evidence for the timing of these emotional shifts. On days when participants passively observed their ex-partner on platforms like Facebook, Instagram, or Snapchat, they reported higher negative affect for that same day. This suggests that even unintentional exposure can dampen one’s mood. When participants engaged in active observation, the consequences appeared more severe.
Active searching was associated with higher breakup distress on the same day and predicted higher distress on the following day. This finding supports the idea that surveillance does not just reflect current pain but contributes to future pain.
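The "same-day versus next-day" distinction comes from lagging the diary predictor within each person. The sketch below shows that step on hypothetical diary rows; the column names and values are invented, and the published analysis would have used multilevel models rather than simple group means.

```python
import pandas as pd

# Hypothetical diary records: one row per participant per day (values invented).
diary = pd.DataFrame({
    "participant": [1, 1, 1, 2, 2, 2],
    "day":         [1, 2, 3, 1, 2, 3],
    "active_obs":  [0, 1, 0, 1, 0, 0],   # deliberately searched for the ex that day
    "distress":    [3.0, 4.5, 4.0, 5.0, 4.0, 3.5],
})

# Same-day association: does distress differ on days with active observation?
same_day = diary.groupby("active_obs")["distress"].mean()

# Next-day association: shift the predictor forward within each participant so
# that yesterday's observation lines up with today's distress.
diary = diary.sort_values(["participant", "day"])
diary["active_obs_prev_day"] = diary.groupby("participant")["active_obs"].shift(1)
next_day = diary.dropna().groupby("active_obs_prev_day")["distress"].mean()

print(same_day)
print(next_day)
```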
To replicate and expand upon these findings, Marshall conducted a fourth study with a sample of 84 undergraduate students from a Canadian university. The procedure mirrored the third study but extended the diary period to ten days and included newer platforms such as TikTok and VSCO. This study also included daily measures of jealousy to see how they fluctuated with social media use.
The results of the fourth study aligned with the previous findings. On days when participants engaged in active observation, they reported greater negative affect, breakup distress, and jealousy. Similar to the third study, active observation predicted greater breakup distress on the next day.
The study also found that attachment anxiety played a significant role in daily reactions. For participants with high attachment anxiety, both active and passive observation were significantly associated with feelings of jealousy. This reinforces the conclusion that anxious individuals are more vulnerable to the negative effects of digital exposure to an ex-partner.
The collective findings across all four studies present a consistent pattern. Observing an ex-partner on social media tends to be associated with poorer recovery from a breakup. This relationship holds true across different countries and platforms.
The research highlights that passive observation is not harmless. Simply remaining friends with an ex-partner or following them allows their content to infiltrate one’s feed, which is linked to daily spikes in negative emotion. Active surveillance appears to be more detrimental, as it predicts lingering distress that carries over into subsequent days.
There are limitations to this research that should be noted. The samples were drawn primarily from Western nations, and the latter two studies relied exclusively on university students. This demographic profile may not represent the experiences of older adults or individuals from different cultural backgrounds. Additionally, the measure for passive observation in the diary studies relied on self-reporting, which is subject to memory errors. Participants may not always recall every instance of passive exposure throughout the day.
Future research could address these gaps by recruiting more diverse samples. It would also be beneficial to investigate whether these patterns hold true for other types of relationship dissolution, such as the end of a close friendship or family estrangement. Another potential avenue for investigation would be an intervention study. Researchers could randomly assign participants to block or unfollow an ex-partner and measure whether this disconnection leads to faster emotional recovery compared to those who maintain digital ties.
The study, “Social Media Observation of Ex-Partners is Associated with Greater Breakup Distress, Negative Affect, and Jealousy,” was authored by Tara C. Marshall.

A new review published in Current Opinion in Psychology suggests that community gardens function as vital social infrastructure that contributes significantly to individual and collective health. The analysis indicates that these shared green spaces foster psychological well-being, strengthen social connections, and promote civic engagement by cultivating specific forms of social capital.
Many modern societies are currently experiencing a period of transformation defined by profound challenges. These challenges include widespread social isolation, political polarization, and a decline in public trust. While community gardens are frequently established to improve neighborhood aesthetics or provide fresh food, the authors of this study argue that these goals often obscure a more profound impact.
The researchers sought to bridge the gap between the practical experience of garden-based community building and the theoretical understanding of how these bonds are formed. They aimed to provide a comprehensive explanation for how shared gardening activities transform into community resilience.
“Community gardens are often praised for producing food or beautifying neighborhoods, but those explanations felt incomplete. In my real-world experience with the Community Ecology Institute and beyond, gardens consistently function as places where trust, cooperation, and a sense of shared responsibility emerge—often among people who might not otherwise connect,” said first author Chiara D’Amore, the executive director of the Community Ecology Institute.
“At the same time, much of the research treated these outcomes as secondary or incidental. This study was motivated by a gap between practice and theory: we lacked a clear psychological explanation for how community gardens build social capital and why those relationships matter for individual and community well-being. The article brings together psychological theory and on-the-ground evidence to make those mechanisms more visible and legible.”
The authors synthesize findings from 50 studies published over the past decade to examine the social benefits of community gardens. They frame their analysis using social capital theory, specifically the framework established by Aldrich and Meyer. This framework identifies three distinct types of social capital that enable communities to engage in cooperative behavior. These are bonding social capital, bridging social capital, and linking social capital.
Bonding social capital refers to the strong ties that develop within a specific group. Bridging social capital describes the connections formed between diverse groups who might not otherwise interact. Linking social capital involves relationships between individuals and larger institutions or those in positions of power. The review suggests that community gardens are uniquely positioned to foster all three types because they are intentionally designed for people to gather and share resources.
Community gardening is consistently associated with enhanced psychological well-being across diverse populations. These benefits often stem from what the authors term the “gardening triad.” This triad consists of caretaking, a sense of accomplishment, and a connection to nature.
For children, the garden environment appears to stimulate curiosity and joy. These experiences tend to foster emotional development and learning. Adults frequently report reduced feelings of loneliness and an increased sense of purpose. Participants also describe elevated levels of happiness and self-esteem.
Community gardens often serve as places of refuge and restoration. Participants frequently describe these spaces as locations where they can experience safety and mental clarity. The act of being in the garden allows for a break from the stressors of daily life.
For immigrant, refugee, and Indigenous communities, these gardens can function as sites of cultural refuge. They allow for the healing affirmation of identity and the preservation of traditions. During collective crises, such as the COVID-19 pandemic, gardens offered a sense of continuity. They provided emotional grounding when other social structures were disrupted.
The process of gardening also promotes a sense of agency and pride. This occurs through the rhythms of plant care and participation in the food system. These experiences tend to increase self-esteem and motivation. This is particularly true in underserved contexts where individuals may face systemic barriers.
In the Global South, the review notes that community gardens have enabled marginalized groups to reclaim land. This process fosters a sense of control over personal health. Participants describe heightened belonging and self-worth as they see the tangible results of their labor.
The review also highlights evidence that community gardens significantly enhance social connectedness. The shared nature of the work cultivates repeated and cooperative interactions. These interactions nurture trust and reciprocity among neighbors.
One of the primary ways this occurs is through social learning. Gardens enable mentorship and the transmission of knowledge between generations. Older adults are able to pass down cultural and ecological wisdom to younger participants.
For youth, gardening often leads to stronger relationships with peers. It also fosters informal mentorships with adults in the community. School and campus gardens facilitate bridging social capital by linking students with families and educators.
The inclusive nature of these spaces helps to reduce social isolation. This is particularly relevant for urban residents and the elderly. Gardens create environments where individuals from diverse backgrounds can interact.
These interactions foster intercultural trust and dialogue. By working toward a common goal, participants bridge demographic differences. This helps to reduce prejudice and strengthens the overall social fabric of the neighborhood.
The review also highlights the role of community gardens in fostering civic engagement. The authors argue that these spaces act as sites for empathic growth and civic formation. This is especially observed among students and marginalized populations.
Engaging in local food systems tends to promote a grounded sense of social responsibility. It exposes participants to issues regarding sustainability and environmental justice. Students involved in these programs frequently report increased empathy toward underserved communities.
Gardens can also operate as spaces for grassroots leadership. Participants often assume roles in governance or advocacy. This generates linking social capital by connecting residents to policy networks and civic institutions.
Gardening might also deepen the connection participants feel to ecological systems, leading to a stronger environmental identity. Individuals with a stronger environmental identity are, in turn, more likely to engage in pro-environmental behaviors outside of the garden context.
During the COVID-19 pandemic, community gardens demonstrated their capacity as resilient civic infrastructure. They provided food and sustained mutual aid networks. This highlighted their role in both immediate relief and long-term systemic resilience.
“Community gardens don’t just grow food—they grow connection,” D’Amore told PsyPost. “When people work side by side caring for shared land, they build trust, belonging, and mutual support in ways that are difficult to replicate through other programs or policies. These relationships help communities become healthier, more resilient, and better able to face challenges together. The takeaway is simple but powerful: investing in shared, place-based activities like community gardening is an effective way to rebuild social ties at a time when many people feel increasingly isolated.”
Despite the positive findings, the authors acknowledge several limitations in the current body of research. They note that more rigorous data collection is needed to fully understand the scope of these benefits. Future research would benefit from a combination of pre- and post-surveys alongside direct observation.
There is a need to examine how intersecting identities influence access to these spaces. Factors such as race, class, gender, and immigration status likely shape the gardening experience. Comparative studies across different geographic contexts could reveal important variations in outcomes.
The specific mechanisms that cultivate different forms of social capital also require further clarification. It is not yet fully understood which specific activities or leadership styles are most effective at building trust. Understanding these nuances is necessary for optimizing the design of future programs.
The authors also point out the need to explore barriers to garden establishment. Issues such as access to space and funding present significant challenges. Identifying strategies to overcome these obstacles is necessary for creating equitable opportunities for all communities.
The authors conclude that community gardens are a vital form of social infrastructure. They argue that the value of these spaces lies not only in the produce they grow but in the networks they nourish. They encourage continued investment in community gardens as a strategy to address both individual well-being and community resilience.
“As the Founder and Director of the Community Ecology Institute it is our goal to continue to cultivate community garden spaces in our community in Howard County, Maryland and to create tools and resources that help other communities do the same in ways that are connected to research based best practices,” D’Amore added.
The study, “Community Gardens and the Cultivation of Social Capital,” was authored by Chiara D’Amore, Loni Cohen, Justin Chen, Paige Owen, and Calvin Ball.

Researchers in Turkey have identified a potential biological link between early fetal development and the later emergence of gender dysphoria. The study indicates that adults diagnosed with gender dysphoria possess a higher frequency of subtle physical irregularities, known as minor physical anomalies, compared to cisgender individuals.
These physical traits develop during the initial stages of pregnancy and may serve as external markers for variations in brain development that occur during the same prenatal window. The research findings appear in the Journal of Sex & Marital Therapy.
The origins of gender dysphoria remain a subject of persistent scientific inquiry. Current theoretical models often divide potential causes into biological influences and psychosocial factors. A growing subset of neuroscience research examines whether the condition arises from variations in how the brain develops before birth. This perspective suggests that the biological pathways shaping the brain might also leave physical traces elsewhere on the body. This concept relies on the biological reality of fetal development.
During the early weeks of gestation, the human embryo consists of distinct tissue layers. One of these layers, the ectoderm, eventually differentiates to form both the skin and the central nervous system. Because these systems share a common embryological origin, disruptions or variations affecting one system often impact the other. Scientists have previously utilized this connection to study conditions such as schizophrenia and autism spectrum disorder. The presence of minute physical irregularities is often interpreted as a record of developmental stability in the womb.
These irregularities are classified as minor physical anomalies. They are slight deviations in morphology that do not cause medical problems or cosmetic concerns. Examples include low-set ears, specific hair whorl patterns, or a high arch in the palate. These features form primarily during the first and second trimesters of pregnancy. This timeframe overlaps with critical periods of fetal brain architecture formation. By quantifying these traits, researchers attempt to estimate the degree of neurodevelopmental deviation that occurred prior to birth.
Psychiatrist Yasin Kavla and his colleagues at Istanbul University-Cerrahpasa sought to apply this framework to the study of gender identity. They reasoned that if gender dysphoria has a neurodevelopmental basis, individuals with the diagnosis might exhibit these physical markers at higher rates than the general population. The team designed a case-control study to test this hypothesis. They aimed to determine if there is a measurable difference in the prevalence of these anomalies between transgender and cisgender adults.
The investigators recruited 108 adults diagnosed with gender dysphoria. These participants were patients at a university clinic who had not yet undergone hormonal or surgical gender-affirming treatments. The exclusion of individuals on hormone therapy was necessary to ensure that any observed physical traits were congenital rather than acquired. The group included 60 individuals assigned female at birth and 48 assigned male at birth. Most participants in this group reported experiencing gender dysphoria since early childhood.
For comparison, the researchers recruited a control group of 117 cisgender individuals. This group consisted of people who sought administrative health documents from the hospital. The control group included 60 females and 57 males who reported attraction to the opposite sex. The researchers implemented strict exclusion criteria for the control group. They removed any potential candidates who had a personal or family history of neurodevelopmental disorders, such as autism or attention deficit hyperactivity disorder.
Two psychiatrists examined each participant using the Waldrop Minor Physical Anomaly Scale. This assessment tool is a standardized method for evaluating 18 specific physical features across six body regions. The regions include the head, eyes, ears, mouth, hands, and feet. To ensure objectivity, the examiners used precise tools like calipers and tape measures for items requiring specific dimensions. They looked for specific signs such as a curved fifth finger, a gap between the first and second toes, or asymmetrical ears.
The analysis revealed distinct differences between the groups regarding the total number of anomalies. Individuals diagnosed with gender dysphoria had higher total scores for physical anomalies compared to the cisgender control group. This trend held true for both those assigned female at birth and those assigned male at birth. The data suggests a generalized increase in these developmental markers among the transgender participants. The researchers then broke down the data by specific body regions to identify patterns.
The disparity was most evident in the craniofacial region. This area includes the head, eyes, ears, and mouth. Both groups of transgender participants showed elevated scores in this region relative to the cisgender participants. Specific anomalies appeared more frequently in the gender dysphoria group. These included a furrowed tongue and skin folds covering the inner corner of the eye, known as epicanthus. The study notes that the face and brain exert reciprocal influences on each other during early embryogenesis.
The researchers also examined peripheral anomalies located on the hands and feet. Participants assigned female at birth showed higher scores in this category than both cisgender males and females. The results for participants assigned male at birth were more nuanced. Their peripheral scores were not statistically distinct from cisgender males. However, their scores were higher than those of the cisgender female control group. This suggests that the distribution of these traits may vary based on biological sex as well as gender identity.
Another measurement taken was head circumference. The study found that individuals assigned male at birth had larger head circumferences than those assigned female at birth, regardless of gender identity. There was no statistical difference in head size between cisgender males and transgender women. Similarly, there was no statistical difference between cisgender females and transgender men. This specific metric appeared to align with biological sex rather than gender identity or developmental instability.
The authors interpret these findings as support for a neurodevelopmental etiology of gender dysphoria. They propose that genetic and environmental factors in the womb likely drive the observed patterns. The presence of craniofacial anomalies specifically points to developmental variations occurring in the first two trimesters. This timing aligns with the period when the brain undergoes sexual differentiation. The findings challenge the notion that gender dysphoria is purely a psychosocial phenomenon.
However, the authors note several limitations that contextualize their results. The control group excluded anyone with a history of neurodevelopmental disorders. This exclusion might have artificially lowered the average anomaly score for the cisgender group. A control group including such histories might have produced different comparisons. Comparing the gender dysphoria group to a clinical psychiatric control group would clarify if these high scores are unique to gender dysphoria.
Additionally, the examiners could not be fully blinded to the participants’ gender presentation. This visibility might have introduced unconscious bias during the physical measurements. The study population also came from a single tertiary care center in Turkey. This sample may not represent the global diversity of gender-diverse individuals. Cultural and genetic background can influence the baseline prevalence of certain minor physical anomalies.
Sexual orientation represents another variable to consider. The majority of the transgender participants in the study were attracted to their same biological sex. The cisgender control group consisted entirely of heterosexual individuals. Future investigations would benefit from including cisgender control groups with same-sex attractions. This would help researchers isolate gender identity from sexual orientation as the primary variable.
The study concludes that minor physical anomalies are more prevalent in this specific cohort of individuals with gender dysphoria. This suggests that the biological roots of the condition may lie in early prenatal development. The authors emphasize that these anomalies are likely markers of underlying genetic or epigenetic processes. They call for future research to integrate genetic analysis to map the specific pathways involved.
The study, “Minor Physical Anomalies as a Gateway to Understanding the Neurodevelopmental Roots of Gender Dysphoria,” was authored by Yasin Kavla, Tuncay Sandıkçı, and Şenol Turan.

An experimental study of heavy drinkers found that smoking cannabis with 7.2% THC reduced their alcohol urge immediately after smoking. These participants consumed 27% less alcohol after smoking, while those smoking cannabis with 3.1% THC consumed 19% less alcohol. The research was published in the American Journal of Psychiatry.
Cannabis is a psychoactive plant that contains chemical compounds called cannabinoids. The most well-known cannabinoids are tetrahydrocannabinol (THC) and cannabidiol (CBD). THC is primarily responsible for the intoxicating effects, while CBD has non-intoxicating properties. CBD is studied for potential therapeutic uses.
Cannabis can be consumed by smoking, vaporizing, or ingesting extracts and edibles. The effects vary widely based on dose, potency, and individual sensitivity. Cannabis use leads to short-term changes in attention, memory, and coordination. In some individuals, it can trigger anxiety or paranoia. Long-term heavy use tends to lead to dependence, impaired cognitive functioning, or exacerbation of mental health conditions, particularly in vulnerable populations.
Some individuals use cannabis and alcohol together, but co-use can amplify impairment more than either drug alone. Alcohol is the more widely used of the two, as it is more socially accepted and legal in most jurisdictions. In contrast, cannabis is illegal in many countries and more heavily stigmatized, although the number of jurisdictions legalizing its use is increasing.
Study author Jane Metrik and her colleagues wanted to explore how cannabis consumption affects cravings for alcohol and subsequent alcohol consumption. They note that the results of previous studies on this topic are inconclusive: some indicated that cannabis use might hinder alcohol dependence treatment and reduce abstinence, while others reported no effect or even reduced alcohol consumption after cannabis use.
The study randomized 157 individuals recruited from the community who reported using both alcohol and cannabis; 138 participants completed at least two sessions and were included in the final analysis. Participants had to be English speakers between 21 and 44 years of age who had used cannabis at least twice weekly during the past month (and throughout the past 6 months), had THC in their urine, were familiar with smoking cannabis, and were prone to heavy episodic drinking. Heavy episodic drinking was defined as 5 or more alcoholic drinks per occasion for men and 4 or more for women.
A standard alcoholic drink is defined as containing about 14 grams of pure ethanol. This is roughly equivalent to a small beer (350 ml), a glass of wine (150 ml), or a shot of spirits (45 ml).
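Those equivalences follow from the density of ethanol, about 0.789 grams per milliliter. The sketch below assumes typical strengths of roughly 5% for beer, 12% for wine, and 40% for spirits, since the article quotes only the volumes.

```python
ETHANOL_DENSITY_G_PER_ML = 0.789

def grams_of_ethanol(volume_ml: float, abv: float) -> float:
    """Grams of pure ethanol in a drink of the given volume and strength."""
    return volume_ml * abv * ETHANOL_DENSITY_G_PER_ML

# Typical strengths are assumed here; each works out to roughly 14 grams.
print(grams_of_ethanol(350, 0.05))   # small beer      -> ~13.8 g
print(grams_of_ethanol(150, 0.12))   # glass of wine   -> ~14.2 g
print(grams_of_ethanol(45, 0.40))    # shot of spirits -> ~14.2 g
```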
All participants completed three experimental sessions in randomized order. In one session they smoked cannabis with 3.1% THC, in another they smoked cannabis with 7.2% THC, and in the third, a placebo session, they smoked cannabis with almost no THC (0.03%). Experimental sessions were at least 5 days and no more than 3 weeks apart.
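A randomized crossover order simply means each participant receives a shuffled sequence of the three conditions. The sketch below is a generic illustration of that kind of assignment, not the trial's actual randomization procedure.

```python
import random

CONDITIONS = ["placebo (0.03% THC)", "3.1% THC", "7.2% THC"]

def assign_session_order(participant_id: int) -> list[str]:
    """Shuffle the three conditions into a session order for one participant.
    A generic illustration of a randomized crossover order, not the trial's
    actual randomization procedure."""
    rng = random.Random(participant_id)  # seeded per participant for reproducibility
    order = CONDITIONS.copy()
    rng.shuffle(order)
    return order

for pid in range(1, 4):
    print(pid, assign_session_order(pid))
```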
Before each experimental session, participants were told to abstain from cannabis and tobacco for 15 hours and from alcohol for 24 hours. They were also instructed to abstain from drinking caffeinated beverages for 2 hours before an experimental session.
Each experimental session began with a set of assessments, including measures of alcohol and cannabis levels, of subjective states related to the use of these substances, and of alcohol craving. Participants also ate lunch.
After this, participants smoked their assigned cannabis cigarettes. Next, they completed a test examining their reactivity to alcohol cues (the sight and smell of alcohol). Finally, they completed two alcohol choice tasks in which they could either drink up to 8 mini-drinks of their choice or receive $3 for each drink they did not consume.
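The incentive structure of that choice task is easy to express concretely. The sketch below assumes only the details given above, namely the $3 payment per unconsumed drink and the cap of 8 mini-drinks; the size and content of the mini-drinks are not specified here.

```python
MAX_MINI_DRINKS = 8
PAYMENT_PER_FOREGONE_DRINK = 3  # dollars, as described in the article

def session_payout(drinks_consumed: int) -> int:
    """Cash earned in one choice task: $3 for every mini-drink not consumed."""
    if not 0 <= drinks_consumed <= MAX_MINI_DRINKS:
        raise ValueError("drinks_consumed must be between 0 and 8")
    return PAYMENT_PER_FOREGONE_DRINK * (MAX_MINI_DRINKS - drinks_consumed)

print(session_payout(0))  # abstain entirely -> $24
print(session_payout(8))  # drink all eight  -> $0
```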
Results showed that smoking cannabis had no effect on general cravings for alcohol (measured by the Alcohol Craving Questionnaire); craving levels were similar across all three experimental conditions. However, after participants smoked the 7.2% THC cigarettes, their specific urge to drink alcohol (measured by a single “urge” question) decreased significantly. Participants drank 27% less alcohol on average after smoking the 7.2% THC cigarettes (compared to the placebo condition) and 19% less after smoking the 3.1% THC cigarettes.
“Following overnight cannabis abstinence, smoking cannabis acutely decreased alcohol consumption compared to placebo,” study authors concluded.
The study contributes to the scientific understanding of the effects of cannabis on alcohol consumption. However, it should be noted that study participants were individuals who used cannabis frequently, with over 3 in 4 meeting criteria for current cannabis use disorder, and who were also heavy drinkers. Because of this, they might have developed tolerance to the effects of these two substances. Results in individuals who use alcohol and cannabis less frequently might differ.
The paper, “Acute Effects of Cannabis on Alcohol Craving and Consumption: A Randomized Controlled Crossover Trial,” was authored by Jane Metrik, Elizabeth R. Aston, Rachel L. Gunn, Robert Swift, James MacKillop, and Christopher W. Kahler.

A new study suggests that a moderate dose of psilocybin can effectively reduce symptoms of obsessive-compulsive disorder for a short period. The findings indicate that the improvement is most pronounced in compulsive behaviors rather than obsessive thoughts. These results were published in the journal Comprehensive Psychiatry.
Obsessive-compulsive disorder, commonly known as OCD, is a chronic mental health condition. It is characterized by uncontrollable, recurring thoughts and repetitive behaviors. People with the disorder often feel the urge to repeat these behaviors to alleviate anxiety.
Standard treatments for the condition usually involve selective serotonin reuptake inhibitors or cognitive behavioral therapy. These treatments are not effective for everyone. A significant portion of patients do not find relief through traditional means. This has led scientists to explore alternative therapeutic options.
Psilocybin is the active psychoactive compound found in “magic mushrooms.” It has gained attention in recent years as a potential treatment for various psychiatric conditions. These conditions include depression, anxiety, and addiction.
Most research into psilocybin has focused on high doses that induce a profound psychedelic experience. However, there are concerns about using high doses for patients with OCD. Individuals with this disorder often struggle with a fear of losing control. The intense psychological effects of a high dose could theoretically be distressing for them.
Luca Pellegrini and his colleagues designed a study to test a different approach. Pellegrini is a researcher associated with the University of Hertfordshire and Imperial College London. The research team wanted to see if a moderate dose of psilocybin could offer therapeutic benefits without a potentially overwhelming psychedelic experience.
The researchers also aimed to determine if the biological effects of the drug could reduce symptoms independently of a “mystical” experience. This is a departure from many depression studies, which often link the therapeutic outcome to the intensity of the psychedelic journey.
The study involved 19 adult participants. All participants had a primary diagnosis of obsessive-compulsive disorder. The severity of their condition ranged from moderate to severe. They had been living with the disorder for at least one year.
The researchers employed a fixed-order, within-subject design. This means that every participant received the same treatments in the same order. There was no randomization of the dosage sequence.
Participants attended two dosing sessions separated by at least four weeks. In the first session, they received a very low dose of 1 mg of psilocybin. This served as a control or active placebo. It was expected to have minimal physiological or psychological effects.
In the second session, participants received a moderate dose of 10 mg of psilocybin. This dose was chosen to be high enough to potentially have a biological effect but low enough to minimize the risk of a challenging psychological experience.
The researchers engaged in extensive preparation with the participants. They provided psychological support before, during, and after the dosing sessions. However, this support was non-interventional. The therapists did not provide specific cognitive behavioral therapy or exposure therapy during the sessions.
During the dosing days, participants stayed in a calm and comfortable room. They were encouraged to lie down and listen to music. Therapists were present to ensure their safety and provide reassurance if needed.
The primary measure of success was the Yale-Brown Obsessive Compulsive Scale (Y-BOCS). This is a standardized clinical tool used to rate the severity of OCD symptoms. The scale measures both obsessions and compulsions separately.
The researchers assessed the participants at several time points. These included the day before dosing, the day of dosing, and then one week, two weeks, and four weeks after each dose.
The results showed a clear difference between the two doses. The 10 mg dose led to a significant reduction in OCD symptoms one week after administration. The magnitude of this improvement was considered large in statistical terms.
In contrast, the 1 mg dose resulted in much smaller changes. The difference between the effects of the 10 mg dose and the 1 mg dose was statistically significant at the one-week mark.
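For intuition about what a within-subject comparison of the two doses involves, here is a sketch of a paired test and effect size computed on simulated Y-BOCS change scores for 19 participants. The numbers are invented, and the published paper may have used a different statistical model.

```python
import numpy as np
from scipy import stats

# Simulated Y-BOCS change scores (baseline minus one week post-dose) for the
# same 19 participants under each dose; values are illustrative only.
rng = np.random.default_rng(1)
change_1mg = rng.normal(1.0, 3.0, 19)    # small average improvement
change_10mg = rng.normal(5.0, 4.0, 19)   # larger average improvement

# A within-subject (paired) comparison asks whether the same people improved
# more after the 10 mg dose than after the 1 mg dose.
t_stat, p_value = stats.ttest_rel(change_10mg, change_1mg)

# Cohen's d for paired data: mean of the pairwise differences over their SD.
diff = change_10mg - change_1mg
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {cohens_d:.2f}")
```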
The researchers observed that the beneficial effects began to fade after the first week. By the two-week mark, the difference between the two doses was no longer statistically significant. By four weeks, symptom levels had largely returned to baseline.
A specific finding regarding the nature of the symptom relief stands out. The data revealed that the reduction in total scores was driven primarily by a decrease in compulsions. The scores for obsessions did show some improvement, but the change was not statistically significant.
This suggests that psilocybin might have a specific effect on the mechanisms that drive repetitive behaviors. It appears to make it easier for patients to resist the urge to perform rituals. It seems less effective at stopping the intrusive thoughts themselves.
The study also measured symptoms of depression using the Montgomery-Åsberg Depression Rating Scale. Many patients with OCD also suffer from depression. However, the researchers found no significant change in depression scores following the 10 mg dose.
This lack of effect on depression contrasts with other studies on psilocybin. Those studies typically use higher doses, such as 25 mg. The participants in this study generally had low levels of depression to begin with. This may explain why no significant improvement was observed.
The safety profile of the 10 mg dose was favorable. Participants tolerated the drug well. There were few adverse events reported.
No serious adverse events occurred during the study. Some participants experienced mild anxiety or headaches. One participant experienced a brief anxiety attack after the 1 mg dose, but it resolved quickly.
The absence of distressing perceptual abnormalities was a key outcome. The 10 mg dose did not induce hallucinations or the intense “trip” associated with higher doses. This supports the idea that a moderate dose is a feasible option for patients who might be afraid of losing control.
The study builds on a growing body of evidence regarding psychedelics and OCD. Previous research has hinted at the potential of these compounds.
A seminal study conducted by Moreno and colleagues in 2006 investigated psilocybin in nine patients. That study found that symptoms decreased markedly after psilocybin administration. It tested various doses and found that even lower doses offered some relief. The current study by Pellegrini and team validates those earlier findings with a larger sample and a focused 10 mg dose.
Other lines of research also support the idea that the psychedelic experience itself may not be necessary for symptom relief in OCD. A preclinical study on mice published in Translational Psychiatry explored this concept.
In that animal study, researchers used a “marble burying” test. This is a common behavior in mice used to model obsessive-compulsive traits. Mice treated with psilocybin buried significantly fewer marbles.
The researchers in the mouse study then administered a drug called buspirone alongside the psilocybin. Buspirone blocks the specific serotonin receptors responsible for the hallucinogenic effects. The mice still showed a reduction in marble burying. This suggests that the anti-compulsive effects of psilocybin might work through a different biological pathway than the psychedelic effects.
Case reports have also appeared in medical literature documenting similar effects. A report in the Journal of Psychoactive Drugs detailed the case of a 30-year-old man with severe, treatment-resistant OCD.
This patient consumed psilocybin mushrooms on his own. He reported that his symptoms completely disappeared during the experience. He also noted a lasting reduction in symptom severity for months afterward. His scores on the Y-BOCS dropped from the “extreme” range to the “mild” range.
Survey data provides additional context. A study published in Scientific Reports surveyed 174 people who had used psychedelics. Over 30% of participants with OCD symptoms reported positive effects lasting more than three months.
These participants reported reduced anxiety and a decreased need to engage in rituals. This retrospective data supports the clinical findings that psilocybin targets the behavioral aspects of the disorder.
Despite the promising results, the current study by Pellegrini has several limitations. The sample size was small, with only 19 participants. This limits the statistical power of the analysis.
The study did not use a randomized design. All participants received the 1 mg dose first and the 10 mg dose second. This was done to prevent any potential long-term effects of the higher dose from influencing the results of the lower dose.
However, this fixed order introduces potential bias. Participants may have expected the second dose to be more effective. The researchers attempted to blind the participants to the dose, but most participants correctly guessed when they received the higher dose.
The duration of the effect was relatively short. The significant improvement lasted only one week. This suggests that a single dose may not be a long-term cure.
The short duration implies that repeated dosing might be necessary to maintain the benefits. Future research will need to investigate the safety and efficacy of administering psilocybin on a recurring basis.
The distinction between obsessions and compulsions also requires further study. The finding that compulsions improved more than obsessions is largely preliminary. Larger studies are needed to confirm if this is a consistent pattern.
The researchers suggest that combining psilocybin with psychotherapy could enhance the results. While this study used non-interventional support, active therapy might help patients integrate the experience. Therapies like Exposure and Response Prevention could be more effective during the window of reduced symptoms.
Future clinical trials should use larger samples and randomized designs. They should also explore the potential of multiple doses. Comparing the 10 mg dose directly against the standard 25 mg dose would also be valuable.
The study, “Single-dose (10 mg) psilocybin reduces symptoms in adults with obsessive-compulsive disorder: A pharmacological challenge study,” was authored by Luca Pellegrini, Naomi A. Fineberg, Sorcha O’Connor, Ana Maria Frota Lisboa Pereira De Souza, Kate Godfrey, Sara Reed, Joseph Peill, Mairead Healy, Cyrus Rohani-Shukla, Hakjun Lee, Robin Carhart-Harris, Trevor W. Robbins, David Nutt, and David Erritzoe.

Listening to music immediately after learning new information may help improve memory retention in older adults and individuals with mild Alzheimer’s disease. A new study published in the journal Memory provides evidence that emotionally stimulating music can enhance specific types of memory recall, while relaxing music might help negative memories fade. These findings suggest that low-cost, music-based interventions could play a supportive role in managing cognitive decline.
Alzheimer’s disease is a progressive condition that damages neurological structures essential for processing information. This damage typically begins in the hippocampus and entorhinal cortex. These areas are vital for forming new episodic memories. As the disease advances, individuals often struggle to recall specific events or details from their recent past.
A common symptom in the early stages of Alzheimer’s is false recognition. This occurs when a person incorrectly identifies a new object or event as something they have seen before. Memory scientists explain this through dual-process theories. These theories distinguish between recollection and familiarity. Recollection involves retrieving specific details about an event. Familiarity is a general sense that one has encountered something previously.
In Alzheimer’s disease, the capacity for detailed recollection often declines before the sense of familiarity does. Patients may rely on that vague sense of familiarity when trying to recognize information. This reliance can lead them to believe they have seen a new image or heard a new story when they have not. Reducing these false recognition errors is a key goal for cognitive interventions.
While specific memory systems degrade, the brain’s ability to process emotions often remains relatively intact for longer. Research indicates that emotional events are generally easier to remember than neutral ones. This emotional memory enhancement relies on the amygdala. This small, almond-shaped structure in the brain processes emotional arousal.
The amygdala interacts with the hippocampus to strengthen the storage of emotional memories. Activity in the amygdala can trigger the release of adrenal hormones and neurotransmitters like dopamine and norepinephrine. These chemicals help solidify neural connections. This process suggests that stimulating the amygdala might help strengthen associated memories.
Researchers have explored whether music can serve as that stimulus. Music is known to induce strong emotional responses and activate the brain’s reward systems. Previous studies with young adults found that listening to music after learning can improve memory retention. The research team behind the current study aimed to see if this effect extended to older adults and those with Alzheimer’s.
“Our team, led by Dr. Wanda Rubinstein, began researching music-based interventions to improve memory around ten years ago, with a focus on emotional memory. The results regarding the effect of music on younger adults’ memory were promising. When presented after the learning phase, music improved visual and verbal memory,” said study author Julieta Moltrasio, a postdoctoral researcher affiliated with the National Council for Scientific and Technical Research, the University of Palermo, and the University of Buenos Aires.
“Additionally, several studies have shown that people with dementia can remember familiar songs even when they forget important events from their past. My supervisor and I work with people with dementia, so we wanted to further explore the use of music as an intervention for this population. Specifically, we wanted to explore whether music could help them learn new emotional material, such as emotional pictures.”
The study included 186 participants living in Argentina. The sample consisted of 93 individuals diagnosed with mild Alzheimer’s disease and 93 healthy older adults. A notable aspect of this group was their educational background. Many participants had lower levels of formal education than is typical in neuroscience research. This inclusion helps broaden the applicability of the scientific findings to a more diverse population.
The researchers engaged the participants in two sessions separated by one week. In the first session, participants viewed a series of 36 pictures. These images were drawn from a standardized database used in psychological research. The pictures varied in emotional content. Some were positive, some were negative, and others were neutral.
After viewing the images, the researchers divided the participants into three groups. Each group experienced a different auditory condition for three minutes. One group listened to emotionally arousing music. The researchers selected the third movement of Haydn’s Symphony No. 70 for this condition. This piece features unexpected changes in volume and rhythm intended to create high energy.
A second group listened to relaxing music. The researchers used Pachelbel’s Canon in D Major for this condition. This piece is characterized by a slow tempo and repetitive, predictable patterns. The third group served as a control and listened to white noise. White noise provides a constant background sound without musical structure.
Immediately after this listening phase, participants performed memory tasks. They were asked to describe as many pictures as they could remember. They also completed a recognition task. The researchers showed them the original pictures mixed with new ones. Participants had to identify which images they had seen before.
One week later, the participants returned for the second session. They repeated the recall and recognition tasks to test their long-term memory of the images. They did not listen to the music again during this second session. This design allowed the researchers to test whether the music played immediately after learning helped consolidate the memories over time.
The results showed that emotional memory was largely preserved in both groups. Both the healthy older adults and the patients with Alzheimer’s remembered emotional pictures better than neutral ones. This confirms that the ability to prioritize emotional information remains functional even when other cognitive processes decline.
The type of music played after the learning phase had distinct effects on memory performance one week later. For healthy older adults, listening to the emotionally arousing music led to better delayed recall. They were able to describe more of the positive and neutral pictures compared to those who listened to white noise. This suggests that the physiological arousal caused by the music helped lock in the memories formed just moments before.
For the participants with Alzheimer’s disease, the benefit manifested differently. The arousing music did not increase the total number of items they could recall. It did, however, improve their accuracy in the recognition task. Patients who listened to the stimulating music made fewer false recognition errors one week later. They were less likely to incorrectly confuse a new picture for an old one.
This reduction in false recognition implies that the music may have strengthened the specific details of the memory. By boosting the recollection process, the intervention helped patients distinguish between what they had actually seen and what merely felt familiar. This specific improvement in discrimination is significant for a condition defined by memory blurring.
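The paper's exact scoring procedure is not described above, but recognition accuracy of this kind is commonly summarized by weighing hits (old pictures correctly recognized) against false alarms (new pictures wrongly called old). The sketch below computes one standard discrimination index, d-prime, for a hypothetical participant; the response counts are placeholders.

```python
# A minimal sketch of a standard recognition-memory discrimination index
# (d-prime), using hypothetical response counts rather than study data.
from scipy.stats import norm

def dprime(hits: int, misses: int, false_alarms: int, correct_rejections: int) -> float:
    # Snodgrass-Corwin correction (add 0.5 per cell) avoids infinite z-scores
    # when a rate would otherwise be exactly 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Hypothetical participant facing 36 old and 36 new pictures:
print(round(dprime(hits=25, misses=11, false_alarms=6, correct_rejections=30), 2))
```

Fewer false alarms push the index higher, which is why a drop in false recognition is read as better discrimination between what was actually seen and what merely feels familiar.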
The researchers also found a distinct effect for the relaxing music condition. Participants who listened to Pachelbel’s Canon showed a decrease in their ability to recognize negative pictures one week later. This finding was consistent across both the healthy older adults and those with Alzheimer’s.
“Our findings showed that emotionally arousing music improves memory in older adults and patients with dementia, while relaxing music decreases negative memories,” Moltrasio told PsyPost. “Based on previous research, we already knew that relaxing music could decrease memory, but we did not expect to find that it could specifically reduce negative memories in the populations we studied. If relaxing music can reduce the memory of negative images, these findings could be useful in developing treatments for people with negative memories, such as those with PTSD.”
“I also want to highlight that although the effects of highly familiar music on emotion and memory have been well-studied, our research proved that non-familiar music can also have a significant impact. This is important because it shows that music can have a powerful effect even if you don’t have a special connection to it.”
These observed effects align with the synaptic tagging hypothesis. This biological theory suggests that creating a memory involves a temporary tag at the synapse, the connection between neurons. If a strong stimulus follows the initial event, it triggers protein synthesis that stabilizes that tag. In this study, the music likely provided the necessary stimulation to solidify the preceding visual memories.
The research indicates that “even low-cost, easily replicable interventions, such as listening to music, can positively impact the memory of individuals experiencing memory loss,” Moltrasio explained. “These findings may help other researchers and developers create targeted treatments. Furthermore, certain brain regions (e.g., those related to music listening) can remain intact even when memory is impaired. We hope these findings offer researchers, caregivers, health professionals, and relatives of people with Alzheimer’s disease a glimmer of hope.”
“Although the results were promising, the size of the effects was not large. This means that the difference between the group that received the musical treatment and the group that did not is not very big. However, it is worth noting that we did find differences between the groups. This is the first study to prove that a music intervention after learning improves memory in Alzheimer’s disease.”
Additionally, the control condition used white noise. While standard in research, white noise can sometimes be aversive to listeners. Future studies might compare music to silence to ensure the effects are driven by the music itself and not a reaction to the noise. The researchers also note that they did not directly measure physiological arousal, such as heart rate, to confirm the participants’ physical reactions to the music.
Future research aims to explore these mechanisms further. The research team is interested in how familiar music might affect memory and whether active engagement, such as singing or playing instruments, might offer more potent benefits. They are also investigating how the ability to recognize emotions in music changes with dementia. Understanding these nuances could lead to more targeted, non-pharmacological therapies for memory loss.
“We are currently investigating how music is processed and the effects of musical training on cognition,” Moltrasio told PsyPost. “One line of research focuses on how young and older adults, as well as people with dementia, process emotions in music. Among younger adults, we have examined differences in music emotion recognition and other cognitive domains, such as short-term memory and verbal and nonverbal reasoning, between musicians and non-musicians. We have also examined how personality traits may affect the recognition of emotions in music.”
“Regarding Alzheimer’s disease, we are investigating whether the ability to detect emotions in music is impaired. Even when their ability to process other emotional cues, such as those expressed through facial expressions, is impaired, they may still be able to distinguish certain emotions in music. This could be useful for developing music-based interventions that build on these participants’ abilities.”
“Another line of research that I would like to pursue is the effect of familiar music on memory,” Moltrasio continued. “Based on this research, we could develop specific interventions for people with dementia using familiar music. I am not currently working on this line of research, but it could be the next step for me and my team.”
“Our study emphasizes the importance of researching simple, low-cost interventions for dementia. This is particularly relevant when considering the demographics of individuals living with dementia in countries like Argentina. Most neuroscience research does not include individuals with low educational levels, despite the fact that they represent the majority of older adults in our country. Therefore, it is crucial to encourage and support research incorporating more diverse populations.”
The study, “The soundtrack of memory: the effect of music on emotional memory in Alzheimer’s disease and older adults,” was authored by Julieta Moltrasio and Wanda Rubinstein.

Recent psychological research suggests that for some White Americans, expressing anger at individual acts of racism may actually decrease their motivation to support broader systemic change. The study indicates that voicing outrage at a specific bigot can serve as a psychological release that alleviates feelings of guilt associated with racial privilege, thereby reducing the drive to take further reparative action. These findings were published in the Personality and Social Psychology Bulletin.
The year 2020 saw a global surge in protests following high-profile incidents of police violence against Black individuals. This period introduced many Americans to concepts such as structural racism and White privilege. White privilege refers to the unearned societal advantages that White individuals experience simply due to their racial identity.
Psychological theory posits that acknowledging these unearned advantages can be psychologically threatening to a person’s moral self-image. This awareness often triggers a specific emotion known as White collective guilt. This is not guilt over one’s own personal actions, but rather distress arising from the advantages one receives at the expense of another group.
Psychologists have previously established that this type of guilt can be a powerful motivator. It often drives individuals to support policies or organizations aimed at restoring equity. However, the discomfort of guilt also motivates people to find ways to reduce the negative feeling.
Zachary K. Rothschild, a researcher at Bowdoin College, sought to understand how this dynamic plays out in the age of viral news stories. Rothschild and his colleague, Myles Hugee, investigated whether focusing anger on a specific “bad apple”—an individual acting in a clearly racist manner—might function as a defense mechanism.
The researchers proposed that expressing outrage at a third party could allow individuals to separate themselves from the problem of racism. By condemning a specific bigot, a person reaffirms their own moral standing. This “moral cleansing” might satisfy the internal need to address the threat of racism, leaving the individual with less motivation to contribute to solving systemic issues.
To test this hypothesis, the researchers conducted three separate experiments involving White American adults. The first study involved 896 participants recruited through an online platform. The team first measured the participants’ “justice sensitivity,” which is a personality trait reflecting how strongly a person reacts to unfairness faced by others.
The researchers then manipulated whether the participants felt a sense of racial privilege. Half of the group completed a survey designed to make them think about the unearned advantages they possess as White Americans. The other half completed a control survey about the privileges of being an adult.
Following this, all participants read a news story based on real events. The article described a White woman falsely accusing a Black man of threatening her and then assaulting him. This scenario was chosen to mirror viral incidents that typically spark public anger.
After reading the story, the researchers divided the participants again. One group was asked to write a short paragraph expressing their feelings about the woman in the story. This gave them an opportunity to vent their outrage. The other group was asked to write an objective summary of the events, denying them the chance to express emotion.
Finally, the researchers gave the participants a bonus payment for the study. They offered the participants the option to donate some or all of this money to the National Association for the Advancement of Colored People (NAACP). This donation served as a concrete measure of their willingness to address racial inequity.
The results revealed a specific pattern among participants who scored low in justice sensitivity. For these individuals, being reminded of their White privilege increased their feelings of guilt. If they were not given the chance to express outrage, this guilt drove them to donate more money to the NAACP.
However, the dynamic changed for those who were allowed to vent their anger. Among the low justice sensitivity participants, the opportunity to express outrage at the woman in the story completely eliminated the privilege-induced increase in donations. The act of condemning the individual racist appeared to neutralize the motivating power of their collective guilt.
This effect was not present among participants who scored high in justice sensitivity. For those individuals, the motivation to support racial justice appeared to be intrinsic. Their donations were less dependent on momentary feelings of guilt or the opportunity to express outrage.
The second study, involving 1,344 participants, aimed to determine if this effect was specific to racial issues. The researchers followed a similar procedure but introduced a variation in the news story. Half the participants read the original story about a White woman and a Black man. The other half read a modified version where both the perpetrator and the victim were White.
The researchers found that expressing outrage reduced donations only when the injustice was racial in nature. When the story involved White-on-White conflict, expressing anger did not lower the donation amounts. This suggests that the “moral cleansing” function of outrage is specific to the domain where the person feels a moral threat.
The third study was designed to address potential limitations of the first two. The researchers recruited 1,133 participants and used a more controlled method to measure outrage. Instead of an open-ended writing task, participants in the “expression” condition completed a survey explicitly rating their anger at the perpetrator’s racism.
The researchers also changed the outcome measure to something more substantial than a small donation. They presented participants with a campaign by the American Civil Liberties Union (ACLU) focused on systemic equality. Participants could choose to sign a pledge and select specific volunteer activities they would commit to over the coming year.
The findings from the third study replicated the earlier results. For participants with low justice sensitivity, being reminded of White privilege increased their willingness to volunteer for the ACLU. However, if these participants were first given the opportunity to report their outrage at the individual racist, their willingness to volunteer dropped significantly.
The study provides evidence for what the authors call “defensive outrage.” It suggests that for some people, participating in the public condemnation of racist individuals may serve a self-serving psychological function. It allows them to feel that they have handled their moral obligation, thereby reducing their engagement with the more difficult work of addressing systemic inequality.
There are several caveats to consider regarding this research. The participants were recruited online, which may not perfectly represent the general population. Additionally, the third study relied on self-reported intentions to volunteer, which does not always guarantee that the participants would follow through with the actions.
The study focused exclusively on White Americans. The psychological dynamics of guilt and outrage may function differently in other racial or ethnic groups. Future research would need to investigate whether similar patterns exist in different cultural contexts or regarding other types of social inequality.
The authors note that these findings should not be interpreted to mean that all outrage is counterproductive. For many people, anger is a genuine fuel for sustained activism. The study specifically highlights a mechanism where outrage replaces, rather than complements, constructive action among those who are less naturally inclined toward justice concerns.
The study, “Demotivating Justice: White Americans’ Outrage at Individual Bigotry May Reduce Action to Address Systematic Racial Inequity,” was authored by Zachary K. Rothschild and Myles Hugee.

Increased intake of dietary selenium is associated with a lower likelihood of reporting suicidal thoughts among American adults. A recent analysis of population health data indicates that as consumption of this trace mineral rises, the odds of experiencing suicidal ideation decrease. These findings were published in the Journal of Affective Disorders.
Suicide remains a persistent public health challenge around the world. Public health officials and medical professionals prioritize identifying early warning signs to prevent tragic outcomes. Suicidal ideation, characterized by thinking about self-harm or ending one’s life, is a primary indicator of future suicide attempts.
Most prevention strategies currently focus on psychological and social risk factors. Mental health professionals typically look for signs of depression, anxiety, or social isolation. However, researchers are increasingly investigating how physical health and nutrition influence psychiatric well-being.
Trace minerals play specific roles in brain function and mood regulation. Selenium is one such essential element. It is found naturally in soil and appears in foods such as nuts, seafood, meats, and whole grains.
The body utilizes selenium to create selenoproteins. These proteins help manage oxidative stress and regulate the immune system. Previous research has hinted at a link between low selenium levels and mood disorders like depression.
Haobiao Liu of Xi’an Jiaotong University and Zhuohang Chen of Fudan University sought to explore this connection specifically regarding suicidal ideation. They noted that prior studies on trace elements and suicide yielded inconsistent results. Some earlier investigations were limited by small participant numbers or specific demographic focuses.
Liu and Chen designed their study to analyze a much larger and more representative group of people. They utilized data from the National Health and Nutrition Examination Survey (NHANES). This program collects health and nutritional information from a cross-section of the United States population.
The researchers aggregated data from survey cycles spanning from 2005 to 2016. They applied strict exclusion criteria to ensure the reliability of their dataset. For example, they removed individuals with implausible daily calorie counts to avoid data errors.
The final analysis included 23,942 participants. To assess what these individuals ate, the survey employed a dietary recall interview. Participants described all food and beverages consumed over the preceding 24 hours.
Interviewers conducted two separate recalls for each participant to improve accuracy. The first took place in person, and the second occurred via telephone days later. The researchers calculated average daily selenium intake from these reports.
To measure mental health outcomes, the study relied on the Patient Health Questionnaire (PHQ-9). This is a standard screening tool used by doctors to identify depression. The researchers focused specifically on the ninth item of this questionnaire.
This specific question asks participants if they have been bothered by thoughts that they would be “better off dead” or of hurting themselves. Respondents answered based on their experience over the previous two weeks. Those who reported having these thoughts for several days or more were classified as having suicidal ideation.
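For readers who want the classification rule in concrete terms, the snippet below applies the standard PHQ-9 response coding (0 = not at all, 1 = several days, 2 = more than half the days, 3 = nearly every day); the threshold shown reflects the cutoff described above.

```python
# A minimal sketch of the item-9 classification described above, assuming the
# standard PHQ-9 response coding (0 = not at all ... 3 = nearly every day).
def has_suicidal_ideation(item9_response: int) -> bool:
    # "Several days" or more (a score of 1 or higher) counts as ideation.
    return item9_response >= 1

print(has_suicidal_ideation(0))  # False
print(has_suicidal_ideation(2))  # True
```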
The researchers used statistical models to look for associations between selenium levels and these reported thoughts. They accounted for various confounding factors that could skew the results. These included age, gender, income, body mass index, and overall diet quality.
The analysis showed an inverse relationship. Participants with higher levels of dietary selenium were less likely to report suicidal ideation. This association persisted even after the researchers adjusted for the other demographic and health variables.
The researchers calculated the change in risk based on units of selenium intake. In their fully adjusted model, they found that a specific unit increase in intake corresponded to a 41 percent decrease in the odds of suicidal ideation. This suggests a strong statistical link between the nutrient and mental health status.
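An adjusted odds ratio of this kind would typically come from a multivariable logistic regression. The sketch below illustrates the general approach on a hypothetical dataset; the file name and column names are placeholders, not actual NHANES variable names, and the model is not the authors' exact specification.

```python
# A minimal sketch of a covariate-adjusted logistic regression, run on a
# hypothetical dataframe. Column and file names are placeholders, not
# NHANES variable names, and this is not the authors' exact model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per participant with ideation (0/1), selenium_mcg,
# and the covariates named in the formula below.
df = pd.read_csv("nhanes_subset_placeholder.csv")

model = smf.logit(
    "ideation ~ selenium_mcg + age + C(sex) + income + bmi + diet_quality",
    data=df,
).fit()

# Exponentiating a coefficient gives an odds ratio; a value below 1 for
# selenium_mcg would mean lower odds of reported ideation per unit of intake.
print(np.exp(model.params["selenium_mcg"]))
```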
To understand the trend better, the researchers divided participants into four groups based on intake levels. These groups are known as quartiles. The first quartile had the lowest selenium intake, while the fourth had the highest.
Comparing these groups revealed a consistent pattern. Individuals in the top three groups all had a lower risk of suicidal thoughts compared to the bottom group. The risk reduction was most pronounced in the group with the highest consumption.
The study also tested for a “dose-response” relationship. The analysis indicated a linear negative association. As the amount of selenium in the diet went up, the reports of suicidal thinking went down.
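The quartile comparison can be pictured with a few lines of code. The sketch below groups a hypothetical dataset into intake quartiles and tabulates crude ideation rates; the published analysis additionally adjusts for covariates, and the column names here are placeholders.

```python
# A minimal sketch of the quartile grouping described above, on a
# hypothetical dataframe with placeholder column names.
import pandas as pd

df = pd.read_csv("nhanes_subset_placeholder.csv")
df["selenium_quartile"] = pd.qcut(
    df["selenium_mcg"], q=4, labels=["Q1 (lowest)", "Q2", "Q3", "Q4 (highest)"]
)

# Crude (unadjusted) proportion reporting ideation in each quartile; a
# downward trend from Q1 to Q4 would mirror the pattern described above.
print(df.groupby("selenium_quartile", observed=True)["ideation"].mean())
```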
The authors propose several biological reasons why this might happen. One theory involves oxidative stress. The brain is sensitive to damage from free radicals, and selenium-based enzymes help neutralize these threats.
Another potential mechanism involves inflammation. High levels of inflammation in the body are often found in people with depression and suicidal behaviors. Selenium has anti-inflammatory properties that might help protect the brain from these effects.
Neurotransmitters may also play a role. These are the chemical messengers that allow nerve cells to communicate. The study authors note that selenium might influence the regulation of serotonin and dopamine, which are critical for mood stability.
Despite these promising findings, the study has several limitations. The research was cross-sectional in design. This means it captured a snapshot of data at a single point in time rather than following people over years.
Because of this design, the study cannot prove that low selenium causes suicidal thoughts. It only shows that the two are statistically associated. It is possible that people who are depressed simply eat fewer nutrient-rich foods.
Another limitation is the reliance on memory for dietary data. It is difficult for people to remember exactly what they ate in the last 24 hours. This can lead to inaccuracies in the estimated nutrient intake.
The assessment of suicidal ideation also had constraints. Using a single question from a depression screener provides a limited view of a complex behavior. It does not capture the severity or duration of the thoughts in detail.
The researchers also acknowledged that individual biology varies. People absorb nutrients differently based on their genetics and gut health. The study could not account for how well each participant’s body utilized the selenium they consumed.
Future research is necessary to confirm these results. The authors suggest that prospective studies are needed. These would follow large groups of people over time to see if baseline selenium levels predict future mental health issues.
Clinical trials could also provide stronger evidence. In such studies, researchers would provide selenium supplements to some participants and placebos to others. This would help determine if increasing intake directly improves mental well-being.
Investigating the biological pathways is another priority. Scientists need to understand exactly how selenium interacts with brain chemistry. This could lead to new treatments or dietary recommendations for people at risk of suicide.
Until then, the findings add to a growing body of evidence linking diet to mental health. They highlight the potential importance of proper nutrition in maintaining psychological resilience. Public health strategies might one day include dietary optimization as part of suicide prevention efforts.
The study, “Does dietary selenium protect against suicidal ideation? Findings from a U.S. population study,” was authored by Haobiao Liu and Zhuohang Chen.

Christmas is often considered a time of connection, warmth and belonging. That’s the script, anyway. But for many people, the reality feels different: isolating, emotionally weighted and filled with comparisons that sting.
Whether you’re spending Christmas alone, navigating grief, or simply don’t feel “festive,” it can feel like you’ve slipped out of sync with the rest of the world. However, that feeling isn’t the same as being alone. Loneliness isn’t about the number of people around you. It’s about connection, and the absence of it.
This time of year intensifies emotional experience. Rituals such as decorating a tree or watching a favourite film may bring up memories. These could be of people, or they could be of former versions of ourselves.
We measure time differently in December, a phenomenon psychologists refer to as “temporal anchoring”. The season acts as a golden thread spanning our lives, pulling us back to the past. We often use it to reflect on what we’ve lost, who we’ve become, and what didn’t happen. It can cut deeply.
It is a sharp counterpoint to the cultural messaging: people coming together, the push to be joyful and the idea that gratitude must prevail. It’s not just tinsel that is expected to sparkle. We are, too.
Some people are more vulnerable at this time of year, particularly those in flux or transitioning. A recent breakup, moving house, a medical diagnosis or redundancy can often lead to feeling emotionally unanchored. Others carry complex feelings about family, grief or past trauma, which make forced joy or cheerfulness jarring.
Personality plays a role too. People high in traits such as neuroticism or socially prescribed perfectionism can be more vulnerable to distress and loneliness when life does not live up to their expectations.
Studies have shown that chronic loneliness can increase stress hormones such as cortisol, impair immune function and even affect cardiovascular health. Social neuroscientist John Cacioppo described loneliness as “a biological warning system” signaling that our need for connection isn’t being met.
Loneliness, though, is a normal human response. It is a reaction to a mismatch between our desired social experience and our reality. Self-discrepancy theory helps explain why this mismatch causes emotional pain. When there’s a gap between who we are and who we feel we should be, whether it is socially, emotionally or even seasonally, discomfort follows. Christmas, with all its trimmings, amplifies that gap.
That said, being alone at Christmas doesn’t automatically mean something’s wrong.
In fact, it might be exactly what you need.
For many, this time can be a rare opportunity for space, stillness and healing. It might be the only time of year when you get the space to hear your own thoughts, reflect or reset. Choosing solitude purposefully can be deeply restorative.
Connecting with yourself can be just as important as connecting with others.
Research into self-determination theory also highlights autonomy, competence and relatedness as core psychological needs.
Autonomy, in particular, means honouring your own choices, not other people’s expectations. For example, choosing to spend the day quietly reading, cooking for yourself, or creating a personal ritual supports both autonomy and competence. These acts reinforce your ability to care for yourself and reduce the pressure to seek validation from others.
Philosophers such as 19th-century Danish thinker Søren Kierkegaard and ancient stoic Epictetus emphasised the importance of tuning into your own inner life rather than being governed by external forces. They remind us that authenticity doesn’t come from performing joy for others, but from noticing what we need and choosing to honour it.
The key is alignment. Do what nourishes you, not what performs well on Instagram, and let the societal pressures wash over you rather than be driven by them.
Trying to “fix” loneliness with a to-do list isn’t the answer. It’s about tuning into what you need. These approaches are rooted in psychological and philosophical insight. They are not quick fixes.
1. Let yourself feel it
Loneliness hurts. It’s okay to name it. Pushing it away rarely works. Accepting and sitting with it can be the first step toward softening its grip.
2. Create micro-rituals
Small routines bring meaning and structure. Brew a particular tea. Rewatch a film that resonates. Light a candle for someone you miss. Rituals connect you to something larger but also connect you to yourself.
3. Reframe connection
Closeness doesn’t have to mean crowds. It might mean sending a message, joining a quiet online space or simply being present with yourself. Journaling, voice notes or reflective walks can all be forms of inward connection.
4. Celebrate your uniqueness
You are not a statistic. You don’t need to aim for the “average” mental health baseline. Your emotional life is yours alone. A little variation, a little eccentricity, these are signs of being alive.
5. Find what works for you
There’s no one right way to do Christmas. Whether it’s a solo walk, a day in pyjamas, or calling one person you trust, the point is to honour your individuality.
If you’re feeling out of step this Christmas, that doesn’t make you broken. It makes you aware. You’re noticing what’s missing; you are listening. That’s not weakness, it’s one of the greatest sources of wisdom.
In The Book of Disquiet, Portuguese poet and philosopher Fernando Pessoa wrote: “To feel today what one felt yesterday isn’t to feel – it’s to remember today what was felt yesterday, to be today’s living corpse of what yesterday was lived and lost.”
It’s a stark image, but a truthful one. At Christmas, we often try to summon old feelings, those of joy, warmth, and belonging, as if they can be reactivated on command. But what if we didn’t force it? Christmas doesn’t have to be remembered joy. It can be present truth.
Loneliness isn’t something to be solved or suppressed. It’s a companion on the journey inward.
And sometimes, the most meaningful connection we can make is with ourselves.
This article is republished from The Conversation under a Creative Commons license. Read the original article.

A study investigating the effects of alcohol on emotion recognition and empathy found that alcohol impairs the recognition of anger, but not other specific emotions. Participants who drank alcohol also reported higher affective empathy, i.e., relating better to other study participants. The research was published in Scientific Reports.
After a person drinks alcohol, it is rapidly absorbed through the stomach and small intestine into the bloodstream. The alcohol then travels to the brain, where it affects the release of neurotransmitters, producing relaxation and reduced inhibition. As blood alcohol concentration rises, judgment, coordination, and reaction time become increasingly impaired. The liver begins metabolizing alcohol, but it can only process a limited amount per hour, causing excess alcohol to circulate in the body.
Alcohol also affects the cardiovascular system by dilating blood vessels, which can create a sensation of warmth while actually lowering core body temperature. In the short term, drinking can increase urine production, leading to dehydration and electrolyte imbalance. The gastrointestinal system may become irritated, resulting in nausea or vomiting at higher doses.
As alcohol continues to circulate, it disrupts normal sleep, reducing restorative REM sleep despite making people feel sleepy. When blood alcohol levels begin to fall, withdrawal-like symptoms such as anxiety or irritability may appear in some individuals.
Study author Lakshmi Kumar and their colleagues investigated how an intoxicating dose of alcohol (0.74 g/kg in females and 0.82 g/kg in males) affects cognitive and affective empathy. As prior studies were inconclusive, they started this investigation with no specific hypotheses about the direction of the expected effects.
Study participants were 156 individuals who reported drinking at least one day per week and binge drinking at least four times in the past month. Participants’ average age was 23 years. Thirty-one percent of participants were women.
Binge drinking was defined as 5 or more standard alcoholic drinks on the same occasion for men and 4 or more for women. A standard alcoholic drink is a drink containing about 14 grams of pure ethanol. This is roughly equivalent to a small beer (350 ml), a glass of wine (150 ml), or a shot of spirits (45 ml).
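The 14-gram definition makes those serving sizes easy to check. The short calculation below uses ethanol's density of about 0.789 g/ml; the alcohol-by-volume figures are typical values assumed for illustration rather than numbers taken from the study.

```python
# A minimal sketch of the standard-drink arithmetic. The ABV percentages are
# assumed typical values, not figures from the study.
ETHANOL_DENSITY_G_PER_ML = 0.789

def grams_of_ethanol(volume_ml: float, abv_percent: float) -> float:
    return volume_ml * (abv_percent / 100) * ETHANOL_DENSITY_G_PER_ML

print(round(grams_of_ethanol(350, 5.0), 1))   # small beer (~5% ABV)       -> ~13.8 g
print(round(grams_of_ethanol(150, 12.0), 1))  # glass of wine (~12% ABV)   -> ~14.2 g
print(round(grams_of_ethanol(45, 40.0), 1))   # shot of spirits (~40% ABV) -> ~14.2 g
```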
Participants were randomly assigned to groups of 3 unacquainted persons. Each of these 3-person groups was then randomly assigned to either drink an alcoholic beverage or a placebo beverage. In this way, 117 participants were assigned to drink alcohol and 39 to drink the placebo beverage. However, participants did not know which beverage they would be drinking – they all believed that they would be drinking alcohol.
The alcoholic beverage was a cranberry-vodka cocktail dosed for each participant to achieve a peak blood alcohol concentration of 0.08%. The placebo drink was flattened tonic water, and study authors showed these participants false blood alcohol level recordings to maintain their belief that they were drinking alcohol.
After drinking the assigned drink, participants completed 3 assessments of subjective intoxication experience and blood alcohol level (using a breathalyzer) in 30-minute intervals. While the blood alcohol level was increasing after drinking, participants completed assessments of empathy and emotion recognition (MET and GERT, tests based on recognizing emotions of people in photographs and in short video clips).
Results showed that participants who drank alcohol had impaired recognition of anger, but not of other specific emotions. These individuals also reported higher affective empathy, i.e., they reported relating well to another participant following direct interactions with other participants.
“Findings suggest alcohol worsens anger recognition and increases perceptions of relating to another,” the study authors concluded.
The study contributes to the scientific understanding of the psychological effects of alcohol. However, study authors note that participants interacted in groups of strangers (other study participants they were not previously acquainted with) prior to completing the emotion recognition and empathy assessments. Differences in these interaction experiences could have affected participants’ levels of engagement and subsequently reported empathy.
The paper, “Alcohol’s acute effects on emotion recognition and empathy in heavy-drinking young adults,” was authored by Lakshmi Kumar, Kasey G. Creswell, Kirk W. Brown, Greta Lyons, and Brooke C. Feeney.

A new study published in The Journal of Sex Research has found that men who use sexual technology are viewed with more disgust than women who engage in the same behaviors. The findings indicate a “reverse sexual double standard” in which men face harsher social penalties for using devices like sex toys, chatbots, and robots, particularly as the technology becomes more humanlike. This research suggests that deep-seated gender norms continue to influence how society perceives sexual expression and the integration of technology into intimate lives.
The intersection of technology and human sexuality is expanding rapidly. Sex technology, or “sextech,” encompasses a wide range of devices designed to enhance sexual experiences. These range from traditional vibrators and dildos to advanced artificial intelligence chatbots and lifelike sex robots. Although the use of such devices is becoming increasingly common in solitary and partnered sexual activities, a social stigma remains attached to their use. Many users keep their habits discreet to avoid judgment.
Previous observations suggest that this stigma is not applied equally across genders. While the use of vibrators by women has been largely normalized and framed as a tool for empowerment or sexual wellness, men’s use of similar devices often lacks the same social acceptance. Media depictions frequently portray men who use sex robots or dolls as socially isolated or unable to form human connections.
“Anecdotally, but also in research, discussions around using sextech tend to highlight vibrator use as a positive and empowering addition to female sexuality, while the use of devices designed for male anatomy (like sex dolls or Fleshlights) is more often viewed negatively or as unnecessary,” said study author Madison E. Williams, a PhD student at the University of New Brunswick and member of the Sex Meets Relationships Lab.
“In the same vein, sex toys tend to be more socially accepted than more advanced forms of sextech (which are also typically marketed toward male users). Our study aimed to examine whether this apparent sexual double standard could be demonstrated empirically, and if women and men held different opinions.”
The researchers focused specifically on disgust, an emotion deeply linked to the avoidance of pathogens and the policing of social norms. Disgust serves as a psychological behavioral immune system, but it also reinforces moral boundaries. They proposed that sextech might trigger disgust by violating traditional sexual norms or by evoking the “uncanny valley” effect associated with humanlike robots.
A key rationale for the study was to understand how traditional gender scripts influence these perceptions. Conventional heterosexual scripts often position men as sexual experts who should always be ready for sex and capable of pursuing women. In this context, a man’s use of a sex toy might be interpreted as a failure to secure a human partner or a lack of sexual prowess.
To investigate these questions, the researchers recruited a sample of 371 adults through the crowdsourcing platform Prolific. The participants ranged in age from 18 to 81 years, with an average age of approximately 45. The sample was relatively balanced in terms of gender, consisting of 190 women and 181 men. The majority of participants identified as heterosexual and White.
The study employed a survey design to measure disgust sensitivity in response to specific scenarios. Participants were presented with six different items describing a person using sextech. These scenarios varied based on the gender of the user and the type of technology involved. The three types of technology assessed were sex toys, which represent the least humanlike option, erotic chatbots, which offer some conversational interaction, and sex robots, which are the most humanlike.
For each scenario, participants rated how disgusting they found the behavior on a scale from 1 to 7. A rating of 1 indicated “not at all disgusting,” while a rating of 7 indicated “extremely disgusting.” This measurement approach was adapted from established scales used to assess disgust sensitivity in other psychological research. The researchers compared these ratings to determine if the gender of the user or the type of device significantly influenced the emotional reaction of the observer.
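The paper's exact statistical model is not specified above, but a 2 (gender of user) by 3 (device type) design with every participant rating all six scenarios lends itself to a repeated-measures comparison. The sketch below shows one way such ratings could be analyzed; the file and column names are placeholders.

```python
# A minimal sketch of a 2 x 3 repeated-measures comparison of disgust ratings,
# using placeholder file and column names; this is not necessarily the
# authors' exact analysis.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Assumed long format: one row per participant per scenario, with columns
# participant_id, target_gender ("man"/"woman"),
# device ("toy"/"chatbot"/"robot"), and disgust (1-7).
ratings = pd.read_csv("disgust_ratings_placeholder.csv")

anova = AnovaRM(
    data=ratings,
    depvar="disgust",
    subject="participant_id",
    within=["target_gender", "device"],
).fit()

# Main effects would test the user-gender and device-type differences; the
# interaction would test whether the gender gap varies across device types.
print(anova)
```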
The results provided clear evidence of a double standard. Across the board, participants rated men who used sextech as more disgusting than women who used the same devices. This effect was consistent regardless of the participant’s own gender. Both men and women viewed male sextech users more negatively. This confirms the hypothesis that men are penalized more heavily for incorporating technology into their sexual lives.
“Our findings suggest that men who use sex toys, exchange sexual messages with AI companions, or have sex with robots are perceived as more disgusting than women who engage in equivalent acts,” Williams told PsyPost. “This highlights a troubling double standard that penalizes men for using sexual devices, even though research has found they can offer both women and men similar sexual benefits.”
The study also found a clear hierarchy of disgust related to the type of device. Participants rated the use of simple sex toys as the least disgusting behavior. Engaging with an erotic chatbot elicited higher disgust ratings. The use of sex robots generated the strongest feelings of disgust. This supports the idea that as sexual technology becomes more humanlike, it triggers stronger negative emotional responses. This may be due to the eerie nature of artificial humans or concerns about technology replacing genuine human intimacy.
An interaction between the target’s gender and the type of device offered further nuance to the findings. The gap in disgust ratings between male and female users was widest regarding sex toys. While men were judged more harshly in all categories, the double standard was most pronounced for the simplest technology. As the technology became more advanced and stigmatized—such as with sex robots—the judgment became high for everyone, narrowing the gender gap slightly. However, men were still consistently rated as more disgusting than women even in the robot condition.
“Interestingly, although men were perceived to be more disgusting than women for their use of all forms of sextech, the gap was especially large for sex toys,” Williams said. “In other words, while overall reactions were generally more negative for more advanced technology like erotic chatbots and robots, the strongest gender difference appeared in the sex toy condition.”
The researchers also analyzed differences based on the gender of the participant. Consistent with previous psychological research on disgust sensitivity, women participants reported higher levels of disgust overall than men did. Women expressed stronger negative reactions to the depictions of sextech use across the scenarios. Despite this higher baseline of disgust among women, the pattern of judging men more harshly than women remained the same.
The researchers noted that while the double standard is statistically significant, the average disgust ratings were generally near the midpoint of the scale. The ratings indicate a moderate aversion that varies significantly based on context.
“It is important to note that for all items, disgust ratings remained around or below the midpoint of our scale – this indicates that while men were judged more harshly for their sextech use, these behaviours weren’t rated as extremely disgusting, overall,” Williams explained. “Additionally, people also considered women’s use of sextech to be somewhat disgusting, but on average men were judged more negatively.”
As with all research, there are some limitations to consider. The research relied on self-reported data, which can be influenced by social desirability bias. Participants might have modulated their answers to appear more open-minded or consistent with perceived norms. Additionally, the sample was predominantly heterosexual and Western. Perceptions of sextech and gender roles likely vary across different cultures and sexual orientations.
The study also measured disgust as a general concept without distinguishing between different types. Disgust can be driven by concerns about hygiene, violations of moral codes, or aversion to specific sexual acts. It is unclear which of these specific domains was the primary driver of the negative ratings. Future research could investigate whether the disgust comes from a perceived lack of cleanliness, a sense of unnaturalness, or a moral judgment against the user’s character.
The researchers suggest that future studies should explore how these perceptions change over time. As artificial intelligence and robotics become more integrated into daily life, the stigma surrounding their use in sexual contexts may shift. Longitudinal research could track whether familiarity with these technologies reduces the disgust response. It would also be beneficial to examine whether the context of use matters. For example, using a device alone versus using it with a partner might elicit different social judgments.
“We hope this work encourages more open, evidence-based conversations about men’s use of sextech, with the ultimate goal of reducing the stigma surrounding it,” Williams said. “Understanding that this double standard exists is the first step to normalizing and accepting all forms of sextech use, by all genders.”
The study, “Gross Double Standard! Men Using Sextech Elicit Stronger Disgust Ratings Than Do Women,” was authored by Madison E. Williams, Gabriella Petruzzello, and Lucia F. O’Sullivan.

A recent study published in Molecular Psychiatry provides evidence that exposure to cannabis during pregnancy may alter the trajectory of brain development in offspring from the fetal stage through adulthood. The findings indicate that high concentrations of the drug can lead to sustained reductions in brain volume and anxiety-like behaviors, particularly in females. This research utilizes advanced imaging techniques in mice to track these developmental changes over time.
Cannabis contains delta-9-tetrahydrocannabinol, commonly referred to as THC. This compound is the primary psychoactive ingredient responsible for the effects associated with the drug. It works by interacting with the endocannabinoid system, a biological network that plays a role in regulating various physiological processes. This system helps guide how the brain grows and organizes itself before birth. It influences essential mechanisms such as the creation of new neurons and the formation of connections between them.
Public perception regarding the safety of cannabis has shifted alongside legal changes in many regions. As the drug becomes more accessible, usage rates among pregnant individuals have increased. Some use it to manage symptoms such as morning sickness, anxiety, or pain. However, modern cannabis products often contain significantly higher concentrations of THC than those available in previous decades.
Medical professionals need to understand how these potent formulations might influence a developing fetus over the long term. Existing data has been limited, often relying on observational studies in humans that cannot fully isolate the effects of the drug from other environmental factors. Most previous research has also looked at the brain at a single point in time rather than following its growth continuously.
“As cannabis is legalized in more countries around the world and U.S. States, it is also increasingly being viewed as natural and safe. More people, including pregnant people, are using cannabis, and the concentration of delta-9-tetrahydrocannabinol (THC), the main psychoactive component in cannabis, is increasing too,” said study author Lani Cupo, a postdoctoral researcher at McGill University and member of the Computational Brain Anatomy Laboratory.
“Pregnant people may use cannabis for a variety of reasons, either because they don’t know they are pregnant, to help manage mood changes, or to help treat symptoms associated with early pregnancy, such as nausea and vomiting accompanying morning sickness. People should be able to make their own informed decisions about what they do during pregnancy, but there is still a major gap in the scientific understanding of some of the long-term effects of cannabis exposure during pregnancy on brain development.”
The research team employed a mouse model to simulate prenatal exposure. Pregnant mice received daily injections of THC at a dose of 5 milligrams per kilogram from gestational day 3 to 10. This period corresponds roughly to the first trimester in human pregnancy. The dosage was intended to model moderate-to-heavy use, comparable to consuming high-potency cannabis products daily. A control group of pregnant mice received saline injections to provide a baseline for comparison.
To observe brain development, the scientists used magnetic resonance imaging, or MRI. They scanned the offspring at multiple time points to create a longitudinal dataset. The first set of images came from embryos extracted on gestational day 17. A second cohort of pups underwent scanning on alternate days from postnatal day 3 to 10. A third group was imaged during adolescence and adulthood, specifically on postnatal days 25, 35, 60, and 90. This approach allowed the team to track the growth curves of individual subjects throughout their lives.
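With repeated scans per animal, growth is usually modeled with mixed-effects regression so that each subject contributes its own trajectory. The sketch below shows a generic version of that idea; the variable names and model form are assumptions for illustration, not the authors' exact analysis.

```python
# A minimal sketch of a mixed-effects growth model for longitudinal volumes,
# with placeholder file and column names; not the authors' exact model.
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per scan, with subject_id, group ("THC"/"saline"),
# sex, age_days, and region_volume_mm3.
df = pd.read_csv("longitudinal_volumes_placeholder.csv")

model = smf.mixedlm(
    "region_volume_mm3 ~ age_days * group + sex",  # fixed effects
    data=df,
    groups=df["subject_id"],                       # random intercept per animal
).fit()

# A negative age-by-group interaction for the exposed animals would be
# consistent with the slower brain growth described in the results.
print(model.summary())
```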
Analysis of the embryonic images revealed that exposure to the drug affected physical development in the womb. Embryos exposed to THC had smaller overall body volumes compared to the control group. Despite the smaller body size, their brains showed enlargement in specific areas. The lateral ventricles, which are fluid-filled cavities within the brain, were significantly larger in the THC-exposed group. The corpus callosum, a bundle of nerve fibers connecting the brain’s hemispheres, also appeared larger at this stage.
As the mice entered the neonatal period, the pattern of growth shifted. The THC-exposed pups showed a period of “catch-up” growth in body weight, but their brain development followed a different path: the rate of brain growth slowed compared to that of the unexposed mice. This slowing affected multiple regions, including the hippocampus, amygdala, and striatum.
By the time the animals reached adulthood, the structural differences remained evident. The reduction in brain volume persisted in regions such as the hippocampus and the hypothalamus. The data indicated a sex-dependent effect in the long-term outcomes. Female mice exposed to THC tended to show more pronounced volume reductions in adulthood compared to males. While male mice did exhibit some volume loss, they showed less severe reductions in specific areas like the cerebellum and olfactory bulbs compared to females.
“I was surprised by the apparent vulnerability in female mice compared to male mice when it came to effects in adulthood,” Cupo told PsyPost. “It is very clear from previous studies that sex as a biological variable is important in considering the impact of prenatal cannabis exposure, but the literature shows mixed results depending on the domain being investigated and the timing of outcomes and exposures.”
“Sometimes males are more impacted, sometimes females are more impacted. I think this highlights how critical it is to consider both biological sex and, in humans, gender, when studying prenatal exposures like cannabis. Unfortunately, some research still ignores this important consideration.”
The researchers also assessed behavior to see if these structural changes corresponded to functional differences. In the neonatal phase, researchers recorded ultrasonic vocalizations when pups were separated from their mothers. These high-frequency sounds serve as a form of communication for the young mice. Female pups exposed to THC produced fewer calls, which the authors suggest could indicate deficits in social communication. Conversely, male pups exposed to THC made more calls, potentially signaling increased anxiety or distress.
Later in adolescence, the mice underwent an open-field test to measure anxiety-like behavior. This test involves placing a mouse in a large box and observing its movement patterns. Animals that are anxious tend to stay near the walls and avoid the open center of the arena. The offspring exposed to THC moved less overall and spent significantly less time in the center of the box. This behavior is interpreted as an anxiety-like phenotype. The results provide evidence that the structural brain changes were accompanied by lasting behavioral alterations.
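To make the open-field measure concrete, here is a minimal sketch of how time spent in the center of the arena might be computed from tracked positions. The arena size, center definition, and frame rate below are illustrative assumptions, not values taken from the study.

```python
import numpy as np

def time_in_center(x, y, arena_size=40.0, center_fraction=0.5, frame_rate=30.0):
    """Estimate seconds spent in the central zone of a square open-field arena.

    x, y: arrays of tracked positions in cm, origin at one corner of the arena.
    arena_size, center_fraction, frame_rate: illustrative defaults, not study values.
    """
    margin = arena_size * (1 - center_fraction) / 2
    in_center = (
        (x > margin) & (x < arena_size - margin) &
        (y > margin) & (y < arena_size - margin)
    )
    return in_center.sum() / frame_rate

# Example with synthetic coordinates: a mouse hugging one wall scores near zero.
rng = np.random.default_rng(0)
x = rng.uniform(0, 5, size=300)    # 10 seconds at 30 fps, confined near one wall
y = rng.uniform(0, 40, size=300)
print(round(time_in_center(x, y), 1), "seconds in center")
```

An animal that hugs the walls, like the synthetic trace above, accumulates little or no center time, which is the pattern the THC-exposed offspring showed.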
To investigate the cellular mechanisms behind these changes, the researchers used scanning electron microscopy. They examined brain tissue from the hippocampus at a very high resolution. In the embryonic stage, the THC group showed an increased number of dividing cells. This suggests that the drug might trigger premature cell proliferation. However, in the neonatal stage, they did not find a significant difference in the number of dying cells. This implies that the reduced brain volume observed later was likely not caused by mass cell death but perhaps by altered developmental timing.
“In short, we found that exposure to a high concentration of THC early in pregnancy can affect the brain until adulthood,” Cupo explained. “Specifically, we found larger volume of the ventricles, or fluid-filled cavities within the brain, before birth. Then, as the baby mice aged over the first two weeks of life, the brain of THC-exposed pups showed a decreased growth rate compared to the unexposed controls. This smaller volume was sustained until adulthood, especially in female mice.”
“Further, during adolescence the mice showed anxiety-like behavior. Notably, these results are fairly subtle, but they suggest that the trajectory of brain development itself can be impacted by exposure to cannabis early in pregnancy.”
While this study offers detailed insights into brain development, it relies on a rodent model. Mice and humans share many biological similarities, particularly in the endocannabinoid system, which makes them useful for studying basic developmental processes. However, the complexity of the human brain and environmental influences cannot be fully replicated in animal studies. For instance, the study used injections to deliver the drug, whereas humans typically inhale or ingest cannabis. The metabolism and concentration of the drug in the blood can differ based on the method of administration.
Despite these differences, animal models allow scientists to control variables that are impossible to manage in human research. They permit the isolation of a specific chemical’s effect without the confounding variables of diet, socioeconomic status, or other drug use that often complicate human studies. This specific study provided a level of anatomical detail through longitudinal imaging and microscopy that would be unethical or impossible to perform in living humans. The findings serve as a biological proof of principle that prenatal exposure can alter neurodevelopmental trajectories.
The study also utilized a relatively high dose of THC. While this was intended to mimic heavy usage, it may not reflect the effects of occasional or lower-dose use. Additionally, the study focused on THC in isolation. Commercial cannabis products contain a complex mixture of compounds, including cannabidiol (CBD) and terpenes, which might interact with THC to produce different effects.
“It can be easy to put a lot of pressure or even blame on people who use cannabis during their pregnancies, but the reality of the human experience is complex, especially during what can be such a transitional and tumultuous time,” Cupo said. “Although our results do show long-term impacts of cannabis exposure on brain outcomes, the reality of a human choosing to use cannabis or not is much more nuanced than we can recapitulate in a laboratory setting with rodents as a model.”
“In no way do I think these results should be used to shame or blame pregnant people. Instead I hope they can be seen as part of a bigger picture emerging to help supply pregnant people and their care providers with some useful information.”
Future research aims to address some of the current study’s limitations. The authors suggest investigating different methods of administration, such as vaporized cannabis, to better mimic human usage patterns. They also plan to examine the effects of other cannabinoids, such as CBD.
“We would also like to explore the timing of exposure, for example if it begins before conception, or if the father mouse consumes cannabis before conception,” Cupo added. “We would also like to explore more complex models, such as whether early life environmental enrichment can prevent some of the long-term impacts of cannabis exposure.”
“I would just like to re-emphasize that our study is a small piece of a much larger picture that researchers have been approaching from many different angles.”
The study, “Impact of prenatal delta-9-tetrahydrocannabinol exposure on mouse brain development: a fetal-to-adulthood magnetic resonance imaging study,” was authored by Lani Cupo, Haley A. Vecchiarelli, Daniel Gallino, Jared VanderZwaag, Katerina Bradshaw, Annie Phan, Mohammadparsa Khakpour, Benneth Ben-Azu, Elisa Guma, Jérémie P. Fouquet, Shoshana Spring, Brian J. Nieman, Gabriel A. Devenyi, Marie-Eve Tremblay, and M. Mallar Chakravarty.


A new analysis of global data reveals that while men score higher on a majority of specific wellbeing metrics, women tend to report higher overall life satisfaction. The findings suggest that females often fare better on social relationship indicators, which appear to carry significant weight in subjective assessments of a good life. These results were published in The Journal of Positive Psychology.
Societal debates regarding how men and women fare relative to one another are common. However, existing scientific literature on this topic often suffers from specific limitations. Many studies rely on narrow definitions of wellbeing that focus heavily on mental or physical health diagnoses rather than a holistic view of human flourishing.
Additionally, much of the psychological research is conducted on Western, Educated, Industrialized, Rich, and Democratic (WEIRD) populations. This geographic bias limits the ability of scientists to make universal claims about human experience across different cultures.
Tim Lomas, a psychology research scientist at the Human Flourishing Program at Harvard University, aimed to address these gaps by applying a broad conceptual framework to a truly international dataset.
“For wellbeing researchers, any sociodemographic differences—such as between males and females in the present paper—are inherently interesting and valuable in terms of furthering our understanding of the topic,” Lomas explained. “More importantly, though, one would ideally hope that such research can actually help improve people’s lives in the world. So, if we have a better sense of the ways in which males and females might respectively be particularly struggling, then that ideally helps people (e.g., policy makers) address these issues more effectively.”
Lomas utilized data collected by the Gallup World Poll, which relies on nationally representative, probability-based samples of adults aged 15 and older. The methodology typically involves surveying approximately 1,000 individuals per country to ensure the data accurately reflects the broader population.
The analysis spanned three years of data collection from 2020 through 2022, a period that necessitated a mix of telephone and face-to-face interviews depending on local pandemic restrictions. The final aggregated sample included exactly 391,656 individual participants across 142 countries.
Lomas selected 31 specific items from the poll to assess wellbeing comprehensively. These items were categorized into three main areas: life evaluation, daily emotions and experiences, and quality of life factors. Life evaluation was measured using Cantril’s Ladder, a tool where participants rate their current and future lives on a scale from zero to ten.
Daily experiences were assessed by asking if participants felt specific emotions or had specific experiences “yesterday.” These included positive states like feeling well-rested, being treated with respect, smiling or laughing, and learning something interesting. They also included negative states such as physical pain, worry, sadness, stress, and anger.
Quality of life measures examined broader factors beyond immediate emotional states. These included satisfaction with standard of living, feelings of safety while walking alone, and satisfaction with the freedom to choose what work to do. The survey also asked about objective hardships, such as not having enough money for food or shelter.
The statistical analysis revealed that males scored more favorably than females on 21 of the 31 variables. Men were more likely to report feeling well-rested, learning something new, and experiencing enjoyment. They also reported lower levels of negative emotions like pain, worry, sadness, stress, and anger compared to women.
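The “21 of 31” figure is essentially a tally of which sex has the more favorable average on each item. The sketch below illustrates that kind of count with invented numbers and placeholder item names; the actual analysis used the 31 Gallup World Poll items, survey weights, and formal statistical tests rather than a raw comparison of means.

```python
# Hypothetical illustration of tallying which sex fares better on each wellbeing item.
# Item names and scores are invented for the example; they are not from the study.

mean_scores = {
    # item: (male mean, female mean, True if a higher score is better)
    "well_rested":          (0.62, 0.58, True),
    "worry_yesterday":      (0.38, 0.44, False),   # a negative state: lower is better
    "treated_with_respect": (0.88, 0.90, True),
    "life_evaluation":      (5.9, 6.0, True),
}

males_favored = sum(
    (male > female) == higher_is_better
    for male, female, higher_is_better in mean_scores.values()
)
print(f"Males favored on {males_favored} of {len(mean_scores)} items")
```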
Men also scored higher on measures of personal safety and autonomy. For instance, men were more likely to feel safe walking alone at night. They were also more likely to report being satisfied with their freedom to make life choices.
Despite scoring lower on a greater number of individual metrics, females reported higher scores on overall life evaluation. This finding presents a paradox where men appear to have more advantages in daily experiences and safety, yet women rate their lives more positively overall.
“Curiously and significantly…females have higher life evaluation (both present, future, and combined) on Cantril’s (1965) ‘ladder’ item. The ‘curiosity’ aspect of that sentence is that life evaluation is often regarded and used as the single best summary measure of a person’s subjective wellbeing,” Lomas wrote in the study. “…while females would seem to have greater wellbeing if just based on the life evaluation metrics alone, when structuring wellbeing into different components, males appear to do better, at least numerically. It is possible however that even though males place higher on more items, the third of items on which females excel may be more important for wellbeing.”
The data indicates that women tended to fare better on outcomes related to social connection. Females were more likely to report being treated with respect and having friends or relatives they could count on in times of trouble. They also scored higher on measures of “outer harmony,” which relates to getting along with others. Lomas suggests that because social relationships are often the strongest predictors of subjective wellbeing, strength in this area might outweigh deficits in other domains for women.
“Overall, the differences between males and females on most outcomes are not especially large, and on the whole their levels of wellbeing are fairly comparable,” he told PsyPost. “But the differences, such as they are, are still interesting and moreover actionable (e.g., with policy implications).”
These patterns were not uniform across the globe. Cultural context appeared to play a role in how sex differences manifested. South Asia was the region where males fared best relative to females.
In contrast, East Asia was the region where females fared best relative to males. This geographic variation provides evidence that sex differences in wellbeing are not purely biological but are heavily influenced by societal structures. Lomas also compared Iceland and Afghanistan to illustrate the impact of societal gender equality.
In Afghanistan, males scored higher than females on every single wellbeing metric measured. This reflects the severe restrictions and hardships faced by women in that nation. In Iceland, which is ranked highly for gender equality, females often outperformed males even on metrics where men typically lead globally.
Demographic factors such as age and education also influenced the results. Aging tended to favor males more than females: as age increased, the gap between men and women on various metrics often widened in men’s favor.
However, higher levels of education and income appeared to benefit females slightly more than males. When comparing the most educated participants to the least educated, the relative position of women improved on 16 variables. A similar pattern emerged when comparing the richest quintile of participants to the poorest.
“Wellbeing is multifaceted, and people—from the individual up to whole societies—can be doing well in some ways and less well in others,” Lomas said. “This applies to comparisons between males and females, where overall both groups seem to experience advantages and disadvantages in relation to wellbeing.”
The study has some limitations that provide context for the findings. Lomas notes that the analysis relies on a specific set of 31 items available in the Gallup World Poll. It is possible that a different selection of questions could yield different results.
For example, if the survey included more nuanced questions about relationship quality, women might have outperformed men on even more metrics. The study is also cross-sectional, meaning it captures a snapshot in time rather than tracking individuals over years. This design makes it difficult to determine causal directions for the observed differences.
“Although it’s obvious to most people, I’d emphasize that the results in the paper involve averages, and there will always be exceptions and counterexamples,” Lomas noted. “This applies both at an individual level (e.g., even if males generally tend to struggle on a particular outcome, a minority will excel on it), but also at a societal level (i.e., the findings in the paper are averaged across all the countries in the World Poll, but one can usually find exceptions where countries go against the general trend).”
For future research, Lomas intends to expand this line of inquiry by conducting longitudinal analyses. “Firstly, it would be good to explore trends over time using the Gallup World Poll, which goes back to 2006,” he explained. “Additionally, we plan to use panel data from the Global Flourishing Study (for which I’m the project manager) for the same purpose, and although it has fewer years of data (its first wave was in 2023), it is a genuine panel study (unlike the World Poll, which is cross sectional), so we may get some better insights into causal dynamics.”
The study, “Global sex-based wellbeing differences in the Gallup World Poll: males do better on more metrics, but females generally do better on those that may matter most,” was authored by Tim Lomas.

