Sex differences in brain volume emerge before birth, groundbreaking research suggests

A new study published in Scientific Reports provides a detailed model of how the human brain develops during the transition from the womb to early infancy. The findings indicate that distinct growth patterns for different brain tissues and sex-based differences in brain volume are established between mid-pregnancy and the first weeks of life. This research offers a continuous view of how the brain expands during a foundational period that was previously difficult to map.

The perinatal period involves rapid biological changes that establish the core architecture of the human brain. This phase includes the processes by which cells proliferate, migrate to their correct locations, and begin forming complex connections. Scientists have often studied prenatal and postnatal development separately because of the technical challenges of imaging fetuses compared with newborns. This separation has historically made it difficult to understand exactly how growth trajectories evolve as a fetus becomes an infant.

To bridge this gap, a research team at the University of Cambridge aimed to create a unified model of early brain growth. The investigation was led by Yumnah T. Khan, a PhD student at the university’s Autism Research Centre. The team sought to determine when specific tissues dominate growth and when sex differences in brain size first appear. By combining data from before and after birth, they hoped to capture the dynamic nature of brain structural changes.

The researchers utilized data from the Developing Human Connectome Project, a large-scale initiative designed to map brain connectivity. The final dataset included 798 magnetic resonance imaging scans collected from 699 unique individuals, comprising 263 fetal scans acquired in the womb and 535 neonatal scans.

The sample consisted of 380 males and 319 females. The scans covered a developmental window ranging from just over 21 weeks to nearly 45 weeks after conception. This allowed the team to track changes across the second and third trimesters of pregnancy and into the first month after birth.

The team used advanced statistical modeling to chart the volume of different brain tissues against the age of the individuals. They applied corrections to account for the natural variance that increases as infants grow older. The analysis focused on total brain volume as well as specific compartments like gray matter, white matter, and cerebrospinal fluid.
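The paper’s actual growth models are more sophisticated, but the core idea, charting tissue volume against post-conceptional age with sex as a covariate while absorbing the age-related increase in variance by working on the log scale, can be sketched in a few lines. The file and column names below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical scan-level data: one row per MRI scan, with post-conceptional
# age in weeks, tissue volume in mL, and sex coded as "M"/"F".
scans = pd.read_csv("dhcp_scans.csv")

# Cubic growth curve fitted on the log scale; logging the volumes makes the
# multiplicative spread that grows with age roughly constant.
model = smf.ols(
    "np.log(volume_ml) ~ age_weeks + I(age_weeks**2) + I(age_weeks**3)"
    " + C(sex) + C(sex):age_weeks",
    data=scans,
).fit()

print(model.summary())  # the sex-by-age term indexes diverging growth rates
```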

The analysis revealed that the total volume of the brain grows at an increasing rate leading up to birth. When the researchers accounted for the exact age at the time of the scan, they observed a slight slowing of this growth rate in the weeks immediately following birth. This suggests the most rapid expansion occurs just before and shortly after delivery.

Different types of brain tissue followed their own unique timelines. White matter, which forms the connections between brain cells, was the primary driver of growth during mid-pregnancy. However, its proportional contribution to the total brain size decreased over time. This suggests the brain prioritizes establishing core connectivity pathways early in gestation.

In contrast, gray matter, which contains the cell bodies of neurons and is involved in processing information, became the dominant driver of growth during late pregnancy and the postnatal period. This shift indicates a transition from laying down connections to the proliferation and maturation of processing centers. The rapid growth of gray matter likely supports the development of sensory and motor abilities needed for survival after birth.

The study also looked at deep brain structures known as subcortical regions. These areas, such as the amygdala and thalamus, showed an earlier peak in their growth rates compared to the outer layer of the brain, the cortex. The cortex is typically associated with higher-level cognitive functions.

The finding that subcortical structures mature faster aligns with the understanding that regions responsible for basic physiological and sensory functions develop before those involved in complex thought. The researchers observed that the cerebellum, a region critical for motor control, showed exponential growth throughout the studied period. This rapid expansion likely facilitates the early coordination required for an infant’s movements.

A major component of the analysis involved comparing brain development between males and females. The data showed that, on average, males experienced greater increases in brain volume as they aged compared to females. This difference was observable across the entire brain and within specific regions.

The researchers found that these sex differences were generally linear, meaning males consistently showed faster growth. This provides evidence that sex differences in brain structure are not solely a result of social or environmental influences after birth. Instead, biological factors present during pregnancy appear to initiate this divergence.

While males exhibited faster overall growth, the shape of the growth trajectories was largely similar between the sexes. Both males and females followed the same general patterns of tissue expansion. However, there were specific exceptions in regional development.

For example, parts of the temporal lobe showed more pronounced gray matter increases in males. Additionally, the team identified a distinct growth pattern in the left anterior cingulate gyrus. In this region, males showed an S-shaped growth curve, whereas females showed a linear trajectory.

The study faces certain limitations regarding the available data. The scans for fetuses did not begin until after 21 weeks of gestation, leaving the first half of pregnancy unmapped in this analysis. Additionally, the number of scans available for younger fetuses was smaller than for older infants, which could impact the precision of the early growth models.

The researchers also noted technical differences between how fetal and neonatal scans were acquired. Although the same scanner was used, the settings had to be adjusted for the different environments of the womb and the nursery. This could potentially introduce variations in the measurements, though the team observed strong continuity in the data.

While the study documents when sex differences emerge, it does not confirm the biological mechanisms causing them. The authors suggest that prenatal hormones like testosterone likely play a role. Male fetuses are exposed to a surge of testosterone between 14 and 18 weeks of gestation.

The timing of the observed structural differences, appearing after 18 weeks, corresponds with the aftermath of this hormonal surge. Future research will need to directly investigate the link between hormone levels and these structural changes to confirm causality. The researchers emphasize that understanding these typical growth trajectories provides a baseline for identifying atypical development.

This baseline could eventually help explain why certain neurodevelopmental conditions are more common in one sex than the other. For instance, autism is diagnosed more frequently in males. Understanding if and how early brain overgrowth relates to these conditions remains a priority for the field.

The team calls for further longitudinal studies to validate these findings over longer periods. Following the same individuals from pregnancy through childhood would provide even stronger evidence for these developmental patterns. The current study represents a significant step toward a complete map of early human brain development.

The study, “Mapping brain growth and sex differences across prenatal to postnatal development,” was authored by Yumnah T. Khan, Alex Tsompanidis, Marcin A. Radecki, Carrie Allison, Meng-Chuan Lai, Richard A. I. Bethlehem, and Simon Baron-Cohen.

Changes in breathing patterns may predict moments of joy before they happen

Recent research suggests that the way a person breathes does more than simply sustain life. Respiratory patterns may actually predict moments of joy and excitement before they occur. A study published in the Journal of Affective Disorders found that specific changes in breathing dynamics are linked to surges in high-energy positive emotions. This connection appears to be particularly strong for individuals with a history of depression.

The findings offer a fresh perspective on the relationship between physiological processes and mental health. While traditional advice often focuses on slow breathing to calm the nerves, this new data indicates that more active breathing patterns may precede positive states of high arousal. The study was conducted by a team of researchers led by Sean A. Minns and Jonathan P. Stange from the University of Southern California.

Mental health professionals have long recognized a connection between the lungs and the mind. The field of psychology itself derives its name from the Greek word psyche, which shares a root with the word for breath. This relationship is often studied in the context of Major Depressive Disorder. This condition is characterized by persistent sadness and a broad impairment in daily functioning.

One of the most debilitating aspects of depression is anhedonia. This symptom refers to a reduced ability to experience pleasure or interest in life. Even after a person has recovered from a depressive episode, they may still struggle to experience positive emotions. This lingering deficit can increase the risk of the depression returning.

Most previous research has focused on how negative emotions alter breathing. For example, stress might cause a person to sigh more often or breathe erratically. There has been far less investigation into how breathing relates to positive moods, a notable gap given that positive affect is a strong predictor of long-term recovery.

Psychologists often categorize emotions using a model that includes two dimensions. The first dimension is valence, which ranges from pleasant to unpleasant. The second dimension is arousal, which ranges from low energy to high energy. Joy and excitement are examples of high-arousal positive affect. Calmness and contentment are examples of low-arousal positive affect.

Individuals with depression often show a specific reduction in high-arousal positive emotions. They may feel calm, but they rarely feel enthusiastic. The researchers wanted to see if breathing patterns in daily life could predict these elusive states of high energy. They also wanted to know if this relationship worked differently for people who had previously suffered from depression compared to those who had not.

To investigate these questions, the team recruited seventy-three adults. The participants were divided into two groups. One group consisted of thirty-six individuals with a history of Major Depressive Disorder who were currently in remission. The second group consisted of thirty-seven healthy volunteers with no history of psychiatric issues.

The study employed a method known as Ecological Momentary Assessment. This approach allows scientists to collect data in the real world rather than in an artificial laboratory setting. For seven days, participants went about their normal lives while wearing a specialized piece of technology. This device was a “smart shirt” called the Hexoskin.

The Hexoskin is a garment worn under regular clothes. It contains sensors woven into the fabric that measure the expansion and contraction of the chest and abdomen. This allowed the researchers to continuously monitor respiratory metrics. The device measured breathing rate and the volume of air moved with each breath.

While wearing the shirts, participants received surveys on their smartphones at random times throughout the day. These surveys asked them to rate their current mood. The participants rated the intensity of various emotions, such as feeling cheerful, happy, or confident. They also reported on the strategies they were using to manage their emotions.

The researchers focused their analysis on the thirty-minute window immediately preceding each survey. By looking at the physiological data leading up to the mood report, they hoped to see if breathing changes happened before the emotional shift. This time-lagged design helps clarify the direction of the relationship.

The results revealed a clear pattern. When participants exhibited increases in minute ventilation and breathing rate, they were more likely to report high-arousal positive emotions thirty minutes later. Minute ventilation refers to the total amount of air a person breathes in one minute. Essentially, breathing faster and moving more air was a precursor to feeling joy and excitement.
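To make these measures concrete, here is a minimal sketch of computing minute ventilation and averaging it over the 30-minute pre-survey window; the file name, column names, and units are assumptions for illustration, not the Hexoskin API.

```python
import pandas as pd

# Hypothetical breath-by-breath export: timestamp, breathing rate
# (breaths per minute), and tidal volume (mL per breath).
breaths = pd.read_csv("hexoskin_breaths.csv", parse_dates=["time"])

# Minute ventilation = breathing rate x tidal volume, here in L/min.
breaths["minute_vent"] = breaths["rate_bpm"] * breaths["tidal_ml"] / 1000.0

def pre_survey_features(survey_time, width_min=30):
    """Average respiration over the window preceding a mood survey."""
    start = survey_time - pd.Timedelta(minutes=width_min)
    window = breaths[(breaths["time"] >= start) & (breaths["time"] < survey_time)]
    return window[["rate_bpm", "minute_vent"]].mean()

# Example: respiratory features for a survey delivered at 2:15 pm.
print(pre_survey_features(pd.Timestamp("2024-05-01 14:15")))
```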

The researchers then compared the two groups of participants. They found that this physiological link was present in both groups. However, the strength of the connection varied based on the participant’s medical history. The relationship between breathing and positive mood was notably stronger in the group with a history of depression.

For healthy controls, an increase in ventilation predicted a subtle increase in positive mood. For those with remitted depression, the same increase in ventilation predicted a much larger boost in positive mood. This suggests that for these individuals, physiological activation may be a prerequisite for experiencing joy.

The study also examined the role of emotion regulation strategies. The researchers looked specifically at a strategy called acceptance. Acceptance involves experiencing thoughts and feelings without judging them or trying to change them. It emphasizes openness to the present moment.

Participants who reported using acceptance more frequently showed a stronger link between their breathing and their mood. For those who rarely used acceptance, the connection between minute ventilation and positive emotion was not statistically significant. This suggests that being open to one’s internal experience may allow physiological changes to more effectively influence emotional states.

The team also found a connection between breathing variability and regulation style. At the level of individual differences, people who had more variable depth of breath tended to use acceptance more often. This variability might reflect a flexible physiological system that adapts readily to different situations.

These findings challenge the common assumption that slower breathing is always better for mental health. While slow breathing can help reduce anxiety, it may not be the best tool for generating excitement or enthusiasm. High-energy positive states appear to be supported by a more active respiratory pattern.

The authors propose that individuals with a history of depression may rely more heavily on this physiological “ramp-up” to feel good. In healthy individuals, positive emotions might arise more easily without requiring such a strong physiological push. For those in remission, the body may need to work harder to generate the same level of joy.

There are several caveats to consider regarding this research. The study relied on wearable sensors that come in standard sizes. This led to issues with sensor fit for some participants with atypical body proportions. As a result, a portion of the respiratory data had to be excluded to ensure accuracy.

Additionally, the study was observational. It showed that breathing changes predict mood changes, but it cannot definitively prove that breathing causes the mood to change. It is possible that an unmeasured third variable influences both factors. The sample size was also relatively small, which limits how broadly the results can be generalized.

Despite these limitations, the implications for treatment are promising. The study suggests that respiratory patterns could serve as a target for new interventions. Therapies could potentially harness breathing techniques to help individuals with depression access high-energy positive states.

The researchers envision the possibility of “just-in-time” interventions. Wearable devices could monitor a person’s breathing in real time. If the device detects a pattern associated with low mood or disengagement, it could prompt the user to engage in specific breathing exercises. These exercises would be designed to increase ventilation and potentially spark a positive emotional shift.

This approach could be particularly useful for preventing relapse. Since the loss of joy is a major risk factor for the return of depression, finding ways to boost positive affect is a treatment priority. By understanding the physiological precursors of joy, clinicians may be able to offer more precise tools to their patients.

Future research will need to confirm these findings in larger groups. Scientists also need to determine if these patterns hold true for people currently experiencing a major depressive episode. The current study focused only on those in remission. It remains to be seen if the same dynamics apply during the acute phase of the illness.

The study provides a first step toward understanding the dynamic interplay between breath and joy in everyday life. It highlights the importance of looking beyond the laboratory to see how physiology functions in the real world. As technology improves, the ability to monitor and influence these processes will likely expand.

The study, “When breath lifts your mood: Dynamic everyday links between breathing, affect, and emotion regulation in remitted depression,” was authored by Sean A. Minns, Bruna Martins-Klein, Sarah L. Zapetis, Ellie P. Xu, Jiani Li, Gabriel A. León, Margarid R. Turnamian, Desiree Webb, Archita Tharanipathy, Emily Givens, and Jonathan P. Stange.

Attachment anxiety shapes how emotions interfere with self-control

Attachment anxiety shapes how people handle emotional conflict, and brief reminders of security or threat can shift that balance, according to research published in Cognition & Emotion.

Everyday life requires us to focus on what matters while ignoring emotionally distracting information; this is known as emotional conflict control. Previous research shows that people differ in how well they manage this kind of emotional interference, and attachment theory suggests that these differences may stem from how secure or insecure people feel in close relationships. Individuals with anxious attachment, for example, tend to be highly sensitive to emotional cues, whereas avoidantly attached individuals often suppress emotional information in favor of control.

Drawing on the functional neuro-anatomical model of attachment, Mengke Zhang and colleagues conducted two experiments to examine how attachment styles and short-term attachment “priming” experiences relate to emotional conflict control.

In Experiment 1, 225 Chinese undergraduate students completed the Experiences in Close Relationships questionnaire, which assesses the two core dimensions of adult attachment: attachment anxiety and attachment avoidance. Participants then completed an emotional face-word Stroop task that required them to identify whether a face displayed a happy or fearful expression while ignoring a word superimposed on the face.

These words varied in emotional valence and in whether they were related to close relationships, allowing the task to generate emotional conflict when facial expressions and words conveyed mismatched emotional information.

Performance on the Stroop task was used to index emotional interference, with slower or less accurate responses on emotionally incongruent trials indicating greater difficulty resolving conflict between emotional and task-relevant information.
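A bare-bones version of that interference index, computed from hypothetical trial-level data (the column names are invented), might look like this:

```python
import pandas as pd

# Hypothetical trial-level data: participant ID, congruent coded 1 and
# incongruent coded 0, reaction time in ms, and accuracy (1 = correct).
trials = pd.read_csv("stroop_trials.csv")

# Standard practice: average reaction times over correct trials only.
correct = trials[trials["correct"] == 1]
mean_rt = (correct.groupby(["participant", "congruent"])["rt_ms"]
                  .mean()
                  .unstack())

# Interference = incongruent RT minus congruent RT; larger values indicate
# more difficulty resolving the emotional conflict.
interference = mean_rt[0] - mean_rt[1]
print(interference.head())
```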

The second experiment extended this approach by examining situational influences on emotional conflict control. A separate sample of 185 undergraduates first completed the same attachment questionnaire and baseline mood ratings, then completed a brief writing-based priming task. Participants were randomly assigned to recall either a supportive attachment-related experience (attachment security priming), a distressing attachment-related experience (attachment threat priming), or a neutral interpersonal memory.

Following the priming manipulation, participants reported their momentary sense of attachment security or insecurity as well as changes in positive and negative emotions. They also completed a modified version of the emotional face-word Stroop task using attachment-related words only. This design allowed the researchers to test whether temporary shifts in attachment-related feelings altered emotional conflict control beyond individuals’ baseline attachment styles.

Across both experiments, attachment anxiety consistently emerged as the most important individual difference shaping emotional conflict control.

In the first experiment, individuals higher in attachment anxiety showed greater emotional interference on the Stroop task, particularly when distracting words were positive in emotional tone. This pattern suggests that anxiously attached individuals were more likely to have their attention drawn toward emotionally salient information, making it harder to suppress distractions and focus on the task at hand.

Attachment avoidance, in contrast, was not reliably associated with reduced emotional interference, indicating that the emotional demands of the face-word Stroop task may overwhelm avoidant individuals’ typical tendency to disengage from emotional material.

The second experiment showed that attachment security priming successfully increased participants’ immediate sense of attachment security, but it did not lead to uniform improvements in emotional control. Instead, among individuals high in attachment anxiety, greater feelings of security were associated with increased emotional interference, suggesting that security cues may heighten emotional engagement rather than dampen it for those who are chronically sensitive to relationship concerns. For individuals lower in attachment anxiety, security priming had little effect on emotional interference.

Attachment threat priming produced a different pattern. Compared to the neutral condition, threat priming reduced emotional interference overall, indicating improved emotional conflict control. This effect was especially pronounced among individuals low in attachment anxiety, who showed clear reductions in interference following threat cues.

Among individuals high in attachment anxiety, threat priming worked indirectly; increased feelings of attachment insecurity were associated with reduced emotional interference, suggesting that threat cues may shift attention away from emotional evaluation and toward cognitive control in this group.

Of note is that the study relied on undergraduate samples and laboratory-based tasks, which may limit how well the findings generalize to other populations or to real-world emotional challenges.

The study, “Attachment styles and attachment (in)security priming in relation to emotional conflict control,” was authored by Mengke Zhang, Song Li, Xinyi Liu, Qingting Tang, Qing Li, and Xu Chen.

Study reports associations between infants’ head growth patterns and risk of autism

A study of infants during their first year of life conducted in Israel found that children with consistently small or large head circumferences had around three times higher odds of being diagnosed with autism compared to infants whose head circumference was consistently medium. These odds were 6–10 times higher in the 5% of infants with the smallest head circumferences and the 5% of infants with the largest head circumferences. The research was published in Autism Research.

Autism, or autism spectrum disorder, is a neurodevelopmental condition characterized by differences in social communication, social interaction, and patterns of behavior, interests, or sensory processing. It is described as a spectrum because the type and intensity of characteristics vary widely between individuals.

Autism typically emerges in early childhood, although it may be formally diagnosed later in life. Researchers have investigated ways to detect autism in early childhood, and some studies suggested that abnormal head growth patterns in infancy may be associated with a subsequent diagnosis of autism.

Other studies have reported that children later diagnosed with autism spectrum disorder sometimes have very small heads at birth, followed by a period of accelerated growth of the head during infancy. There is some evidence that such an accelerated pace of head growth might begin before birth.

Study author Rewaa Balaum and her colleagues wanted to explore the relationship between head growth patterns during the first year of life and a later diagnosis of autism. They conducted a longitudinal study examining developmental trajectories of head circumference and height.

Study participants included 262 children with autism and 560 non-autistic children born in the Negev, southern Israel, between 2014 and 2017. Their head circumference and height data during the first year of life were available in the databases of mother-child health clinics operated by the Israeli Ministry of Health.

Seventy-eight percent of participating children were boys, and 77% were Jewish. The ethnic groups living in the Negev are mainly Jews and Bedouin Arabs. Children with autism were less likely to come from families of high socioeconomic status compared to the control group. They also tended to have somewhat lower weight at birth (3.24 kg vs 3.32 kg) and somewhat lower head circumference (34.18 cm vs 34.88 cm).

Head circumference and height measurements of these infants were taken on multiple occasions during their first year of life. Using these data, study authors grouped participating infants into seven categories based on their head growth trajectories.

These trajectories were: infants with consistently small heads; infants with medium head circumference throughout infancy; infants with consistently large heads; infants whose head circumference increased from small to medium; those whose heads increased from medium to large; infants whose heads were large in the early days but decreased to medium by the end of the first year; and those whose heads were medium at birth but decreased to small near the end of the first year.

Results showed that infants with consistently large and consistently small heads were the most likely to be diagnosed with autism later. Their odds of being diagnosed with autism were around three times higher compared to infants with consistently medium-sized heads. These odds were 6–10 times higher in the 5% of infants with the smallest heads and the 5% of infants with the largest head circumferences.
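For readers unfamiliar with how such figures are derived, the sketch below computes an odds ratio and its confidence interval from a 2x2 table; the counts are invented for illustration and are not the study’s data.

```python
import numpy as np

# Invented counts (not the study's data):
# rows = head-growth group, columns = later autism diagnosis yes/no.
a, b = 30, 70     # consistently small or large heads: autism, no autism
c, d = 40, 280    # consistently medium heads: autism, no autism

odds_ratio = (a / b) / (c / d)                 # = 3.0 with these counts
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)     # standard error of log(OR)
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")
```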

Crucially, the researchers found that these head growth patterns were strongly linked to height. Children with atypical head sizes also tended to have atypical heights. The highest risk for autism was observed in children who had both atypical head size and atypical height, rather than those with isolated head growth issues.

“Our findings suggest that the reported associations between atypical head growth during infancy and ASD [autism spectrum disorder] may be attributed to broader physical growth anomalies. This conclusion highlights the importance of a multifaceted, longitudinal examination of such anthropometric measures in studies of child development,” the study authors concluded.

The study contributes to the scientific understanding of autism. However, it should be noted that the study only tracked children through the first year of life, so it remains unknown whether these growth patterns continue beyond that period. It also remains unknown how well these findings generalize to populations outside southern Israel.

The paper, “Head Growth Trajectories During the First Year of Life and Risk of Autism Spectrum Disorder,” was authored by Rewaa Balaum, Leena Elbedour, Einav Alhozyel, Gal Meiri, Dikla Zigdon, Analya Michaelovski, Orly Kerub, and Idan Menashe.

A common enzyme linked to diabetes may offer a new path for treating Alzheimer’s

A protein long implicated in diabetes and obesity may hold the key to treating Alzheimer’s disease by reinvigorating the brain’s immune system. New research suggests that blocking this protein, known as PTP1B, allows immune cells to clear toxic waste more effectively and restores cognitive function in mice. The findings were published in the Proceedings of the National Academy of Sciences.

Alzheimer’s disease is characterized by the accumulation of sticky protein clumps called amyloid-beta. These plaques disrupt communication between brain cells and are widely believed to drive memory loss and neurodegeneration. The brain relies on specialized immune cells called microglia to maintain a healthy environment. In a healthy brain, microglia locate and engulf toxic clumps like amyloid-beta through a process called phagocytosis.

However, in patients with Alzheimer’s, these immune cells often become lethargic. They fail to keep up with the accumulating waste, allowing plaques to spread. Scientists have struggled to find ways to safely reactivate these cells without causing damaging inflammation.

There is a growing body of evidence linking Alzheimer’s to metabolic disorders. Conditions like type 2 diabetes are well-established risk factors for dementia. This connection led researchers to investigate a specific enzyme called protein tyrosine phosphatase 1B, or PTP1B.

This enzyme acts as a brake on signaling pathways that control how cells use energy and respond to insulin. Nicholas K. Tonks, a professor at Cold Spring Harbor Laboratory who discovered PTP1B in 1988, led the investigation along with graduate student Yuxin Cen. They hypothesized that PTP1B might be preventing microglia from doing their job.

To test this theory, the team used a mouse model genetically engineered to develop Alzheimer’s-like symptoms. These mice, known as APP/PS1 mice, typically develop amyloid plaques and memory deficits as they age. The researchers created a group of these mice that lacked the gene responsible for producing PTP1B. When these mice reached an age where memory loss typically begins, the researchers assessed their cognitive abilities.

The mice lacking the enzyme performed better on memory tests than the standard Alzheimer’s mice. One test involved a water maze where mice had to remember the location of a hidden platform. The mice without PTP1B found the escape route faster, indicating superior spatial learning. Another test measured how much time mice spent exploring a new object versus a familiar one. The genetically modified mice showed a clear preference for the new object, a sign of intact recognition memory.

The team also tested a drug designed to inhibit PTP1B to see if pharmacological intervention could mimic the genetic deletion. They administered a compound called DPM1003 to older mice that had already developed plaques. After five weeks of treatment, these mice showed similar improvements in memory and learning. This suggested that blocking the enzyme could reverse existing deficits and was not just a preventative measure.

Next, the investigators examined the brains of the animals to understand the biological changes behind these behavioral improvements. They used staining techniques to visualize amyloid plaques. Both the mice lacking the PTP1B gene and those treated with the inhibitor had considerably fewer plaques in the hippocampus. This region of the brain is essential for forming new memories.

To understand how the plaques were being cleared, the researchers analyzed the gene activity in individual brain cells. They performed single-cell RNA sequencing to look at the genetic profiles of thousands of cells. They found that PTP1B is highly expressed in microglia. When the enzyme was absent, the microglia shifted into a unique state.

These cells began expressing genes associated with the consumption of cellular debris. This state is often referred to as “disease-associated microglia,” or DAM. While the name sounds negative, this profile indicates cells that are primed to respond to injury. The lack of PTP1B appeared to push the microglia toward this beneficial, cleaning-focused phenotype.

The researchers then isolated microglia in a dish and exposed them to amyloid-beta to observe their behavior directly. Cells lacking PTP1B were much more efficient at swallowing the toxic proteins. “Over the course of the disease, these cells become exhausted and less effective,” says Cen. “Our results suggest that PTP1B inhibition can improve microglial function, clearing up Aβ plaques.”

The study revealed that this boost in activity was powered by a change in cellular metabolism. Phagocytosis is an energy-intensive process. The immune cells without PTP1B were able to ramp up their energy production to meet this demand. They increased both their glucose consumption and their oxygen use.

This metabolic surge was driven by the PI3K-AKT-mTOR signaling pathway, a well-known cellular circuit that regulates cell growth, metabolism, and survival. In the absence of PTP1B, this pathway remained active, providing the fuel necessary for the microglia to function.

Finally, the team identified the specific molecular switch that PTP1B controls to regulate this process. They found that the enzyme directly interacts with a protein called spleen tyrosine kinase, or SYK. SYK is a central regulator that tells microglia to activate and start eating. PTP1B normally removes phosphate groups from SYK, which keeps the kinase in an inactive state.

When PTP1B is removed or inhibited, SYK becomes overactive. This triggers a cascade of signals that instructs the cell to produce more energy and engulf amyloid. The researchers confirmed this by adding a drug that blocks SYK to the cells. When SYK was blocked, the benefits of removing PTP1B disappeared, and the microglia stopped clearing the plaque. This proved that PTP1B works by suppressing SYK.

The researchers utilized a “substrate-trapping” technique to confirm this direct interaction. They created a mutant version of PTP1B that can grab onto its target protein but cannot let go. This allowed them to isolate the PTP1B enzyme and see exactly what it was holding. They found it was bound tightly to SYK, confirming the direct relationship between the two proteins.

While these results are promising, the study was conducted in mice. Animal models mimic certain aspects of Alzheimer’s pathology but do not perfectly replicate the human disease. Future research will need to determine if similar metabolic and immune pathways are active in human patients. Additionally, PTP1B regulates many systems in the body, so widespread inhibition must be tested for safety.

The researchers are now interested in developing inhibitors that can specifically target the brain to minimize potential side effects. The Tonks lab is working to refine these compounds for potential clinical use. Tonks envisions a strategy where these inhibitors are used alongside existing treatments. “The goal is to slow Alzheimer’s progression and improve quality of life of the patients,” says Tonks. “Using PTP1B inhibitors that target multiple aspects of the pathology, including Aβ clearance, might provide an additional impact,” adds co-author Steven R. Alves.

The study, “PTP1B inhibition promotes microglial phagocytosis in Alzheimer’s disease models by enhancing SYK signaling,” was authored by Yuxin Cen, Steven R. Alves, Dongyan Song, Jonathan Preall, Linda Van Aelst, and Nicholas K. Tonks.

Blood test might detect Parkinson’s disease years before physical symptoms appear

A new analysis of gene expression in blood samples suggests that specific biological signs of Parkinson’s disease are detectable years before physical symptoms appear. These molecular signatures, related to how cells repair DNA and handle stress, seem to fade once the disease is fully established. The findings were published in npj Parkinson’s Disease.

Parkinson’s disease is traditionally diagnosed only after significant brain damage has occurred, typically manifesting as tremors, stiffness, and slowness of movement. Scientists have long sought ways to identify the condition during the “prodromal” phase. This phase represents a period when internal biological changes are happening, but the classic motor symptoms have not yet surfaced. Identifying the disease at this stage is a major goal for medical science because it offers a potential window for early intervention.

Danish Anwer, a doctoral student at the Department of Life Sciences at Chalmers University of Technology in Sweden, led a team to investigate whether these early internal changes could be tracked in the blood. The research team operated on the hypothesis that the body’s genetic instructions for repairing DNA might be overactive or dysregulated early in the disease process.

Dopamine-producing neurons in the brain are high-energy cells that naturally produce toxic byproducts during their activity. These byproducts can damage DNA, requiring a robust repair system to keep the cells healthy.

The researchers theorized that in the early stages of Parkinson’s, these repair systems might be working overtime to save the dying cells. If this activity could be detected in the blood, it would serve as an early warning system. To test this, they needed to look at how these biological processes change over time rather than just taking a single snapshot.

The research team utilized data from the Parkinson’s Progression Markers Initiative, a large-scale observational study that tracks the evolution of the disease. They analyzed blood samples collected over a period of up to three years. The study included 188 healthy individuals to serve as a control group.

In addition to the healthy controls, the study analyzed 393 patients who had already been diagnosed with established Parkinson’s disease. Crucially, the researchers also included 58 individuals in the prodromal phase. These are people who do not yet have the motor symptoms of Parkinson’s but exhibit early warning signs such as REM sleep behavior disorder or loss of smell.

The researchers used a technique called RNA sequencing to look at the activity levels of thousands of genes in these blood samples. While DNA is the instruction manual, RNA is the message that tells the cell what to do at any given moment. By sequencing the RNA, the team could see which genes were being turned on or off.

They specifically examined genes responsible for three key biological pathways. The first was mitochondrial DNA repair, which maintains the energy generators of the cell. The second was nuclear DNA repair, which protects the main genetic code. The third was the integrated stress response, a safety mechanism cells use to handle dangerous conditions.

To analyze this vast amount of data, the team employed machine learning algorithms known as logistic regression classifiers. These computer models were trained to distinguish between the different groups based on their gene expression profiles. The researchers assessed how accurately these models could identify a person as healthy, prodromal, or having established Parkinson’s based solely on their blood data.
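In outline, such a classifier might look like the following sketch, assuming a hypothetical expression matrix already restricted to the repair and stress-response genes (the file names are invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical inputs: a samples-by-genes expression matrix restricted to
# DNA-repair and stress-response genes; labels 1 = prodromal, 0 = control.
X = np.load("repair_gene_expression.npy")
y = np.load("group_labels.npy")

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Cross-validated area under the ROC curve as the accuracy measure.
auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"mean AUC = {auc.mean():.2f}")
```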

The investigation revealed that gene activity related to DNA repair and stress responses could accurately distinguish prodromal individuals from healthy controls. The models achieved high accuracy in identifying those in the early, pre-symptomatic stages. The accuracy of these predictions tended to improve as the participants moved closer to the typical time of diagnosis.

In contrast, these same gene patterns could not effectively separate patients with established Parkinson’s disease from healthy people. This suggests that the molecular signals are strong and distinct during the early development of the disease but quiet down later. Once the disease is clinically apparent, the gene expression in the blood appears to return to a state similar to that of healthy individuals.

The researchers observed that gene expression in the prodromal group was highly variable at the beginning of the study. Over the course of two to three years, this variability decreased significantly. This pattern indicates that the body initially mounts a chaotic or intense effort to repair cellular damage. As the disease progresses, this protective response appears to burn out or fail.

This concept was further supported by the observation of non-linear patterns in gene activity. About half of the DNA repair genes did not simply increase or decrease in a straight line. Instead, they followed complex trajectories, rising and then falling, or vice versa. This suggests a dynamic and transient biological struggle occurring before the onset of motor symptoms.

The study highlighted specific genes that were particularly predictive of the prodromal state. These included ERCC6 and NEIL2, both of which are involved in fixing damage to DNA. ERCC6 is known to be important for repairing active genes and is linked to conditions involving premature aging. NEIL2 helps repair damage caused by oxidative stress, which is a known factor in the death of dopamine neurons.

Another notable gene identified was NTHL1. This gene showed high importance as a predictor early in the prodromal phase. However, its relevance declined sharply as time passed. This decline supports the theory that specific repair mechanisms are recruited early on but eventually become overwhelmed or inactivated as the neurodegeneration advances.

The team also compared these specific stress and repair genes against broader sets of genes usually associated with Parkinson’s disease. They found that the repair and stress response genes were superior at identifying the prodromal phase. This indicates that general Parkinson’s risk genes might be less useful for tracking the active disease process in its earliest stages compared to these specific repair pathways.

The inability of the models to distinguish established Parkinson’s from controls is a significant finding. It implies that by the time a patient sees a doctor for tremors, the systemic battle in the blood has largely subsided. This highlights a limited temporal window where blood tests based on these markers would be effective.

There are limitations to this research that should be considered when interpreting the results. Blood samples serve as a proxy and do not always perfectly reflect what is happening inside the brain. It is possible that the signals detected in the blood are distinct from the specific degeneration occurring in central nervous system cells. The changes in the blood might reflect a systemic response to the disease rather than the direct brain pathology.

Additionally, the sample size for the prodromal group was relatively small compared to the other groups. While the statistical methods used were robust, larger studies will be necessary to confirm these patterns. The researchers also noted that external factors like medication could influence gene expression in established patients, potentially masking some signals.

The researchers did not perform functional tests to see if the changes in RNA levels resulted in changes in actual protein levels or cellular function. Gene expression is only the first step in protein production. Future studies will need to bridge the gap between these genetic signals and the actual cellular machinery.

Despite these limitations, the study provides evidence that the prodromal phase of Parkinson’s is biologically distinct from the established phase. It suggests that the body fights the disease aggressively in the beginning. This insight could help in the design of clinical trials by allowing researchers to select patients who are in this active, early phase.

The research team aims to understand exactly how these early repair mechanisms work and why they eventually fail. Developing these findings into a practical blood test for clinical use will require further testing and regulatory approval. The scientists estimate that such a test could potentially begin trials in healthcare settings within five years.

The study, “Longitudinal assessment of DNA repair signature trajectory in prodromal versus established Parkinson’s disease,” was authored by Danish Anwer, Nicola Pietro Montaldo, Elva Maria Novoa-del-Toro, Diana Domanska, Hilde Loge Nilsen, and Annikka Polster.

Narcissistic students perceive student-professor flirting as less morally troubling

New research suggests that a college student’s level of narcissism plays a role in how they perceive and participate in flirtatious interactions with their professors. The findings indicate that students with high levels of grandiose narcissism are more likely to report flirting with faculty and believe faculty are flirting back, whereas those with vulnerable narcissism tend to perceive such behavior as common among their peers but not within their own interactions. The study was published in The Journal of Social Psychology.

The dynamics of student-professor relationships have long been a subject of concern within higher education. While most interactions remain professional, sexual or romantic engagements do occur and can lead to serious consequences. These include lawsuits, conflicts of interest, and the erosion of a safe learning environment.

Despite the gravity of these issues, there has been very little empirical research into which individual personality traits might predict the initiation of such behaviors. Previous research from the early 1980s suggested that a significant portion of students had flirted with professors, but modern data on the psychological drivers behind these actions has been sparse.

“While researchers are often interested in how narcissism influences behavior within academia, previous research has focused on academic success (e.g., GPA) and/or academic misconduct (e.g., cheating),” explained study author Braden T. Hall, a PhD student at the University of Alabama.

“However, flirting between students and professors is a real-world problem with serious consequences (e.g., damage to reputation, severe power imbalances, damage to academic integrity, lawsuits, etc.), and no research has examined the types of students that may be more likely to engage in such behavior, perceive such behavior from their professors, or perceive such behavior as prevalent on their campus and/or less morally inappropriate.”

Narcissism is generally understood as a personality trait characterized by a sense of entitlement and self-importance. However, psychologists recognize two distinct forms: grandiose and vulnerable.

Grandiose narcissism is associated with boldness, charm, and a desire for admiration. Vulnerable narcissism involves similar entitlement but is coupled with insecurity, anxiety, and a sense of victimization. The research team proposed that these two types of narcissism would manifest differently regarding academic flirting.

The researchers hypothesized that grandiose individuals would be bold enough to flirt personally, while both types would view the behavior as more acceptable and prevalent among others. To test their hypotheses, the researchers recruited 233 undergraduate psychology students from the University of Alabama.

The sample was predominantly female and white, with an average age of 19. Participants began by completing the Five-Factor Narcissism Inventory – Short Form, a standardized measure designed to assess levels of both grandiose and vulnerable narcissism. This allowed the team to score each participant on the specific dimensions of the personality trait.

The core of the study involved a detailed assessment of flirting behaviors. To ensure the behaviors listed were relevant, the researchers first conducted a pilot study to identify actions that students and faculty agreed constituted flirting. This resulted in a list of 12 specific behaviors for classroom settings, such as complimenting appearance, and 12 for office settings, such as sitting on a desk. Importantly, these behaviors were designed to be mild to moderate in nature rather than explicit sexual harassment or coercion.

Participants reviewed these behaviors and provided frequency estimates across several different scenarios. They rated how often they engaged in these behaviors toward professors and how often professors engaged in them toward the students. They also provided estimates for how often they believed their peers engaged in these behaviors with professors. Finally, the students rated the moral appropriateness of the behaviors. The researchers used statistical models to analyze how narcissism scores predicted these frequency estimates and moral judgments.
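The paper’s analysis code is not published, but the kind of model described, regressing reported flirting frequency on the two narcissism dimensions, can be sketched as follows; the file and column names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-participant data: grandiose and vulnerable narcissism
# scores plus mean self-reported flirting frequency toward professors.
df = pd.read_csv("narcissism_flirting.csv")

fit = smf.ols("flirt_freq ~ grandiose + vulnerable", data=df).fit()
print(fit.summary())  # positive grandiose coefficient = more reported flirting
```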

The results provided evidence that narcissism influences how students view academic boundaries. Students with higher levels of grandiose narcissism reported engaging in flirting behavior with professors more frequently. They also reported that professors flirted with them more often.

This pattern was consistent regardless of whether the interaction took place in a classroom or an office. This finding aligns with the profile of grandiose narcissists as individuals who seek attention, lack fear of social rejection, and may view themselves as exceptionally attractive or desirable to authority figures.

The findings for vulnerable narcissism were distinct. Students scoring high in vulnerable narcissism did not report higher frequencies of flirting with professors themselves. This is likely due to the social anxiety and fear of rejection that characterizes this form of narcissism. Although they may desire special treatment, the risk of awkwardness or dismissal likely inhibits them from acting on those desires.

However, vulnerable narcissism did predict how students viewed the behavior of others. High levels of vulnerable narcissism were associated with the belief that peers were frequently flirting with professors and that professors were flirting with peers. This suggests a cynical worldview where these students believe others are getting ahead through manipulative or immoral means, even if they are not doing so themselves.

When it comes to moral judgment, both forms of narcissism showed similar patterns. Higher levels of both grandiose and vulnerable narcissism were associated with viewing student-professor flirting as less inappropriate.

While the average student in the study viewed these behaviors as generally inappropriate, narcissistic students were more tolerant of them. This aligns with previous research suggesting that narcissism is linked to “moral disengagement,” or the tendency to excuse unethical behavior when it serves one’s interests or matches one’s worldview.

“Most of the effects of narcissism we found were medium-to-large, so these effects seem robust, and the effects of grandiose narcissism were consistent across contexts (e.g., classroom and offices), suggesting that these effects are due to trait-level differences rather than situations,” Hall told PsyPost.

The study also revealed general trends regarding the context of these interactions. Participants tended to view flirting as less inappropriate when it occurred in a classroom compared to a private office. The researchers suggest this might be because classroom interactions are public and may be interpreted as trying to be entertaining or engaging, whereas private office interactions imply a higher level of intimacy and potential for misconduct.

“Flirting between students and professors, while oftentimes seemingly benign, can be misinterpreted and have serious consequences in academic settings,” Hall explained. “The present study offers novel insight into the types of students (grandiose and vulnerable narcissistic students) who are more likely to see this behavior as less morally troubling and believe that flirting between students and professors is more typical. Additionally, we draw an important distinction wherein only grandiose narcissistic students are more likely to see flirting as typical of themselves.”

But it is important to contextualize these findings within the broader scope of the data. The average frequency estimates for flirting were low across the board. This means that while narcissistic students reported more flirting than their less narcissistic counterparts, the absolute reported frequency was still relatively rare.

Most students do not flirt with professors, and most view it as wrong. The study does not suggest that universities are overrun with flirtatious exchanges, but rather that when they do occur, specific personality traits are likely involved.

Even the more narcissistic students “did not rate flirting between students and professors as appropriate, just less inappropriate,” Hall noted.

As with all research, there are also some limitations to consider. The research relied entirely on self-reported data. It is possible that grandiose narcissistic students merely believe they are flirting or being flirted with due to their inflated ego, rather than accurately reporting reality. The study was also cross-sectional, meaning it captured a snapshot in time and cannot definitively prove that narcissism causes the behavior, only that they are related.

Additionally, the sample was drawn from a large state university in the southeastern United States. “It would be interesting to see if these effects replicate at smaller universities where students and professors may have closer one-on-one relationships, which may lend itself to stronger effects,” Hall said.

The study, “‘Your desk or mine?’: narcissism predicts student-professor flirting frequency and perceptions of its appropriateness,” was authored by Braden T. Hall, William Hart, Joshua T. Lambert, and Bella C. Roberts.

Evolutionary psychology’s “macho” face ratio theory has a major flaw

For years, evolutionary psychologists and biologists have investigated the idea that the shape of a man’s face can predict his behavior. A specific measurement known as the facial width-to-height ratio has garnered attention as a potential biological billboard for aggression and dominance. A new comprehensive analysis, however, challenges the validity of this metric.

The research suggests that this specific ratio is not a reliable marker of sexual difference. Instead, the study points toward a simpler measurement that may hold the key to understanding facial evolution. These findings were published in the journal Evolution and Human Behavior.

The human face is a complex landscape that conveys biological information to others. We instinctively look at faces to judge health, age, and emotion. Beyond these immediate signals, researchers have hypothesized that facial structure reveals deeper evolutionary traits. The primary metric used to test this is the facial width-to-height ratio, often abbreviated as fWHR. To get this number, a researcher measures the distance between the cheekbones and divides it by the distance between the brow and the upper lip.
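In concrete terms, the ratio reduces to two landmark distances. The sketch below shows one way to compute it; the landmark names are hypothetical, and as discussed later in the article, studies differ on which “height” landmarks they use.

```python
import numpy as np

def fwhr(landmarks):
    """Facial width-to-height ratio from 2D landmark coordinates.

    Width: distance between the left and right cheekbone (zygion) points.
    Height: mid-brow to upper lip, one of several definitions in use.
    """
    width = np.linalg.norm(landmarks["zygion_r"] - landmarks["zygion_l"])
    height = np.linalg.norm(landmarks["brow_mid"] - landmarks["upper_lip"])
    return width / height

# Example with made-up pixel coordinates (x, y).
face = {
    "zygion_l": np.array([110.0, 240.0]),
    "zygion_r": np.array([310.0, 240.0]),
    "brow_mid": np.array([210.0, 170.0]),
    "upper_lip": np.array([210.0, 330.0]),
}
print(fwhr(face))  # 200 / 160 = 1.25
```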

The prevailing theory has been that men with wider, shorter faces possess higher levels of testosterone and are more formidable. Previous studies have linked a high ratio in men to aggressive behavior in sports and financial success in business. The underlying assumption is that this facial structure evolved because it signaled a competitive advantage to potential mates or rivals. This concept relies on the existence of sexual dimorphism, which is the condition where the two sexes of the same species exhibit different characteristics.

Despite the popularity of this theory, the scientific evidence has been inconsistent. Some studies find a strong link between the ratio and masculine traits, while others find no connection at all. A major issue in past research is the inconsistent definition of the ratio itself. Different scientists measure the height of the face using different landmarks, such as the eyelids, the brow, or the hairline. Furthermore, many studies fail to account for the overall size of the person.

To address these inconsistencies, a team of researchers led by Alex L. Jones from the School of Psychology at Swansea University conducted a rigorous re-examination of the evidence. The team included Tobias L. Kordsmeyer, Robin S.S. Kramer, Julia Stern, and Lars Penke. They aimed to apply a more sophisticated statistical approach to determine if the facial width-to-height ratio is truly a sexually dimorphic trait. They also sought to determine if simple facial width might be a more accurate signal of biological differences than the ratio.

The researchers utilized a statistical method known as Bayesian inference. This approach differs from traditional statistics by incorporating prior knowledge into the analysis. It allows researchers to estimate the probability of a hypothesis being true given the available data. This contrasts with standard methods that often focus solely on whether a specific result is statistically significant. The team argues that Bayesian models are better suited for understanding subtle biological patterns because they can simulate data and quantify uncertainty.

In their first study, the group analyzed facial photographs of 1,949 individuals drawn from nine different datasets. The sample included 818 men and 1,131 women from various Western countries. The researchers used computer software to automatically place landmarks on the facial images. This ensured that the measurements were consistent across all photographs. They calculated the width-to-height ratio using five different common definitions of facial height to see if the measurement method mattered.

Crucially, the team controlled for body size in their statistical model. They adjusted the data for both height and weight. This is a vital step because men are generally larger than women. Without this control, a feature might appear to be a specific facial signal when it is actually just a byproduct of having a larger body. The researchers also defined a “region of practical equivalence.” This is a statistical tool used to determine if a difference is large enough to matter in the real world.
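To illustrate how a region of practical equivalence works in a Bayesian analysis, the sketch below checks what share of a posterior distribution for the sex difference falls inside the ROPE. The posterior samples and ROPE bounds here are hypothetical, not the study's actual estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical posterior samples for the standardized male-female
# difference in fWHR (as if drawn from a fitted Bayesian model).
posterior_diff = rng.normal(loc=-0.02, scale=0.03, size=10_000)

# A ROPE of +/- 0.1 standard deviations: differences this small are
# treated as practically equivalent to zero.
rope_low, rope_high = -0.1, 0.1
inside = np.mean((posterior_diff > rope_low) & (posterior_diff < rope_high))

# A high proportion inside the ROPE (e.g., above 95%) supports the
# conclusion that the difference is effectively zero in practice.
print(f"Proportion of posterior inside the ROPE: {inside:.1%}")
```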

The results of this first analysis contradicted the popular evolutionary theory. When controlling for height and weight, the researchers found that men did not have a larger width-to-height ratio than women. In fact, the model showed a small tendency for women to have a larger ratio. However, this difference was so minute that it fell within the region of practical equivalence. This means the difference was effectively zero for any practical purpose.

The study also revealed that the ratio is heavily influenced by general body geometry. The researchers found that as a person’s height increases, their facial width-to-height ratio tends to decrease. Conversely, as body weight increases, the ratio tends to increase. This suggests that previous findings linking the ratio to aggression might have actually been detecting differences in body mass index rather than specific facial architecture. The researchers argue that the ratio is not a standalone signal of masculinity.

Following these results, the team conducted a second study focusing solely on the width of the face. This measurement is known technically as bizygomatic width. It is the distance between the two zygions, or the outermost points of the cheekbones. The researchers hypothesized that raw width might be the sexually selected trait that earlier scientists were trying to capture with the ratio.

For this second analysis, they examined the same large dataset of photographs. They also analyzed a smaller subset of 305 individuals for whom they had detailed measurements of upper body size. This included shoulder width, chest girth, and arm girth. This allowed them to test if facial width is connected to muscularity and physical strength, which are key components of evolutionary dominance.

The findings for facial width were starkly different from those for the ratio. The Bayesian analysis showed a very high probability that men have wider faces than women. This held true even when the researchers adjusted for height and weight. The difference was substantial, amounting to roughly half a standard deviation.

When the researchers looked at the smaller group and controlled for upper body size, the distinction became even clearer. The model indicated that men's face width exceeds women's by almost two standard deviations. The analysis suggested that an individual man has a 99.9 percent probability of having a wider face than a woman of similar body composition. This indicates that facial width is a robust, sexually dimorphic trait.

The authors propose that the evolutionary signal is driven by the lateral growth of the cheekbones. During puberty, male faces tend to grow wider, a process likely driven by testosterone. This growth trajectory aligns with the development of other skeletal features associated with physical formidability. The study implies that the horizontal width of the face is a reliable indicator of physical size and strength.

There are caveats to this research. The study relied on static two-dimensional photographs. This method cannot capture the dynamic nature of facial expressions or the three-dimensional structure of the skull as effectively as medical imaging. Additionally, the samples were primarily from Western populations. It is possible that facial metrics vary across different ethnic groups and environments. Future research would need to verify these findings in more diverse global populations.

The researchers also noted that facial perception is complex. While physical measurements provide hard data, human social interaction relies on how these features are perceived. It remains to be seen if the human brain specifically attends to raw width when making judgments about dominance or threat. The current study focuses on the physical reality of the face rather than the psychological processing of it.

This research represents a methodological correction for the field of evolutionary psychology. By using advanced Bayesian statistics and proper body size controls, the authors have dismantled a widely held belief about the facial width-to-height ratio. They argue that the ratio is likely a statistical artifact rather than a meaningful biological signal.

The shift in focus toward bizygomatic width offers a clearer path for future investigation. If facial width is the true signal of formidability, previous studies on aggression and leadership may need to be re-evaluated. The authors suggest that researchers should move away from the ratio and focus on simple width in future work. This simplification may lead to more consistent and replicable results in the study of human evolution.

The study, “Updating evidence on facial metrics: A Bayesian perspective on sexual dimorphism in facial width-to-height ratio and bizygomatic width,” was authored by Alex L. Jones, Tobias L. Kordsmeyer, Robin S.S. Kramer, Julia Stern, and Lars Penke.

Reduction in PTSD symptoms linked to better cognitive performance in new study of veterans

A study of U.S. veterans found that their episodic visual memory, motor learning, and sustained visual attention improved after treatment for PTSD. The magnitude of these improvements was associated with PTSD symptom reduction. However, there were no differences in the effects of the two treatments applied – cognitive processing therapy and Sudarshan Kriya yoga. The paper was published in the Journal of Traumatic Stress.

Post-traumatic stress disorder (PTSD) is a mental health condition that can develop after a person experiences or witnesses a psychologically traumatizing event usually involving actual or threatened death, serious injury, or sexual violence. Such events include war and combat, physical or sexual assault, severe accidents, natural disasters, or sudden loss of a loved one.

Symptoms of PTSD include persistent intrusive memories or flashbacks, nightmares, avoidance of reminders of the trauma, negative changes in mood or beliefs, and heightened arousal, such as irritability or hypervigilance. These symptoms last longer than one month and cause significant distress or impairment in daily functioning.

PTSD is common among military veterans, first responders, refugees, and survivors of violence, but it can occur in anyone exposed to trauma. However, not everyone who experiences trauma develops PTSD, as individual vulnerability, prior experiences, and social support play important roles. PTSD often co-occurs with depression, anxiety disorders, or substance use problems.

Study author Zulkayda Mamat and her colleagues wanted to explore the changes in cognitive functioning of U.S. veterans after treatment for PTSD. They hypothesized that cognitive function would improve after treatment across domains known to be impaired in PTSD. These include attention, working memory, episodic memory, information processing speed, and executive functioning. They further hypothesized that these improvements would be proportional to the degree of improvement in PTSD symptoms.

The researchers recruited 85 U.S. veterans with clinically significant PTSD symptoms, ten of whom were women. Participants were randomly divided into two groups, and 62 completed both the baseline and post-treatment cognitive assessments. The average age of participants in the final analysis was roughly 58 in the cognitive processing therapy group and 61 in the yoga group.

Participants in the first group were assigned to undergo a type of trauma-focused therapy called cognitive processing therapy (CPT). The other group was to undergo Sudarshan Kriya yoga (SKY). The CPT group had two 1-hour sessions per week for 6 weeks, for a total of 12 hours. The yoga group started with a 5-day intensive workshop that lasted 3 hours per day, followed by twice-weekly group sessions for 6 weeks that added roughly 25 more hours, for approximately 40 hours of total contact time.

Cognitive processing therapy is a structured, evidence-based psychotherapy for post-traumatic stress disorder that helps individuals identify and modify unhelpful beliefs related to trauma in order to reduce distress and improve functioning. Sudarshan Kriya yoga is a structured breathing-based practice originating from yogic traditions that uses specific rhythmic breathing patterns to reduce stress, regulate emotions, and support mental well-being.

Before and after the treatments, participants completed assessments of PTSD symptom severity (using CAPS-5), depression (Beck Depression Inventory-II), and cognitive functioning (tests from the CANTAB battery). The cognitive functioning assessment looked into participants’ episodic visual memory and learning; visual, movement, and comprehension difficulties; visual sustained attention; and working memory and strategy use.

The results showed that the cognitive functioning of participants from both groups improved after both treatments. More specifically, participants showed moderate improvements in visual memory, motor learning, and visual sustained attention. However, performance in spatial working memory declined in both groups.

The magnitude of improvements was similar in the two groups – there were no significant differences between participants who underwent cognitive processing therapy and those who participated in yoga workshops regarding the magnitude of cognitive improvements. Changes in overall cognitive functioning were associated with PTSD symptom reduction across the full sample. However, exploratory analyses indicated that this correlation was statistically significant only within the cognitive processing therapy group, not the yoga group.

“Regardless of treatment, cognitive function improved alongside PTSD symptom reduction. These findings provide evidence that treating PTSD not only alleviates PTSD symptoms but may also improve associated cognitive function,” the study authors concluded.

The study contributes to the scientific knowledge about PTSD treatment. However, it should be noted that the study did not include a passive control group (such as a waitlist), and cognitive improvements were observed equally in both groups. Therefore, while the two treatments appeared equally effective, it remains unclear whether the cognitive improvements resulted strictly from the treatments or from other processes not considered in the study, such as practice effects or the natural passage of time.

The paper, “Cognition improvement in U.S. veterans undergoing treatment for posttraumatic stress disorder: Secondary analyses from a randomized controlled trial,” was authored by Zulkayda Mamat, Danielle C. Mathersul, and Peter J. Bayley.

Scientists reveal the alien logic of AI: hyper-rational but stumped by simple concepts

A new study suggests that artificial intelligence systems approach strategic decision-making with a higher degree of mathematical optimization than human players, often outperforming humans in games requiring iterative reasoning. While these large language models demonstrate an ability to adapt to complex rules and specific competitive scenarios, they differ fundamentally from human cognition by failing to identify certain logical shortcuts known as dominant strategies. The findings appear in the Journal of Economic Behavior and Organization.

Large language models are advanced artificial intelligence systems designed to process and generate text based on vast datasets. These models are increasingly integrated into economic workflows, ranging from market analysis to automated negotiation agents. As these tools become more prevalent in settings that involve social interaction and competition, it becomes necessary to understand how their decision-making processes compare to human behavior.

Previous psychological and economic research indicates that humans often rely on bounded rationality, meaning their strategic thinking is limited by cognitive capacity and time. Iuliia Alekseenko, Dmitry Dagaev, Sofiia Paklina, and Petr Parshakov conducted this study to determine if artificial intelligence mirrors these human limitations or operates with a distinct form of logic. The authors are affiliated with HSE University, the University of Lausanne, and the New Economic School.

“This study was motivated by a growing debate about whether large language models can meaningfully serve as substitutes for human decision-makers in economic and behavioral research. While recent work has shown that LLMs can replicate outcomes in some classic experiments, it remains unclear how they reason strategically and whether their behavior truly resembles human bounded rationality,” the researchers told PsyPost.

“We focused on the beauty contest game because it is one of the most extensively studied tools for measuring strategic thinking and iterative reasoning in humans, with decades of experimental evidence across different populations and settings. This made it an ideal benchmark for a direct comparison between human behavior and AI-generated decisions.”

“More broadly, we were motivated by a real-world concern: AI systems are increasingly used in strategic environments such as markets, forecasting, and negotiation. Understanding whether AI models reason like humans, better than humans, or simply differently is crucial for predicting how they may influence outcomes when interacting with people.”

The researchers utilized a classic game theory experiment known as the “beauty contest” or “Guess the Number” game. In this game, participants simultaneously choose an integer between 0 and 100. The winner is the player whose chosen number is closest to a specific fraction of the average of all chosen numbers.

A common version sets the target at two-thirds of the average. If all players chose numbers randomly, the average would be 50, and the target would be 33. A sophisticated player anticipates this and chooses 33. If all players are equally sophisticated, they will all choose 33, making the new target 22. This reasoning process repeats iteratively until it reaches 0, which is the theoretical Nash equilibrium.
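This iterated logic can be traced with a short loop. Each level of reasoning multiplies the previous guess by the target fraction, and the sequence converges toward the Nash equilibrium of zero:

```python
# Level-k reasoning in the two-thirds beauty contest game.
# A level-0 player guesses at random (average 50); each deeper level
# best-responds to the level below it by multiplying by 2/3.
target_fraction = 2 / 3
guess = 50.0

for level in range(1, 11):
    guess *= target_fraction
    print(f"level {level}: guess {guess:.2f}")

# The sequence 33.3, 22.2, 14.8, ... approaches 0, the Nash equilibrium.
```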

To test the capabilities of artificial intelligence, the authors employed five prominent large language models: GPT-4o, GPT-4o-Mini, Gemini-2.5-flash, Claude-Sonnet-4, and Llama-4-Maverick. The researchers replicated sixteen distinct scenarios from classic behavioral economics papers. These scenarios varied the number of players, the target fraction, and the aggregation method used to determine the winner.

The study gathered 50 responses from each model for every scenario to ensure statistical reliability. The temperature parameter for the models was fixed at 1.0 to allow for variability similar to a diverse group of human participants.
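A data-collection protocol along these lines could be scripted as follows. This is a hypothetical sketch using the OpenAI Python client; the prompt wording and response parsing are illustrative, not the authors' actual materials:

```python
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

# Illustrative prompt; the study's real scenarios varied the rules.
PROMPT = (
    "You are playing a game with other players. Each player picks an "
    "integer between 0 and 100. The winner is whoever is closest to "
    "two-thirds of the average of all chosen numbers. "
    "Reply with your chosen number only."
)

guesses = []
for _ in range(50):  # 50 responses per scenario, as in the study
    response = client.chat.completions.create(
        model="gpt-4o",
        temperature=1.0,  # fixed at 1.0 to allow human-like variability
        messages=[{"role": "user", "content": PROMPT}],
    )
    # Real runs would need more robust parsing of the model's reply.
    guesses.append(int(response.choices[0].message.content.strip()))

print(f"mean guess: {sum(guesses) / len(guesses):.2f}")
```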

The study first replicated an experiment originally conducted by Rosemarie Nagel in 1995. The artificial agents played a version of the game where the target was either one-half or two-thirds of the average. In the scenario where the target was one-half, human participants typically chose numbers averaging around 27.

The artificial intelligence models consistently chose lower numbers. For example, the Llama model averaged a guess of 2.00, while Claude Sonnet averaged 12.72. This pattern persisted in the two-thirds variation. While humans averaged 36.73, the models provided mean guesses ranging from 2.80 to 22.24. This suggests that the models engaged in more steps of iterative reasoning than the average human participant.

The researchers also replicated a study by Duffy and Nagel from 1997 to see how the models handled different winning criteria. In this set of experiments, the winner was determined by being closest to one-half of the median, mean, or maximum of the chosen numbers. Human players tend to choose higher numbers when the target is based on the maximum.

The large language models successfully replicated this comparative static. When the target function changed to the maximum, models like Claude Sonnet and GPT-4o shifted their guesses upward significantly. This indicates that the models are capable of recognizing how changes in the rules should theoretically impact the optimal strategy.

A separate set of experiments focused on two-player games, initially studied by Grosskopf and Nagel in 2008. In a two-player game where the target is two-thirds of the average, choosing 0 is a weakly dominant strategy. This means that choosing 0 is never worse than any other option and is often better.

Despite this mathematical certainty, the models failed to identify the dominant strategy explicitly. The researchers analyzed the reasoning text generated by the models and found no instances where a model correctly explained the concept of a dominant strategy in this context. While the models played low numbers, they arrived at their decisions through probabilistic reasoning rather than by solving the game logically.

“Two things stood out,” the researchers said. “First, we were surprised by how consistently AI models behaved more strategically than humans across very different experimental settings. Second, and more unexpectedly, even the most advanced models failed to explicitly identify a simple dominant strategy in a two-player game, revealing an important gap between sophisticated-looking reasoning and basic game-theoretic logic.”

“Across many settings, AI models behaved much more strategically than humans, often choosing values far closer to the theoretical benchmark, which would meaningfully alter outcomes in real strategic interactions. At the same time, these effects highlight differences rather than superiority, since AI also shows clear limitations in recognizing simple dominant strategies.”

The researchers further investigated whether models could simulate specific human traits, replicating work by Brañas-Garza and colleagues. The prompts were adjusted to describe the artificial agent as having either high or low cognitive reflection scores. When instructed to act as an agent with high cognitive reflection, the models chose lower numbers. When instructed to act as an agent with low cognitive reflection, they chose higher numbers.

This alignment matches the behavioral patterns observed in actual human subjects. The models demonstrated a similar ability to simulate emotional states. When prompted to experience anger, the models chose higher numbers, mirroring findings from Castagnetti and colleagues that showed anger inhibits deep strategic reasoning in humans.

The researchers also examined the effect of model size on performance using the Llama family of models. They tested versions of the model ranging from 1 billion to 405 billion parameters. A clear correlation emerged between model size and strategic behavior.

The smaller models produced guesses that deviated substantially from the Nash equilibrium, often matching or exceeding human averages. The largest models produced results much closer to zero. This implies that as artificial intelligence systems scale in complexity, their behavior in strategic settings tends to converge toward the theoretical mathematical optimum rather than typical human behavior.

“A key takeaway is that modern AI systems can reason strategically and adapt to different situations, but they do not think in the same way humans do,” the researchers told PsyPost. “In our experiments, AI models consistently behaved in a more strategic and calculation-driven manner than people, even compared to well-educated or expert human participants.”

“At the same time, the study shows that AI reasoning is not simply a more advanced version of human reasoning. Despite their sophistication, the models failed to identify a basic dominant strategy in a simple two-player game, highlighting important limitations and blind spots.”

“For the average reader, this means that AI decisions should not be interpreted as direct predictions of human behavior. When AI systems are used in settings that involve judgment, competition, or social interaction, they may push outcomes in directions that differ from what we would expect if only humans were involved.”

There are some limitations to the study’s findings. The artificial agents were not playing for real financial incentives, which is a standard component of behavioral economics experiments with humans. The absence of a tangible reward could influence the depth of reasoning the models employ. Additionally, the study relied on specific phrasing in the prompts to simulate the experimental conditions. While robustness checks with paraphrased prompts showed consistent results, the models exhibited some sensitivity to how the task was framed.

“A common misinterpretation would be to conclude that AI thinks like humans or can be used as a direct proxy for human decision-making,” the researchers noted. “Our results show that while AI can perform well in strategic tasks, its reasoning patterns differ in important ways, and these differences can meaningfully affect outcomes. The key caveat is that strong performance in a task does not necessarily imply human-like cognition.”

“Our next step is to extend this approach to a wider set of strategic games that capture different cognitive demands, such as coordination, cooperation, and dominance reasoning. Ultimately, our goal is to build a systematic benchmark that compares human and AI behavior across multiple economic and psychological games, allowing researchers to better understand where AI aligns with human reasoning and where it diverges.”

The study, “Strategizing with AI: Insights from a beauty contest experiment,” was authored by Iuliia Alekseenko, Dmitry Dagaev, Sofiia Paklina, and Petr Parshakov.

Self-kindness leads to a psychologically rich life for teenagers, new research suggests

New research suggests that teenagers who practice kindness toward themselves are more likely to experience a life filled with variety and perspective-changing events. The findings indicate that specific positive mental habits can predict whether an adolescent develops a sense of psychological richness over time. These results were published in the journal Applied Psychology: Health and Well-Being.

To understand this study, one must first understand that happiness is not a single concept. Traditional psychology often divides a good life into two categories. The first is hedonic well-being, which focuses on feeling pleasure and being satisfied. The second is eudaimonic well-being, which centers on having a sense of purpose and meaning.

However, researchers have recently identified a third type of good life known as psychological richness. A psychologically rich life is characterized by complex mental experiences and a variety of novel events. It is not always comfortable or happy in the traditional sense. Instead, it is defined by experiences that shift a person’s perspective and deepen their understanding of the world.

Adolescence is a specific time when young people are exploring their identities and facing new academic and social challenges. This developmental stage is ripe for cultivating psychological richness because teenagers are constantly encountering new information. The authors of the current study wanted to know what internal tools help adolescents turn these challenges into a rich life rather than a stressful one.

The investigation was led by Yuening Liu and colleagues from Shaanxi Normal University in China. They focused their attention on the concept of self-compassion. This is often described as treating oneself with the same warmth and understanding that one would offer to a close friend.

Self-compassion is not a single trait but rather a system of six distinct parts. Three of these parts are positive, or compassionate. They include self-kindness, mindfulness, and a sense of common humanity.

Self-kindness involves being supportive of oneself during failures. Mindfulness is the ability to observe one’s own pain without ignoring it or exaggerating it. Common humanity is the recognition that suffering is a shared part of the human experience.

The other three parts are negative, or non-compassionate. These include self-judgment, isolation, and over-identification. Self-judgment refers to being harshly critical of one’s own flaws. Isolation is the feeling that one is the only person suffering. Over-identification happens when a person gets swept up in their negative emotions.

Previous research has linked self-compassion to general happiness, but the link to psychological richness was unclear. The researchers hypothesized that the positive components of self-compassion would act as an engine for psychological richness. They also predicted that the negative components would stall this growth.

To test this, the team recruited 528 high school students from western China. The participants ranged in age from 14 to 18 years old. The study was longitudinal, meaning the researchers collected data at more than one point in time.

The students completed detailed surveys at the beginning of the study. They answered questions about how they treated themselves during difficult times. They also rated statements regarding how psychologically rich they felt their lives were.

Four months later, the students completed the same surveys again. This time gap allowed the researchers to see how feelings and behaviors shifted over the semester. It moved the analysis beyond a simple snapshot of a single moment.

The team used a statistical technique called cross-lagged panel network analysis. This method allows scientists to map out psychological traits like a weather system. It shows which traits are the strongest predictors of future changes in other traits.
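At its core, a cross-lagged analysis regresses each trait measured at the second time point on the traits measured at the first, so each predictive path controls for the others. The sketch below shows one such path with simulated scores standing in for the real survey data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 528  # sample size matching the study

# Simulated standardized scores standing in for the survey data.
kindness_t1 = rng.normal(size=n)
richness_t1 = 0.3 * kindness_t1 + rng.normal(size=n)
richness_t2 = 0.4 * richness_t1 + 0.25 * kindness_t1 + rng.normal(size=n)

# Cross-lagged path: does T1 self-kindness predict T2 richness
# over and above T1 richness (the autoregressive control)?
X = np.column_stack([richness_t1, kindness_t1])
model = LinearRegression().fit(X, richness_t2)

print(f"autoregressive path (richness T1 to T2): {model.coef_[0]:.2f}")
print(f"cross-lagged path (kindness T1 to richness T2): {model.coef_[1]:.2f}")
```

The full network analysis estimates many such paths at once, which is how it can show which traits act as central hubs.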

The results revealed a clear distinction between the positive and negative aspects of self-compassion. The analysis showed that self-kindness was a strong predictor of psychological richness four months later. Students who were kind to themselves reported lives that were more interesting and perspective-changing at the second time point.

Mindfulness also emerged as a significant positive predictor. Adolescents who could observe their difficult emotions with balance were more likely to experience growth in psychological richness. These two traits acted as central hubs in the network.

The study suggests that these positive traits help teenagers process their experiences more effectively. When a student faces a setback, self-kindness may prevent them from shutting down. This openness allows them to learn from the event, adding to the complexity and richness of their worldview.

On the other hand, the researchers found that self-judgment negatively predicted psychological richness. Students who criticized themselves harshly tended to view their lives as less rich over time. This suggests that strict self-criticism may cause teenagers to avoid new challenges.

Isolation also showed a negative connection to future psychological richness. This makes theoretical sense because psychological richness often comes from interacting with diverse viewpoints. If a student feels isolated, they are cut off from the social exchanges that expand their perspective.

The network analysis also revealed how the different parts of self-compassion interact with each other. The researchers found that isolation at the first time point predicted higher self-judgment later on. This indicates a negative cycle where feeling alone leads to being harder on oneself.

Conversely, there was a positive feedback loop between the compassionate components. Self-kindness predicted higher levels of mindfulness in the future. In turn, being mindful predicted higher levels of self-kindness.

These findings support a theory known as the “well-being engine model.” This model suggests that certain personality traits act as inputs that drive positive mental outcomes. In this case, self-kindness and mindfulness serve as the fuel that powers a psychologically rich life.

The results also align with the “bottom-up theory” of well-being. This theory posits that overall well-being comes from the balance of positive and negative daily experiences. Self-compassion appears to help adolescents balance these experiences so that negative events do not overwhelm them.

By regulating their emotions through self-kindness, teenagers can remain open to the world. They can accept uncertainty and change, which are key ingredients for a rich life. Without these tools, they may become rigid or fearful.

The study highlights potential targets for helping adolescents improve their mental health. Interventions that specifically teach self-kindness could be very effective. Teaching students to be mindful of their distress could also yield long-term benefits.

There are some limitations to this research that should be noted. The study relied entirely on self-reports from the students. People do not always view their own behaviors accurately.

Additionally, the study was conducted exclusively with Chinese adolescents. Cultural differences can influence how people experience concepts like self-compassion and well-being. The results might not be exactly the same in other cultural contexts.

The time frame of four months is also relatively short. Adolescence spans many years, and developmental changes can be slow. Future research would benefit from tracking students over a longer period.

The researchers also noted that while they found predictive relationships, this does not strictly prove causation. Other unmeasured factors could influence both self-compassion and psychological richness. Experimental studies would be needed to confirm a direct cause-and-effect link.

Despite these caveats, the study offers a detailed look at the mechanics of adolescent well-being. It moves beyond the idea that self-compassion is a single, undifferentiated trait. Instead, it shows that specific habits, like being kind to oneself, have specific outcomes.

The distinction between simply being happy and having a rich life is important for educators and parents. A teenager might not always be cheerful, but they can still be developing a deep and complex understanding of life. This research suggests that self-compassion is a vital resource for that developmental journey.

The study, “Longitudinal relationship between self-compassion and psychological richness in adolescents: Evidence from a network analysis,” was authored by Yuening Liu, Kaixin Zhong, Ao Ren, Yifan Liu, and Feng Kong.

Borderline personality disorder in youth linked to altered brain activation during self-identity processing

A new neuroimaging study suggests that adolescents with borderline personality disorder exhibit distinct patterns of brain activity when reflecting on their own identity. The findings indicate that these young patients show reduced activation in the dorsolateral prefrontal cortex, a region associated with cognitive control, compared to healthy peers. This research was published in Translational Psychiatry.

Borderline personality disorder is a serious mental health condition. It is characterized by pervasive instability in moods, interpersonal relationships, self-image, and behavior. A central feature of this disorder is a disturbed sense of identity. Individuals often experience shifting goals, values, and vocational aspirations. This instability can manifest early in the course of the disorder.

Many previous studies have investigated the biological roots of the condition. Most of this research has focused on emotional dysregulation rather than identity disturbance. Existing functional imaging studies have typically involved adult patients. These adult participants often have a history of medication use or co-occurring psychiatric conditions. These factors can make it difficult to determine which brain abnormalities are specific to borderline personality disorder itself.

To address this gap, a research team led by Pilar Salgado-Pineda from the FIDMAG Germanes Hospitalàries Research Foundation in Barcelona designed a study focusing on adolescents. They specifically sought participants who were in the early stages of the disorder. The team aimed to identify brain regions involved in the identity disturbance seen in the disorder. They focused on a developmental period that is critical for the formation of social cognition and self-concept.

The researchers recruited 27 female adolescents diagnosed with borderline personality disorder. These participants were between the ages of 12 and 18. Crucially, none of the patients had ever received pharmacological treatment for their condition. They were also screened to ensure they did not have any other comorbid psychiatric disorders.

For the control group, the researchers recruited 28 healthy female adolescents. These controls were matched to the patients in terms of age and estimated intelligence quotient. The strict selection criteria aimed to minimize confounding factors such as drug treatment and long-term illness effects.

The participants underwent functional magnetic resonance imaging. This technology measures brain activity by detecting changes associated with blood flow. While inside the scanner, the participants performed a task designed to engage self-reflection and reflection on others.

The task involved viewing a series of statements. Participants were asked to evaluate whether these statements were true or false. The statements belonged to one of three categories. The first category was the “self” condition, consisting of sentences about the participant. The second was the “other” condition, which involved sentences about an acquaintance the participant knew but was not emotionally close to.

The third category was a “facts” condition. This served as a control task and included general knowledge statements. The researchers also included a low-level baseline period where participants simply looked at a fixation cross on the screen. This design allowed the researchers to isolate brain activity specific to thinking about oneself and thinking about others.

The researchers analyzed the brain imaging data by comparing activation patterns between the different conditions. They specifically looked at the contrast between self-reflection and fact-processing. They also examined the contrast between other-reflection and fact-processing.

The analysis revealed differences in the group with borderline personality disorder during the self-reflection task. When comparing self-reflection to fact-processing, the healthy controls showed activation in several specific brain areas. These included the medial frontal cortex and the dorsolateral prefrontal cortex.

In contrast, the patients with borderline personality disorder showed reduced activation in the right dorsolateral prefrontal cortex. The patients also exhibited reduced activation in the left parietal cortex, the calcarine cortex, and the right precuneus.

The researchers conducted further analyses to understand the direction of these changes. They examined the activity levels in these regions relative to the fixation baseline. This revealed that while healthy controls activated the right dorsolateral prefrontal cortex during self-reflection, the patient group actually showed deactivation in this area.

The dorsolateral prefrontal cortex is widely recognized for its role in executive functions. It is heavily involved in top-down cognitive control. The authors suggest that the reduced activation in this region may reflect a diminished capacity for cognitive control over the process of self-reflection.

The study also examined brain activity during the other-reflection task. The results showed a different pattern of abnormality. When comparing other-reflection to fact-processing, the patient group appeared to show reduced activation in the medial frontal cortex. This region is part of the default mode network.

However, a detailed inspection of the data offered a nuanced explanation. The difference was not due to how the patients processed information about others. Instead, it was driven by a difference in the fact-processing condition. The healthy controls showed strong deactivation of the medial frontal cortex during the fact task. The patients failed to deactivate this region to the same extent.

The researchers interpret this specific finding as a failure of deactivation rather than a deficit in social cognition. This suggests that the brain mechanisms for thinking about others may be relatively preserved in these adolescents. The abnormality lay in the inability to suppress certain brain networks during a factual cognitive task.

The study notably found no differences between the groups in the temporoparietal junction. This brain region is known to be involved in understanding the beliefs of others. The lack of difference here implies that some aspects of social cognition might function normally in adolescents with the disorder.

There are limitations to this study that contextualize the findings. The sample included only female participants. Borderline personality disorder is diagnosed more frequently in females, but it does affect males. The findings may not extend to male adolescents with the condition.

The sample size was relatively small, with fewer than 30 participants in each group. Neuroimaging studies often require larger samples to detect subtle effects reliably. The strict exclusion criteria also limit generalizability. Most people with borderline personality disorder have other mental health conditions. Studying a “pure” sample helps isolate biological mechanisms but may not reflect the typical clinical population.

The study also relied on a specific experimental task to measure self-reflection. While this task is established in the field, it serves as an indirect measure of identity disturbance. The researchers did not include a behavioral measure of identity problems to correlate with the brain data.

Future research is needed to replicate these findings in larger and more diverse groups. Longitudinal studies could be particularly informative. Tracking adolescents over time would help clarify whether these brain activity patterns predict the worsening or improvement of symptoms as they enter adulthood.

The study, “Brain functional abnormality in drug naïve adolescents with borderline personality disorder during self- and other-reflection,” was authored by Pilar Salgado-Pineda, Marc Ferrer, Natàlia Calvo, Juan D. Duque-Yemail, Xavier Costa, Àlex Rué, Violeta Pérez-Rodriguez, Josep Antoni Ramos-Quiroga, Cristina Veciana-Verdaguer, Paola Fuentes-Claramonte, Raymond Salvador, Peter J. McKenna, and Edith Pomarol-Clotet.

Biological sex influences how blood markers reflect Alzheimer’s severity

A new study suggests that a promising blood test for Alzheimer’s disease may need to be interpreted differently depending on whether the patient is male or female. The researchers found that for the same concentration of a specific protein in the blood, men exhibited more severe brain damage and cognitive decline than women. These findings were published in the journal Molecular Psychiatry.

Diagnosing Alzheimer’s disease has historically been a difficult and expensive process. Physicians currently rely on a combination of subjective memory tests and invasive or costly biological measures. The most accurate biological tools available today involve positron emission tomography, known as PET scans, or lumbar punctures to analyze cerebrospinal fluid.

PET scans use radioactive tracers to visualize plaques and tangles in the brain, while lumbar punctures require inserting a needle into the lower back to collect fluid for analysis. Because these methods are not easily scalable for routine screening, the medical community has sought a blood-based biomarker that could indicate the presence and severity of neurodegeneration without the need for specialized equipment or invasive procedures.

One of the most promising candidates for such a test is neurofilament light chain, often abbreviated as NfL. This protein acts as a structural component within the axons of neurons, functioning much like a skeleton to provide support and shape to the nerve cells. When neurons are damaged or die due to neurodegenerative diseases, this internal structure breaks down. The neurofilament light chain proteins are then released into the cerebrospinal fluid and eventually make their way into the bloodstream.

Elevated levels of NfL in the blood serve as a signal that injury to the brain’s cellular network is occurring. While the potential of NfL as a diagnostic tool is widely recognized, its clinical application is hindered by a lack of standardized reference ranges. Doctors do not yet have a universal set of numbers to define what constitutes a normal or abnormal level across different demographic groups.

Xiaoqin Cheng, alongside Fang Xie and Peng Yuan from Fudan University in Shanghai, sought to determine if biological sex influences how these protein levels correlate with the actual severity of the disease. Previous research regarding sex differences in NfL levels has produced inconsistent results. Some studies suggested no difference between men and women, while others indicated variations in specific genetic cases. Cheng and colleagues aimed to clarify this relationship by examining whether a specific amount of NfL in the blood reflects the same amount of brain damage in men as it does in women.

The research team began their investigation by analyzing data from the Alzheimer’s Disease Neuroimaging Initiative, a large, long-running study based in North America. They selected 860 participants who had available data on plasma NfL levels, brain imaging, and cognitive assessments.

This group included people with normal cognition, mild cognitive impairment, and diagnosed dementia. The researchers used statistical models to look for interactions between sex and NfL levels regarding their effect on clinical symptoms. They controlled for variables such as age, education, and genetic risk factors to isolate the effect of sex.
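In regression terms, such a test fits a model in which the slope of NfL on the outcome is allowed to differ by sex. The sketch below uses the statsmodels formula interface with hypothetical variable names and a hypothetical data file, not the study's actual code:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset with one row per participant.
# Columns assumed: mmse (cognition), nfl (plasma NfL), sex ('M'/'F'),
# age, education; the real analysis also adjusted for genetic risk.
df = pd.read_csv("adni_subset.csv")  # hypothetical file

model = smf.ols("mmse ~ nfl * sex + age + education", data=df).fit()

# The nfl-by-sex interaction term estimates how much the NfL slope
# differs between men and women; a reliable interaction means the
# same NfL level implies different severity depending on sex.
print(model.summary())
```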

The analysis revealed a distinct divergence between men and women. The researchers observed that as NfL levels rose, men experienced a much steeper decline in cognitive function compared to women with similar protein increases.

When the researchers looked at specific cognitive tests, such as the Clinical Dementia Rating or the Mini-Mental State Examination, they found that a unit increase in NfL predicted a steeper drop in performance for male participants. This pattern suggested that the male brain might be more vulnerable to the neurodegenerative processes associated with these elevated protein markers.

To understand the physical changes driving these cognitive differences, the team examined brain scans of the participants. They looked at magnetic resonance imaging data to measure the volume of specific brain regions critical for memory and thinking. The results showed that for every unit increase in plasma NfL, men displayed a greater reduction in the volume of the hippocampus, a brain structure essential for forming new memories.

The team also analyzed metabolic activity in the brain using glucose PET scans. These scans measure how much energy brain cells are consuming, which is a proxy for how healthy and active they are. Men showed more severe hypometabolism, or reduced brain energy use, than women at comparable levels of plasma NfL.

To ensure these results were not specific to one demographic or geographic population, the authors attempted to replicate their findings in a completely different group of people. They utilized the Chinese Preclinical Alzheimer’s Disease Study, a cohort consisting of 619 individuals.

Despite differences in ethnicity and genetic background between the American and Chinese cohorts, the fundamental finding remained the same. In this second group, men again showed more prominent functional and structural deterioration associated with rising NfL levels compared to women. A third, smaller public dataset was also analyzed, which confirmed the pattern once more.

The study also investigated whether this sex difference was unique to neurofilament light chain or if it applied to other Alzheimer’s biomarkers. They repeated their analysis using two other blood markers: phosphorylated tau 181, which is linked to the tangles found in Alzheimer’s brains, and glial fibrillary acidic protein, a marker of brain inflammation. Neither of these markers showed the same sex-dependent effect. This specificity suggests there is a unique biological mechanism linking NfL levels to disease severity that differs between males and females.

The authors also explored the predictive power of the biomarker over time. Using longitudinal data, they tracked how quickly patients progressed from mild impairment to full dementia. The statistical models indicated that an increase in plasma NfL levels was predictive of a faster cognitive decline and a higher likelihood of disease progression in men compared to women. This implies that a high NfL test result in a male patient might warrant a more urgent prognosis than the same result in a female patient.

While the study establishes a correlation, the biological reasons behind this discrepancy remain a subject for future investigation. The researchers propose several hypotheses. One possibility involves the blood-brain barrier, the protective filter that separates the brain’s circulatory system from the rest of the body.

If the blood-brain barrier in men becomes more permeable or dysfunctional during Alzheimer’s disease than in women, it could alter how NfL is released into the blood. Another potential explanation involves microglia, the immune cells of the brain. Sex differences in how these cells react to injury and inflammation could influence the rate of neurodegeneration and the subsequent release of neurofilament proteins.

There are limitations to the study. The cognitive tests used to assess participants can have subjective elements, although the researchers attempted to mitigate this by using composite scores. Additionally, while the statistical methods used to predict disease progression were robust, the sample size for the survival analysis was relatively small, and validation in larger cohorts will be necessary. The authors also note that the mechanism remains theoretical and requires direct testing in laboratory settings to confirm exactly why male physiology reacts differently.

This research highlights a significant need for precision in how blood biomarkers are developed and used. If these findings are further validated, it suggests that using a single cutoff value for plasma NfL to screen for Alzheimer’s disease may be insufficient.

Instead, clinicians may need to use sex-specific reference ranges to accurately assess the level of neurodegeneration in a patient. As the medical field moves closer to routine blood tests for dementia, accounting for biological sex will be essential to ensure that both men and women receive accurate diagnoses and appropriate care.

The study, “Plasma neurofilament light reflects more severe manifestation of Alzheimer’s disease in men,” was authored by Xiaoqin Cheng, Zhenghong Wang, Kun He, Yingfeng Xia, Ying Wang, Qihao Guo, Fang Xie, and Peng Yuan.

The surprising way the brain’s dopamine-rich reward center adapts as a romance matures

A new study published in the journal Social Cognitive and Affective Neuroscience provides evidence that the human brain processes romantic partners differently than close friends, specifically within the reward system. The research suggests that while the brain creates a unique neural signature for a partner early in a relationship, this distinction tends to fade as the bond matures. These findings offer insight into how the biological drivers of romantic love may evolve from passion to companionship over time.

Relationships involve complex psychological states that differentiate a committed partner from a platonic friend. Scientists have sought to map these differences in the brain to understand the biological foundations of human bonding. Much of this research focuses on the nucleus accumbens. This small region deep within the brain, which relies heavily on the neurotransmitter dopamine, plays a central role in processing rewards and motivation.

Evidence from animal studies indicates that the nucleus accumbens is essential for forming pair bonds. Research on monogamous prairie voles shows that neurochemical signaling in this area drives the preference for a specific partner. The brain appears to undergo plastic changes that reinforce the bond.

Human studies have attempted to replicate these findings by comparing brain activity in response to partners versus friends. However, the results have been inconsistent. Some experiments observed higher activity in the nucleus accumbens for partners, while others found no significant difference. This inconsistency might stem from the fact that opposite-sex friends can sometimes be viewed as potential romantic alternatives.

“Romantic relationships are typically characterized by exclusivity, strong commitment, and passionate love, which distinguish them from friendships,” said study author Kenji Fujisaki of the Department of Psychology at Kyoto University.

“We aimed to identify the neural mechanisms that distinguish romantic partners from friends. In addition, as romantic relationships develop, most people experience psychological fluctuations over time, raising the question of how neural processing of a partner may change as a relationship matures. Finally, given prior theory and evidence that opposite-sex friends can sometimes be potential or alternative partners, we were interested in whether the brain represents an opposite-sex friend more similarly to a romantic partner or to a same-sex friend.”

The study involved 47 heterosexual male participants. All participants were between the ages of 20 and 29 and were currently in a romantic relationship. The average length of these relationships was approximately 18 months. The researchers excluded individuals who were married or had children to control for the effects of long-term domestic partnership or parenthood.

To ensure the study captured genuine social bonds, the participants selected their own close friends to be part of the stimuli. They chose a close female friend and a close male friend. These friends, along with the romantic partners, provided short video clips for the experiment.

The researchers used functional magnetic resonance imaging to monitor brain activity while participants engaged in a specific activity called the social incentive delay task. This task is designed to measure the anticipation of a social reward. Participants saw a cue on a screen indicating which person would appear.

After a short delay, a target appeared on the screen for a fraction of a second. Participants had to press a button as quickly as possible. If they were successful, they saw a video clip of their partner, female friend, or male friend smiling and making a positive gesture. These gestures included waving, clapping, or making a “V-sign.”

If the participants were too slow, they saw a neutral expression instead. This design allowed the researchers to isolate the brain activity associated with anticipating social approval from specific people. The team analyzed the imaging data using a technique known as multivoxel pattern analysis.

Standard analysis looks at whether a brain region is “on” or “off.” In contrast, multivoxel pattern analysis examines the specific pattern of activity across many small segments of brain tissue. This allows researchers to see if the “neural fingerprint” for one person differs from another, even if the overall activity level is the same.
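The logic of this kind of decoding can be illustrated with a toy classifier: if a cross-validated model can tell two people's voxel patterns apart above chance, the region carries person-specific information. The sketch below uses simulated data and scikit-learn; it is not the authors' actual pipeline:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(7)

# Simulated data: 40 trials x 100 voxels per condition, with a small
# mean shift standing in for a condition-specific activity pattern.
partner = rng.normal(loc=0.2, size=(40, 100))
friend = rng.normal(loc=0.0, size=(40, 100))

X = np.vstack([partner, friend])
y = np.array([1] * 40 + [0] * 40)  # 1 = partner, 0 = friend

# Cross-validated linear classifier: accuracy above the 50% chance
# level means the multivoxel patterns are distinguishable, even if
# the region's overall activity level is identical.
scores = cross_val_score(SVC(kernel="linear"), X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")
```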

The behavioral results showed that the men were highly motivated by their partners. Participants reacted faster when anticipating a video of their partner compared to either friend. They also rated the videos of their partners as more likeable than those of their friends.

The brain imaging results revealed that the nucleus accumbens encodes the romantic partner in a distinct manner. The computer algorithms used in the analysis successfully differentiated the brain activity patterns associated with the partner from those associated with the female friend. This discrimination was possible across the nucleus accumbens and other related brain structures.

The researchers then assessed the similarity of these neural patterns. They found that in the nucleus accumbens, the representation of the female friend was more similar to the male friend than to the partner. This suggests that the brain categorizes the partner as a unique social entity, distinct from the general category of friendship.

A key finding emerged regarding the duration of the romantic relationships. The researchers analyzed whether the distinctiveness of the partner’s neural signature was related to how long the couple had been together. They observed a negative correlation between relationship length and neural specificity.

Participants who had been in their relationships for a longer time showed less distinct neural differences between their partner and their female friend. In the nucleus accumbens, the unique pattern that separated the partner from the friend appeared to diminish as the relationship length increased. This trend remained statistically significant even after the researchers controlled for self-reported levels of intimacy, passion, and commitment.

These results align with psychological theories describing the trajectory of love. Early stages of romance are often characterized by “passionate love,” which involves intense longing and motivation. This stage likely requires highly specific activity within the brain’s reward system to facilitate bond formation.

As a relationship stabilizes, it often transitions into “companionate love.” This form of love is characterized by deep attachment and friendship. The findings suggest that as this transition occurs, the biological processing of the partner in the reward system becomes less distinguishable from that of a close friend.

This reduction in neural distinctiveness does not imply a decline in the quality of the relationship. It may instead reflect a shift in how the relationship is biologically maintained. The intense, reward-driven signaling required to establish a bond may be less necessary for maintaining a stable, long-term union.

“Our results suggest that the way the brain represents a romantic partner is not fixed, but can evolve as a relationship develops,” Fujisaki told PsyPost. “Early in relationships, a reward-related brain region called the nucleus accumbens showed clearly differentiated activity patterns for a partner compared with an opposite-sex friend. In longer relationships, this neural distinction became less pronounced. This change may reflect a shift from the passionate love characteristic of early-stage relationships toward a more stable, companionate form of love that shares features with close friendship.”

As with all research, there are some limitations. The research relied on cross-sectional data. This means it compared different people at different relationship stages rather than following the same individuals over time. Longitudinal studies would be necessary to confirm that these changes occur within the same person.

The sample consisted entirely of heterosexual males. This decision was made to reduce biological variability in the sample. However, it limits the ability to generalize the findings to women or individuals with different sexual orientations. Future research needs to include more diverse samples to see if these neural patterns are universal.

The study focused primarily on the nucleus accumbens and the dorsal striatum. While these areas are central to reward, other brain regions are involved in social bonding. Areas responsible for emotional regulation or cognitive processing may take on a larger role in long-term relationships.

There is also the potential for misinterpretation regarding the “reduced specificity” finding. “A common misinterpretation would be to assume that reduced neural distinctiveness means that love or relationship quality is declining,” Fujisaki said. “Our findings do not support this conclusion, and the observed pattern should be understood as a group-level tendency that may vary across individuals.”

Future research could focus on identifying these complementary brain systems. It would be valuable to understand what neural mechanisms support enduring bonds once the specific reward processing in the nucleus accumbens becomes less pronounced. Additionally, examining how major life transitions like cohabitation or marriage affect these patterns could provide further insight.

“This study raises a new question: if partner-specific processing in the nucleus accumbens becomes less distinct over time, what neural mechanisms help sustain long-term relationships?” Fujisaki explained. “Moving forward, it would be worth identifying complementary brain systems that support enduring bonds.”

“In addition, further developing this work by examining neural processes underlying cognition and behavior characteristic of romantic relationships, while taking individual differences into account, may deepen our understanding of romantic bonding. Ultimately, this line of research could provide insights that help foster healthier and more satisfying romantic relationships.”

The study, “Reduced neural specificity for a romantic partner in the nucleus accumbens over relationship duration,” was authored by Kenji Fujisaki, Ryuhei Ueda, Ryusuke Nakai, and Nobuhito Abe.

The scientist who predicted AI psychosis has issued another dire warning

More than two years ago, Danish psychiatrist Søren Dinesen Østergaard published a provocative editorial suggesting that the rise of conversational artificial intelligence could have severe mental health consequences. He proposed that the persuasive, human-like nature of chatbots might push vulnerable individuals toward psychosis.

At the time, the idea seemed speculative. In the months that followed, however, clinicians and journalists began documenting real-world cases that mirrored his concerns. Patients were developing fixed, false beliefs after marathon sessions with digital companions. Now, the scientist who foresaw the psychiatric risks of AI has issued a new warning. This time, he is not focusing on mental illness, but on a potential degradation of human intelligence itself.

In a new letter to the editor published in Acta Psychiatrica Scandinavica, Østergaard argues that academia and the sciences are facing a crisis of “cognitive debt.” He posits that the outsourcing of writing and reasoning to generative AI is eroding the fundamental skills required for scientific discovery. The commentary builds upon a growing body of evidence suggesting that while AI can mimic human output, relying on it may physically alter the brain’s ability to think.

Østergaard’s latest writing is a response to a letter by Professor Soichiro Matsubara. Matsubara had previously highlighted that AI chatbots might harm the writing abilities of young doctors and damage the mentorship dynamic in medicine. Østergaard agrees with this assessment but takes the argument a step further. He contends that the danger extends beyond mere writing skills and strikes at the core of the scientific process: reasoning.

The psychiatrist acknowledges the utility of AI for surface-level tasks. He notes that using a tool to proofread a manuscript for grammar is largely harmless. However, he points out that technology companies are actively marketing “reasoning models” designed to solve complex problems and plan workflows. While this sounds efficient, Østergaard suggests it creates a paradox. He questions whether the next generation of scientists will possess the cognitive capacity to make breakthroughs if they never practice the struggle of reasoning themselves.

To illustrate this point, he cites the developers of AlphaFold, an AI program that predicts protein structures. This technology resulted in the 2024 Nobel Prize in Chemistry for researchers from Google DeepMind and the University of Washington.

Østergaard argues that it is not a given that these specific scientists would have achieved such heights if generative AI had been available to do their thinking for them during their formative years. He suggests that scientific reasoning is not an innate talent. It is a skill learned through the rigorous, often tedious practice of reading, thinking, and revising.

The concept of “cognitive debt” is central to this new warning. Østergaard draws attention to a preprint study by Kosmyna and colleagues, titled “Your brain on ChatGPT.” This research attempts to quantify the neurological cost of using AI assistance. The study involved participants writing essays under three conditions: using ChatGPT, using a search engine, or using only their own brains.

The findings of the Kosmyna study provide physical evidence for Østergaard’s concerns. Electroencephalography (EEG) monitoring revealed that participants in the ChatGPT group showed substantially lower brain activation in networks typically engaged during cognitive tasks. The brain was simply doing less work. More alarming was the finding that this “weaker neural connectivity” persisted even when these participants switched to writing essays without AI.

The study also found that those who used the chatbot had significant difficulties recalling the content of the essays they had just produced. The authors of the paper concluded that the results point to a pressing concern: a likely decline in learning skills. Østergaard describes these findings as deeply concerning. He suggests that if AI use indeed causes such cognitive debt, the educational system may be in a difficult position.

This aligns with other recent papers regarding “cognitive offloading.” A commentary by Umberto León Domínguez published in Neuropsychology explores the idea of AI as a “cognitive prosthesis.” Just as a physical prosthesis replaces a limb, AI replaces mental effort. While this can be efficient, Domínguez warns that it prevents the stimulation of higher-order executive functions. If students do not engage in the mental gymnastics required to solve problems, those cognitive muscles may atrophy.

Real-world examples are already surfacing. Østergaard references a report from the Danish Broadcasting Corporation about a high school student who used ChatGPT to complete approximately 150 assignments. The student was eventually expelled. While this is an extreme case, Østergaard notes that widespread outsourcing is becoming the norm from primary school through graduate programs. He fears this will reduce the chances of exceptional minds emerging in the future.

The loss of critical thinking skills is not just a future risk but a present reality. A study by Michael Gerlich published in the journal Societies found a strong negative correlation between frequent AI tool usage and critical thinking abilities. The research indicated that younger individuals were particularly susceptible. Those who frequently offloaded cognitive tasks to algorithms performed worse on assessments requiring independent analysis and evaluation.

There is also the issue of false confidence. A study published in Computers in Human Behavior by Daniela Fernandes and colleagues found that while AI helped users score higher on logic tests, it also distorted their self-assessment. Participants consistently overestimated their performance. The technology acted as a buffer, masking their own lack of understanding. This creates a scenario where individuals feel competent because the machine is competent, leading to a disconnect between perceived and actual ability.

This intellectual detachment mirrors the emotional detachment Østergaard identified in his earlier work on AI psychosis. In his previous editorial, he warned that the “sycophantic” nature of chatbots—their tendency to agree with and flatter the user—could reinforce delusions. A user experiencing paranoia might find a willing conspirator in a chatbot, which confirms their false beliefs to keep the conversation going.

The mechanism is similar in the context of cognitive debt. The AI provides an easy, pleasing answer that satisfies the immediate need of the user, whether that need is emotional validation or a completed homework assignment. In both cases, the human user surrenders their agency to the algorithm. They stop testing reality or their own logic against the world, preferring the smooth, frictionless output of the machine.

Østergaard connects this loss of human capability to the ultimate risks of artificial intelligence. He cites Geoffrey Hinton, a Nobel laureate in physics often called the “godfather of AI.” Hinton has expressed concerns that there is a significant probability that AI could threaten humanity’s existence within the next few decades. Østergaard argues that facing such existential threats requires humans who are cognitively adept.

If the population becomes “cognitively indebted,” reliant on machines for basic reasoning, the ability to maintain control over those same machines diminishes. The psychiatrist emphasizes that we need humans in the loop who are capable of independent, rigorous thought. A society that has outsourced its reasoning to the very systems it needs to regulate may find itself ill-equipped to handle the consequences.

The warning is clear. The convenience of generative AI comes with a hidden cost. It is not merely a matter of students cheating on essays or doctors losing their writing flair. The evidence suggests a fundamental change in how the brain processes information. By skipping the struggle of learning and reasoning, humans may be sacrificing the very cognitive traits that allow for scientific advancement and independent judgment.

Østergaard was correct when he flagged the potential for AI to distort reality for psychiatric patients. His new commentary suggests that the distortion of our intellectual potential may be a far more widespread and insidious problem. As AI tools become more integrated into daily life, the choice between cognitive effort and cognitive offloading becomes a defining challenge for the future of human intelligence.

The paper, “Generative Artificial Intelligence (AI) and the Outsourcing of Scientific Reasoning: Perils of the Rising Cognitive Debt in Academia and Beyond,” was published January 21, 2026.

Support for banning hate speech tends to decrease as people get older

An analysis of data from the 2019-2024 New Zealand Attitudes and Values Study revealed that support for free speech decreased over this period across all age groups. In contrast, there was little change in the level of support for restricting hate speech. The research was published in Political Psychology.

Free speech is the right of individuals to express ideas, opinions, and information without undue interference or punishment from authorities. It includes spoken words, writing, art, protest, and other forms of expression. Free speech allows people to criticize those in power and hold governments accountable. It supports the search for truth by allowing competing ideas to be debated openly.

Free speech protects individual autonomy by respecting people as thinking agents capable of forming their own views. In democratic societies, free speech enables informed voting and meaningful public participation. It helps minorities and marginalized groups voice their experiences and challenge dominant narratives. Without free speech, fear and conformity tend to replace creativity and innovation. For these reasons, free speech is widely seen as a foundation of free, pluralistic, and resilient societies.

In spite of this, some argue that the right to free speech should be restricted at least in some cases. Traditionally, arguments for this have been focused on maintaining social order and reducing security risk. However, in recent decades, arguments in favor of restricting free speech as a way to protect marginalized groups have become more common. Offensive or disparaging speech targeting groups based on race, religion, gender, or sexuality has generated tensions between the support for free speech and the need to promote social inclusion of these groups.

Study author Maykel Verkuyten and his colleagues wanted to examine the contributions of age, time period, and generation of birth to changes in attitudes toward free speech and hate speech restrictions in New Zealand. They note that support for free speech in New Zealand is likely to be high, but that minority group members might be more supportive of hate speech restriction than majority members because they are typically the target of speech that denigrates their ethnic or racial identity.

These authors analyzed data from the New Zealand Attitudes and Values Study collected between 2019 and 2024. The analysis included data from 50,662 participants who answered the relevant questions at least once across the five annual assessments conducted in this period.

The respondents provided the data used in this analysis by rating on a scale from 1 to 7 how strongly they support free speech (“Although I may disagree with the opinions that other people hold, they should be allowed to express those views publicly.”) and how strongly they support restriction of hate speech (“People who hold opinions that are harmful or offensive to minority groups should be banned from expressing those views publicly.”).

Results showed that, over the examined period, general support for free speech decreased across all age groups (all birth cohorts). This was the case in both ethnic majority and minority groups. In contrast, support for restricting hate speech was relatively stable over this period, particularly among ethnic minority groups.
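As a rough illustration of how cohort, aging, and period effects are teased apart, the first bookkeeping step is to track each birth cohort's average rating across survey waves: a decline that is parallel across cohorts points to a period effect rather than an aging or cohort effect. The sketch below simulates that step with pandas; the column names and values are illustrative assumptions, not NZAVS variables.

```python
# Minimal sketch of cohort-by-wave tracking, the starting point for
# separating aging, period, and cohort effects. Data are simulated;
# column names are illustrative, not the NZAVS variable names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
rows = []
for cohort in [1950, 1970, 1990]:
    for wave in range(2019, 2024):
        # Simulate a period effect: support drifts down across waves.
        support = rng.normal(5.5 - 0.1 * (wave - 2019), 0.3)
        rows.append({"cohort": cohort, "wave": wave,
                     "free_speech_support": round(support, 2)})

df = pd.DataFrame(rows)
# A parallel downward drift in every row (cohort) suggests a period effect.
print(df.pivot(index="cohort", columns="wave", values="free_speech_support"))
```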

“Free speech is critical for liberal democracies to function well, but it has limits. Traditionally, concerns about social order and security are considered to justify free speech restrictions, but increasingly there is a focus on possible offense and harm to minority groups. The increased prominence of egalitarian norms and values may ultimately lead to lower tolerance of speech that is considered to harm the status, dignity, and well-being of minority groups,” the study authors concluded.

The study contributes to the scientific knowledge about changes in support for free speech and speech restrictions in New Zealand. However, it should be noted that both support for free speech and for restricting hate speech were self-reported using only single items. Studies using more objective or more comprehensive measures of these attitudes might produce different results.

The paper, “Changes in support for free speech and hate speech restrictions: Cohort, aging, and period effects among ethnic minority and majority group members,” was authored by Maykel Verkuyten, Kumar Yogeeswaran, Elena Zubielevitch, Kieren J. Lilly, and Chris G. Sibley.

Recreational ecstasy use is linked to lasting memory impairments

Use of the drug MDMA, commonly known as ecstasy, may lead to lasting difficulties with learning and memory that persist long after a person stops taking it. A new analysis indicates that people who use the drug recreationally perform worse on cognitive tests than those who have never used it. These deficits appear to remain the same even in individuals who have abstained from the drug for months or years. These findings were published in the Journal of Psychopharmacology.

The chemical 3,4-methylenedioxymethamphetamine, or MDMA, is a synthetic substance that alters mood and perception. It works primarily by causing a massive release of serotonin in the brain. Serotonin is a neurotransmitter that plays a major role in regulating sleep, mood, and memory. The drug prevents the brain from reabsorbing this chemical, which creates the feelings of euphoria and empathy that users seek. However, this mechanism also depletes the brain’s supply of serotonin.

Animal studies have provided evidence that MDMA can be neurotoxic. Experiments with rats and primates suggest that repeated exposure to the drug can damage the nerve endings that release serotonin. These changes can last for a long time. In humans, brain imaging studies have shown alterations in the serotonin systems of heavy users. These changes often appear in the neocortex and the limbic system, which are brain areas essential for thinking and memory.

Researchers want to understand if these changes are permanent. Some imaging studies suggest that the brain might recover after a period of abstinence. However, it is not clear if the return of serotonin markers corresponds to a recovery in mental sharpness. This question is relevant for public health as well as clinical medicine. There is a renewed interest in using MDMA therapeutically to treat conditions such as post-traumatic stress disorder. Understanding the long-term safety profile of the substance is necessary for both patients and recreational users.

To address this question, a team of researchers led by Hillary Ung and Mark Daglish conducted a systematic review. They are affiliated with Metro North Mental Health and the University of Queensland in Australia. The team searched through medical databases for every available study on the topic. They looked for research that assessed cognitive function in recreational MDMA users.

The researchers applied strict criteria to select the studies. They only included research that focused on individuals who had abstained from MDMA for at least six months. This duration was chosen to ensure that the participants were not experiencing withdrawal or the immediate aftereffects of the drug. The researchers also required that the studies use standardized neurocognitive testing tools.

Fourteen articles met the requirements for the review. From these, the researchers extracted data to perform a meta-analysis. This statistical technique combines the results of multiple small studies to find patterns that might be invisible in a single experiment. The analysis focused primarily on the domain of learning and memory, as this was the most commonly tested area across the studies.
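In its simplest form, this pooling is an inverse-variance weighted average: each study's effect size is weighted by the inverse of its sampling variance, so larger and more precise studies count for more. The sketch below shows a minimal fixed-effect version with invented numbers; these are not values from the review, and published meta-analyses typically add random-effects adjustments for between-study heterogeneity.

```python
# Minimal sketch of inverse-variance (fixed-effect) meta-analytic pooling.
# Effect sizes are hypothetical standardized mean differences, invented
# for illustration -- they are not the values from the review.
import math

effects = [0.55, 0.70, 0.40, 0.62]    # per-study effect sizes (assumed)
variances = [0.04, 0.09, 0.02, 0.06]  # per-study sampling variances (assumed)

weights = [1.0 / v for v in variances]  # more precise studies get more weight
pooled = sum(w * g for w, g in zip(weights, effects)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))      # standard error of the pooled effect

print(f"Pooled effect: {pooled:.2f} (95% CI "
      f"{pooled - 1.96 * se:.2f} to {pooled + 1.96 * se:.2f})")
```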

The analysis revealed a clear distinction between those who used MDMA and those who did not. People with a history of MDMA use performed significantly worse on memory tests compared to people who were drug-naïve. The specific deficits were most notable in verbal memory. This involves the ability to remember and recall words or verbal information.

The researchers then compared current users against the abstinent users. Current users were defined as those who had used the drug recently, while the abstinent group had stopped for at least six months. The analysis found no statistical difference between these two groups. The cognitive performance of those who had quit was essentially the same as those who were still using the drug.

This lack of improvement was unexpected. One might predict that the brain would heal over time. However, the data did not show a correlation between the length of abstinence and better memory scores. Even in studies where participants had abstained for two years or more, the memory deficits remained. This suggests that the impact of MDMA on memory may be long-lasting or potentially permanent.

The review also examined other cognitive domains. These included executive function, which covers skills like planning and paying attention. The results for these areas were less consistent. Some data pointed to deficits in executive function, but the evidence was not strong enough to draw a firm conclusion. There was also limited evidence regarding impairments in language or motor skills.

The authors of the study advise caution when interpreting these results. They noted that the quality of the available evidence is generally low. Most of the studies included in the review were cross-sectional. This means they looked at a snapshot of people at one point in time rather than following them over many years. It is possible that people who choose to use MDMA have pre-existing differences in memory or impulsivity compared to those who do not.

Another major complication is the use of other drugs. People who use ecstasy recreationally rarely use only that substance. They often consume alcohol, cannabis, or cocaine as well. While the researchers tried to account for this, it is difficult to isolate the specific effects of MDMA from the effects of these other substances. Alcohol and cannabis are known to affect memory. It is possible that the deficits observed are the result of cumulative polydrug use rather than MDMA alone.

The purity of the drug is another variable. The studies relied on participants reporting how many pills they had taken in their lifetime. However, the amount of active MDMA in a street pill varies wildly. Some pills contain very high doses, while others contain none at all. This makes it impossible to calculate a precise dose-response relationship.

The researchers also pointed out that the drug market has changed. Many of the studies in the review were conducted in the early 2000s. Since then, the average strength of ecstasy tablets has increased significantly. Users today might be exposing themselves to higher doses than the participants in these older studies. This could mean that the cognitive risks are higher for modern users.

The findings have implications for the potential reversibility of brain changes. While some brain imaging studies show that serotonin transporters may regenerate over time, this study suggests that functional recovery does not necessarily follow. It is possible that the brain structures recover, but the functional connections remain altered. Alternatively, six months might simply be too short a time for full cognitive recovery to occur.

The study provides a sobering perspective on recreational drug use. The deficits in learning and memory were moderate to large in size. For a young person in an educational or professional setting, such deficits could have a tangible impact on their daily life. The inability to retain new information efficiently could hinder academic or career progress.

The authors call for better research designs in the future. They recommend longitudinal studies that assess people before they start using drugs and follow them over time. They also suggest using hair analysis to verify exactly what substances participants have taken. This would provide a more objective measure of drug exposure than self-reporting.

Until better data is available, the current evidence suggests a risk of lasting harm. Stopping the use of MDMA stops the immediate risks of toxicity. However, it may not immediately reverse the cognitive toll taken by previous use. The brain may require a very long time to heal, or the changes may be irreversible.

The study, “Long-term neurocognitive side effects of MDMA in recreational ecstasy users following sustained abstinence: A systematic review and meta-analysis,” was authored by Hillary Ung, Gemma McKeon, Zorica Jokovic, Stephen Parker, Mark Vickers, Eva Malacova, Lars Eriksson, and Mark Daglish.

New psychology research changes how we think about power in the bedroom

A new study published in the Personality and Social Psychology Bulletin suggests that having a sense of power in a relationship promotes sexual assertiveness, while perceiving a partner as powerful fosters a willingness to accommodate their needs. The findings indicate that healthy sexual dynamics are not about one person holding dominance over another. Instead, the most satisfying interactions appear to occur when both individuals feel they have influence within the relationship.

Power dynamics are frequently viewed as potential sources of conflict or exploitation within intimate relationships. A common assumption is that if one partner holds power, they might satisfy their own desires while neglecting their partner. Alternatively, the partner with less power might feel forced to comply with unwanted activities.

“Power is commonly thought of as dangerous, particularly within sexual relationships,” said study author Nickola Overall, a professor at the University of Auckland and head of the REACH Lab.

“People who have high power in relationships might assert their own sexual need while neglecting their partner’s desires. But lacking power is also problematic. People who have low power in relationships might inhibit their desires and comply to undesired sexual activity. Despite these negative implications of having power and lacking power, how power relates to sexual assertiveness, neglect, and compliance is unclear.”

The researchers sought to clarify how a person’s own sense of power and their perception of their partner’s power distinctly shape sexual motivations and behaviors. They applied a theoretical framework that separates power into two distinct processes.

The first is “actor power,” or the individual’s own perceived ability to influence outcomes. The second is “perceived partner power,” or the individual’s belief in their partner’s ability to influence outcomes. The researchers proposed that one’s own power drives the decision to approach or inhibit sexual desires. Simultaneously, the perception of a partner’s power drives the decision to accommodate or neglect the partner’s needs.

“Most frameworks assume that one partner higher in power will be more assertive in pursuing their sexual needs in ways that neglect the other partner who will be pressured to comply,” Overall explained. “These frameworks assume that power is zero-sum in relationships – if one person has more power, then the other person has less power.”

“But relationships can involve both people having high power (mutually influencing each other), both having low power (lacking influence over each other), or one having more power than the other. And each person’s power can influence their behavior for potential good or ill.”

“All prior studies have only focused on one type of behavior, such as sexual assertiveness or sexual compliance, making assumptions about how these behaviors are linked, such as partners high in power asserting their needs risking the other person complying to undesired sexual activity. But, these distinct behaviors may be shaped by different processes and do not provide a full picture of people’s sexual relationships.”

“So we examined various outcomes relevant to different theories of power, including sexual assertiveness (e.g., comfort initiating sex), sexual compliance (e.g., agreeing to engage in undesired sexual activity), and sexual accommodation vs. neglect (e.g., being more vs. less willing to compromise and being more vs. less understanding when partners are not in the mood),” Overall said.

The research team conducted three separate studies. The first study involved 270 participants recruited from an online platform. These individuals were in committed, mixed-gender relationships and were currently childfree. The sample included 130 women and 140 men. Participants completed the Sense of Power Scale to rate their own ability to influence their partner. They responded to statements such as “I think I have a great deal of power.” They also completed a version of the scale assessing their partner’s power.

In this first study, participants also rated their comfort with initiating and refusing sex. They responded to direct statements like “I am comfortable initiating sex.” Additionally, they reported their history of compromising on sexual frequency or activities over the past six months.

The data showed that individuals who felt they had more power reported greater comfort in both initiating and refusing sexual intimacy. In contrast, those who perceived their partners as having more power expressed a higher willingness to compromise on sexual matters. The results suggested two separate pathways. One pathway leads to personal assertiveness. The other pathway leads to responsiveness to a partner.
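One minimal way to see this two-pathway logic is to enter both power measures as simultaneous predictors of each outcome and compare their coefficients. The sketch below simulates that setup; the variable names and the simple ordinary-least-squares models are illustrative assumptions, far simpler than the dyadic, covariate-adjusted analyses reported in the paper.

```python
# Minimal sketch of the two-pathway logic: actor power and perceived partner
# power entered together as predictors of two different outcomes. Data are
# simulated; the published models are considerably more elaborate.
import numpy as np

rng = np.random.default_rng(2)
n = 270  # matches the first study's sample size

actor = rng.normal(size=n)    # own sense of power
partner = rng.normal(size=n)  # perceived partner power
assertiveness = 0.6 * actor + rng.normal(size=n)    # built-in pathway 1
accommodation = 0.6 * partner + rng.normal(size=n)  # built-in pathway 2

X = np.column_stack([np.ones(n), actor, partner])   # intercept + predictors

def ols(y, X):
    """Ordinary least-squares coefficients."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_assert = ols(assertiveness, X)
b_accom = ols(accommodation, X)
# Expect actor power to dominate for assertiveness and partner power for
# accommodation, mirroring the two distinct pathways described above.
print(f"assertiveness ~ actor {b_assert[1]:.2f}, partner {b_assert[2]:.2f}")
print(f"accommodation ~ actor {b_accom[1]:.2f}, partner {b_accom[2]:.2f}")
```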

The second study aimed to validate these initial observations with a more detailed methodology. The researchers recruited 152 couples, totaling 304 participants. This design allowed the team to analyze data from both partners in a relationship. The study included the same power measures as the first study but added the Hurlbert Index of Sexual Assertiveness. This index measures how openly participants express sexual needs. It includes items such as “I communicate my sexual desires to my partner.”

The second study also assessed sexual compliance. This construct refers to engaging in unwanted sexual activity. Participants rated items such as “I find myself having sex when I do not really want it.” Additionally, the researchers measured sexual communal strength. This is defined as the motivation to meet a partner’s needs. Participants answered questions regarding how far they would go to meet their partner’s sexual desires.

The findings from the second study reinforced the distinction between the two types of power. Participants with higher personal power scores reported higher levels of sexual assertiveness. Perhaps more importantly, those with lower personal power scores reported higher levels of sexual compliance. This suggests that engaging in unwanted sex is often driven by a lack of personal agency rather than the pressure of a powerful partner.

On the other hand, viewing a partner as powerful was linked to greater communal strength. This indicates that perceiving a partner as powerful motivates individuals to meet that partner’s needs rather than simply submit to them out of fear.

The third study expanded the scope further with a sample of 412 individuals recruited online. This iteration aimed to replicate the previous findings and introduce new measures. The researchers assessed “sexual acquiescence,” which captures participation in specific sexual acts without desire but without coercion.

They also measured reactions to sexual rejection. The team wanted to see if high power might lead to “sexual enticement,” or nagging a partner who has refused sex. They also measured “sexual understanding,” which involves accepting a partner’s lack of desire without negative feelings.

Consistent with the previous studies, high personal power predicted assertiveness. Low personal power predicted engaging in unwanted sex. Perceiving a partner as powerful predicted reacting to sexual rejection with understanding rather than persistence. The study found no evidence that high power leads to pressuring behaviors like enticement. This challenges the idea that powerful individuals inevitably use their influence to coerce partners.

Across all three studies, the researchers tested whether the effects differed between men and women. The analysis showed that the fundamental links between power and behavior were consistent regardless of gender. While men reported higher baseline levels of assertiveness and women reported higher compliance, the way power influenced these behaviors was the same for both groups. For both men and women, feeling powerful enabled them to say “no” when they wanted to. For both groups, seeing their partner as influential motivated them to be accommodating.

The researchers also examined “asymmetries,” or whether having more power than one’s partner caused specific issues. The results offered little evidence that power imbalances were the primary driver of behavior. The findings suggest that the combination of high actor power and high perceived partner power may yield the best outcomes. In this scenario, individuals feel free to express their own desires while simultaneously caring for their partner’s needs.

“Both people having power in relationships is important for people to enjoy a fulfilling sex life,” Overall told PsyPost. “When people lack power in their relationships—people feel unable to influence their partner—they are more likely to inhibit their sexual desires, such as being less comfortable in initiating sex or expressing their sexual needs and more likely to engage in sexual activity they do not desire. Sexual inhibition and compliance undermine people’s health and wellbeing, but also restrict the development of satisfying, connected relationships.”

“When partners lack power in relationships—people feel their partner is unable to influence them—they are more likely to neglect their partners’ needs, such as being less willing to compromise with their partner about when and how they have sex or being less understanding when their partner is not in the mood. Neglecting partners’ needs will harm both people in relationships because couples need to accommodate each other’s needs and desires to have fulfilling satisfying sex lives.”

“In short, healthy sexual relationships involve people being able to satisfy their own desires while accommodating their partner’s needs and desire. Hitting this sweet spot requires both partners having power in their relationship.”

These new findings align closely with recent research by Robert Körner and Astrid Schütz, which challenged the idea that power in relationships is a zero-sum game. In their studies published in The Journal of Sex Research and Social Psychological and Personality Science, Körner and Schütz established that relationship quality and sexual satisfaction hinge on an individual’s absolute sense of power rather than a perfect balance of power between partners.

The current study builds on this foundation by mapping these power dynamics to specific behavioral outcomes. While Körner and Schütz demonstrated that feeling powerful predicts positive sexual motivation, the new results explain how this functions: personal power drives the confidence to assert needs, whereas perceiving a partner as powerful drives the motivation to be generous and accommodating.

Both sets of research converge on the conclusion that high mutual power is preferable to power asymmetries or shared powerlessness. Körner and Schütz found that having a powerful partner does not diminish one’s own satisfaction, and similarly, the current study found no evidence that power imbalances are the primary driver of harmful behaviors like sexual compliance or neglect. Instead, both lines of inquiry suggest that the healthiest sexual dynamics occur when both partners feel a high sense of agency.

The new findings also offer a behavioral explanation for the profiles identified by Roxanne Bolduc and her colleagues in the Journal of Sex & Marital Therapy. Bolduc’s research indicated that individuals with egalitarian views and flexible preferences experience greater relationship satisfaction than those adhering to rigid or conflicted gender roles.

The current study supports this by demonstrating that the psychological mechanisms of power function similarly for men and women. By showing that high actor power promotes assertiveness and high partner power promotes accommodation regardless of gender, the findings illustrate why egalitarian dynamics, where both partners exercise influence, likely lead to the superior relationship outcomes observed in Bolduc’s “flexible” profile.

While the new findings provide insight into relationship dynamics, the study relies on self-reported data. Participants may not accurately report or be fully aware of their own behaviors. This is particularly true regarding sensitive topics like compliance or enticement. The cross-sectional nature of the data also prevents drawing definitive conclusions about cause and effect. It is possible that engaging in specific sexual behaviors influences a person’s sense of power, rather than the other way around.

Future research could benefit from longitudinal designs to track these dynamics over time. The samples consisted largely of people in established, committed relationships. Power dynamics might function differently in casual dating scenarios or relationships characterized by severe conflict. In contexts with less commitment, power imbalances might carry more risk of negative outcomes than observed in this study. Additionally, experimental studies could help clarify whether shifting a person’s sense of power directly causes changes in their sexual behaviors.

“Some perspectives warn that power can be dangerous by providing the opportunity to exploit low power others,” Overall added. “Our data show that in close relationships having power is likely to be more beneficial than harmful. People who felt they had power to influence their partner were more assertive in expressing their sexual needs and less compliant to unwanted sexual activity, but they were not less willing to compromise with their partners or less understanding when their partners were not in the mood. Similarly, people who perceived their partner had high power were more willing to compromise with their partner and less likely to neglect their partner’s needs, but they were not more likely to comply to unwanted sexual activity.”

“Many perspectives also suggest that power asymmetries are critical—one person having more power than the other risks greater neglect and compliance. But testing interactions between people’s own and their partners’ power did not provide any evidence for this. Instead, the few interactions that emerged suggested that jointly holding power solidified rather than reduced the positive effects of power – greater assertiveness in expressing sexual needs and accommodation of the partners’ sexual desires and lower compliance and partner neglect.”

“That said, our investigation examined power and sexual behavior within long-term intimate relationships in which both people care about and have some power over each other,” Overall continued. “In non-intimate contexts, like the workplace, one person holding power over another who has little or no counterpower could produce particularly harmful dynamics in which the person high in power can assert their needs while neglecting the other who may be more likely to comply. The risk of these harmful outcomes could also be greater in younger samples and dating couples that are not yet committed to one another, or in contexts where greater asymmetries between men and women restrict women’s power and sexual behavior.”

The study, “Actor Power and Perceived Partner Power Differentially Relate to Sexual Behavior and Motivations,” was authored by Nickola C. Overall, Jessica A. Maxwell, Amy Muise, Nina Waddell, and Auguste G. Harrington.


Five key points from the article:

  • Distinct roles of power: The researchers identified two separate processes: “actor power” (a person’s own sense of influence), which drives sexual assertiveness and the confidence to refuse sex, and “perceived partner power” (a person’s view of their partner’s influence), which motivates a willingness to accommodate and compromise on the partner’s needs.

  • The “sweet spot” for satisfaction: Contrary to the idea that power in relationships is a zero-sum game where one person dominates the other, the findings suggest that the best sexual dynamics occur when both partners feel influential. This mutual power allows individuals to pursue their own desires while simultaneously caring for their partner’s needs.

  • Roots of sexual compliance: The study found that engaging in unwanted sexual activity (compliance) is primarily driven by a lack of personal agency (low actor power) rather than pressure from a powerful partner. Individuals who feel powerless are more likely to inhibit their own desires and agree to sex they do not want to avoid conflict.

  • Gender consistency: The link between power and sexual behavior was consistent for both men and women. Regardless of gender, feeling powerful facilitated boundary-setting and assertiveness, while perceiving a partner as influential fostered a motivation to be understanding and responsive to that partner.

  • Alignment with previous research: These findings reinforce other recent studies suggesting that relationship satisfaction depends on high absolute levels of power for both partners rather than just an equitable balance. The research supports the notion that egalitarian dynamics, where both parties exercise influence, produce better outcomes than rigid or conflicted gender roles.

Scientists find evidence of Epstein-Barr virus activity in spinal fluid of multiple sclerosis patients

Emerging research has provided fresh evidence regarding the role of viral infection in the development of multiple sclerosis. By analyzing immune cells extracted from the spinal fluid of patients, scientists identified a specific population of “killer” T cells that appear to target the Epstein-Barr virus. The findings suggest that an immune response directed at this common pathogen may drive the neurological damage associated with the disease. The study was published in the journal Nature Immunology.

Multiple sclerosis is a chronic condition in which the immune system mistakenly attacks myelin, the protective sheath covering nerve fibers in the central nervous system. This damage disrupts communication between the brain and the rest of the body. For decades, scientific inquiry focused heavily on CD4+ T cells. These are immune cells that help coordinate the body’s defense response.

However, pathologists have observed that a different type of immune cell is actually more abundant in the brain lesions of patients. These are CD8+ T cells, also known as cytotoxic or “killer” T cells. Their primary function is to destroy cells that have been damaged or infected by viruses. Despite their prevalence at the site of injury, the specific targets they hunt in the central nervous system have remained largely unknown.

There is a strong epidemiological link between the Epstein-Barr virus and multiple sclerosis. Almost every person diagnosed with the condition tests positive for previous exposure to this virus. Yet, because the virus infects the vast majority of the global population, the mere presence of the virus does not explain why some individuals develop the disease while others do not.

Joseph J. Sabatino Jr., a researcher at the University of California, San Francisco, and his colleagues sought to resolve this ambiguity. They aimed to determine what specific proteins the CD8+ T cells in the central nervous system were recognizing. The team hypothesized that identifying the targets of these cells could reveal the mechanism driving the disease.

The researchers collected samples of cerebrospinal fluid and blood from human participants. The study group included 13 individuals with multiple sclerosis or clinically isolated syndrome, a precursor to the disease. For comparison, they also collected samples from five control participants who were healthy or had other neurological conditions.

Obtaining cerebrospinal fluid is an invasive procedure. This makes such samples relatively rare and difficult to acquire, particularly from patients in the early stages of the disease. The team used a technology called single-cell RNA sequencing to analyze these samples. This method allows scientists to examine the genetic activity of thousands of individual cells simultaneously.

The investigators paid particular attention to the T cell receptors found on the surface of the immune cells. These receptors function like unique identification cards or keys. Each one is shaped to bind with a specific protein fragment, or antigen. When a T cell encounters its specific target, it clones itself repeatedly to create an army capable of eliminating the threat.

In the spinal fluid of patients with multiple sclerosis, the researchers found groups of CD8+ T cells that were genetically identical. This indicated they had undergone clonal expansion. These expanded groups were found in much higher concentrations in the spinal fluid than in the blood of the same patients. This suggests that these cells were not just passing through but were actively recruited to the central nervous system to fight a specific target.

To identify that target, the research team employed several antigen discovery strategies. One method involved a technique known as yeast display. The researchers created a library of hundreds of millions of yeast cells, each displaying a different protein fragment on its surface. They exposed the T cell receptors from the patients to this library to see which proteins they would bind.

This screening process initially identified synthetic protein fragments that acted as “mimics” for the true target. While these mimics bound to the receptors, they did not necessarily provoke a functional immune response. To find the naturally occurring target, the researchers compared the genetic sequences of the receptors against databases of known viral antigens.
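At its core, this comparison step is a sequence lookup: receptor sequences from patients are checked against catalogued receptors whose antigen targets are already known. The toy sketch below illustrates the idea with exact matching; the sequences and database entries are invented, and real pipelines rely on curated resources and similarity-based matching rather than exact lookups.

```python
# Toy sketch of matching T cell receptor CDR3 sequences against a database
# of receptors with known antigen specificity. All sequences and entries
# here are invented placeholders, not real data from the study.
known_specificities = {
    "CASSLGQAYEQYF": "EBV antigen A",   # hypothetical database entries
    "CASSPDRGNTEAFF": "EBV antigen B",
    "CASSQEGTGYTF": "Influenza antigen",
}

patient_cdr3s = ["CASSLGQAYEQYF", "CASSXYZNEQFF", "CASSPDRGNTEAFF"]

for seq in patient_cdr3s:
    target = known_specificities.get(seq, "unknown")
    print(f"{seq}: {target}")
# Clones matching virus-specific entries would then be validated
# functionally, as the study did with CRISPR-engineered T cells.
```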

This comparison yielded a match for the Epstein-Barr virus. Specifically, the receptors from the expanded CD8+ T cells matched those known to target proteins produced by the virus. To validate this finding, the team used CRISPR gene-editing technology. They engineered fresh T cells from healthy donors to express the exact receptors found in the multiple sclerosis patients.

When these engineered cells were exposed to Epstein-Barr virus peptides, they became activated and released inflammatory cytokines. This confirmed that the receptors identified in the spinal fluid were indeed specific for the virus. The team found that these virus-specific cells were highly activated and possessed the molecular machinery necessary to migrate into tissues and kill cells.

The researchers also investigated whether the virus itself was present in the central nervous system. They analyzed the cerebrospinal fluid for viral DNA. They detected genetic material from the Epstein-Barr virus in the fluid of both patients and controls. However, the presence of DNA alone only indicates that the virus is there, not necessarily that it is active.

To assess viral activity, the team looked for viral RNA transcripts. These are produced when the virus is reading its own genes to make proteins. They found higher levels of a specific transcript called BamHI-W in the fluid of patients with multiple sclerosis compared to the control group. This transcript is associated with the virus’s lytic phase, a period when it is actively replicating.

The detection of lytic transcripts suggests that the virus is not dormant in these patients. Instead, it appears to be reactivating within the central nervous system or the immune cells trafficking there. This reactivation could be the trigger that causes the immune system to expand its army of CD8+ T cells.

Some theories of autoimmune disease propose a mechanism called molecular mimicry. This occurs when a viral protein looks so similar to a human protein that the immune system attacks both. The researchers tested the Epstein-Barr virus-specific receptors against human proteins that resembled the viral targets. They found no evidence of cross-reactivity. The T cells attacked the virus but ignored the human proteins.

This finding implies that the immune system in multiple sclerosis may not be confused. It may be accurately targeting a viral invader. The collateral damage to the nervous system could be a side effect of this ongoing battle between the immune system and the reactivated virus.

The gene expression profile of these cells supported this idea. The virus-specific T cells expressed high levels of genes associated with migrating to tissues and persisting there. They appeared to be an “effector” population, primed for immediate defense rather than long-term memory storage.

“Looking at these understudied CD8+ T cells connects a lot of different dots and gives us a new window on how EBV is likely contributing to this disease,” said senior author Joe Sabatino in a press release. The study provides a clearer picture of the cellular machinery at work in the disease.

There are limitations to the study that warrant consideration. The sample size was small, involving only 18 participants in total. This is a common challenge in studies requiring invasive spinal fluid collection. While the researchers identified Epstein-Barr virus targets for some of the expanded T cell clones, the targets for the majority of the expanded cells remain unidentified.

It is also not yet clear if the viral reactivation causes the disease or if the disease state allows the virus to reactivate. The immune system is complex, and inflammation in the brain could theoretically create an environment that favors viral replication. Further research will be necessary to establish the direction of causality.

Future studies will likely focus on larger cohorts of patients. Researchers will need to determine if these virus-specific cells are present at all stages of the disease or only during early development. Additionally, understanding where the virus resides within the central nervous system remains a priority. The virus typically infects B cells, another type of immune cell, and their presence in the brain is a hallmark of multiple sclerosis.

The implications for treatment are notable. Current therapies for multiple sclerosis largely function by suppressing the immune system broadly or by trapping immune cells in the lymph nodes so they cannot enter the brain. If the disease is driven by a viral infection, therapies targeting the virus itself could offer a new approach. Antiviral drugs or vaccines designed to suppress the Epstein-Barr virus might help reduce the immune activation that leads to neurological damage.

The study, “Antigen specificity of clonally enriched CD8+ T cells in multiple sclerosis,” was authored by Fumie Hayashi, Kristen Mittl, Ravi Dandekar, Josiah Gerdts, Ebtesam Hassan, Ryan D. Schubert, Lindsay Oshiro, Rita Loudermilk, Ariele Greenfield, Danillo G. Augusto, Gregory Havton, Shriya Anumarlu, Arhan Surapaneni, Akshaya Ramesh, Edwina Tran, Kanishka Koshal, Kerry Kizer, Joanna Dreux, Alaina K. Cagalingan, Florian Schustek, Lena Flood, Tamson Moore, Lisa L. Kirkemo, Isabelle J. Fisher, Tiffany Cooper, Meagan Harms, Refujia Gomez, the University of California, San Francisco MS-EPIC Team, Claire D. Clelland, Leah Sibener, Bruce A. C. Cree, Stephen L. Hauser, Jill A. Hollenbach, Marvin Gee, Michael R. Wilson, Scott S. Zamvil, and Joseph J. Sabatino Jr.

World Trade Center responders with PTSD show signs of accelerated brain aging

A new study published in Translational Psychiatry has found that post-traumatic stress disorder is associated with accelerated biological aging in the brain. Researchers found that World Trade Center responders with PTSD had brains that appeared approximately three years older than their chronological age compared to responders without the disorder. This research suggests that the condition involves tangible structural changes to the brain that persist long after the initial trauma.

The health impacts of the September 11, 2001 attacks extend well beyond the immediate physical injuries sustained at Ground Zero. Many responders who assisted in the rescue and recovery efforts developed chronic psychological conditions. PTSD remains particularly prevalent in this population. Previous studies have linked the disorder to various markers of accelerated aging in the body, such as changes in immune function and inflammation.

The specific impact of the disorder on the biological aging of the brain itself has remained less clear. Determining how PTSD affects brain structure is necessary for understanding long-term health risks. Individuals with the condition face a higher statistical likelihood of developing age-related conditions like memory decline or dementia earlier in life. By identifying biological markers of brain aging, scientists hope to create better tools for early diagnosis and treatment.

“Nearly a quarter of World Trade Center responders continue to experience chronic PTSD more than two decades after 9/11, yet we still lack clear biological markers that capture its long-term impact on the brain,” said study author Azzurra Invernizzi of the Icahn School of Medicine at Mount Sinai.

“Previous MRI studies showed structural and functional brain differences in responders with PTSD, but these findings were often region-specific and difficult to translate into an overall picture of brain health. We wanted to address this gap by asking whether PTSD is associated with accelerated brain aging — a single, intuitive metric that reflects cumulative brain changes and may help explain long-term cognitive and health risks in this population.”

The research team recruited 99 World Trade Center responders to participate in the study. This group included 47 individuals diagnosed with PTSD and 52 individuals with no history of the disorder. The participants were matched based on key demographics such as age, sex, and occupation to ensure a fair comparison. The average age of the participants was approximately 55 years.

Each participant underwent a high-resolution structural magnetic resonance imaging scan. The researchers then employed a specialized artificial intelligence tool called BrainAgeNeXt to analyze these scans. This tool uses a form of deep learning called a convolutional neural network. The model estimates a person’s “brain age” based on anatomical features captured in the MRI data.

The model was previously trained on over 11,000 MRI scans from healthy individuals to learn what a brain typically looks like at different stages of life. This training allows the software to bypass manual measurements and identify complex patterns across the entire brain volume. The team calculated a metric known as the Brain Age Difference for each responder.

This number represents the gap between the age predicted by the MRI scan and the person’s actual chronological age. A positive number indicates the brain appears older than expected. A negative number suggests it appears younger or consistent with healthy aging. The researchers used this metric to compare the two groups of responders.
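The metric itself is simple arithmetic once a model has produced a prediction: predicted brain age minus chronological age, averaged within each group. The sketch below computes it for a few hypothetical responders; all ages are invented, and the hand-entered predictions stand in for the output of a tool like BrainAgeNeXt.

```python
# Minimal sketch of the Brain Age Difference (BAD) metric: predicted brain
# age minus chronological age. All values are invented; in the study, the
# predictions come from the BrainAgeNeXt deep-learning model.
from statistics import mean

# (predicted_age, chronological_age) pairs for hypothetical responders
responders = {
    "PTSD": [(58.2, 55.0), (61.5, 57.3), (59.8, 56.9)],
    "no PTSD": [(54.6, 55.1), (56.0, 56.8), (53.9, 54.2)],
}

for group, pairs in responders.items():
    bad = [predicted - actual for predicted, actual in pairs]
    # Positive values mean the brain looks older than its chronological age.
    print(f"{group}: mean Brain Age Difference = {mean(bad):+.2f} years")
```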

“Brain age is a summary measure, not a diagnosis, but even modest shifts are meaningful because they reflect widespread changes across the brain rather than isolated regions,” Invernizzi explained. “Accelerated brain aging has been linked in other studies to cognitive decline and increased risk for age-related neurological conditions.”

The analysis revealed a significant distinction between the groups. Responders diagnosed with PTSD showed an average Brain Age Difference of approximately 3.07 years. In contrast, responders without the disorder showed an average difference of negative 0.43 years. This indicates that the brains of those with the condition showed structural signs associated with advanced age compared to their trauma-exposed peers.

“One striking aspect was how clearly PTSD status alone distinguished brain aging trajectories, even among individuals with shared exposures and similar demographic characteristics,” Invernizzi told PsyPost. “This suggests that PTSD itself may play a central role in shaping long-term brain outcomes, beyond general stress or aging effects.”

Further examination linked these higher brain age estimates to specific anatomical changes. The researchers observed associations between increased brain age and larger volumes of cerebrospinal fluid and ventricular spaces. These patterns typically signify a loss of brain tissue or atrophy. In the PTSD group specifically, a smaller thalamus was associated with an older-appearing brain. The thalamus is a region involved in sensory processing and fear regulation.

The study also assessed the duration of time responders spent working at the World Trade Center site. The data indicated that the length of exposure moderated the relationship between the disorder and brain age. Responders with PTSD who spent more time working at the disaster site tended to show greater increases in estimated brain age.

This interaction suggests that the combination of the psychological condition and prolonged exposure to the environmental stressors of the site may compound the effects on brain structure. Responders faced both psychological trauma and exposure to particulate matter and toxins during the recovery efforts. The study implies these factors might work synergistically to accelerate aging processes.

“The key takeaway is that PTSD is not only a psychological condition—it is associated with measurable, long-lasting changes in the brain,” Invernizzi said. “In responders exposed to the extreme trauma of 9/11, PTSD was linked to a brain that appears ‘older’ than expected for a person’s chronological age. This underscores the importance of recognizing PTSD as a condition with real biological consequences and reinforces the need for long-term monitoring and support for affected individuals.”

While the findings provide insight into the biological footprint of PTSD, there are limitations to consider. The study utilized a cross-sectional design. This means the data was collected at a single point in time. This structure prevents researchers from proving that the disorder caused the accelerated aging. It remains possible that pre-existing brain differences made some individuals more susceptible to developing the condition.

“It’s important to note that an ‘older-appearing’ brain does not mean inevitable cognitive decline or neurodegenerative disease,” Invernizzi noted. “Brain age is a statistical biomarker, not a clinical diagnosis. Additionally, while our findings show a strong association between PTSD and accelerated brain aging, they do not prove causality.”

Future research efforts will likely focus on longitudinal studies that track participants over many years. Monitoring how these brain age markers change over time could help clarify the direction of the relationship between trauma and aging. Scientists also aim to investigate whether specific treatments or lifestyle interventions might slow or reverse these patterns.

“Our next steps include examining how brain aging relates to cognitive performance, physical health, and functional outcomes over time, as well as identifying factors—such as treatment, resilience, or lifestyle—that may slow or reverse accelerated brain aging in PTSD,” Invernizzi told PsyPost. “Ultimately, we hope this work will inform targeted interventions and improve long-term care for trauma-exposed populations.”

“This study also highlights the potential of advanced AI-based neuroimaging tools to capture complex brain changes in real-world clinical populations. By using a data-driven approach trained on thousands of brain scans, we can move closer to objective, scalable biomarkers that complement traditional clinical assessments and help bridge neuroscience and public health.”

The study, “MRI signature of brain age underlying post-traumatic stress disorder in World Trade Center responders,” was authored by Azzurra Invernizzi, Francesco La Rosa, Anna Sather, Elza Rechtman, Ismail Nabeel, R. Sean Morrison, Alison C. Pellecchia, Stephanie Santiago-Michels, Evelyn J. Bromet, Roberto G. Lucchini, Benjamin J. Luft, Sean A. Clouston, Erin S. Beck, Cheuk Y. Tang, and Megan K. Horton.

This behavior explains why emotionally intelligent couples are happier

New research suggests that emotional intelligence improves romantic relationships primarily through a single, specific behavior: making a partner feel valued and appreciated. While emotionally intelligent people employ various strategies to manage their partners’ feelings, the act of valuing stands out as the most consistent driver of relationship quality. This finding implies that the key to a happier partnership may be as simple as regularly expressing that one’s partner is special. The study appears in the Journal of Social and Personal Relationships.

Emotional intelligence is broadly defined as the ability to perceive, understand, and manage emotions. Psychologists have recognized a connection between this skill set and successful romances. People with higher emotional intelligence generally report higher satisfaction with their partners. Despite this established link, the specific mechanisms explaining why these individuals have better relationships have remained unclear.

One theory proposes that the answer lies in how people regulate emotions. This concept encompasses not only how individuals manage their own feelings but also how they influence the feelings of those around them. This latter process is known as extrinsic emotion regulation. In a romantic partnership, this often involves one person trying to cheer up, calm down, or validate the other.

To investigate this theory, a research team led by Hester He Xiao from the University of Sydney in Australia conducted a detailed study. They aimed to identify which specific regulatory behaviors bridge the gap between emotional intelligence and relationship satisfaction. The researchers sought to understand if emotionally intelligent people are simply better at helping their partners navigate difficult feelings.

The study included 175 heterosexual couples, comprising 350 individuals in total. The participants were recruited online and ranged in age from their early 20s to their 80s. The researchers designed a longitudinal study that spanned 14 weeks. This design allowed them to track changes and associations over time rather than just capturing a single snapshot.

Participants completed surveys in three separate waves. In the first wave, they assessed their own emotional intelligence levels. They answered questions about their ability to appraise and use emotions. In the second wave, they reported on the specific strategies they used to make their partners feel better. The researchers focused on three “high-engagement” strategies: cognitive reframing, receptive listening, and valuing.

Cognitive reframing involves helping a partner view a situation from a new, more positive perspective. Receptive listening entails encouraging a partner to vent their emotions while paying close attention to what they say. Valuing consists of actions that make the partner feel special, important, and appreciated. In the final wave, participants rated the overall quality of their relationship, considering factors like trust, closeness, and conflict levels.

The researchers used a statistical approach called the Actor-Partner Interdependence Mediation Model. This method treats the couple as a unit. It allows scientists to see how one person’s emotional intelligence affects their own happiness, known as an actor effect. It also reveals how that same person’s intelligence affects their partner’s happiness, known as a partner effect.
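As a rough illustration of that logic, the sketch below estimates actor and partner effects with ordinary regression. The data frame and variable names are hypothetical; a full Actor-Partner Interdependence Mediation Model would also include the mediating strategy and account for the statistical dependence between the two members of each couple, which plain OLS ignores.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical rows: one per person, with the partner's score merged in.
df = pd.DataFrame({
    "quality":    [5.1, 4.8, 6.0, 5.5, 4.2, 4.9],  # relationship quality
    "own_ei":     [3.2, 2.9, 4.1, 3.8, 2.5, 3.0],  # own emotional intelligence
    "partner_ei": [2.9, 3.2, 3.8, 4.1, 3.0, 2.5],  # partner's score
})

# Actor effect = coefficient on own_ei (my skill, my satisfaction);
# partner effect = coefficient on partner_ei (my partner's skill,
# my satisfaction).
fit = smf.ols("quality ~ own_ei + partner_ei", data=df).fit()
print(fit.params)
```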

The analysis revealed that valuing was the primary mediator for both men and women. Individuals with higher emotional intelligence were more likely to use valuing strategies. In turn, frequent use of valuing was associated with higher relationship quality for both members of the couple. This means that when a person feels their partner values them, the relationship improves. Simultaneously, the person doing the valuing also perceives the relationship as better.

This finding was unique because it applied consistently across genders. Whether the high-emotional-intelligence partner was male or female, the pathway was the same. They used their emotional skills to convey appreciation. This action created a positive feedback loop that boosted satisfaction for everyone involved.

The other two strategies showed less consistent results. Cognitive reframing and receptive listening did play roles, but they functioned differently for men and women. For example, men with higher emotional intelligence were more likely to use receptive listening. When men listened attentively, their female partners reported better relationship quality. However, the men themselves did not report a corresponding increase in their own relationship satisfaction from this behavior.

Women’s use of receptive listening showed a different pattern. When women listened attentively, it was linked to better relationship quality for both themselves and their male partners. This suggests a gender difference in how listening is experienced. For women, engaging deeply with a partner’s emotions appears to be mutually rewarding. For men, it primarily benefits the partner.

Cognitive reframing also displayed gendered nuances. Men’s use of reframing—helping a partner see the bright side—predicted higher relationship quality for their female partners. Women’s use of reframing did not show this same strong association in the primary analysis. These variations highlight that while valuing is universally beneficial, other support strategies may depend on who is using them.

The researchers also looked at whether these behaviors predicted changes in relationship quality over time. They ran an analysis controlling for the couples’ initial satisfaction levels. In this stricter test, the mediation effect of valuing disappeared. This result indicates that while emotional intelligence and valuing are linked to high relationship quality in the present, they may not drive long-term improvements.

This distinction is important for understanding the limits of the findings. The behaviors seem to maintain a good relationship rather than transforming a bad one. High emotional intelligence helps sustain a high level of functioning. It does not necessarily predict that a relationship will grow happier over the 14-week period if it starts at a certain baseline.

There was one unexpected finding in the change-over-time analysis. Men’s emotional intelligence was associated with a decrease in their female partners’ relationship quality relative to the baseline. This hints at a potential “dark side” to emotional intelligence. It is possible that some individuals use their emotional skills for manipulation or self-serving goals, though this interpretation requires further study.

The study had several limitations that affect how the results should be viewed. The sample consisted primarily of White, English-speaking participants from Western countries. Cultural differences in how emotions are expressed and regulated could lead to different results in other populations. Additionally, the study relied on self-reports for all measures. Participants described their own behaviors, which can introduce bias.

People often perceive their own actions differently than their partners do. A person might believe they are listening attentively, while their partner feels ignored. Future research would benefit from asking partners to rate each other’s regulation strategies. This would provide a more objective measure of how well these strategies are actually performed.

The timing of the data collection is another factor to consider. The study took place between August and October 2021. This was a period when many people were still adjusting to life after the peak of the COVID-19 pandemic. The unique stressors of that time may have influenced how couples relied on each other for emotional support.

Future research should also explore the context in which these strategies are used. The current study asked about general attempts to make a partner feel better. It did not distinguish between low-stakes situations and high-conflict arguments. It is possible that cognitive reframing or listening becomes more or less effective depending on the intensity of the distress.

Despite these caveats, the core message offers practical insight. While complex psychological skills help, the most effective behavior is relatively straightforward. Making a partner feel valued acts as a powerful buffer. It connects emotional ability to tangible relationship success. For couples, focusing on simple expressions of appreciation may be the most efficient way to utilize emotional intelligence.

The study, “Valuing your partner more: Linking emotional intelligence to better relationship quality,” was authored by Hester He Xiao, Kit S. Double, Rebecca T. Pinkus, and Carolyn MacCann.

Scientists just mapped the brain architecture that underlies human intelligence

For decades, researchers have attempted to pinpoint the specific areas of the brain responsible for human intelligence. A new analysis suggests that general intelligence involves the coordination of the entire brain rather than the superior function of any single region. By mapping the connections within the human brain, or connectome, scientists found that distinct patterns of global communication predict cognitive ability.

The research indicates that intelligent thought relies on a system-wide architecture optimized for efficiency and flexibility. These findings were published in the journal Nature Communications.

General intelligence represents the capacity to reason, learn, and solve problems across a variety of different contexts. In the past, theories often attributed this capacity to specific networks, such as the areas in the frontal and parietal lobes involved in attention and working memory. While these regions are involved in cognitive tasks, newer perspectives suggest they are part of a larger story.

The Network Neuroscience Theory proposes that intelligence arises from the global topology of the brain. This framework suggests that the physical wiring of the brain and its patterns of activity work in tandem.

Ramsey R. Wilcox, a researcher at the University of Notre Dame, led the study to test the specific predictions of this network theory. Working with senior author Aron K. Barbey and colleagues from the University of Illinois and Stony Brook University, Wilcox sought to move beyond localized models. The team aimed to understand how the brain’s physical structure constrains and directs its functional activity.

To investigate these questions, the research team utilized data from the Human Connectome Project. This massive dataset provided brain imaging and cognitive testing results from 831 healthy young adults. The researchers also validated their findings using an independent sample of 145 participants from a separate study.

The investigators employed a novel method that combined two distinct types of magnetic resonance imaging (MRI) data. They used diffusion-weighted MRI to map the structural white matter tracts, which act as the physical cables connecting brain regions. Simultaneously, they analyzed resting-state functional MRI, which measures the rhythmic activation patterns of brain cells.

By integrating these modalities, Wilcox and his colleagues created a joint model of the brain. This approach allowed them to estimate the capacity of structural connections to transmit information based on observed activity. The model corrected for limitations in traditional scanning, such as the difficulty in detecting crossing fibers within the brain’s white matter.

The team then applied predictive modeling techniques to see if these global network features could estimate a participant’s general intelligence score. The results provided strong support for the idea that intelligence is a distributed phenomenon. Models that incorporated connections across the whole brain successfully predicted intelligence scores.
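The article does not spell out the pipeline, but a common form of such predictive modeling is cross-validated regularized regression on connectome edge weights. The sketch below shows only the shape of that analysis; the data are random placeholders, so the out-of-sample score will hover near zero.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Placeholder inputs: one row per participant, one column per connectome
# edge (the upper triangle of a 100-region connectivity matrix).
n_subjects, n_edges = 200, 100 * 99 // 2
X = rng.standard_normal((n_subjects, n_edges))
g = rng.standard_normal(n_subjects)  # stand-in for intelligence scores

# Out-of-sample R^2 from cross-validation is the usual test of whether
# distributed connectivity features predict a behavioral score.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
print(cross_val_score(model, X, g, cv=5, scoring="r2").mean())
```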

In contrast, models that relied on single, isolated networks performed with less accuracy. This suggests that while specific networks have roles, the interaction between them is primary. The most predictive connections were not confined to one area but were spread throughout the cortex.

One of the specific predictions the team tested involved the strength and length of neural connections. The researchers found that individuals with higher intelligence scores tended to rely on “weak ties” for long-range communication. In network science, a weak tie represents a connection that is not structurally dense but acts as a bridge between separate communities of neurons.

These long-range, weak connections require less energy to maintain than dense, strong connections. Their weakness allows them to be easily modulated by neural activity. This quality makes the brain more adaptable, enabling it to reconfigure its communication pathways rapidly in response to new problems.

The study showed that in highly intelligent individuals, these predictive weak connections spanned longer physical distances. Conversely, strong connections in these individuals tended to be shorter. This architecture likely balances the high cost of long-distance communication with the need for system-wide integration.

Another key finding concerned “modal control.” This concept refers to the ability of specific brain regions to drive the brain into difficult-to-reach states of activity. Cognitive tasks often require the brain to shift away from its default patterns to process complex information.

Wilcox and his team found that general intelligence was positively associated with the presence of regions exhibiting high modal control. These control hubs were located in areas of the brain associated with executive function and visual processing. The presence of these regulating nodes allows the brain to orchestrate interactions between different networks effectively.

The researchers also examined the overall topology of the brain using a concept known as “small-worldness.” A small-world network is one that features tight-knit local communities of nodes as well as short paths that connect those communities. This organization is efficient because it allows for specialized local processing while maintaining rapid global communication.
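A standard way to quantify this organization is the small-world index, sigma = (C / C_rand) / (L / L_rand), which compares a network's clustering C and characteristic path length L against a size-matched random graph. The sketch below computes it on a toy graph, not on the study's connectome data.

```python
import networkx as nx

# Toy graph: the Watts-Strogatz model is the classic small-world network,
# with dense local clustering plus a few long-range shortcuts.
G = nx.connected_watts_strogatz_graph(n=90, k=6, p=0.1, seed=1)

# Random reference graph with the same number of nodes and edges.
R = nx.gnm_random_graph(G.number_of_nodes(), G.number_of_edges(), seed=1)
if not nx.is_connected(R):
    # Path length is undefined on disconnected graphs; keep the giant component.
    R = R.subgraph(max(nx.connected_components(R), key=len)).copy()

C, C_rand = nx.average_clustering(G), nx.average_clustering(R)
L, L_rand = (nx.average_shortest_path_length(G),
             nx.average_shortest_path_length(R))

# sigma > 1: clustering well above random while path lengths stay near random.
sigma = (C / C_rand) / (L / L_rand)
print(round(sigma, 2))
```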

The analysis revealed that participants with higher intelligence scores possessed brain networks with greater small-world characteristics. Their brains exhibited high levels of local clustering, meaning nearby regions were tightly interconnected. Simultaneously, they maintained short average path lengths across the entire system.

This balance ensures that information does not get trapped in local modules. It also ensures that the brain does not become a disorganized random network. The findings suggest that deviations from this optimal balance may underlie lower cognitive performance.

There are limitations to the current study that warrant consideration. The research relies on correlational data, so it cannot definitively prove that specific network structures cause higher intelligence. It is possible that engaging in intellectual activities alters the brain’s wiring over time.

Additionally, the study focused primarily on young adults. Future research will need to determine if these network patterns hold true across the lifespan, from childhood development through aging. The team also used linear modeling techniques, which may miss more nuanced, non-linear relationships in the data.

These insights into the biological basis of human intelligence have implications for the development of artificial intelligence. Current AI systems often excel at specific tasks but struggle with the broad flexibility characteristic of human thought. Understanding how the human brain achieves general intelligence through global network architecture could inspire new designs for artificial systems.

By mimicking the brain’s balance of local specialization and global integration, engineers might create AI that is more adaptable. The reliance on weak, flexible connections for integrating information could also serve as a model for efficient data processing.

The shift in perspective offered by this study is substantial. It moves the field away from viewing the brain as a collection of isolated tools. Instead, it presents the brain as a unified, dynamic system where the pattern of connections determines cognitive potential.

Wilcox and his colleagues have provided empirical evidence that validates the core tenets of Network Neuroscience Theory. Their work demonstrates that intelligence is not a localized function but a property of the global connectome. As neuroscience continues to map these connections, the definition of what it means to be intelligent will likely continue to evolve.

The study, “The network architecture of general intelligence in the human connectome,” was authored by Ramsey R. Wilcox, Babak Hemmatian, Lav R. Varshney, and Aron K. Barbey.

Sorting Hat research: What does your Hogwarts house say about your psychological makeup?

A recent study suggests that the popular “Sorting Hat Quiz” from the Harry Potter universe may loosely reflect actual personality traits, particularly for fans of the series. The findings indicate that while the quiz captures some real psychological differences, its predictive power relies heavily on the participant’s familiarity with the narrative. These results were published in PLOS One.

Human beings possess a deep-seated drive to engage with storytelling and often identify closely with fictional characters. This tendency frequently manifests in the popularity of online assessments that assign individuals to specific groups within a fictional universe.

The “Sorting Hat Quiz” is a prominent example where users are sorted into one of four Hogwarts Houses based on their responses to situational questions. Prior investigations suggested a correlation between these House assignments and established psychological traits. The authors of the current study sought to verify these associations using more rigorous personality measures. They also aimed to determine if these connections exist for people who are unfamiliar with the books.

“The project actually started in a very down-to-earth way: my coauthors and I are genuine Harry Potter fans, and at some point we found ourselves joking—but also seriously debating—that each of us ‘belongs’ to a different Hogwarts House,” said study author Maria Flakus of the Polish Academy of Sciences in Warsaw.

“That naturally led to a more scientific question: is there any real psychological signal behind these identifications, or are they mostly narrative stereotypes and wishful thinking? In other words, we wanted to see whether people’s House alignment (especially the House they feel they are, or want to be) maps onto meaningful differences in their dominant personality characteristics.”

“At the same time, there was a broader gap worth addressing. Sorting-type pop-culture quizzes are massively popular and people often treat the outcomes as surprisingly ‘accurate,’ yet the evidence for whether they track established psychological traits—and under what conditions—is limited and not fully consistent. We were particularly motivated to test whether the Sorting Hat Quiz can tell us something about personality at all, and whether ‘desired’ House membership might be as informative (or even more informative) than the algorithmic assignment—potentially reflecting an ideal self rather than a measured trait profile.”

To examine this, the research team recruited 677 participants through social media platforms. The sample consisted of adults ranging from 18 to 55 years old who were residents of Poland or spoke Polish fluently. The researchers divided the participants into two distinct groups based on their exposure to the series. The first group contained 578 individuals who had read the Harry Potter books. The second group consisted of 99 individuals who had not read the books.

Participants completed the official Sorting Hat Quiz on the Wizarding World website to determine their designated House. They also indicated which House they personally desired to join. To assess personality, the researchers administered the Polish Personality Lexicon, which is based on the HEXACO model. This model measures honesty-humility, emotional stability, extroversion, agreeableness, conscientiousness, and openness to experience.

The study also employed specific scales to measure darker personality aspects known as the Dark Triad. The researchers used the Narcissistic Admiration and Rivalry Questionnaire and the MACH-IV scale (for Machiavellianism). They assessed psychopathy using the Triarchic Psychopathy Measure. Additionally, the Need for Cognition Scale evaluated how much participants enjoyed complex thinking and intellectual challenges.

The data revealed specific patterns among the participants who had read the books. Individuals sorted into Slytherin scored higher on measures of Machiavellianism, narcissism, and psychopathy compared to members of other Houses. These participants displayed traits associated with manipulation and a focus on self-interest. This finding aligns with the fictional portrayal of Slytherin House as ambitious and sometimes cunning.

Participants sorted into Ravenclaw demonstrated a higher need for cognition. This indicates a preference for intellectual engagement and problem-solving activities. This result corresponds well with the Ravenclaw reputation for valuing wit, learning, and wisdom. Those assigned to Gryffindor scored marginally higher on extroversion than the other groups. This suggests a tendency toward social assertiveness and enthusiasm.

Individuals sorted into Hufflepuff reported higher levels of agreeableness and honesty-humility. This aligns with the fictional description of the House as valuing fair play, loyalty, and hard work. However, these participants also reported lower levels of emotional stability. This finding implies a greater tendency to experience worry or a need for emotional support in stressful situations.

“Readers should think of the effects as modest rather than ‘life-defining,'” Flakus told PsyPost. “Even when differences between Houses are statistically reliable, there’s substantial overlap—many people in different Houses look similar on standard trait measures—so House membership explains only a limited share of personality variance. Practically, that means the Sorting Hat result may capture a real tendency at the group level, but it’s not precise enough for individual prediction or decision-making. It’s best viewed as a fun, coarse-grained signal.”

The researchers noted a discrepancy regarding conscientiousness among Hufflepuffs. Previous theories posited that Hufflepuffs would score highest in this trait due to their association with hard work. The current data, however, showed that Hufflepuffs did not score significantly higher in conscientiousness than members of other Houses. This challenges some of the simpler stereotypes associated with the House.

The researchers also analyzed the personality traits of participants based on the House they wanted to join rather than the one they were assigned. The patterns for desired Houses closely mirrored the results for the assigned Houses among readers. For example, those who wished to be in Slytherin scored higher on narcissism and psychopathy. This implies that personal preference is a strong indicator of one’s psychological makeup in this context.

“We were surprised that the pattern of associations pointed not only to traits but also to how people see themselves—self-identification sometimes seemed as informative as the quiz assignment,” Flakus said.

But the relationships between House assignment and personality traits were largely absent in the group of non-readers. While there was a minor link between Gryffindor assignment and extroversion, most other correlations disappeared. The Sorting Hat Quiz failed to predict the “Dark Triad” traits or need for cognition in participants unfamiliar with the books. This suggests that the quiz itself does not function as a standalone personality test.

These findings suggest that the Sorting Hat Quiz is not an effective tool for psychological assessment in a general context. The predictive power of the quiz appears to depend on the participant’s knowledge of the fictional universe. This supports the “narrative collective assimilation hypothesis.” This theory proposes that immersing oneself in a story allows a person to internalize the traits of a specific group within that narrative.

Fans of the series may unconsciously or consciously align their self-perception with the traits of their preferred House. When they answer personality questions, they may do so through the lens of this identity. For non-readers, the questions in the quiz lack this contextual weight. Consequently, their answers do not aggregate into meaningful personality profiles in the same way.

“The key takeaway is that these kinds of pop-culture quizzes can reflect some real personality differences, but they’re not a substitute for validated psychological assessment,” Flakus explained. “Your ‘House’ can be a fun mirror of broad tendencies—and sometimes your preferred House may say as much about your values or ideal self as about your traits—so it’s best used as a playful starting point for self-reflection, not a diagnosis.”

As with all research, there are some limitations to consider. The group of non-readers was relatively small compared to the group of readers. The sample was also predominantly female and recruited via social media. This may affect how well the results represent the general population.

Future inquiries could examine whether these patterns persist across different generations of fans. Researchers might also investigate similar phenomena in other popular fictional universes. Further study is needed to understand how identifying with fictional groups relates to real-world behaviors and values.

“At this point, we don’t have a fixed long-term roadmap yet, but we do see several promising next steps,” Flakus said. “One natural extension would be to test whether similar patterns appear in other pop-culture identity systems—i.e., whether identifying with particular factions, archetypes, or ‘types’ in other franchises relates to established personality traits in comparable ways.”

“We’re also interested in potential generational differences: the Harry Potter universe has a distinct cultural footprint across age cohorts, so it would be valuable to examine whether the mechanisms behind identification (and its links to traits or values) vary by generation.”

“Finally, an important direction is to look more closely at how these quizzes function among people who don’t know the universe at all—in our study we had such a subgroup, but it was small. A larger, more balanced sample would let us more confidently explore whether the quiz captures general psychological tendencies independent of fandom, or whether familiarity and narrative knowledge meaningfully shape the outcomes.”

The new findings regarding the personality structures of Hogwarts Houses align with separate research focused on external economic behaviors. A study published in Small Business Economics by Martin Obschonka and colleagues utilized a massive dataset to examine how these fictional profiles relate to entrepreneurship.

While the current study focused on self-reported traits, the Obschonka research found that identifying with Gryffindor or Slytherin predicted a higher likelihood of starting a business. The researchers attributed this to a shared tendency toward “deviance” or rule-breaking, which is often necessary for innovation.

The new study, “Harry Potter and personality assessment – The utility of the Sorting Hat Quiz in personality traits’ assessment,” was authored by Lidia Baran, Maria Flakus, and Franciszek Stefanek.

Deceptive AI interactions can feel deeper and more genuine than actual human conversations

A new study published in Communications Psychology suggests that artificial intelligence systems can be more effective than humans at establishing emotional closeness during deep conversations, provided the human participant believes the AI is a real person. The findings indicate that while individuals can form social bonds with AI, knowing the partner is a machine reduces the feeling of connection.

The rapid development of large language models has fundamentally altered the landscape of human-computer interaction. Previous observations have indicated that these programs can generate content that appears empathetic and similar to human speech. Despite these advancements, it remained unclear whether humans could form relationships with AI that are as strong as those formed with other people. This is particularly relevant during the initial stages of getting to know a stranger.

Scientists aimed to fill this gap by investigating how relationship building differs between human partners and AI partners. They sought to determine if AI could handle “deep talk,” which involves sharing personal feelings and memories, as effectively as it handles superficial “small talk.” Additionally, the research team wanted to understand how a person’s pre-existing attitude toward technology affects this connection. Many people view AI with skepticism or perceive it as a threat to uniquely human qualities like emotion.

To investigate these dynamics, the research team recruited a total of 492 participants between the ages of 18 and 35. The sample consisted of university students. The experiments took place online to mimic typical digital communication. To simulate a realistic environment for relationship building, the researchers utilized a method known as the “Fast Friends Procedure.” This standardized protocol involves two partners asking and answering a series of questions that become increasingly personal over time.

In the first study, 322 participants engaged in a text-based chat. They were all informed that they would be interacting with another human participant. In reality, the researchers assigned half of the participants to chat with a real human. The other half interacted with a fictional character generated by a Google AI model known as PaLM 2. The interactions were further divided into two categories. Some pairs engaged in small talk, discussing casual topics. Others engaged in deep talk, addressing emotionally charged subjects.

The results from this first experiment showed a distinct difference based on the type of conversation. When the interaction involved small talk, participants reported similar levels of closeness regardless of whether their partner was human or AI. However, in the deep talk condition, the AI partner outperformed the human partner. Participants who unknowingly chatted with the AI reported significantly higher feelings of interpersonal closeness than those who chatted with real humans.

To understand why this occurred, the researchers analyzed the linguistic patterns of the chats. They found that the AI produced responses with higher levels of “self-disclosure.” The AI spoke more about emotions, self-related topics, and social processes. This behavior appeared to encourage the human participants to reciprocate. When the AI shared more “personal” details, the humans did the same. This mutual exchange of personal information led to a stronger perceived bond.
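The paper's exact linguistic method is not detailed here, but dictionary-based word counting in the style of the LIWC tool is the standard way to score categories like emotion or self-reference. The toy sketch below, with made-up mini-lexicons, shows the general idea only.

```python
# Made-up mini-lexicons standing in for validated LIWC-style dictionaries.
EMOTION_WORDS = {"happy", "sad", "afraid", "love", "lonely"}
SELF_WORDS = {"i", "me", "my", "myself"}

def category_rate(message, lexicon):
    """Share of a message's tokens that fall into a word category."""
    tokens = [t.strip(".,!?") for t in message.lower().split()]
    return sum(t in lexicon for t in tokens) / max(len(tokens), 1)

msg = "I was afraid to admit how lonely I felt last year."
print("emotion:", category_rate(msg, EMOTION_WORDS))
print("self-reference:", category_rate(msg, SELF_WORDS))
```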

The second study sought to determine how the label assigned to the partner influenced these feelings. This phase focused exclusively on deep conversations. The researchers analyzed data from 334 participants, combining new recruits with relevant data from the first experiment. In this setup, the researchers manipulated the information given to the participants. Some were told they were chatting with a human, while others were told they were interacting with an AI.

The researchers found that the label played a significant role in relationship building. Regardless of whether the partner was actually a human or a machine, participants reported feeling less closeness when they believed they were interacting with an AI. This suggests an anti-AI bias that hinders social connection. The researchers noted that this effect was likely due to lower motivation. When people thought they were talking to a machine, they wrote shorter responses and engaged less with the conversation.

Despite this bias, the study showed that relationship building did not disappear entirely. Participants still reported an increase in closeness after chatting with a partner labeled as AI, just to a lesser degree than with a partner labeled as human. This suggests that people can develop social bonds with artificial agents even when they are fully aware of the agent’s non-human nature.

The researchers also explored individual differences in these interactions. They looked at a personality trait called “universalism,” which involves a concern for the welfare of people and nature. The analysis indicated that individuals who scored high on universalism felt closer to partners labeled as human but did not show the same increased closeness toward partners labeled as AI. This finding suggests that personal values may influence how receptive an individual is to forming bonds with technology.

There are several potential misinterpretations and limitations to consider regarding this work. The study relied on text-based communication, which differs significantly from face-to-face or voice-based interactions. The absence of visual and auditory cues might make it easier for an AI to pass as human. Additionally, the sample consisted of university students from a Western cultural context. The findings may not apply to other age groups or cultures.

The AI responses were generated using a specific model available in early 2024. As technology evolves rapidly, newer models might yield different results. It is also important to note that the AI was prompted to act as a specific character. This means the results apply to AI that is designed to mimic human behavior, rather than a generic chatbot assistant.

Future research could investigate whether these effects persist over longer periods. This study looked only at a single, short-term interaction. Scientists could also explore whether using avatars or voice generation changes the dynamic of the relationship. It would be useful to understand if the “uncanny valley” effect, where near-human replicas cause discomfort, becomes relevant as the technology becomes more realistic.

The study has dual implications for society. On one hand, the ability of AI to foster closeness suggests it could be useful in therapeutic settings or for combating loneliness. It could help alleviate the strain on overburdened social and medical services. On the other hand, the fact that AI was most effective when disguised as a human points to significant ethical risks. Malicious actors could use such systems to create deceptive emotional connections for scams or manipulation.

The study, “AI outperforms humans in establishing interpersonal closeness in emotionally engaging interactions, but only when labelled as human,” was authored by Tobias Kleinert, Marie Waldschütz, Julian Blau, Markus Heinrichs, and Bastian Schiller.

Divorce history is not linked to signs of brain aging or dementia markers

A new study investigating the biological impact of marital dissolution suggests that a history of divorce does not accelerate physical changes in the brain associated with aging or dementia. Researchers analyzed brain scans from a racially and ethnically diverse group of older adults to look for signs of neurodegeneration. They found no robust link between having been divorced and the presence of Alzheimer’s disease markers or reductions in brain volume. These findings were published in Innovation in Aging.

The rising number of older adults globally has made understanding the causes of cognitive decline a priority for medical researchers. Scientists are increasingly looking beyond diet and exercise to understand how social and psychological experiences shape biology. Psychosocial stress is a primary area of interest in this field. Chronic stress can negatively impact the body, potentially increasing inflammation or hormonal imbalances that harm brain cells over time.

Divorce represents one of the most common and intense sources of psychosocial stress in the United States. Approximately 17 percent of adults over the age of 50 reported being divorced in 2023. The experience often involves not just the emotional pain of a relationship ending but also long-term economic strain and the loss of social standing. These secondary effects are often particularly harsh for women.

Previous research into how divorce affects the aging mind has produced conflicting results. Some past studies indicated that divorced or widowed individuals faced higher odds of developing dementia compared to married peers. Other inquiries found that ending a marriage might actually slow cognitive decline in some cases. Most of this prior work relied on memory tests rather than looking at the physical condition of the brain itself.

To address this gap, a team of researchers sought to determine if divorce leaves a physical imprint on brain structure. The study was led by Suhani Amin and Junxian Liu, who are affiliated with the Leonard Davis School of Gerontology at the University of Southern California. They collaborated with senior colleagues from Kaiser Permanente, the University of California, Davis, and Rush University.

The team hypothesized that the accumulated stress of divorce might correlate with worse brain health in later years. They specifically looked for reductions in brain size and the accumulation of harmful proteins. They also aimed to correct a limitation in previous studies that often focused only on White populations. This new analysis prioritized a cohort that included Asian, Black, Latino, and White participants.

The researchers utilized data from two major ongoing health studies. The first was the Kaiser Healthy Aging and Different Life Experiences (KHANDLE) cohort. The second was the Study of Healthy Aging in African Americans (STAR) cohort. Both groups consisted of long-term members of the Kaiser Permanente Northern California healthcare system.

Participants in these cohorts had previously completed detailed health surveys and were invited to undergo neuroimaging. The researchers identified 664 participants who had complete magnetic resonance imaging (MRI) data. They also analyzed a subset of 385 participants who underwent positron emission tomography (PET) scans. The average age of the participants at the time of their MRI scan was approximately 74 years old.

The primary variable the researchers examined was a history of divorce. They classified participants based on whether they answered yes to having a previous marriage end in divorce. They also included individuals who reported their current marital status as divorced. This approach allowed them to capture lifetime exposure to the event rather than just current status.

The MRI scans provided detailed images allowing the measurement of brain volumes. The team looked at the total size of the cerebrum and specific regions like the hippocampus. The hippocampus is a brain structure vital for learning and memory that often shrinks early in the course of Alzheimer’s disease. They also examined the lobes of the brain and the volume of gray matter and white matter.

In addition to volume, the MRI scans measured white matter hyperintensities. These are bright spots on a scan that indicate damage to the brain’s communication cables. High amounts of these hyperintensities are often associated with vascular problems and cognitive slowing.

The PET scans utilized a radioactive tracer to detect amyloid plaques. Amyloid beta is a sticky protein that clumps between nerve cells and is a hallmark characteristic of Alzheimer’s disease. The researchers calculated the density of these plaques to determine if a person crossed the threshold for amyloid positivity.

The statistical analysis accounted for various factors that could skew the results. The models adjusted for age, sex, race and ethnicity, and education level. They also controlled for whether the participant was born in the American South and whether their own parents had divorced.

The results showed that individuals with a history of divorce had slightly smaller volumes in the total cerebrum and hippocampus. They also displayed slightly greater volumes of white matter hyperintensities. However, these differences were small and not statistically significant. This means the calculations were not precise enough to rule out the possibility that the differences were due to random chance.

The PET scan analysis yielded similar results regarding Alzheimer’s pathology. There was no meaningful association between a history of divorce and the total burden of amyloid plaques. The likelihood of being classified as amyloid-positive was effectively the same for divorced and non-divorced participants.

The researchers performed several sensitivity analyses to ensure their findings were robust. They broke the data down by sex to see if men and women experienced different effects. Although the impact of divorce on brain volume seemed to trend in opposite directions for men and women in some brain regions, the confidence intervals overlapped. This suggests there is no strong evidence of a sex-specific difference in this sample.

They also checked if the definition of the sample population affected the outcome. They ran the numbers again excluding people who had never been married. They also adjusted for childhood socioeconomic status, looking at factors like parental education and financial stability. None of these adjustments altered the primary conclusion that divorce was not associated with brain changes.

There are several potential reasons why this study did not find a link between divorce and neurodegeneration. One possibility is that the stress of divorce acts more like an acute, short-term event rather than a chronic condition. Detectable changes in brain structure usually result from sustained exposure to adversity over many years. It is possible that for many people, the stress of divorce resolves before it causes permanent biological damage.

Another factor is the heterogeneity of the divorce experience. For some individuals, ending a marriage is a devastating source of trauma and financial ruin. For others, it is a relief that removes them from an unhealthy or unsafe environment. These opposing experiences might cancel each other out when analyzing a large group, leading to a null result.

The authors noted several limitations to their work. The study relied on a binary measure of whether a divorce occurred. They did not have data on the timing of the divorce or the reasons behind it. They also lacked information on the subjective level of stress the participants felt during the separation.

Future research could benefit from a more nuanced approach. Gathering data on the duration of the marriage and the economic aftermath of the split could provide clearer insights. Understanding the personal context of the divorce might help reveal specific subgroups of people who are more vulnerable to health consequences.

The study provides a reassuring perspective for the millions of older adults who have experienced marital dissolution. While divorce is undoubtedly a major life event, this research suggests it does not automatically dictate the biological health of the brain in late life. It underscores the resilience of the aging brain in the face of common social stressors.

The study, “The Association Between Divorce and Late-life Brain Health in a Racially and Ethnically Diverse Cohort of Older Adults,” was authored by Suhani Amin, Junxian Liu, Paola Gilsanz, Evan Fletcher, Charles DeCarli, Lisa L. Barnes, Rachel A. Whitmer, and Eleanor Hayes-Larson.

Infants fed to sleep at 2 months wake up more often at 6 months

A 12-month longitudinal study found that infants who are put to bed with a bottle at 2 months of age tended to display more sleep problems at 6 months of age. They needed a longer time to fall asleep, spent more time awake, and woke up during the night more often. Mothers of infants who displayed more sleep problems at 6 months of age were more likely to keep putting them to bed with a bottle at 14 months of age. The paper was published in the Journal of Sleep Research.

Many infants have sleep problems, particularly in the first year of life. These include difficulty falling asleep, frequent or prolonged night wakings, short nighttime sleep duration, and an inability to soothe themselves back to sleep. These problems are important because they are linked to later risks for both child and family well-being.

Poor infant sleep has been associated with outcomes such as overweight, obesity, and difficulties in emotional and behavioral regulation. Sleep problems also affect parents, contributing to higher depressive symptoms, lower energy, and less adaptive parenting practices. Research suggests that infant sleep and parenting behaviors influence each other in a bidirectional, transactional way over time.

One parenting practice of interest is putting an infant to bed with a bottle, which is believed to interfere with the infant’s ability to self-soothe to sleep. Feeding infants to sleep is associated with shorter nighttime sleep duration, more frequent night wakings, and greater sleep fragmentation. Expert guidance therefore emphasizes putting infants to bed while drowsy but still awake, rather than using feeding as a sleep aid.

Providing a bottle at bedtime has also been identified as a feeding practice that promotes obesity, linking sleep routines to physical health outcomes. Poor infant sleep may, in turn, increase parents’ reliance on bottle-to-bed practices as a way to manage nighttime distress.

Study author Esther M. Leerkes and her colleagues wanted to examine associations between putting the infant to bed with a bottle and maternal-reported infant sleep problems. They conducted a 12-month longitudinal study in which they followed a group of infants and their mothers from the infants’ 2nd month of life until the infants were 14 months old.

Pregnant women in their third trimester were recruited in and around Guilford County, North Carolina, to participate in the Infant Growth and Development Study. The primary goal of that larger study was to identify early life predictors of childhood obesity. Originally, 299 women were recruited. The average age of these mothers was 29.71 years.

Data from participating women were collected when their infants were 2 months, 6 months, and 14 months old. Of these women, 90% provided data at the 2-month wave, 81% at 6 months, and 76% at 14 months.

Mothers reported how often they put their infant to bed with a bottle of formula, breast milk, juice, juice drink, or any other kind of milk by providing ratings on a 5-point scale. They reported infants’ sleep problems using the Brief Infant Sleep Questionnaire.

The study authors included data on maternal education, race, and their participation in the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) in their analyses. They also controlled for maternal depressive symptoms, maternal sleep quality, breastfeeding status, and weekly work hours. WIC is a U.S. federal nutrition assistance program that provides supplemental foods, nutrition education, and health referrals to low-income pregnant women, new mothers, infants, and young children.

Results showed that infants who were put to bed with a bottle more frequently at 2 months of age tended to display more sleep problems at 6 months of age. They needed a longer time to fall asleep, spent more time awake at night, and had more frequent night wakings.

Mothers whose infants woke up more frequently and spent less time sleeping during the night at 6 months were more likely to still be putting them to bed with a bottle at 14 months of age.

“In conclusion, putting infants to bed with a bottle and infant sleep problems influence one another across infants’ first year and into their second year. Given infant sleep problems are a predictor of maladaptive infant, parent and family outcomes, efforts to prevent parental use of this strategy are important to promote infant and parent well-being,” the study authors concluded.

The study contributes to the scientific knowledge about infant sleep patterns. However, it should be noted that both infants’ sleep quality and bottle-to-bed practices were reported by mothers, leaving room for reporting bias and common method bias to have affected the results.

The paper, “Transactional Associations Between Bottle to Bed and Infant Sleep Problems Over the First Year,” was authored by Esther M. Leerkes, Agona Lutolli, Cheryl Buehler, Lenka Shriver, and Laurie Wideman.

Eye contact discomfort does not explain slower emotion recognition in autistic individuals

Recent findings published in the journal Emotion suggest that the discomfort associated with making eye contact is not exclusive to individuals with a clinical autism diagnosis but scales with autistic traits found in the general population. The research team discovered that while this social unease is common among those with higher levels of autistic traits, it does not appear to be the direct cause of difficulties in recognizing facial expressions.

The concept of autism has evolved significantly in recent years. Mental health professionals and researchers increasingly view the condition not as a binary category but as a spectrum of traits that exist throughout the general public. This perspective implies that the distinction between a person with an autism diagnosis and a neurotypical person is often a matter of degree rather than a difference in kind.

Features associated with autism, such as sensory sensitivities or preferences for repetitive behaviors, can be present in anyone to varying extents. One of the most recognizable features associated with autism is a reduction in mutual gaze during social interactions. Autistic individuals frequently report that meeting another person’s eyes causes intense sensory or emotional overarousal.

Despite these self-reports, the scientific community has not fully determined why this avoidance occurs or how it impacts social cognition. Previous theories posited that avoiding eye contact limits the visual information a person receives. If a person does not look at the eyes, they might miss subtle cues required to identify emotions such as fear or happiness.

To investigate this, a team of researchers led by Sara Landberg from the University of Gothenburg in Sweden designed a study to disentangle these factors. The study included co-authors Jakob Åsberg Johnels, Martyna Galazka, and Nouchine Hadjikhani. Their primary goal was to examine how eye gaze discomfort relates to autistic traits, distinct from a formal diagnosis.

They also sought to understand the role of other conditions that often co-occur with autism. One such condition is alexithymia, which is characterized by a difficulty in identifying and describing one’s own emotions. Another is prosopagnosia, often called “face blindness,” which involves an impairment in recognizing facial identity.

The researchers recruited 187 adults from English-speaking countries through an online platform. This method allowed them to access a diverse sample of the general public rather than relying solely on clinical patients. The participants completed a series of standardized questionnaires to measure their levels of autistic traits, alexithymia, and face recognition abilities.

To assess sensory experiences, the group answered questions about their sensitivity to stimuli like noise, light, and touch. The study also utilized a specific “Eye Contact Questionnaire.” This tool asked participants directly if they found eye contact unpleasant and, if so, what strategies they used to manage that feeling.

In addition to the self-reports, the participants completed an objective performance test called the Emotion Labeling Task. On a computer screen, they viewed faces that had been digitally morphed to display emotions at only 40 percent intensity. This low intensity was chosen to make the task sufficiently challenging for a general adult audience.

Participants had to match the emotion shown on the screen—such as fear, anger, or happiness—to one of four label options. The researchers measured both the accuracy of the answers and the reaction time. This setup allowed the team to determine if people with high levels of specific traits were slower or less accurate at reading faces.

The data revealed clear associations between personality traits and social comfort. Participants who scored higher on the scale for autistic traits were more likely to report finding eye contact unpleasant. This supports the idea that social gaze aversion is a continuous trait in the population.

The study also identified an independent link between alexithymia and eye gaze discomfort. Individuals who struggle to understand their own internal emotional states also tend to find mutual gaze difficult. While these two traits often overlap, the statistical analysis showed that alexithymia predicts discomfort on its own.

A particularly revealing finding emerged regarding the coping strategies participants employed. The researchers asked individuals how they handled the discomfort of looking someone in the eye. The responses indicated that people with high autistic traits tend to look at other parts of the face, such as the mouth or nose.

In contrast, those with high levels of alexithymia were more likely to look away from the face entirely. They might look at the floor or in another direction. This suggests that while the symptom of gaze avoidance looks similar from the outside, the internal mechanism or coping strategy differs depending on the underlying trait.

When analyzing the performance on the Emotion Labeling Task, the researchers found no statistically significant difference in accuracy based on autistic traits. Participants with higher levels of these traits were just as capable of correctly identifying the emotions as their peers. This contrasts with some previous literature that found deficits in emotion recognition accuracy.

However, the results did show a difference in processing speed. Participants with higher levels of autistic traits took longer to identify the emotions. Similarly, those with higher levels of prosopagnosia, or difficulty recognizing identities, also demonstrated slower reaction times.

The researchers then performed a mediation analysis to see if the eye gaze discomfort explained this slower processing. The hypothesis was that discomfort might cause people to look away or avoid the eyes, which would then slow down their ability to read the emotion. The data did not support this hypothesis.

Eye gaze discomfort was not a statistically significant predictor of the reaction time on the emotion task. This implies that the discomfort one feels about eye contact and the cognitive speed of recognizing an emotion are likely separate issues. The slower processing speed associated with autistic traits seems to stem from a different cognitive mechanism than the emotional or sensory aversion to gaze.
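
Mediation analyses of this kind are commonly run as a pair of regressions: one predicting the proposed mediator (gaze discomfort) from the trait, and one predicting the outcome (reaction time) from both trait and mediator; the indirect effect is the product of the two key coefficients. The paper's code is not reproduced here, so the sketch below, with hypothetical variable names and simulated data, only illustrates the general product-of-coefficients approach:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 187  # matches the study's sample size
# Hypothetical data standing in for the real questionnaire scores.
df = pd.DataFrame({
    "autistic_traits": rng.normal(size=n),
    "gaze_discomfort": rng.normal(size=n),
    "reaction_time": rng.normal(size=n),
})

# Path a: does the trait predict the proposed mediator?
path_a = sm.OLS(df["gaze_discomfort"],
                sm.add_constant(df["autistic_traits"])).fit()

# Path b (with the direct effect controlled): does the mediator predict
# the outcome once the trait is accounted for?
path_b = sm.OLS(df["reaction_time"],
                sm.add_constant(df[["autistic_traits", "gaze_discomfort"]])).fit()

a = path_a.params["autistic_traits"]
b = path_b.params["gaze_discomfort"]
print("indirect (mediated) effect a*b:", a * b)
```

Modern treatments usually attach a bootstrapped confidence interval to the indirect effect rather than testing the individual paths, but the product-of-coefficients logic is the same.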

The study also explored sensory sensitivity. The researchers hypothesized that general sensory over-responsiveness might drive the discomfort with eye contact. However, the analysis did not find a strong link between general sensory sensitivity scores and the specific report of eye gaze discomfort.

These findings suggest that the difficulty autistic individuals face with emotion recognition may be more about processing efficiency than a lack of visual input due to avoidance. This challenges the assumption that simply training individuals to make more eye contact would automatically improve their ability to read emotions.

There are limitations to this research that must be considered. The data was collected entirely online. While this allows for a large sample, it prevents the researchers from controlling the environment in which participants took the tests. Factors such as screen size, lighting, or distractions at home could influence reaction times.

The sample also skewed toward higher education, with a majority of participants holding a university degree. This demographic skew might mean the results do not perfectly represent the broader global population. Additionally, autistic traits in this sample were slightly higher than average, which may reflect a self-selection bias in who chooses to participate in online psychological studies.

The measurement of eye gaze discomfort relied on a binary “yes or no” question followed by strategy selection. This simple metric may not capture the full complexity or intensity of the experience. Future research would benefit from using more granular scales to measure the degree of discomfort.

The researchers note that this study focused on traits rather than diagnostic categories. This approach is beneficial for understanding the continuum of human behavior. However, it means the results might not fully apply to individuals with profound autism who experience high functional impairment.

Future investigations could expand on the distinct coping strategies identified here. Understanding why individuals with alexithymia look away completely, while those with autistic traits look at other facial features, could inform better support strategies. It suggests that interventions should be tailored to the specific underlying profile of the individual.

The study also raises questions about the role of social anxiety. While the team controlled for several factors, they did not specifically measure current anxiety levels. It is possible that general social anxiety plays a role in the strategies people use to avoid eye contact.

The study, “Eye Gaze Discomfort: Associations With Autistic Traits, Alexithymia, Face Recognition, and Emotion Recognition,” was authored by Sara Landberg, Jakob Åsberg Johnels, Martyna Galazka, and Nouchine Hadjikhani.

A high-sugar breakfast may trigger a “rest and digest” state that dampens cognitive focus

Starting the day with a sugary pastry might feel like a treat, but new research suggests it could sabotage your workday before it begins. A study published in the journal Food and Humanity indicates that a high-fat, high-sugar morning meal may dampen cognitive planning abilities and increase sleepiness in young women. The findings imply that nutritional choices at breakfast play a larger role in regulating morning physiological arousal and mental focus than previously realized.

Dietary habits vary widely across populations, yet breakfast is often touted as the foundation for daily energy. Despite this reputation, statistical data indicates that a sizable portion of adult women frequently consume confectioneries or sweet snacks as their first meal of the day. Researchers identify this trend as a potential public health concern, particularly regarding productivity and mental well-being in the workplace.

The autonomic nervous system regulates involuntary body processes, including heart rate and digestion. It functions through two main branches: the sympathetic nervous system and the parasympathetic nervous system. The sympathetic branch prepares the body for action, often described as the “fight or flight” response.

Conversely, the parasympathetic branch promotes a “rest and digest” state, calming the body and conserving energy. Professional work performance typically requires a certain level of alertness and physiological arousal. Fumiaki Hanzawa and colleagues at the University of Hyogo in Japan sought to understand how different breakfast compositions influence this delicate neural balance.

Hanzawa and his team hypothesized that the nutrient density of a meal directly impacts how the nervous system regulates alertness and cognitive processing shortly after eating. To test this, they designed a randomized crossover trial involving 13 healthy female university students. This specific study design ensured that each participant acted as her own control, minimizing the impact of individual biological variations.

On two separate mornings, the women arrived at the laboratory after fasting overnight. They consumed one of two test meals that contained an identical amount of food energy, totaling 497 kilocalories. The researchers allowed for a washout period of at least one week between the two sessions to prevent any lingering effects from the first test.

One meal option was a balanced breakfast modeled after a traditional Japanese meal, known as Washoku. This included boiled rice, salted salmon, an omelet, spinach with sesame sauce, miso soup, and a banana. The nutrient breakdown of this meal favored carbohydrates and protein, with a moderate amount of fat.

The alternative was a high-fat, high-sugar meal designed to mimic a common convenient breakfast of poor nutritional quality. This consisted of sweet doughnut holes and a commercially available strawberry milk drink. This meal derived more than half its total energy from fat and contained very little protein compared to the balanced option.

The researchers monitored several physiological markers for two hours following the meal. They measured body temperature inside the ear to track diet-induced thermogenesis, which is the production of heat in the body caused by metabolizing food. They also recorded heart rate variability to assess the activity of the autonomic nervous system.
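
Heart rate variability is conventionally computed from the intervals between successive heartbeats, and time-domain indices such as RMSSD are widely used as markers of parasympathetic ("rest and digest") activity. The study's exact pipeline is not specified here, so the following is only a sketch of the standard RMSSD calculation on hypothetical beat-to-beat intervals:

```python
import numpy as np

def rmssd(rr_intervals_ms: np.ndarray) -> float:
    """Root mean square of successive differences between heartbeats.

    Higher RMSSD is conventionally read as stronger parasympathetic
    (vagal) influence on the heart.
    """
    diffs = np.diff(rr_intervals_ms)
    return float(np.sqrt(np.mean(diffs ** 2)))

# Hypothetical RR intervals (milliseconds) from a post-meal recording.
rr = np.array([812, 795, 830, 841, 808, 799, 825], dtype=float)
print(f"RMSSD = {rmssd(rr):.1f} ms")
```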

At specific intervals, the participants completed computerized cognitive tests. These tasks were designed to measure attention and executive function. Specifically, the researchers looked at “task switching,” which assesses the brain’s ability to shift attention between different rule sets.

The participants also rated their subjective feelings on a sliding scale. They reported their current levels of fatigue, vitality, and sleepiness at multiple time points. This allowed the researchers to compare the women’s internal psychological states with their objective physiological data.

The physiological responses showed distinct patterns depending on the food consumed. The balanced breakfast prompted a measurable rise in body temperature and heart rate shortly after eating. This physiological shift suggests an activation of the sympathetic nervous system, preparing the body for the day’s activities.

In contrast, the doughnut and sweetened milk meal failed to raise body temperature to the same degree. Instead, the data revealed a dominant response from the parasympathetic nervous system immediately after consumption. This suggests the sugary meal induced a state of relaxation and digestion rather than physiological readiness.

Subjective reports from the participants mirrored these physical changes. The women reported feeling higher levels of vitality after consuming the balanced meal containing rice and fish. This feeling of energy persisted during the post-meal monitoring period.

Conversely, when the same women ate the high-fat, high-sugar breakfast, they reported increased sleepiness. This sensation of lethargy aligns with the parasympathetic dominance observed in the heart rate data. The anticipated energy boost from the sugar did not translate into a feeling of vitality.

The cognitive testing revealed that the sugary meal led to a decline in planning function. Specifically, the participants struggled more with task switching after the high-fat, high-sugar breakfast compared to the balanced meal. This function is vital for organizing steps to achieve a goal and adapting to changing work requirements.

Unexpectedly, participants performed slightly better on a specific visual attention task after the high-fat, high-sugar meal. The authors suggest this could be due to a temporary dopamine release triggered by the sweet taste. However, this isolated improvement did not extend to the more complex executive functions required for planning.

The researchers propose that the difference in carbohydrate types may explain some of the results. The balanced meal contained rice, which is rich in polysaccharides like amylose and amylopectin. These complex carbohydrates digest differently than the sucrose found in the doughnuts and sweetened milk.

Protein content also likely played a role in the thermal effects observed. The balanced meal contained significantly more protein, which is known to require more energy to metabolize than fat or sugar. This thermogenic effect contributes to the rise in body temperature and the associated feeling of alertness.

The study implies that work performance is not just about caloric intake but the quality of those calories. A breakfast that triggers a “rest and digest” response may be counterproductive for someone attempting to start a workday. The mental fog and sleepiness associated with the high-fat, high-sugar meal could hinder productivity.

While the results provide insight into diet and physiology, the study has limitations that affect broader applications. The sample size was small, involving only 13 participants from a specific age group and gender. This limits the ability to generalize the results to men or older adults with different metabolic profiles.

The study also focused exclusively on young students rather than full-time workers. Actual workplace stress and physical demands might interact with diet in ways this laboratory setting could not replicate. Additionally, the study only examined immediate, short-term effects following a single meal.

It remains unclear how long-term habitual consumption of high-fat, high-sugar breakfasts might alter these responses over months or years. Chronic exposure to such a diet could potentially lead to different adaptations or more severe deficits. The researchers note that habitual poor diet is already linked to cognitive decline in other epidemiological studies.

Hanzawa and the research team suggest that future investigations should expand the demographic pool. Including male participants and older workers would help clarify if these physiological responses are universal. They also recommend examining how these physiological changes translate into actual performance metrics in a real-world office environment.

The study, “High-fat, high-sugar breakfast worsen morning mood, cognitive performance, and cardiac sympathetic nervous system activity in young women,” was authored by Fumiaki Hanzawa, Manaka Hashimoto, Mana Gonda, Miyoko Okuzono, Yumi Takayama, Yukina Yumen, and Narumi Nagai.

Neuroscientists reveal how jazz improvisation shifts brain activity

Recent findings in neuroscience provide new evidence that musical creativity is not a static trait but a dynamic process involving the rapid reconfiguration of brain networks. By monitoring the brain activity of skilled jazz pianists, an international research team discovered that high levels of improvisational freedom rely less on introspection and more on sensory and motor engagement. The study suggests that the brain shifts its processing strategy depending on how much creative liberty a musician exerts. These findings were published in the Annals of the New York Academy of Sciences.

Creativity is a complex human ability often defined as the capacity to produce ideas that are both novel and appropriate for a given context. One scientific view proposes that creativity emerges from a balance between constraints and freedom, or between what is predictable and what is surprising. Musical improvisation offers an ideal setting to study this balance because it requires musicians to generate new material spontaneously while adhering to specific structural rules.

Previous neuroimaging studies have identified various brain regions associated with improvisation. These include areas linked to motor planning, emotional processing, and the monitoring of one’s own performance. However, most of these studies have looked at brain activity as a static average over time. This approach can miss the rapid fluctuations in neural connectivity that characterize real-time creative performance. The authors of the current study sought to map these fleeting changes to understand how the brain adapts to different levels of improvisational constraints.

“My main motivation for the study was a long-standing scientific challenge about how to study creativity in real time,” said study author Peter Vuust, the director of the Center for Music in the Brain and professor at Aarhus University and the Royal Academy of Music Aarhus.

“Much research looks at finished products or abstract tasks, but fewer studies capture the process of creating something new as it unfolds in the brain. Musical jazz improvisation offers a rare opportunity because it is spontaneous yet structured—musicians create novel material moment-to-moment while still following certain rules relating to harmony, rhythm and structure.”

“So the gap was twofold: 1) A need for ecologically valid models of creativity (real behavior, not artificial lab tasks). 2) Limited knowledge about how whole-brain networks dynamically reconfigure during different levels of creative freedom.”

“In the Center for Music in the Brain we have the unique capability of studying brain activity as it unfolds in real time, using state-of-the-art brain imaging combined with whole-brain modelling methods which allow for understanding the shifting brain network activity over time,” Vuust explained.

The study included 16 male jazz pianists with significant experience in the genre. All participants were right-handed and had no history of neurological disease. On average, the musicians had over ten years of dedicated jazz practice. The researchers utilized functional magnetic resonance imaging to record brain activity. This imaging technique measures changes in blood flow to infer which areas of the brain are most active.

To allow the musicians to play while inside the MRI scanner, the team used a custom-designed, non-magnetic fiber optic keyboard. This 25-key instrument was positioned on the participants’ laps. This setup allowed the musicians to play with their right hand while listening to audio through noise-canceling headphones.

The experimental procedure involved playing along with a backing track of the jazz standard “Days of Wine and Roses.” The backing track provided the bass and drums to create a realistic musical context. The participants performed under four specific conditions. First, they played the melody of the song from memory. Second, they played an alternate melody from a score sheet they had briefly studied.

The third and fourth conditions introduced improvisation. In the third task, musicians improvised variations based on the melody. In the fourth and final task, they improvised freely based solely on the song’s chord progression. This design created a gradient of creative freedom, ranging from strict memorization to unconstrained expression. Each condition lasted for 45 seconds and was repeated multiple times.

The researchers analyzed the musical output using digital tools to assess complexity. They measured the number of notes played and calculated the “entropy” of the melodies. In this context, entropy refers to the unpredictability of the musical choices. Higher entropy indicates a performance that is less repetitive and harder to predict.
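
The entropy in question is the standard Shannon measure applied to the distribution of musical events: a line that recycles the same few notes scores low, while one that spreads its choices across many notes scores high. A minimal sketch over pitch sequences (the authors' exact event coding may differ):

```python
import math
from collections import Counter

def shannon_entropy(events) -> float:
    """Shannon entropy (in bits) of a sequence of discrete events."""
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A repetitive line versus a more unpredictable one (MIDI pitch numbers).
repetitive = [60, 62, 60, 62, 60, 62, 60, 62]
exploratory = [60, 63, 67, 58, 71, 62, 66, 69]
print(shannon_entropy(repetitive))   # 1.0 bit: two symbols, evenly reused
print(shannon_entropy(exploratory))  # 3.0 bits: eight distinct symbols
```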

The behavioral results showed the expected relationship between freedom and musical complexity. As the task became less constrained, the musicians played significantly more notes. The condition involving free improvisation on the chord changes resulted in the highest number of notes and the highest level of entropy. The analysis also revealed that during free improvisation, the musicians tended to use smaller intervals between notes. This suggests a dense and rapidly moving musical style.

To analyze the brain imaging data, the researchers employed a method known as Leading Eigenvector Dynamics Analysis. This advanced analytical technique focuses on the phase-locking of blood oxygenation level-dependent signals. It allows scientists to detect recurrent patterns of functional connectivity that may only last for short periods. This is distinct from traditional methods that assume brain connectivity remains constant throughout a task.
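
In outline, the method extracts an instantaneous phase for every brain region, builds a matrix describing how phase-aligned each pair of regions is at each time point, and keeps only the leading eigenvector of that matrix as a compact signature of the momentary network configuration; clustering those signatures across time yields the recurrent states. The sketch below follows that published outline on synthetic data and should not be read as the authors' actual pipeline:

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n_regions, n_timepoints = 90, 300
bold = rng.normal(size=(n_regions, n_timepoints))  # synthetic BOLD signals

# 1. Instantaneous phase of each region's signal via the Hilbert transform.
phase = np.angle(hilbert(bold, axis=1))

# 2. At each time point: phase-locking matrix and its leading eigenvector.
eigenvectors = []
for t in range(n_timepoints):
    dphi = phase[:, t][:, None] - phase[:, t][None, :]
    plm = np.cos(dphi)                 # pairwise phase alignment
    vals, vecs = np.linalg.eigh(plm)   # eigenvalues in ascending order
    v1 = vecs[:, -1]                   # eigenvector of the largest eigenvalue
    if v1.sum() < 0:                   # resolve the eigenvector's arbitrary sign
        v1 = -v1
    eigenvectors.append(v1)
eigenvectors = np.array(eigenvectors)

# 3. Cluster the eigenvectors over time into recurrent brain "substates"
#    (the study reports five).
states = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(eigenvectors)
print(np.bincount(states))  # how often each substate occurs
```

Clustering the eigenvectors rather than the full connectivity matrices is what keeps the analysis tractable: each time point is summarized by a single vector with one entry per brain region.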

The imaging results revealed five distinct brain states, or “substates,” that appeared with varying frequency across the conditions. One of these states was associated with the brain’s reward system. It included the orbitofrontal cortex, a region involved in sensory integration and pleasure. This reward-related state was more active during all playing conditions compared to when the musicians were resting. This finding aligns with the idea that playing music is inherently rewarding, regardless of whether one is improvising or playing from memory.

“A simple takeaway is: Creativity in music is not located in a single ‘creative center’ of the brain,” Vuust told PsyPost. “Instead, it emerges from rapid shifts between multiple brain networks—including those involved in movement, hearing, reward, attention, and self-reflection, depending on the improvisational tasks: whether you are trying to improvise on the melody or the chord changes.”

A distinct pattern emerged when the researchers compared the improvisation tasks to the memory tasks. Both the melodic and free improvisation conditions significantly increased the probability of engaging a brain state dominated by auditory and sensorimotor networks, as well as the posterior salience network. These regions are critical for processing sound, coordinating complex movements, and integrating sensory information.

The increased activity in auditory and sensorimotor areas suggests that improvisation places a heavy demand on the brain’s ability to predict and execute sound. Jazz musicians often report “hearing” lines in their head immediately before playing them. The data supports the notion that improvisation is a highly embodied activity. It relies on a tight coupling between the auditory cortex and the motor system to navigate the musical landscape in real time.

Perhaps the most distinct finding appeared in the condition with the highest level of creative freedom. When musicians improvised freely on the chords, the researchers observed a decrease in the occurrence of a brain state involving the default mode network and the executive control network. The default mode network is typically active during introspection, mind-wandering, and self-referential thought. The executive control network is usually involved in planning and goal-directed behavior.

The reduced presence of these networks during free improvisation implies a shift in cognitive strategy. To generate novel ideas rapidly without getting stuck in evaluation or planning, the brain may need to suppress these introspective systems. This aligns with the concept of “flow,” where an individual becomes fully immersed in an activity and self-consciousness recedes. The musicians appeared to rely less on internal planning and more on external sensory feedback.

“Another key message is that greater freedom in improvisation changes how the brain is organized in the moment,” Vuust said. “When musicians improvise more freely, their brains rely more on auditory–motor and salience systems (listening, acting, reacting), and less on heavily controlled, evaluative networks. In everyday terms: creativity often involves letting go of over-analysis while staying highly engaged and responsive.”

The study indicates that creativity involves a flexible reconfiguration of neural resources. Moderate improvisation may require a balance of structure and freedom. However, highly unconstrained improvisation appears to demand a surrender of executive control in favor of sensory-motor processes.

“The effects are not about small local activations but about system-level reconfigurations—which networks are more or less likely to appear over time,” Vuust explained. “Practically, this means the significance lies in patterns and probabilities, not single brain spots lighting up.”

“For musicians and educators, the implication is that training creativity may involve balancing structure and freedom, rather than maximizing one or the other. For neuroscience, it shows that dynamic brain-state analysis can reveal meaningful differences even within subtle variations of the same task.”

As with all research, there are limitations to consider. The sample consisted exclusively of male jazz pianists. This homogeneity limits the ability to generalize the results to female musicians or those from other musical traditions. The creative demands of jazz are specific and may differ from those in other arts, such as painting or writing.

Another consideration is the nature of the “novelty” observed. While the free improvisation condition produced the most unpredictable music, the study did not assess the aesthetic quality of these performances. Higher entropy does not necessarily equate to better music. Previous research suggests that listeners often prefer a balance of complexity and familiarity. The most unconstrained performances might be the most cognitively demanding but not necessarily the most pleasing to an audience.

“Another possible misinterpretation is to assume that more novelty automatically equals more enjoyment or value,” Vuust noted. “The study notes that pleasure and complexity often follow an inverted-U relationship—too much unpredictability can reduce perceived enjoyment.”

Future research could address these gaps by recruiting a more diverse group of participants. Comparing jazz improvisation with other forms of real-time creativity could reveal which brain dynamics are universal and which are specific to music. The authors also suggest that future studies could investigate how these brain states relate to subjective feelings of inspiration or enjoyment. Understanding the link between neural dynamics and the quality of the creative product remains a key goal for the field.

The study, “Creativity in Music: The Brain Dynamics of Jazz Improvisation,” was authored by Patricia Alves Da Mota, Henrique Miguel Fernandes, Ana Teresa Lourenço Queiroga, Eloise Stark, Jakub Vohryzek, Joana Cabral, Ole Adrian Heggli, Nuno Sousa, Gustavo Deco, Morten Kringelbach, and Peter Vuust.

A new experiment reveals an unexpected shift in how pregnant women handle intimidation

A new study published in the British Journal of Psychology provides evidence that women in the late stages of pregnancy and early motherhood do not display increased submissiveness when facing potential social threats. Contrary to the expectation that physical vulnerability would lead to conflict avoidance, the findings suggest that women in the perinatal period tend to aggressively protect resources when interacting with threatening-looking men.

The rationale behind this investigation is rooted in the evolutionary history of human development. Human infants are born in a state of high dependency, requiring significant time and energy from caregivers to survive. Throughout history, high rates of infant mortality likely necessitated specific cognitive adaptations in parents to help them assess and manage dangers in the environment.

Psychological theories, such as protection motivation theory, propose that people constantly weigh potential threats against their ability to cope with them. When the perceived threat outweighs the ability to cope, individuals typically adopt protective or avoidant behaviors.

This calculation is particularly relevant during pregnancy. The perinatal period, defined as the months leading up to and immediately following childbirth, is physically demanding. Pregnant women experience reduced physical mobility and significant metabolic costs associated with fetal development.

Because of these physical limitations and the high value of the developing fetus, previous models of parental motivation suggested that pregnant women should be highly risk-averse. The logic follows that if a pregnant woman is physically vulnerable, she should avoid escalation and confrontation to prevent harm to herself and her unborn child.

Past research supports the idea that pregnancy heightens sensitivity to danger. For example, pregnant women often show stronger reactions to disgust and are better at recognizing angry or fearful faces than non-pregnant women. The authors of the current study wanted to determine if this heightened sensitivity translates into behavioral submissiveness.

“While previous work demonstrated that pregnancy may change how women perceive threats—such as how fast they spot an angry or fearful face—we didn’t know how this might lead to changes in their actual behavior. Particularly, we became interested in knowing if this enhanced sensitivity to threat may impact their willingness to compete over resources they may need,” said co-author Shawn Geniole, an associate professor at the University of the Fraser Valley.

“On one hand, pregnancy brings new financial and other demands, making it important to compete for and secure resources (e.g., preferred/overtime shifts at work or better products/services). On the other hand, if pregnancy boosts sensitivity to social threats, it may bring greater cautiousness, increasing the likelihood of ‘backing down’ to avoid any risks of conflict or retaliation.”

“We therefore wanted to conduct this study to determine precisely how pregnancy, and more specifically the perinatal period—the months leading up to and immediately after delivery—would impact these types of competitive decisions. To do so, we used an experimental economics task in which women had to decide how to share resources with others.”

The researchers recruited a total of 139 participants. The sample included 86 perinatal women and a control group of 53 non-perinatal women. The perinatal group was tested at two specific time points: approximately 29 weeks into their gestation and again one month after giving birth. The control group also completed testing at two time points separated by a two-month interval to match the timeline of the pregnant participants.

The primary measure used in the study was the “Threat Premium Task.” This is a competitively framed variation of the Ultimatum Game, a standard tool in economic psychology. In this task, participants were given a set amount of virtual money, specifically ten coins, and asked to propose a split with a series of partners. The participants were told that the goal was to keep as much money as possible. However, there was a catch. If the partner accepted the offer, the money was split as proposed. If the partner rejected the offer, neither party received anything.

This design forced participants to make a strategic calculation. Offering a low amount was profitable but risky, as a threatening partner might be perceived as more likely to reject the offer out of spite or aggression. Offering a high amount was safer but resulted in less resource acquisition for the participant. This “threat premium”—the extra money paid to scary-looking partners—is a measure of submissive behavior.
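
The strategic logic becomes concrete with a little expected-value arithmetic: if a threatening-looking partner is believed to be more likely to reject a stingy split, paying a premium can maximize expected earnings. In the toy example below, the rejection probabilities are invented for illustration and are not taken from the paper:

```python
def expected_payoff(offer: int, p_reject: float, pot: int = 10) -> float:
    """Proposer's expected coins: keep (pot - offer) only if accepted."""
    return (1.0 - p_reject) * (pot - offer)

# Invented rejection probabilities for illustration only.
p_reject_neutral = {2: 0.30, 5: 0.05}
p_reject_threatening = {2: 0.70, 5: 0.10}

for offer in (2, 5):
    print(offer,
          expected_payoff(offer, p_reject_neutral[offer]),
          expected_payoff(offer, p_reject_threatening[offer]))
# Against the neutral partner the low offer pays best (5.6 > 4.75 coins);
# against the threatening one the generous offer wins (4.5 > 2.4), which
# is why appeasement can be the profit-maximizing strategy.
```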

“The women in the study had to carefully balance both the desire to maximize earnings and to avoid retaliation. We were particularly interested in how sensitive they would be—or how much their decisions would change—when interacting with others who appeared more or less threatening.”

The “partners” in this game were not real people but photographs of male faces. Unbeknownst to the participants, these faces had been digitally manipulated to appear either more or less threatening.

The results contradicted the preregistered predictions of the research team. The non-pregnant control group behaved as expected. They were sensitive to the social cues of threat and tended to offer more money to the threatening-looking men than to the non-threatening men. This indicates a typical strategy of appeasement to avoid conflict.

But the perinatal women showed a completely different behavioral pattern. Instead of paying a higher premium to threatening men, they became less generous. The study found that pregnant women were less sensitive to the social threat cues and less willing to cede resources. They dominantly protected their coins rather than submissively handing them over.

This effect was particularly pronounced during the pre-birth session when the women were in the third trimester of pregnancy. The data indicated that the anticipated “threat premium” was effectively eliminated in the perinatal group.

“The biggest takeaway is that pregnancy doesn’t necessarily make women more submissive,” explained co-author Valentina Proietti, an assistant professor at the University of the Fraser Valley. “Based on previous research, we originally expected that pregnant and postpartum women might be more prone to submissive behavior and more likely to relinquish their resources when faced with threatening individuals.”

“However, we found the exact opposite to be true: women in the perinatal period actually defended their resources more dominantly than those who weren’t pregnant, especially when they were dealing with people who looked more threatening. In short: while the common assumption is that heightened threat-sensitivity leads to caution in the face of such threat, our findings suggest it may actually trigger a more dominant drive to secure and protect the resources necessary for themselves and their growing families.”

These findings align with a phenomenon observed in non-human mammals known as maternal aggression. In many species, including rodents and bears, females become significantly more aggressive and protective during pregnancy and lactation. This biological shift prioritizes the security and provision of offspring over the mother’s own safety or tendency toward conflict avoidance.

The researchers suggest that in humans, this maternal defense mechanism may manifest as a refusal to be intimidated by social threats when resources are at stake. The drive to secure necessary assets for the growing family appears to suppress the usual tendency to back down from threatening individuals.

“This pattern may fit with what researchers call ‘maternal aggression’ in other mammals — think of a protective and potentially aggressive mother bear with her cubs,” the researchers told PsyPost. “While we didn’t measure aggression directly, the fact that perinatal women were less submissive in the face of potential threats aligns with this idea.”

“While our study used a more controlled economic task, the results may point toward a more general change in behavior during a truly unique life stage. Readers should think of the perinatal phase as a special/sensitive period—a time when a woman’s social and economic priorities may shift to meet the new demands of motherhood.”

“Although we used a rather simple economic task in our study, the same mechanisms at play here may extend to other types of competitive interactions in the real world, such as bargaining for better work or overtime shifts, navigating online marketplaces, or negotiating for services. We view this study as a first step in understanding how this special biological period reshapes economic decision-making, and we hope to explore these more ‘real-world’ economic interactions in future research.”

The study offers new insights into the psychology of pregnancy, but — as with all research — there are limitations to consider. The study utilized only male faces as the source of social threat.

“Although we’d ideally like to study real‑world economic interactions and other forms of competition that involve a variety of interaction partners, our study focused only on how women responded to threatening situations involving unfamiliar men. As a result, we still don’t know how perinatal women might behave in similar competitive situations with other women. That remains an important direction for future research.”

Additionally, while the sample size was relatively large for this type of research, distinguishing the specific effects of pregnancy from the general effects of parenthood requires even larger groups that compare pregnant women exclusively to women who have never had children.

The study also raises questions about the biological mechanisms driving this behavior. The researchers speculate that hormonal changes may play a key role. Testosterone levels, for instance, are known to rise during pregnancy. In men, higher testosterone is associated with the same type of dominant behavior observed in the perinatal women in this study.

However, the researchers did not measure hormone levels, so this link remains a hypothesis for future investigation. Future work might also explore how this resource-protection drive interacts with the known decreases in mating motivation that occur during pregnancy.

Looking ahead, “we would like to investigate how these effects may extend to real-world economic interactions and how changes in hormones during pregnancy may play a role and/or explain some of the findings here,” Geniole said.

“One ongoing challenge with this kind of research is finding a large enough sample of participants at the right moment in pregnancy or postpartum,” Proietti added. “If you are a professional who supports women during this period—whether you are a midwife, doula, lactation consultant, or work in a maternity ward—and you’d like to see this population be better represented in research, we’d be happy to connect by email at lifespan.lab@ufv.ca or through Instagram (https://www.instagram.com/bicocca_child_and_baby_lab?igsh=dGUxNmdpeDR4djEx) and share information about any future studies! If interested, readers can also check out some additional work at https://bicoccababylab.wixsite.com/website/en.”

The study, “Perinatal women dominantly protect—rather than submissively cede—resources when interacting with threatening-looking others,” was authored by Valentina Proietti, Ilenia Mastroianni, Valentina Silvestri, Martina Arioli, Viola Macchi Cassia, and Shawn N. Geniole.

Trump-related search activity signals a surprising trend in the stock market

A new study suggests that the amount of attention paid to Donald Trump online helps predict optimism on Wall Street. Published in American Behavioral Scientist, the research indicates that spikes in Google searches for the former president tend to precede increases in bullish sentiment among individual investors. This relationship appears to have grown stronger in the period following the 2024 U.S. election.

The financial world has traditionally operated under the assumption that markets are rational. This view holds that asset prices reflect all available information regarding economic fundamentals, such as corporate earnings, interest rates, and employment data. However, the field of behavioral finance challenges this perspective. It argues that human psychology, cognitive biases, and collective emotion play a significant role in how investors make decisions.

Political figures have always influenced markets, but typically this occurs through specific policy decisions or legislative actions. Donald Trump represents a shift in this dynamic. His influence is often exerted through a pervasive media presence and direct communication styles rather than traditional policymaking channels alone. The researchers wanted to understand if the sheer volume of attention a political figure generates can act as a signal for market mood, independent of specific policy details.

“We were motivated by a clear gap between two well-established literatures that rarely talk to each other: behavioral finance has shown that investor sentiment moves markets, and political communication research has shown that media attention shapes public perceptions, but few studies connect political attention directly to financial sentiment,” said study author Raúl Gómez Martínez, an associate professor at Rey Juan Carlos University in Madrid.

“Donald Trump offered a unique real-world case because his media presence is unusually intense and persistent, even outside formal policymaking, raising the question of whether attention alone can influence market psychology. We therefore wanted to test whether high-frequency digital signals, such as Google search activity, could capture this transmission mechanism and help explain weekly changes in retail investor optimism. In short, the study addresses the broader problem of how political narratives spill over into financial markets beyond traditional fundamentals.”

The study builds on the concept that attention is a finite resource. In the digital age, what captures the public’s focus often drives their economic expectations. The researchers sought to test whether the visibility of Donald Trump, a figure closely associated with pro-business narratives, directly impacts investor sentiment. This phenomenon is often referred to by market analysts as the “Trump trade,” where his political prominence signals potential deregulation and tax cuts.

To investigate this connection, the research team analyzed weekly data spanning from April 5, 2020, to October 12, 2025. This timeframe provided a total of 289 weekly observations. The researchers utilized Google Trends to measure public attention. This tool indexes search interest on a scale from zero to 100 rather than providing raw search numbers. It allows for a standardized comparison of interest over time.

The researchers tracked the search term “Donald Trump” within the United States to gauge the intensity of public focus. For investor sentiment, they relied on data from the American Association of Individual Investors (AAII). This non-profit organization conducts a weekly survey asking its members if they feel bullish, bearish, or neutral about the stock market over the next six months. The study focused specifically on the percentage of respondents who reported a bullish or optimistic outlook.

The research team employed statistical models known as ordinary least squares regressions. This method helps identify relationships between the search data and the sentiment survey results. They aimed to see if variations in one variable could explain variations in the other. Additionally, the researchers employed Granger causality tests. This statistical technique helps determine if one time series is useful in forecasting another, effectively checking if changes in attention happen before changes in sentiment.
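
Both techniques are standard tools in time-series econometrics. As a minimal sketch (the data and variable names are placeholders, not the authors' code), the weekly regression and a Granger test can be run with statsmodels:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(42)
weeks = 289  # matches the study's number of weekly observations
search_index = rng.uniform(0, 100, size=weeks)  # Google Trends scale, 0-100
bullish_pct = 35 + 0.05 * search_index + rng.normal(0, 5, size=weeks)

# Ordinary least squares: does search attention explain bullish sentiment?
ols = sm.OLS(bullish_pct, sm.add_constant(search_index)).fit()
print(ols.params, ols.rsquared)

# Granger causality: do past values of the search index improve forecasts
# of sentiment? By convention, the second column is tested as a predictor
# of the first.
data = np.column_stack([bullish_pct, search_index])
results = grangercausalitytests(data, maxlag=4)
```

In this setup, the test asks whether past weeks of the search index add predictive information about this week's sentiment beyond sentiment's own history, which is the sense in which attention "precedes" optimism.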

The analysis revealed a positive association between search volume and investor optimism across the entire five-year period. When online searches for Trump increased, self-reported bullish sentiment among individual investors tended to rise in the same week. The Granger causality analysis provided evidence that the search activity occurred before the shift in sentiment. This suggests that public attention flows into market optimism rather than market optimism driving the search traffic.

The researchers then isolated the data from the period following the 2024 election. This subsample covered the weeks from November 3, 2024, to October 12, 2025. In this smaller set of 50 weeks, the connection between attention and sentiment became much more pronounced. The statistical model explained approximately 15 percent of the variation in investor sentiment during this post-election phase. This is a notable increase compared to about 2 percent for the full five-year period.

The strength of the relationship more than doubled in the post-election data. This indicates that in times of heightened political activity or uncertainty, the market becomes more sensitive to political visibility. The authors suggest that the political context acts as an amplifier. When Trump is at the center of the news cycle during a critical political transition, his visibility becomes a stronger driver of economic expectations for retail investors.

“What we show is that media attention becomes a directly observable, quantifiable variable with real explanatory power for market dynamics,” Gómez Martínez told PsyPost. “Even though the full-sample fit is modest (R² ≈ 0.02), this is common in finance, where sentiment is influenced by many overlapping factors; what matters is that attention consistently adds incremental information.”

“In more sensitive political contexts, the explanatory power rises markedly (R² ≈ 0.15 and a coefficient more than double), indicating that this signal becomes substantially more relevant when uncertainty or polarization is high. In that sense, political attention measured through Google Trends can function as a new complementary market indicator—an additional behavioral barometer that investors and analysts can use alongside traditional economic and financial variables to inform investment decisions.”

These findings imply that financial markets are not driven solely by economic spreadsheets. Collective attention and mass psychology serve as measurable drivers of financial expectations. For the average person, this suggests that everyday news consumption and online behavior can indirectly influence prices by shifting the general mood of investors.

“Our findings show that spikes in public interest in a highly visible political figure like Donald Trump, measured through Google searches, tend to precede increases in investor optimism, meaning that media attention itself can shape market mood,” Gómez Martínez explained. “This suggests that everyday news consumption and online behavior can indirectly influence prices by affecting sentiment, especially among retail investors. In short, politics and digital attention are not just background noise—they can become measurable drivers of financial expectations and market dynamics.”

The study provides a practical application for the theories of behavioral finance, moving the observation that politics moves markets from anecdote to statistical evidence. It supports the idea that high-profile figures can serve as exogenous drivers of sentiment. Their media dominance can shape market psychology even before any concrete policies are enacted.

“Nothing in the results truly surprised us, because they were broadly consistent with what behavioral finance and attention-based theories would predict: highly visible political figures should influence expectations and, therefore, investor sentiment,” Gómez Martínez said. “What was important for us was not discovering an unexpected effect, but demonstrating it rigorously with data, using an econometric framework and supervised regression techniques that allow us to quantify and test the relationship formally.”

“In other words, we moved from an intuitive or anecdotal idea—’politics moves markets’—to statistically validated evidence. That empirical validation is what gives the findings credibility and practical value for both researchers and practitioners.”

While the findings provide evidence of a link between political attention and market mood, the study has certain limitations. The sentiment data comes from the American Association of Individual Investors, which reflects the views of retail investors rather than large institutional firms. Retail investors are often considered more susceptible to behavioral biases and media influence than professional fund managers. It is possible that institutional investors interpret these attention spikes differently.

Google Trends measures the volume of searches but not the intent behind them. A spike in searches could result from negative controversies just as easily as positive news. The tool does not distinguish between a supporter searching for rally dates and a critic searching for details on a scandal. The current study assumes the attention is generally interpreted through the lens of the “Trump trade,” but it does not qualitatively analyze the content of the news driving the searches.

The researchers also note that financial markets are complex ecosystems influenced by countless variables simultaneously. Political attention is one factor among many.

“A potential misinterpretation we would like to preempt is the idea that media attention to a single political figure ‘drives the market’ by itself,” Gómez Martínez told PsyPost. “Our results do not imply that political searches replace fundamentals such as earnings, interest rates, or macroeconomic news; rather, they show that attention adds an additional behavioral layer that helps explain changes in sentiment at the margin. Financial markets are influenced by many overlapping forces, so this variable should be understood as complementary, not deterministic.”

Future research could incorporate sentiment analysis of news headlines. This would allow researchers to determine the tone of the media coverage alongside the volume. Expanding the scope to include institutional investor data would help determine if professional traders react similarly to retail investors. The researchers also suggest applying this methodology to other political figures to see if the phenomenon is unique to Trump.

“This paper is part of an ongoing collaboration between researchers at Universidad Rey Juan Carlos (URJC) and Dublin City University (DCU), and it represents just one step in a broader research agenda focused on understanding investor sentiment as a measurable and actionable variable,” Gómez Martínez explained. “Our long-term goal is to continue developing models that integrate behavioral indicators—such as digital attention, surveys, and online activity—alongside traditional financial data to improve how markets are analyzed and forecasted.”

“We currently have several related articles in progress that expand this line of work using alternative sentiment proxies and more advanced econometric and supervised machine-learning regression techniques to strengthen predictive performance. Importantly, this research also has practical transfer through my fintech, InvestMood, where these insights are applied to build algorithmic trading systems that help investors incorporate sentiment-based signals into real-world investments.”

“Perhaps the most important point to add is that this study illustrates how the growing availability of digital behavioral data is changing the way we can analyze financial markets,” Gómez Martínez said. “Tools such as Google Trends allow us to observe collective attention almost in real time, something that was simply not possible a decade ago, and this opens new opportunities to measure psychological and social drivers of market movements more precisely.”

“More broadly, we hope the paper encourages researchers and practitioners to think beyond purely fundamental variables and to treat attention and sentiment as legitimate, quantifiable components of market dynamics. In that sense, the study is not only about one political figure, but about demonstrating a methodology that can be applied to many other contexts where public narratives influence financial expectations.”

The study, “The Strengthening Link Between Donald Trump’s Online Attention and Wall Street Sentiment,” was authored by Raúl Gómez Martínez, Camilo Prado Román, María Luisa Medrano García, Aref Mahdavi Ardekani, and Damien Dupré.

A new mouse model links cleared viral infections to ALS-like symptoms

Recent research suggests that a person’s unique genetic makeup may determine whether a temporary viral infection triggers a permanent, debilitating brain disease later in life. A team of scientists found that specific genetic strains of mice developed lasting spinal cord damage resembling amyotrophic lateral sclerosis (ALS) long after their immune systems had successfully cleared the virus. These findings were published in the Journal of Neuropathology & Experimental Neurology.

The origins of neurodegenerative diseases have puzzled medical experts for decades. Conditions such as ALS, often called Lou Gehrig’s disease, involve the progressive death of motor neurons. This leads to muscle weakness, paralysis, and eventually respiratory failure. While a small percentage of cases run in families, the vast majority are sporadic. This means they appear without a clear family history.

Researchers have hypothesized that environmental factors likely initiate these sporadic cases. Viral infections are a primary suspect. The theory suggests a “hit and run” mechanism. A virus enters the body and causes damage or alters the immune system. The body eventually eliminates the virus. However, the pathological process continues long after the pathogen is gone. Proving this connection has been difficult because by the time a patient develops ALS, the triggering virus is no longer detectable.

To investigate this potential link, the research team needed a better animal model. Standard laboratory mice are often genetically identical. This lack of diversity fails to mimic the human population. In humans, one person might catch a cold and recover quickly, while another might develop severe complications. Standard lab mice usually respond to infections in a uniform way.

To overcome this limitation, the researchers utilized the Collaborative Cross. This is a large panel of mouse strains bred to capture immense genetic diversity. The team, led by first author Koedi S. Lawley and senior author Candice Brinkmeyer-Langford from Texas A&M University, selected five distinct strains from this collection. They aimed to see if different genetic backgrounds would result in different disease outcomes following the exact same viral exposure.

The researchers infected these genetically diverse mice with Theiler’s murine encephalomyelitis virus (TMEV). This virus is a well-established tool in neurology research. It is typically used to study conditions like multiple sclerosis and epilepsy. In this context, the scientists used it to examine spinal cord damage. They compared the infected mice to a control group that received a placebo.

The team monitored the animals over a period of three months. They assessed the mice at four days, fourteen days, and ninety days post-infection. These time points represented the acute phase, the transition phase, and the chronic phase of the disease. The researchers utilized a variety of methods to track the health of the animals. They observed clinical signs of motor dysfunction. They also performed detailed microscopic examinations of spinal cord tissues.

In the acute phase, which occurred during the first two weeks, most of the infected mouse strains showed signs of illness. The virus actively replicated within the spinal cord. This triggered a strong immune response. The researchers tracked this response by staining for Iba-1, a marker for microglia and macrophages. These are the immune cells that defend the central nervous system. As expected, inflammation levels spiked as the bodies of the mice fought the invader.

The virus targeted the lumbar region of the spinal cord. This is the lower section of the back that controls the hind legs. Consequently, the mice displayed varying degrees of difficulty walking. Some developed paresis, which is partial weakness. Others developed paralysis. The severity of these early symptoms varied widely depending on the mouse strain. This confirmed that genetics played a major role in the initial susceptibility to the infection.

The most revealing data emerged at the ninety-day mark. By this time, the acute infection had long passed. The researchers used sensitive RNA testing to look for traces of the virus. They found that every single mouse had successfully cleared the infection. There was no detectable viral genetic material left in their spinal cords. In most strains, the inflammation had also subsided.

Despite the absence of the virus, the clinical outcomes diverged sharply. One specific strain, known as CC023, remained severely affected. These mice did not recover. Instead, they exhibited lasting symptoms that mirrored human ALS. They suffered from profound muscle atrophy, or wasting, particularly in the muscles controlled by the lumbar spinal cord. They also displayed kyphosis, a hunching of the back often seen in models of neuromuscular disease.

The microscopic analysis of the CC023 mice revealed the underlying cause of these symptoms. Even though the virus was gone, the damage to the motor neurons persisted. The researchers observed lesions in the ventral horn of the spinal cord. This is the specific area where motor neurons reside. The loss of these neurons disconnected the spinal cord from the muscles, leading to the observed atrophy.

This outcome stood in stark contrast to other strains. For instance, the CC027 strain proved to be highly resistant. These mice showed almost no clinical signs of disease despite being infected with the same amount of virus. Their genetic background seemingly provided a protective shield against the neurological damage that devastated the CC023 strain.

The researchers noted that the inflammation in the spinal cord did not persist at high levels into the chronic phase. At ninety days, the number of active immune cells had returned to near-normal levels. This is a critical observation. It suggests that the ongoing disease in the CC023 mice was not driven by chronic, active inflammation. Instead, the initial viral insult triggered a cascade of damage that continued independently.

These findings support the idea that a person’s genetic background dictates how their body handles the aftermath of an infection. In susceptible individuals, a virus might initiate a neurodegenerative process that outlasts the infection itself. The study provides a concrete example of a virus causing a “hit and run” injury that leads to an ALS-like condition.

Candice Brinkmeyer-Langford, the senior author, highlighted the importance of this discovery in a press release. She noted, “This is exciting because this is the first animal model that affirms the long-standing theory that a virus can trigger permanent neurological damage or disease — like ALS — long after the infection itself occurred.”

The identification of the CC023 mouse strain is a practical advancement for the field. Current mouse models for ALS often rely on artificial genetic mutations found in only a tiny fraction of human patients. The CC023 model represents a different pathway. It models sporadic disease triggered by an environmental event. This could allow scientists to test therapies designed to stop neurodegeneration in a context that more closely resembles the human experience.

There are caveats to the study. While the symptoms in the mice resemble ALS, mice are not humans. The biological pathways may differ. Additionally, the researchers have not yet identified the specific genes responsible for the susceptibility in the CC023 strain. Understanding exactly which genes failed to protect these mice is a necessary next step.

Future research will likely focus on pinpointing these genetic factors. The team plans to investigate why the immune response in the CC023 strain failed to prevent the lasting damage. They also aim to identify biomarkers that appear early in the infection. Such markers could potentially predict which individuals are at risk for developing long-term neurological complications following a viral illness.

The study, “The association between virus-induced spinal cord pathology and the genetic background of the host,” was authored by Koedi S. Lawley, Tae Wook Kang, Raquel R. Rech, Moumita Karmakar, Raymond Carroll, Aracely A. Perez Gomez, Katia Amstalden, Yava Jones-Hall, David W. Threadgill, C. Jane Welsh, Colin R. Young, and Candice Brinkmeyer-Langford.

New study highlights distinct divorce patterns between same-sex and opposite-sex couples

New research conducted in Finland highlights distinct patterns in relationship stability when comparing same-sex and opposite-sex unions. The findings indicate that while female couples experience the highest rates of divorce, the factors contributing to these breakups vary by gender composition. The study suggests that traditional gender norms regarding income and the specific challenges faced by immigrant men in host societies play substantial roles in these outcomes. This research was published in the journal Advances in Life Course Research.

Sociologists and demographers have previously observed that same-sex couples tend to dissolve their unions at higher rates than opposite-sex couples. This trend is particularly pronounced among female couples. Despite this established pattern, the specific reasons behind these disparities have remained largely unexplained. Theoretical models suggest that minority stress, which includes experiences of discrimination and stigma, likely destabilizes these relationships.

Other theories focus on the search for a partner. Finding the right spouse involves predicting future compatibility, a process that is inherently uncertain. This uncertainty is often higher regarding economic characteristics. Researchers wanted to understand if specific observable factors could account for the stability gap. The authors of the current study aimed to determine if nationality intermarriage, religious affiliation, education, or income dynamics could explain the differences in divorce risks.

“There was increasing interest in understanding how the intersections of several minority statuses (e.g., sexual minority and immigration background) shape divorce risks. Not much was known about this because there has been a lack of sufficiently large data to statistically address these types of questions,” said study author Elina Einiö, a lecturer at the Helsinki Institute for Demography and Population Health at the University of Helsinki.

The researchers utilized comprehensive register-based data from Statistics Finland. The dataset covered the entire population of individuals who entered a same-sex registered partnership or an opposite-sex marriage between March 2002 and February 2017. The observation window ended just before Finland implemented gender-neutral marriage laws, replacing the registered partnership system.

The final sample consisted of 3,780 same-sex couples and 339,401 opposite-sex couples. Among the same-sex unions, 37.2 percent were male couples and 62.8 percent were female couples. The researchers restricted the data to couples where at least one spouse lived in Finland at the time of registration and was born in the country. They tracked these couples until the end of 2021 to identify legal divorces.

The analysis employed Cox proportional hazards models to estimate divorce risks. The models controlled for variables such as the year of marriage, the age of both spouses, and the area of residence. The researchers also incorporated annual data on taxable income and religious affiliation based on church tax records.
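
For readers unfamiliar with the technique, the sketch below shows how a Cox proportional hazards model of divorce risk might be fit in Python with the lifelines library. The data and column names here are synthetic placeholders, not the study’s actual register variables.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for register data; every column is illustrative.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "marriage_year": rng.integers(2002, 2017, size=n),        # year of union
    "older_spouse_age": rng.integers(20, 60, size=n),         # age covariate
    "joint_church_member": rng.integers(0, 2, size=n),        # shared affiliation
    "years_observed": rng.exponential(12.0, size=n).clip(0.1, 20.0),
    "divorced": (rng.random(n) < 0.3).astype(int),            # 1 = divorce observed
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_observed", event_col="divorced")
cph.print_summary()  # hazard ratios for each covariate
```

In a model of this kind, each covariate’s hazard ratio indicates how strongly it raises or lowers the instantaneous risk of divorce, while couples still married at the end of follow-up are handled as censored observations.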

The general findings revealed a clear hierarchy in divorce risk. Approximately 40 percent of female couples divorced within the first ten years of their legal union. This rate was significantly higher than the 24 percent observed for male couples. Opposite-sex couples had the lowest rate, with 21 percent divorcing within the same timeframe.

For female couples, the elevated risk persisted even after accounting for various socioeconomic factors. The researchers found that income and religious affiliation played only a modest role in explaining their higher divorce rates. The risk for female couples remained roughly double that of opposite-sex couples in the fully adjusted models. This suggests that unobserved factors, potentially including minority stress, continue to impact these relationships heavily.

The results for male couples told a different story. Their slightly higher risk of divorce was partly explained by higher rates of intermarriage and lower rates of religious affiliation. When the researchers adjusted for these factors, the remaining difference in divorce risk between male couples and opposite-sex couples was only marginally significant.

A major focus of the study was the impact of nationality intermarriage. The data showed that marriages involving a foreign-born husband and a native-born spouse were less stable. This pattern was consistent for both male couples and opposite-sex couples. It indicates that the specific experience of being an immigrant man in a host society may strain a marriage.

“It was surprising to see that intermarriage between a foreign-born husband and a native-born spouse destabilizes marriages, regardless of the latter spouse’s gender,” Einiö told PsyPost. “This suggests that there could be psychological distress stemming from being an immigrant man in a host society rather than distress resulting from gendered conflicts between a man and a wife due to different cultural understandings of gender roles.”

This destabilizing effect was not observed in female couples. Marriages between a foreign-born woman and a native-born woman did not show elevated divorce risks compared to couples where both women were native-born.

“Female same-sex couples had an elevated divorce risk, but this risk did not further increase if a native-born woman married a foreign-born wife,” Einiö said. “One of the reasons could be that when a native-born woman legalizes her relationship with another woman, it is often with someone of a relatively similar cultural background (e.g., a wife from another European country).”

Income dynamics provided further insight into how gender norms shape relationship stability. The study distinguished between the primary breadwinner and the secondary breadwinner. In opposite-sex couples, this usually aligned with the husband and wife, respectively. For same-sex couples, the researchers categorized earners based on age to allow for comparison.

High income for the primary breadwinner appeared to stabilize all marriages. This was true regardless of the gender composition of the couple. When the primary earner brought in more money, the risk of divorce decreased across the board.

However, the income of the secondary breadwinner had divergent effects. In opposite-sex marriages, a higher income for the secondary earner was associated with an increased risk of divorce. This aligns with theories regarding the “independence effect,” where financial independence may allow a wife to leave an unhappy marriage.

In contrast, a higher income for the secondary earner in same-sex marriages tended to stabilize the union. This was particularly evident for male couples. The data suggests that male couples benefit from greater income equality within the relationship. While income inequality often protected opposite-sex marriages, it appeared to be a risk factor for same-sex unions.

Religious affiliation also emerged as a significant factor. The study measured this by tracking membership in Finland’s national churches. Joint membership in a church was associated with lower divorce risks for all couple types. This may reflect shared values or the presence of social support from a religious community.

Dissimilarity in religious status was detrimental for some. When one partner was a church member and the other was not, divorce risk increased for same-sex couples. This effect was strongest for male couples. Such dissimilarity did not appear to impact the stability of opposite-sex couples.

The researchers discussed several theoretical implications of these findings. The persistence of high divorce rates among female couples supports the minority stress hypothesis. Women in same-sex relationships may face compounded stress from sexual minority status and gender-related societal expectations. They may also lack the institutional support often available to mixed-gender couples.

The findings regarding men suggest that deviations from cultural norms impact them differently. For immigrant men, the pressure of adapting to a host society appears to bleed into marital stability. For gay men, the lack of shared religious community or significant income disparities can weaken the relationship bond.

The study has some limitations inherent to the use of administrative data. The registers do not contain information on the psychological well-being of the participants. This prevents a direct measurement of relationship quality or specific stressors. The data relies on legal gender markers, which excludes non-binary identities. Additionally, religious affiliation was measured by church tax payment, which may not accurately reflect personal faith or spirituality.

The researchers note that the context of Finland is specific. The country is known for high gender equality but was relatively late among Nordic nations to adopt same-sex marriage laws. The transition from registered partnerships to marriage in 2017 may have altered the social landscape, though the study period largely covers the partnership era.

Future research is needed to see if these patterns hold in other countries. The authors specifically express interest in whether the destabilizing effect of intermarriage for men is consistent across different European nations. Understanding these nuances helps clarify how the intersection of gender, culture, and economic resources influences the longevity of modern relationships.

The study, “Divorce in same-sex and opposite-sex couples: The roles of intermarriage, religious affiliation, and income,” was authored by Elina Einiö and Maria Ponkilainen.

Psilocybin impacts immunity and behavior differently depending on diet and exercise context

A new study published in the journal Psychedelics reveals that the environment and physiological state of an animal profoundly influence the effects of psilocybin. Researchers found that while the drug altered immune markers in mice that exercised, it did not modify social behaviors in mice modeling anorexia nervosa. These findings suggest that the therapeutic potential of psychedelics may depend heavily on the biological context in which they are administered.

Anorexia nervosa is a severe psychiatric condition characterized by restricted eating and excessive exercise. Many patients also struggle with social interactions and understanding the emotions of others. These social challenges often persist even after weight recovery, and they contribute to the isolation associated with the disorder. Current treatments frequently fail to address these specific interpersonal symptoms.

Claire J. Foldi and her colleagues at Monash University in Australia sought to investigate potential biological causes for these issues. They focused on the connection between brain function and the immune system. Previous research suggests that inflammation may play a role in psychiatric disorders. Specifically, molecules like interleukin-6 often appear at abnormal levels in people with depression and anxiety.

Psilocybin, the active compound in magic mushrooms, is known to affect serotonin receptors and possesses potential anti-inflammatory properties. Foldi’s team wanted to see if psilocybin could improve social behavior and regulate immune responses in a living organism. They hypothesized that the drug might rescue social deficits by lowering inflammation.

To test this, the researchers used a method called the activity-based anorexia model. They housed female mice in cages with running wheels and limited their access to food. This combination typically causes mice to run excessively and lose weight rapidly, mimicking human anorexia. The researchers chose female mice because the condition predominantly affects women.

The team compared these mice to three other groups to isolate specific variables. One group had restricted food but no wheel, which tested the effect of hunger alone. Another group had a wheel but unlimited food, testing the effect of exercise alone. A control group lived in standard housing with no restrictions.

Once the mice in the anorexia model lost a specific amount of weight, the researchers administered a single dose of psilocybin or a saline placebo. Later that day, they placed the mice in a special testing apparatus. This box contained three connected chambers designed to measure social interest.

The researchers measured how much time the mice spent interacting with a new mouse versus an inanimate object. In a second phase, they tracked whether the mice preferred spending time with a familiar mouse or a stranger. Finally, the team analyzed blood samples to measure levels of interleukin-6.
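
As a rough illustration of how three-chamber data are typically scored, the snippet below computes simple preference indices from interaction times. The function name and index formula are assumptions for illustration; the paper may quantify preference differently.

```python
# Illustrative scoring of three-chamber interaction times (in seconds).

def preference_index(time_a: float, time_b: float) -> float:
    """Signed preference for target A over target B, ranging from -1 to 1."""
    total = time_a + time_b
    return (time_a - time_b) / total if total > 0 else 0.0

# Phase 1: novel mouse vs. inanimate object (sociability)
print(preference_index(time_a=120.0, time_b=60.0))   # 0.33 -> prefers the mouse

# Phase 2: stranger vs. familiar mouse (social novelty)
print(preference_index(time_a=90.0, time_b=45.0))    # 0.33 -> prefers the stranger
```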

The results showed distinct behavioral patterns based on the living conditions of the mice. Mice in the anorexia model did not withdraw socially as the researchers had anticipated. Instead, these mice showed a strong interest in investigating new mice. They preferred novel social interactions over familiar ones.

This intense curiosity was also present in the mice that only had access to running wheels. In contrast, mice that were only food-restricted spent more time investigating the object. This likely indicates a motivation to search for food rather than socialize.

Psilocybin did not alter these social behaviors in the anorexia group, the exercise group, or the food-restricted group. The drug only changed behavior in the healthy control mice. Control mice given psilocybin became less interested in novelty and spent more time with familiar companions. This was an unexpected outcome that contrasted with the other groups.

The physiological results were equally specific to the environment. The researchers found that psilocybin markedly elevated levels of interleukin-6 in the mice that had access to running wheels. This effect was not observed in the anorexia group or the other groups.

In the running wheel group, higher levels of this inflammatory marker correlated with a stronger preference for social novelty. The drug did not reduce inflammation in the anorexia model as originally hypothesized. This suggests that prior exercise primes the immune system to respond differently to the drug.
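
A brain-immune correlation of this kind is straightforward to compute; the sketch below shows the standard Pearson approach on synthetic per-mouse values, which are invented purely for illustration.

```python
import numpy as np
from scipy import stats

# Synthetic per-mouse values, for illustration only.
rng = np.random.default_rng(1)
il6 = rng.lognormal(mean=1.0, sigma=0.4, size=12)            # IL-6 level (arbitrary units)
novelty_pref = 0.1 * il6 + rng.normal(scale=0.2, size=12)    # novelty preference score

r, p = stats.pearsonr(il6, novelty_pref)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```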

The study highlights a limitation in how animal models mimic complex human disorders. While human patients often retreat socially, the mice in this model became hyperactive and explorative. This behavior may represent a foraging instinct triggered by hunger. It complicates the ability to translate these specific social findings directly to human psychology.

The increase in inflammation seen in the exercise group suggests a relationship between physical activity and how psychedelics affect the body. Psilocybin is often cited as an anti-inflammatory agent. However, this study indicates that in certain contexts, it may promote immune signaling.

The researchers note that they only measured inflammation at a single time point. Psilocybin may have transient effects that vary over hours or days. Future studies will need to track these markers over a longer period to capture the full picture.

It remains necessary to test different biological markers and brain regions to fully understand these mechanisms. The relationship between serotonin signaling and immune function is not uniform. The data indicate that a “one size fits all” approach to psychedelic therapy may be insufficient.

This research implies that clinical trials should account for the patient’s physical state, including their exercise habits and nutritional status. Factors such as metabolic stress could alter how the drug impacts both behavior and the immune system.

The study, “Psilocybin exerts differential effects on social behavior and inflammation in mice in contexts of activity-based anorexia,” was authored by Sheida Shadani, Erika Greaves, Zane B. Andrews, and Claire J. Foldi.

Violence linked to depression in adolescent girls but not boys

A longitudinal study of adolescents from the Chicago metropolitan area found that higher exposure to violence was associated with more severe depression symptoms in female, but not male, adolescents. In males, depression was associated with the expansion of the brain’s salience network and with increased connectivity of this network. The paper was published in Translational Psychiatry.

Violence exposure in this study was defined as experiencing, witnessing, or being repeatedly confronted with acts of interpersonal physical violence, such as being shoved, kicked, punched, or attacked with a weapon. It is a major risk factor for mental health problems, increasing the likelihood of all types of psychopathology.

Childhood adversities such as physical abuse and family violence account for a substantial proportion of psychiatric disorders that emerge during adolescence. This period is especially sensitive because key social and emotional brain systems are still developing. Exposure to violence during adolescence is associated with maladaptive emotion regulation strategies, such as rumination and emotional suppression, which contribute to rising rates of depression.

Although males are more likely to be exposed to or witness violence, females tend to show higher levels of depression during adolescence. Some evidence suggests that violence exposure places females at greater risk for internalizing problems (psychological difficulties directed inward), particularly depression and anxiety.

One explanation is that females may be more reactive to interpersonal stressors and show stronger physiological and neural responses to threat following violence exposure. Another proposed mechanism is perceived lack of control, as stressors experienced as uncontrollable are strongly linked to depressive outcomes.

Violence exposure may also alter brain systems involved in detecting and responding to threat, such as the salience network, making individuals more vigilant to potential danger. The salience network is a large-scale functional network composed of multiple interconnected brain regions. It detects and prioritizes behaviorally and emotionally important stimuli, helping the brain switch attention between internal thoughts and external demands.

Study author Ellyn R. Butler and her colleagues wanted to explore whether features of the salience network, such as its connectivity and expansion (the proportion of the cortex occupied by the network), might explain the association between exposure to violence and depression in adolescents. The study authors hypothesized that males would experience more instances of violence than females and that depression symptoms would increase in individuals exposed to violence. They expected this increase in depression symptoms to be greater among females and to be accompanied by expansion of the salience network.

Study participants were 220 adolescents between 14 and 18 years of age from the Chicago metropolitan area; the study authors intentionally prioritized adolescents from low-income neighborhoods for inclusion. Of the participants, 141 were female, 38% were Black, and 30% were Hispanic. On average, they had been exposed to 1.8 violent events in the past year.

Participants provided data twice: at the start of the study and again two years later. They completed an assessment of exposure to violence (a set of seven questions about participants or their friends or family members being physically hurt, attacked, or killed) and assessments of depression and anxiety symptoms (the Revised Child Anxiety and Depression Scale).

Participants also underwent functional magnetic resonance imaging (fMRI) of their brains. The study authors used these fMRI data to derive information about connectivity and size of participants’ salience networks at both time points to control for baseline levels.
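
The paper’s exact pipeline is not described here, but the following sketch illustrates one common way to quantify the two network features: connectivity as the mean pairwise correlation among a network’s regional time series, and expansion as the share of cortical vertices assigned to the network. All variable names and the toy parcellation are assumptions.

```python
import numpy as np

def network_connectivity(roi_ts: np.ndarray) -> float:
    """Mean pairwise correlation among regional BOLD time series
    (roi_ts has shape timepoints x regions)."""
    corr = np.corrcoef(roi_ts.T)                      # regions x regions matrix
    upper = corr[np.triu_indices_from(corr, k=1)]     # unique region pairs only
    return float(upper.mean())

def network_expansion(vertex_labels: np.ndarray, network_id: int) -> float:
    """Proportion of cortical vertices assigned to the given network."""
    return float(np.mean(vertex_labels == network_id))

# Toy example: 200 timepoints, 8 network regions, 10,000 cortical vertices
rng = np.random.default_rng(0)
print(network_connectivity(rng.normal(size=(200, 8))))
print(network_expansion(rng.integers(0, 7, size=10_000), network_id=3))
```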

Results showed that female participants reporting greater exposure to violence tended to report more severe depressive symptoms. This association was not present in male participants. Neither salience network expansion nor connectivity was associated with exposure to violence in the past year.

However, greater expansion of the salience network and its greater connectivity were associated with more severe depressive symptoms in male participants. Study authors note that both of these associations remained after controlling for depression at the start of the study, indicating that exposures that impact males’ depression through the salience network may occur during middle adolescence.
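
Conceptually, “controlling for depression at the start of the study” amounts to including the baseline score as a covariate when predicting follow-up symptoms. The sketch below illustrates this with synthetic data and assumed variable names, not the study’s actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic data with assumed variable names, for illustration only.
rng = np.random.default_rng(0)
n = 220
df = pd.DataFrame({
    "dep_t1": rng.normal(size=n),            # depression at baseline
    "sn_expansion": rng.normal(size=n),      # salience network expansion
    "sn_connectivity": rng.normal(size=n),   # salience network connectivity
})
df["dep_t2"] = 0.5 * df["dep_t1"] + 0.3 * df["sn_expansion"] + rng.normal(size=n)

# Follow-up symptoms regressed on network features, adjusting for baseline
fit = smf.ols("dep_t2 ~ sn_expansion + sn_connectivity + dep_t1", data=df).fit()
print(fit.params)
```

A network coefficient that remains positive in this adjusted model corresponds to the paper’s claim that the association holds beyond what baseline depression alone would predict.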

“We demonstrated that salience network expansion and connectivity are positively associated with depression among males even after controlling for depression two years prior, highlighting that it is likely that males are experiencing some type of adversity that increases connectivity within the salience network, expansion of the salience network, and depression during this time period in early- to mid-adolescence. Therefore, future efforts to determine which exposures lead to depression during adolescence in males should focus on this developmental time frame,” the study authors concluded.

The study contributes to the scientific understanding of the neural underpinnings of depression. However, both depressive symptoms and exposure to violence in this study were self-reported, leaving room for reporting bias to have affected the results.

The paper, “Sex differences in response to violence: role of salience network expansion and connectivity on depression,” was authored by Ellyn R. Butler, Noelle I. Samia, Amanda F. Mejia, Damon D. Pham, Adam Pines, and Robin Nusslock.
