Changes in breathing patterns may predict moments of joy before they happen

Recent research suggests that the way a person breathes does more than simply sustain life. Respiratory patterns may actually predict moments of joy and excitement before they occur. A study published in the Journal of Affective Disorders found that specific changes in breathing dynamics are linked to surges in high-energy positive emotions. This connection appears to be particularly strong for individuals with a history of depression.

The findings offer a fresh perspective on the relationship between physiological processes and mental health. While traditional advice often focuses on slow breathing to calm the nerves, this new data indicates that more active breathing patterns may precede positive states of high arousal. The study was conducted by a team of researchers led by Sean A. Minns and Jonathan P. Stange from the University of Southern California.

Mental health professionals have long recognized a connection between the lungs and the mind. The field of psychology itself derives its name from the Greek word psyche, which shares a root with the word for breath. This relationship is often studied in the context of Major Depressive Disorder. This condition is characterized by persistent sadness and a broad impairment in daily functioning.

One of the most debilitating aspects of depression is anhedonia. This symptom refers to a reduced ability to experience pleasure or interest in life. Even after a person has recovered from a depressive episode, they may still struggle to experience positive emotions. This lingering deficit can increase the risk of the depression returning.

Most previous research has focused on how negative emotions alter breathing. For example, stress might cause a person to sigh more often or breathe erratically. There has been far less investigation into how breathing relates to positive moods. This is a notable gap in scientific understanding, because positive affect is a strong predictor of long-term recovery.

Psychologists often categorize emotions using a model that includes two dimensions. The first dimension is valence, which ranges from pleasant to unpleasant. The second dimension is arousal, which ranges from low energy to high energy. Joy and excitement are examples of high-arousal positive affect. Calmness and contentment are examples of low-arousal positive affect.
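To make the model concrete, each emotion can be pictured as a point on a two-dimensional grid. The toy sketch below classifies emotions by quadrant; the coordinates are invented for illustration and are not taken from the study.

```python
# Hypothetical coordinates on the valence/arousal grid, each in [-1, 1].
emotions = {
    "joy":         (0.8,  0.7),   # pleasant, high energy
    "excitement":  (0.7,  0.9),
    "calm":        (0.6, -0.6),   # pleasant, low energy
    "contentment": (0.7, -0.4),
    "anger":       (-0.7, 0.8),   # unpleasant, high energy
    "boredom":     (-0.4, -0.7),
}

def quadrant(valence: float, arousal: float) -> str:
    v = "positive" if valence >= 0 else "negative"
    a = "high-arousal" if arousal >= 0 else "low-arousal"
    return f"{a} {v} affect"

for name, (v, a) in emotions.items():
    print(f"{name}: {quadrant(v, a)}")
```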

Individuals with depression often show a specific reduction in high-arousal positive emotions. They may feel calm, but they rarely feel enthusiastic. The researchers wanted to see if breathing patterns in daily life could predict these elusive states of high energy. They also wanted to know if this relationship worked differently for people who had previously suffered from depression compared to those who had not.

To investigate these questions, the team recruited seventy-three adults. The participants were divided into two groups. One group consisted of thirty-six individuals with a history of Major Depressive Disorder who were currently in remission. The second group consisted of thirty-seven healthy volunteers with no history of psychiatric issues.

The study employed a method known as Ecological Momentary Assessment. This approach allows scientists to collect data in the real world rather than in an artificial laboratory setting. For seven days, participants went about their normal lives while wearing a specialized piece of technology. This device was a “smart shirt” called the Hexoskin.

The Hexoskin is a garment worn under regular clothes. It contains sensors woven into the fabric that measure the expansion and contraction of the chest and abdomen. This allowed the researchers to continuously monitor respiratory metrics. The device measured breathing rate and the volume of air moved with each breath.

While wearing the shirts, participants received surveys on their smartphones at random times throughout the day. These surveys asked them to rate their current mood. The participants rated the intensity of various emotions, such as feeling cheerful, happy, or confident. They also reported on the strategies they were using to manage their emotions.

The researchers focused their analysis on the thirty-minute window immediately preceding each survey. By looking at the physiological data leading up to the mood report, they hoped to see if breathing changes happened before the emotional shift. This time-lagged design helps clarify the direction of the relationship.

The results revealed a clear pattern. When participants exhibited increases in minute ventilation and breathing rate, they were more likely to report high-arousal positive emotions thirty minutes later. Minute ventilation refers to the total amount of air a person breathes in one minute. Essentially, breathing faster and moving more air was a precursor to feeling joy and excitement.
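To illustrate how such a metric is derived, the sketch below builds minute ventilation from breathing rate and tidal volume and averages it over the 30 minutes preceding each survey prompt, mirroring the time-lagged design. The data and column names are invented; this is not the authors' actual pipeline.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Synthetic respiration stream: one summary reading per minute.
# Column names are invented; no particular sensor export format is assumed.
times = pd.date_range("2024-01-01 08:00", periods=600, freq="min")
resp = pd.DataFrame({
    "time": times,
    "breath_rate": rng.normal(14, 2, len(times)),       # breaths per minute
    "tidal_volume": rng.normal(0.5, 0.05, len(times)),  # liters per breath
})

# Minute ventilation = breathing rate x tidal volume (liters of air per minute).
resp["minute_ventilation"] = resp["breath_rate"] * resp["tidal_volume"]

# For each survey prompt, average respiration over the preceding 30 minutes.
surveys = pd.to_datetime(["2024-01-01 10:17", "2024-01-01 14:42"])
window = pd.Timedelta(minutes=30)
lagged_ventilation = [
    resp.loc[resp["time"].between(t - window, t), "minute_ventilation"].mean()
    for t in surveys
]
print(lagged_ventilation)  # predictors paired with each mood report
```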

The researchers then compared the two groups of participants. They found that this physiological link was present in both groups. However, the strength of the connection varied based on the participant’s medical history. The relationship between breathing and positive mood was notably stronger in the group with a history of depression.

For healthy controls, an increase in ventilation predicted a subtle increase in positive mood. For those with remitted depression, the same increase in ventilation predicted a much larger boost in positive mood. This suggests that for these individuals, physiological activation may be a prerequisite for experiencing joy.

The study also examined the role of emotion regulation strategies. The researchers looked specifically at a strategy called acceptance. Acceptance involves experiencing thoughts and feelings without judging them or trying to change them. It emphasizes openness to the present moment.

Participants who reported using acceptance more frequently showed a stronger link between their breathing and their mood. For those who rarely used acceptance, the connection between minute ventilation and positive emotion was not statistically significant. This suggests that being open to one’s internal experience may allow physiological changes to more effectively influence emotional states.

The team also found a connection between breathing variability and regulation style. At the level of individual differences, people who had more variable depth of breath tended to use acceptance more often. This variability might reflect a flexible physiological system that adapts readily to different situations.

These findings challenge the common assumption that slower breathing is always better for mental health. While slow breathing can help reduce anxiety, it may not be the best tool for generating excitement or enthusiasm. High-energy positive states appear to be supported by a more active respiratory pattern.

The authors propose that individuals with a history of depression may rely more heavily on this physiological “ramp-up” to feel good. In healthy individuals, positive emotions might arise more easily without requiring such a strong physiological push. For those in remission, the body may need to work harder to generate the same level of joy.

There are several caveats to consider regarding this research. The study relied on wearable sensors that come in standard sizes. This led to issues with sensor fit for some participants with atypical body proportions. As a result, a portion of the respiratory data had to be excluded to ensure accuracy.

Additionally, the study was observational. It showed that breathing changes predict mood changes, but it cannot definitively prove that breathing causes the mood to change. It is possible that an unmeasured third variable influences both factors. The sample size was also relatively small, which limits how broadly the results can be generalized.

Despite these limitations, the implications for treatment are promising. The study suggests that respiratory patterns could serve as a target for new interventions. Therapies could potentially harness breathing techniques to help individuals with depression access high-energy positive states.

The researchers envision the possibility of “just-in-time” interventions. Wearable devices could monitor a person’s breathing in real time. If the device detects a pattern associated with low mood or disengagement, it could prompt the user to engage in specific breathing exercises. These exercises would be designed to increase ventilation and potentially spark a positive emotional shift.

This approach could be particularly useful for preventing relapse. Since the loss of joy is a major risk factor for the return of depression, finding ways to boost positive affect is a treatment priority. By understanding the physiological precursors of joy, clinicians may be able to offer more precise tools to their patients.

Future research will need to confirm these findings in larger groups. Scientists also need to determine if these patterns hold true for people currently experiencing a major depressive episode. The current study focused only on those in remission. It remains to be seen if the same dynamics apply during the acute phase of the illness.

The study provides a first step toward understanding the dynamic interplay between breath and joy in everyday life. It highlights the importance of looking beyond the laboratory to see how physiology functions in the real world. As technology improves, the ability to monitor and influence these processes will likely expand.

The study, “When breath lifts your mood: Dynamic everyday links between breathing, affect, and emotion regulation in remitted depression,” was authored by Sean A. Minns, Bruna Martins-Klein, Sarah L. Zapetis, Ellie P. Xu, Jiani Li, Gabriel A. León, Margarid R. Turnamian, Desiree Webb, Archita Tharanipathy, Emily Givens, and Jonathan P. Stange.

A common enzyme linked to diabetes may offer a new path for treating Alzheimer’s

A protein long implicated in diabetes and obesity may hold the key to treating Alzheimer’s disease by reinvigorating the brain’s immune system. New research suggests that blocking this protein, known as PTP1B, allows immune cells to clear toxic waste more effectively and restores cognitive function in mice. The findings were published in the Proceedings of the National Academy of Sciences.

Alzheimer’s disease is characterized by the accumulation of sticky protein clumps called amyloid-beta. These plaques disrupt communication between brain cells and are widely believed to drive memory loss and neurodegeneration. The brain relies on specialized immune cells called microglia to maintain a healthy environment. In a healthy brain, microglia locate and engulf toxic clumps like amyloid-beta through a process called phagocytosis.

However, in patients with Alzheimer’s, these immune cells often become lethargic. They fail to keep up with the accumulating waste, allowing plaques to spread. Scientists have struggled to find ways to safely reactivate these cells without causing damaging inflammation.

There is a growing body of evidence linking Alzheimer’s to metabolic disorders. Conditions like type 2 diabetes are well-established risk factors for dementia. This connection led researchers to investigate a specific enzyme called protein tyrosine phosphatase 1B, or PTP1B.

This enzyme acts as a brake on signaling pathways that control how cells use energy and respond to insulin. Nicholas K. Tonks, a professor at Cold Spring Harbor Laboratory who discovered PTP1B in 1988, led the investigation along with graduate student Yuxin Cen. They hypothesized that PTP1B might be preventing microglia from doing their job.

To test this theory, the team used a mouse model genetically engineered to develop Alzheimer’s-like symptoms. These mice, known as APP/PS1 mice, typically develop amyloid plaques and memory deficits as they age. The researchers created a group of these mice that lacked the gene responsible for producing PTP1B. When these mice reached an age where memory loss typically begins, the researchers assessed their cognitive abilities.

The mice lacking the enzyme performed better on memory tests than the standard Alzheimer’s mice. One test involved a water maze where mice had to remember the location of a hidden platform. The mice without PTP1B located the hidden platform faster, indicating superior spatial learning. Another test measured how much time mice spent exploring a new object versus a familiar one. The genetically modified mice showed a clear preference for the new object, a sign of intact recognition memory.

The team also tested a drug designed to inhibit PTP1B to see if pharmacological intervention could mimic the genetic deletion. They administered a compound called DPM1003 to older mice that had already developed plaques. After five weeks of treatment, these mice showed similar improvements in memory and learning. This suggested that blocking the enzyme could reverse existing deficits and was not just a preventative measure.

Next, the investigators examined the brains of the animals to understand the biological changes behind these behavioral improvements. They used staining techniques to visualize amyloid plaques. Both the mice lacking the PTP1B gene and those treated with the inhibitor had considerably fewer plaques in the hippocampus. This region of the brain is essential for forming new memories.

To understand how the plaques were being cleared, the researchers analyzed the gene activity in individual brain cells. They performed single-cell RNA sequencing to look at the genetic profiles of thousands of cells. They found that PTP1B is highly expressed in microglia. When the enzyme was absent, the microglia shifted into a unique state.

These cells began expressing genes associated with the consumption of cellular debris. This state is often referred to as “disease-associated microglia,” or DAM. While the name sounds negative, this profile indicates cells that are primed to respond to injury. The lack of PTP1B appeared to push the microglia toward this beneficial, cleaning-focused phenotype.

The researchers then isolated microglia in a dish and exposed them to amyloid-beta to observe their behavior directly. Cells lacking PTP1B were much more efficient at swallowing the toxic proteins. “Over the course of the disease, these cells become exhausted and less effective,” says Cen. “Our results suggest that PTP1B inhibition can improve microglial function, clearing up Aβ plaques.”

The study revealed that this boost in activity was powered by a change in cellular metabolism. Phagocytosis is an energy-intensive process. The immune cells without PTP1B were able to ramp up their energy production to meet this demand. They increased both their glucose consumption and their oxygen use.

This metabolic surge was driven by the PI3K-AKT-mTOR signaling pathway. This is a well-known cellular circuit that regulates cell growth, metabolism, and survival. In the absence of PTP1B, this pathway remained active, providing the fuel necessary for the microglia to function.

Finally, the team identified the specific molecular switch that PTP1B controls to regulate this process. They found that the enzyme directly interacts with a protein called spleen tyrosine kinase, or SYK. SYK is a central regulator that tells microglia to activate and start eating. PTP1B normally removes phosphate groups from SYK, which keeps the kinase in an inactive state.

When PTP1B is removed or inhibited, SYK becomes overactive. This triggers a cascade of signals that instructs the cell to produce more energy and engulf amyloid. The researchers confirmed this by adding a drug that blocks SYK to the cells. When SYK was blocked, the benefits of removing PTP1B disappeared, and the microglia stopped clearing the plaque. This proved that PTP1B works by suppressing SYK.

The researchers utilized a “substrate-trapping” technique to confirm this direct interaction. They created a mutant version of PTP1B that can grab onto its target protein but cannot let go. This allowed them to isolate the PTP1B enzyme and see exactly what it was holding. They found it was bound tightly to SYK, confirming the direct relationship between the two proteins.

While these results are promising, the study was conducted in mice. Animal models mimic certain aspects of Alzheimer’s pathology but do not perfectly replicate the human disease. Future research will need to determine if similar metabolic and immune pathways are active in human patients. Additionally, PTP1B regulates many systems in the body, so widespread inhibition must be tested for safety.

The researchers are now interested in developing inhibitors that can specifically target the brain to minimize potential side effects. The Tonks lab is working to refine these compounds for potential clinical use. Tonks envisions a strategy where these inhibitors are used alongside existing treatments. “The goal is to slow Alzheimer’s progression and improve quality of life of the patients,” says Tonks. “Using PTP1B inhibitors that target multiple aspects of the pathology, including Aβ clearance, might provide an additional impact,” says Ribeiro Alves.

The study, “PTP1B inhibition promotes microglial phagocytosis in Alzheimer’s disease models by enhancing SYK signaling,” was authored by Yuxin Cen, Steven R. Alves, Dongyan Song, Jonathan Preall, Linda Van Aelst, and Nicholas K. Tonks.

Evolutionary psychology’s “macho” face ratio theory has a major flaw

For years, evolutionary psychologists and biologists have investigated the idea that the shape of a man’s face can predict his behavior. A specific measurement known as the facial width-to-height ratio has garnered attention as a potential biological billboard for aggression and dominance. A new comprehensive analysis, however, challenges the validity of this metric.

The research suggests that this specific ratio is not a reliable marker of sexual difference. Instead, the study points toward a simpler measurement that may hold the key to understanding facial evolution. These findings were published in the journal Evolution and Human Behavior.

The human face is a complex landscape that conveys biological information to others. We instinctively look at faces to judge health, age, and emotion. Beyond these immediate signals, researchers have hypothesized that facial structure reveals deeper evolutionary traits. The primary metric used to test this is the facial width-to-height ratio, often abbreviated as fWHR. To get this number, a researcher measures the distance between the cheekbones and divides it by the distance between the brow and the upper lip.
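As a concrete illustration, the ratio can be computed from facial landmark coordinates. The sketch below uses hypothetical pixel coordinates, with point names based on the article's description rather than any specific landmarking software.

```python
import numpy as np

# Hypothetical 2D landmarks in pixel coordinates (x, y).
landmarks = {
    "left_zygion":  np.array([112.0, 240.0]),  # outermost left cheekbone point
    "right_zygion": np.array([268.0, 238.0]),  # outermost right cheekbone point
    "brow":         np.array([190.0, 190.0]),  # midpoint of the brow
    "upper_lip":    np.array([190.0, 272.0]),  # top of the upper lip
}

# fWHR = bizygomatic width / facial height (brow to upper lip).
bizygomatic_width = np.linalg.norm(landmarks["right_zygion"] - landmarks["left_zygion"])
face_height = np.linalg.norm(landmarks["upper_lip"] - landmarks["brow"])

fwhr = bizygomatic_width / face_height
print(f"fWHR = {fwhr:.2f}")  # values roughly between 1.7 and 2.1 are commonly reported
```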

The prevailing theory has been that men with wider, shorter faces possess higher levels of testosterone and are more formidable. Previous studies have linked a high ratio in men to aggressive behavior in sports and financial success in business. The underlying assumption is that this facial structure evolved because it signaled a competitive advantage to potential mates or rivals. This concept relies on the existence of sexual dimorphism, which is the condition where the two sexes of the same species exhibit different characteristics.

Despite the popularity of this theory, the scientific evidence has been inconsistent. Some studies find a strong link between the ratio and masculine traits, while others find no connection at all. A major issue in past research is the inconsistent definition of the ratio itself. Different scientists measure the height of the face using different landmarks, such as the eyelids, the brow, or the hairline. Furthermore, many studies fail to account for the overall size of the person.

To address these inconsistencies, a team of researchers led by Alex L. Jones from the School of Psychology at Swansea University conducted a rigorous re-examination of the evidence. The team included Tobias L. Kordsmeyer, Robin S.S. Kramer, Julia Stern, and Lars Penke. They aimed to apply a more sophisticated statistical approach to determine if the facial width-to-height ratio is truly a sexually dimorphic trait. They also sought to determine if simple facial width might be a more accurate signal of biological differences than the ratio.

The researchers utilized a statistical method known as Bayesian inference. This approach differs from traditional statistics by incorporating prior knowledge into the analysis. It allows researchers to estimate the probability of a hypothesis being true given the available data. This contrasts with standard methods that often focus solely on whether a specific result is statistically significant. The team argues that Bayesian models are better suited for understanding subtle biological patterns because they can simulate data and quantify uncertainty.

In their first study, the group analyzed facial photographs of 1,949 individuals drawn from nine different datasets. The sample included 818 men and 1,131 women from various Western countries. The researchers used computer software to automatically place landmarks on the facial images. This ensured that the measurements were consistent across all photographs. They calculated the width-to-height ratio using five different common definitions of facial height to see if the measurement method mattered.

Crucially, the team controlled for body size in their statistical model. They adjusted the data for both height and weight. This is a vital step because men are generally larger than women. Without this control, a feature might appear to be a specific facial signal when it is actually just a byproduct of having a larger body. The researchers also defined a “region of practical equivalence.” This is a statistical tool used to determine if a difference is large enough to matter in the real world.
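A region of practical equivalence is straightforward to illustrate with code: given posterior samples for the standardized sex difference, one measures how much of the posterior falls inside the negligible range. The sketch below uses a synthetic posterior and a conventional threshold, not the study's actual model output.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in posterior for the standardized male-female difference in fWHR,
# as if drawn from an MCMC fit. Synthetic values centered near zero.
posterior = rng.normal(loc=-0.04, scale=0.05, size=20_000)

# Region of practical equivalence: differences smaller than 0.1 standard
# deviations are treated as negligible (the threshold is a common convention).
rope = (-0.1, 0.1)
in_rope = np.mean((posterior > rope[0]) & (posterior < rope[1]))

print(f"P(effect inside ROPE) = {in_rope:.2f}")
# A high value means the difference is effectively zero in practice,
# the pattern the authors report for the width-to-height ratio.
```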

The results of this first analysis contradicted the popular evolutionary theory. When controlling for height and weight, the researchers found that men did not have a larger width-to-height ratio than women. In fact, the model showed a small tendency for women to have a larger ratio. However, this difference was so minute that it fell within the region of practical equivalence. This means the difference was effectively zero for any practical purpose.

The study also revealed that the ratio is heavily influenced by general body geometry. The researchers found that as a person’s height increases, their facial width-to-height ratio tends to decrease. Conversely, as body weight increases, the ratio tends to increase. This suggests that previous findings linking the ratio to aggression might have actually been detecting differences in body mass index rather than specific facial architecture. The researchers argue that the ratio is not a standalone signal of masculinity.

Following these results, the team conducted a second study focusing solely on the width of the face. This measurement is known technically as bizygomatic width. It is the distance between the two zygions, or the outermost points of the cheekbones. The researchers hypothesized that raw width might be the sexually selected trait that earlier scientists were trying to capture with the ratio.

For this second analysis, they examined the same large dataset of photographs. They also analyzed a smaller subset of 305 individuals for whom they had detailed measurements of upper body size. This included shoulder width, chest girth, and arm girth. This allowed them to test if facial width is connected to muscularity and physical strength, which are key components of evolutionary dominance.

The findings for facial width were starkly different from those for the ratio. The Bayesian analysis showed a very high probability that men have wider faces than women. This held true even when the researchers adjusted for height and weight. The difference was substantial, amounting to roughly half a standard deviation.

When the researchers looked at the smaller group and controlled for upper body size, the distinction became even clearer. The model indicated that men’s faces are nearly two standard deviations wider than women’s. The analysis suggested that an individual man has a 99.9 percent probability of having a wider face than a woman of similar body composition. This indicates that facial width is a robust, sexually dimorphic trait.

The authors propose that the evolutionary signal is driven by the lateral growth of the cheekbones. During puberty, male faces tend to grow wider, a process likely driven by testosterone. This growth trajectory aligns with the development of other skeletal features associated with physical formidability. The study implies that the horizontal width of the face is a reliable indicator of physical size and strength.

There are caveats to this research. The study relied on static two-dimensional photographs. This method cannot capture the dynamic nature of facial expressions or the three-dimensional structure of the skull as effectively as medical imaging. Additionally, the samples were primarily from Western populations. It is possible that facial metrics vary across different ethnic groups and environments. Future research would need to verify these findings in more diverse global populations.

The researchers also noted that facial perception is complex. While physical measurements provide hard data, human social interaction relies on how these features are perceived. It remains to be seen if the human brain specifically attends to raw width when making judgments about dominance or threat. The current study focuses on the physical reality of the face rather than the psychological processing of it.

This research represents a methodological correction for the field of evolutionary psychology. By using advanced Bayesian statistics and proper body size controls, the authors have dismantled a widely held belief about the facial width-to-height ratio. They argue that the ratio is likely a statistical artifact rather than a meaningful biological signal.

The shift in focus toward bizygomatic width offers a clearer path for future investigation. If facial width is the true signal of formidability, previous studies on aggression and leadership may need to be re-evaluated. The authors suggest that researchers should move away from the ratio and focus on simple width in future work. This simplification may lead to more consistent and replicable results in the study of human evolution.

The study, “Updating evidence on facial metrics: A Bayesian perspective on sexual dimorphism in facial width-to-height ratio and bizygomatic width,” was authored by Alex L. Jones, Tobias L. Kordsmeyer, Robin S.S. Kramer, Julia Stern, and Lars Penke.

Self-kindness leads to a psychologically rich life for teenagers, new research suggests

New research suggests that teenagers who practice kindness toward themselves are more likely to experience a life filled with variety and perspective-changing events. The findings indicate that specific positive mental habits can predict whether an adolescent develops a sense of psychological richness over time. These results were published in the journal Applied Psychology: Health and Well-Being.

To understand this study, one must first understand that happiness is not a single concept. Traditional psychology often divides a good life into two categories. The first is hedonic well-being, which focuses on feeling pleasure and being satisfied. The second is eudaimonic well-being, which centers on having a sense of purpose and meaning.

However, researchers have recently identified a third type of good life known as psychological richness. A psychologically rich life is characterized by complex mental experiences and a variety of novel events. It is not always comfortable or happy in the traditional sense. Instead, it is defined by experiences that shift a person’s perspective and deepen their understanding of the world.

Adolescence is a specific time when young people are exploring their identities and facing new academic and social challenges. This developmental stage is ripe for cultivating psychological richness because teenagers are constantly encountering new information. The authors of the current study wanted to know what internal tools help adolescents turn these challenges into a rich life rather than a stressful one.

The investigation was led by Yuening Liu and colleagues from Shaanxi Normal University in China. They focused their attention on the concept of self-compassion. This is often described as treating oneself with the same warmth and understanding that one would offer to a close friend.

Self-compassion is not a single trait but rather a system of six distinct parts. Three of these parts are positive, or compassionate. They include self-kindness, mindfulness, and a sense of common humanity.

Self-kindness involves being supportive of oneself during failures. Mindfulness is the ability to observe one’s own pain without ignoring it or exaggerating it. Common humanity is the recognition that suffering is a shared part of the human experience.

The other three parts are negative, or non-compassionate. These include self-judgment, isolation, and over-identification. Self-judgment refers to being harshly critical of one’s own flaws. Isolation is the feeling that one is the only person suffering. Over-identification happens when a person gets swept up in their negative emotions.

Previous research has linked self-compassion to general happiness, but the link to psychological richness was unclear. The researchers hypothesized that the positive components of self-compassion would act as an engine for psychological richness. They also predicted that the negative components would stall this growth.

To test this, the team recruited 528 high school students from western China. The participants ranged in age from 14 to 18 years old. The study was longitudinal, meaning the researchers collected data at more than one point in time.

The students completed detailed surveys at the beginning of the study. They answered questions about how they treated themselves during difficult times. They also rated statements regarding how psychologically rich they felt their lives were.

Four months later, the students completed the same surveys again. This time gap allowed the researchers to see how feelings and behaviors shifted over the semester. It moved the analysis beyond a simple snapshot of a single moment.

The team used a statistical technique called cross-lagged panel network analysis. This method maps psychological traits as a network of mutual influences. It shows which traits are the strongest predictors of future changes in other traits.
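In rough outline, the technique regresses each trait at the second time point on all traits at the first. The sketch below demonstrates the idea with synthetic data and ordinary least squares; the published method uses regularized regressions and additional controls, so this is only a simplified illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
traits = ["self_kindness", "mindfulness", "self_judgment", "richness"]
n = 528

# Synthetic wave-1 scores and wave-2 scores four months later.
t1 = rng.normal(size=(n, len(traits)))
t2 = 0.3 * t1 + rng.normal(scale=0.9, size=(n, len(traits)))
t2[:, 3] += 0.25 * t1[:, 0]  # build in: self-kindness (T1) -> richness (T2)

t1z = StandardScaler().fit_transform(t1)
t2z = StandardScaler().fit_transform(t2)

# Cross-lagged edges: regress each wave-2 trait on all wave-1 traits.
# edges[i, j] = how strongly trait j at wave 1 predicts trait i at wave 2.
edges = np.array([
    LinearRegression().fit(t1z, t2z[:, i]).coef_ for i in range(len(traits))
])

for i, target in enumerate(traits):
    for j, source in enumerate(traits):
        if i != j and abs(edges[i, j]) > 0.1:
            print(f"{source} (T1) -> {target} (T2): {edges[i, j]:+.2f}")
```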

The results revealed a clear distinction between the positive and negative aspects of self-compassion. The analysis showed that self-kindness was a strong predictor of psychological richness four months later. Students who were kind to themselves reported lives that were more interesting and perspective-changing at the second time point.

Mindfulness also emerged as a significant positive predictor. Adolescents who could observe their difficult emotions with balance were more likely to experience growth in psychological richness. These two traits acted as central hubs in the network.

The study suggests that these positive traits help teenagers process their experiences more effectively. When a student faces a setback, self-kindness may prevent them from shutting down. This openness allows them to learn from the event, adding to the complexity and richness of their worldview.

On the other hand, the researchers found that self-judgment negatively predicted psychological richness. Students who criticized themselves harshly tended to view their lives as less rich over time. This suggests that strict self-criticism may cause teenagers to avoid new challenges.

Isolation also showed a negative connection to future psychological richness. This makes theoretical sense because psychological richness often comes from interacting with diverse viewpoints. If a student feels isolated, they are cut off from the social exchanges that expand their perspective.

The network analysis also revealed how the different parts of self-compassion interact with each other. The researchers found that isolation at the first time point predicted higher self-judgment later on. This indicates a negative cycle where feeling alone leads to being harder on oneself.

Conversely, there was a positive feedback loop between the compassionate components. Self-kindness predicted higher levels of mindfulness in the future. In turn, being mindful predicted higher levels of self-kindness.

These findings support a theory known as the “well-being engine model.” This model suggests that certain personality traits act as inputs that drive positive mental outcomes. In this case, self-kindness and mindfulness serve as the fuel that powers a psychologically rich life.

The results also align with the “bottom-up theory” of well-being. This theory posits that overall well-being comes from the balance of positive and negative daily experiences. Self-compassion appears to help adolescents balance these experiences so that negative events do not overwhelm them.

By regulating their emotions through self-kindness, teenagers can remain open to the world. They can accept uncertainty and change, which are key ingredients for a rich life. Without these tools, they may become rigid or fearful.

The study highlights potential targets for helping adolescents improve their mental health. Interventions that specifically teach self-kindness could be very effective. Teaching students to be mindful of their distress could also yield long-term benefits.

There are some limitations to this research that should be noted. The study relied entirely on self-reports from the students. People do not always view their own behaviors accurately.

Additionally, the study was conducted exclusively with Chinese adolescents. Cultural differences can influence how people experience concepts like self-compassion and well-being. The results might not be exactly the same in other cultural contexts.

The time frame of four months is also relatively short. Adolescence spans many years, and developmental changes can be slow. Future research would benefit from tracking students over a longer period.

The researchers also noted that while they found predictive relationships, this does not strictly prove causation. Other unmeasured factors could influence both self-compassion and psychological richness. Experimental studies would be needed to confirm a direct cause-and-effect link.

Despite these caveats, the study offers a detailed look at the mechanics of adolescent well-being. It moves beyond the idea that self-compassion is just one general thing. Instead, it shows that specific habits, like being kind to oneself, have specific outcomes.

The distinction between simply being happy and having a rich life is important for educators and parents. A teenager might not always be cheerful, but they can still be developing a deep and complex understanding of life. This research suggests that self-compassion is a vital resource for that developmental journey.

The study, “Longitudinal relationship between self-compassion and psychological richness in adolescents: Evidence from a network analysis,” was authored by Yuening Liu, Kaixin Zhong, Ao Ren, Yifan Liu, and Feng Kong.

Biological sex influences how blood markers reflect Alzheimer’s severity

A new study suggests that a promising blood test for Alzheimer’s disease may need to be interpreted differently depending on whether the patient is male or female. The researchers found that for the same concentration of a specific protein in the blood, men exhibited more severe brain damage and cognitive decline than women. These findings were published in the journal Molecular Psychiatry.

Diagnosing Alzheimer’s disease has historically been a difficult and expensive process. Physicians currently rely on a combination of subjective memory tests and invasive or costly biological measures. The most accurate biological tools available today involve positron emission tomography, known as PET scans, or lumbar punctures to analyze cerebrospinal fluid.

PET scans use radioactive tracers to visualize plaques and tangles in the brain, while lumbar punctures require inserting a needle into the lower back to collect fluid for analysis. Because these methods are not easily scalable for routine screening, the medical community has sought a blood-based biomarker that could indicate the presence and severity of neurodegeneration without the need for specialized equipment or invasive procedures.

One of the most promising candidates for such a test is neurofilament light chain, often abbreviated as NfL. This protein acts as a structural component within the axons of neurons, functioning much like a skeleton to provide support and shape to the nerve cells. When neurons are damaged or die due to neurodegenerative diseases, this internal structure breaks down. The neurofilament light chain proteins are then released into the cerebrospinal fluid and eventually make their way into the bloodstream.

Elevated levels of NfL in the blood serve as a signal that injury to the brain’s cellular network is occurring. While the potential of NfL as a diagnostic tool is widely recognized, its clinical application is hindered by a lack of standardized reference ranges. Doctors do not yet have a universal set of numbers to define what constitutes a normal or abnormal level across different demographic groups.

Xiaoqin Cheng, alongside Fang Xie and Peng Yuan from Fudan University in Shanghai, sought to determine if biological sex influences how these protein levels correlate with the actual severity of the disease. Previous research regarding sex differences in NfL levels has produced inconsistent results. Some studies suggested no difference between men and women, while others indicated variations in specific genetic cases. Cheng and colleagues aimed to clarify this relationship by examining whether a specific amount of NfL in the blood reflects the same amount of brain damage in men as it does in women.

The research team began their investigation by analyzing data from the Alzheimer’s Disease Neuroimaging Initiative, a large, long-running study based in North America. They selected 860 participants who had available data on plasma NfL levels, brain imaging, and cognitive assessments.

This group included people with normal cognition, mild cognitive impairment, and diagnosed dementia. The researchers used statistical models to look for interactions between sex and NfL levels regarding their effect on clinical symptoms. They controlled for variables such as age, education, and genetic risk factors to isolate the effect of sex.

The analysis revealed a distinct divergence between men and women. The researchers observed that as NfL levels rose, men experienced a much steeper decline in cognitive function compared to women with similar protein increases.

When the researchers looked at specific cognitive tests, such as the Clinical Dementia Rating or the Mini-Mental State Examination, they found that a unit increase in NfL predicted a larger drop in performance for male participants. This pattern suggested that the male brain might be more vulnerable to the neurodegenerative processes associated with these elevated protein markers.
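Analyses of this kind typically test a sex-by-biomarker interaction term in a regression model. The sketch below illustrates the logic with synthetic data and an ordinary least squares model; the variable names and scales are invented, and this is not the authors' actual specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 860

# Synthetic cohort: the interaction term tests whether the NfL-cognition
# slope differs by sex (male = 1, female = 0).
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "nfl": rng.normal(20, 8, n),   # plasma NfL on an illustrative scale
    "age": rng.normal(73, 7, n),
})
df["mmse"] = (29 - 0.05 * df["nfl"] - 0.10 * df["male"] * df["nfl"]
              - 0.02 * (df["age"] - 73) + rng.normal(0, 2, n))

model = smf.ols("mmse ~ nfl * male + age", data=df).fit()
print(model.params[["nfl", "nfl:male"]])
# A negative nfl:male coefficient means each unit of NfL costs men more
# cognitive points than women, mirroring the reported pattern.
```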

To understand the physical changes driving these cognitive differences, the team examined brain scans of the participants. They looked at magnetic resonance imaging data to measure the volume of specific brain regions critical for memory and thinking. The results showed that for every unit increase in plasma NfL, men displayed a greater reduction in the volume of the hippocampus, a brain structure essential for forming new memories.

The team also analyzed metabolic activity in the brain using glucose PET scans. These scans measure how much energy brain cells are consuming, which is a proxy for how healthy and active they are. Men showed more severe hypometabolism, or reduced brain energy use, than women at comparable levels of plasma NfL.

To ensure these results were not specific to one demographic or geographic population, the authors attempted to replicate their findings in a completely different group of people. They utilized the Chinese Preclinical Alzheimer’s Disease Study, a cohort consisting of 619 individuals.

Despite differences in ethnicity and genetic background between the American and Chinese cohorts, the fundamental finding remained the same. In this second group, men again showed more prominent functional and structural deterioration associated with rising NfL levels compared to women. A third, smaller public dataset was also analyzed, which confirmed the pattern once more.

The study also investigated whether this sex difference was unique to neurofilament light chain or if it applied to other Alzheimer’s biomarkers. They repeated their analysis using two other blood markers: phosphorylated tau 181, which is linked to the tangles found in Alzheimer’s brains, and glial fibrillary acidic protein, a marker of brain inflammation. Neither of these markers showed the same sex-dependent effect. This specificity suggests there is a unique biological mechanism linking NfL levels to disease severity that differs between males and females.

The authors also explored the predictive power of the biomarker over time. Using longitudinal data, they tracked how quickly patients progressed from mild impairment to full dementia. The statistical models indicated that an increase in plasma NfL levels was predictive of a faster cognitive decline and a higher likelihood of disease progression in men compared to women. This implies that a high NfL test result in a male patient might warrant a more urgent prognosis than the same result in a female patient.
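Progression analyses of this kind are commonly run as survival models. The sketch below illustrates the idea with a Cox proportional hazards model from the lifelines library, fit to synthetic data with a hypothetical NfL-by-sex interaction; the authors' exact model specification is not reproduced here.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 400

# Synthetic progression data: years until conversion to dementia,
# censored for those who did not progress during follow-up.
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "nfl": rng.normal(20, 8, n),
})
risk = np.exp(0.02 * df["nfl"] + 0.03 * df["male"] * df["nfl"])
df["years"] = rng.exponential(8.0 / risk)
df["progressed"] = (df["years"] < 6).astype(int)
df.loc[df["progressed"] == 0, "years"] = 6.0  # censor at end of follow-up

df["nfl_x_male"] = df["nfl"] * df["male"]     # the sex-specific slope of interest
cph = CoxPHFitter().fit(df, duration_col="years", event_col="progressed")
print(cph.summary[["coef", "p"]])
# A positive nfl_x_male coefficient indicates NfL carries a larger
# progression hazard in men, the pattern described in the article.
```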

While the study establishes a correlation, the biological reasons behind this discrepancy remain a subject for future investigation. The researchers propose several hypotheses. One possibility involves the blood-brain barrier, the protective filter that separates the brain’s circulatory system from the rest of the body.

If the blood-brain barrier in men becomes more permeable or dysfunctional during Alzheimer’s disease than in women, it could alter how NfL is released into the blood. Another potential explanation involves microglia, the immune cells of the brain. Sex differences in how these cells react to injury and inflammation could influence the rate of neurodegeneration and the subsequent release of neurofilament proteins.

There are limitations to the study. The cognitive tests used to assess participants can have subjective elements, although the researchers attempted to mitigate this by using composite scores. Additionally, while the statistical methods used to predict disease progression were robust, the sample size for the survival analysis was relatively small, and validation in larger cohorts will be necessary. The authors also note that the mechanism remains theoretical and requires direct testing in laboratory settings to confirm exactly why male physiology reacts differently.

This research highlights a significant need for precision in how blood biomarkers are developed and used. If these findings are further validated, it suggests that using a single cutoff value for plasma NfL to screen for Alzheimer’s disease may be insufficient.

Instead, clinicians may need to use sex-specific reference ranges to accurately assess the level of neurodegeneration in a patient. As the medical field moves closer to routine blood tests for dementia, accounting for biological sex will be essential to ensure that both men and women receive accurate diagnoses and appropriate care.

The study, “Plasma neurofilament light reflects more severe manifestation of Alzheimer’s disease in men,” was authored by Xiaoqin Cheng, Zhenghong Wang, Kun He, Yingfeng Xia, Ying Wang, Qihao Guo, Fang Xie, and Peng Yuan.

Recreational ecstasy use is linked to lasting memory impairments

Use of the drug MDMA, commonly known as ecstasy, may lead to lasting difficulties with learning and memory that persist long after a person stops taking it. A new analysis indicates that people who use the drug recreationally perform worse on cognitive tests than those who have never used it. These deficits appear to remain the same even in individuals who have abstained from the drug for months or years. These findings were published in the Journal of Psychopharmacology.

The chemical 3,4-methylenedioxymethamphetamine, or MDMA, is a synthetic substance that alters mood and perception. It works primarily by causing a massive release of serotonin in the brain. Serotonin is a neurotransmitter that plays a major role in regulating sleep, mood, and memory. The drug prevents the brain from reabsorbing this chemical, which creates the feelings of euphoria and empathy that users seek. However, this mechanism also depletes the brain’s supply of serotonin.

Animal studies have provided evidence that MDMA can be neurotoxic. Experiments with rats and primates suggest that repeated exposure to the drug can damage the nerve endings that release serotonin. These changes can last for a long time. In humans, brain imaging studies have shown alterations in the serotonin systems of heavy users. These changes often appear in the neocortex and the limbic system, which are brain areas essential for thinking and memory.

Researchers want to understand if these changes are permanent. Some imaging studies suggest that the brain might recover after a period of abstinence. However, it is not clear if the return of serotonin markers corresponds to a recovery in mental sharpness. This question is relevant for public health as well as clinical medicine. There is a renewed interest in using MDMA therapeutically to treat conditions such as post-traumatic stress disorder. Understanding the long-term safety profile of the substance is necessary for both patients and recreational users.

To address this question, a team of researchers led by Hillary Ung and Mark Daglish conducted a systematic review. They are affiliated with Metro North Mental Health and the University of Queensland in Australia. The team searched through medical databases for every available study on the topic. They looked for research that assessed cognitive function in recreational MDMA users.

The researchers applied strict criteria to select the studies. They only included research that focused on individuals who had abstained from MDMA for at least six months. This duration was chosen to ensure that the participants were not experiencing withdrawal or the immediate aftereffects of the drug. The researchers also required that the studies use standardized neurocognitive testing tools.

Fourteen articles met the requirements for the review. From these, the researchers extracted data to perform a meta-analysis. This statistical technique combines the results of multiple small studies to find patterns that might be invisible in a single experiment. The analysis focused primarily on the domain of learning and memory, as this was the most commonly tested area across the studies.
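A meta-analysis of this kind pools standardized effect sizes across studies, weighting each by its precision. The sketch below implements a standard DerSimonian-Laird random-effects pooling on invented effect sizes; the numbers are illustrative, not the review's actual data.

```python
import numpy as np

# Illustrative per-study effects: standardized mean differences (Hedges' g)
# for memory performance, users vs. non-users, with their variances.
g = np.array([-0.55, -0.40, -0.72, -0.30, -0.61])
v = np.array([0.040, 0.055, 0.060, 0.050, 0.045])

# DerSimonian-Laird estimate of between-study variance (tau^2).
w = 1.0 / v
q = np.sum(w * (g - np.sum(w * g) / np.sum(w)) ** 2)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (len(g) - 1)) / c)

# Random-effects weights and pooled effect with a 95% confidence interval.
w_re = 1.0 / (v + tau2)
pooled = np.sum(w_re * g) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled g = {pooled:.2f} "
      f"(95% CI {pooled - 1.96*se:.2f} to {pooled + 1.96*se:.2f})")
```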

The analysis revealed a clear distinction between those who used MDMA and those who did not. People with a history of MDMA use performed significantly worse on memory tests compared to people who were drug-naïve. The specific deficits were most notable in verbal memory. This involves the ability to remember and recall words or verbal information.

The researchers then compared current users against the abstinent users. Current users were defined as those who had used the drug recently, while the abstinent group had stopped for at least six months. The analysis found no statistical difference between these two groups. The cognitive performance of those who had quit was essentially the same as those who were still using the drug.

This lack of improvement was unexpected. One might predict that the brain would heal over time. However, the data did not show a correlation between the length of abstinence and better memory scores. Even in studies where participants had abstained for two years or more, the memory deficits remained. This suggests that the impact of MDMA on memory may be long-lasting or potentially permanent.

The review also examined other cognitive domains. These included executive function, which covers skills like planning and paying attention. The results for these areas were less consistent. Some data pointed to deficits in executive function, but the evidence was not strong enough to draw a firm conclusion. There was also limited evidence regarding impairments in language or motor skills.

The authors of the study advise caution when interpreting these results. They noted that the quality of the available evidence is generally low. Most of the studies included in the review were cross-sectional. This means they looked at a snapshot of people at one point in time rather than following them over many years. It is possible that people who choose to use MDMA have pre-existing differences in memory or impulsivity compared to those who do not.

Another major complication is the use of other drugs. People who use ecstasy recreationally rarely use only that substance. They often consume alcohol, cannabis, or cocaine as well. While the researchers tried to account for this, it is difficult to isolate the specific effects of MDMA from the effects of these other substances. Alcohol and cannabis are known to affect memory. It is possible that the deficits observed are the result of cumulative polydrug use rather than MDMA alone.

The purity of the drug is another variable. The studies relied on participants reporting how many pills they had taken in their lifetime. However, the amount of active MDMA in a street pill varies wildly. Some pills contain very high doses, while others contain none at all. This makes it impossible to calculate a precise dose-response relationship.

The researchers also pointed out that the drug market has changed. Many of the studies in the review were conducted in the early 2000s. Since then, the average strength of ecstasy tablets has increased significantly. Users today might be exposing themselves to higher doses than the participants in these older studies. This could mean that the cognitive risks are higher for modern users.

The findings have implications for the potential reversibility of brain changes. While some brain imaging studies show that serotonin transporters may regenerate over time, this study suggests that functional recovery does not necessarily follow. It is possible that the brain structures recover, but the functional connections remain altered. Alternatively, six months might simply be too short a time for full cognitive recovery to occur.

The study provides a sobering perspective on recreational drug use. The deficits in learning and memory were moderate to large in size. For a young person in an educational or professional setting, such deficits could have a tangible impact on their daily life. The inability to retain new information efficiently could hinder academic or career progress.

The authors call for better research designs in the future. They recommend longitudinal studies that assess people before they start using drugs and follow them over time. They also suggest using hair analysis to verify exactly what substances participants have taken. This would provide a more objective measure of drug exposure than self-reporting.

Until better data is available, the current evidence suggests a risk of lasting harm. Stopping the use of MDMA stops the immediate risks of toxicity. However, it may not immediately reverse the cognitive toll taken by previous use. The brain may require a very long time to heal, or the changes may be irreversible.

The study, “Long-term neurocognitive side effects of MDMA in recreational ecstasy users following sustained abstinence: A systematic review and meta-analysis,” was authored by Hillary Ung, Gemma McKeon, Zorica Jokovic, Stephen Parker, Mark Vickers, Eva Malacova, Lars Eriksson, and Mark Daglish.

Scientists find evidence of Epstein-Barr virus activity in spinal fluid of multiple sclerosis patients

Emerging research has provided fresh evidence regarding the role of viral infection in the development of multiple sclerosis. By analyzing immune cells extracted from the spinal fluid of patients, scientists identified a specific population of “killer” T cells that appear to target the Epstein-Barr virus. The findings suggest that an immune response directed at this common pathogen may drive the neurological damage associated with the disease. The study was published in the journal Nature Immunology.

Multiple sclerosis is a chronic condition in which the immune system mistakenly attacks myelin, the protective sheath covering nerve fibers in the central nervous system. This damage disrupts communication between the brain and the rest of the body. For decades, scientific inquiry focused heavily on CD4+ T cells. These are immune cells that help coordinate the body’s defense response.

However, pathologists have observed that a different type of immune cell is actually more abundant in the brain lesions of patients. These are CD8+ T cells, also known as cytotoxic or “killer” T cells. Their primary function is to destroy cells that have been damaged or infected by viruses. Despite their prevalence at the site of injury, the specific targets they hunt in the central nervous system have remained largely unknown.

There is a strong epidemiological link between the Epstein-Barr virus and multiple sclerosis. Almost every person diagnosed with the condition tests positive for previous exposure to this virus. Yet, because the virus infects the vast majority of the global population, the mere presence of the virus does not explain why some individuals develop the disease while others do not.

Joseph J. Sabatino Jr., a researcher at the University of California, San Francisco, and his colleagues sought to resolve this ambiguity. They aimed to determine what specific proteins the CD8+ T cells in the central nervous system were recognizing. The team hypothesized that identifying the targets of these cells could reveal the mechanism driving the disease.

The researchers collected samples of cerebrospinal fluid and blood from human participants. The study group included 13 individuals with multiple sclerosis or clinically isolated syndrome, a precursor to the disease. For comparison, they also collected samples from five control participants who were healthy or had other neurological conditions.

Obtaining cerebrospinal fluid is an invasive procedure. This makes such samples relatively rare and difficult to acquire, particularly from patients in the early stages of the disease. The team used a technology called single-cell RNA sequencing to analyze these samples. This method allows scientists to examine the genetic activity of thousands of individual cells simultaneously.

The investigators paid particular attention to the T cell receptors found on the surface of the immune cells. These receptors function like unique identification cards or keys. Each one is shaped to bind with a specific protein fragment, or antigen. When a T cell encounters its specific target, it clones itself repeatedly to create an army capable of eliminating the threat.

In the spinal fluid of patients with multiple sclerosis, the researchers found groups of CD8+ T cells that were genetically identical. This indicated they had undergone clonal expansion. These expanded groups were found in much higher concentrations in the spinal fluid than in the blood of the same patients. This suggests that these cells were not just passing through but were actively recruited to the central nervous system to fight a specific target.
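Clonal expansion can be detected by counting how often identical receptor sequences recur among sequenced cells. The toy sketch below uses invented sequences to show the logic of comparing clone frequencies in spinal fluid against blood; real analyses work from paired-chain receptor sequences recovered by single-cell sequencing.

```python
from collections import Counter

# Toy TCR repertoires: each string stands for one cell's receptor sequence.
csf = ["CASSLGETQYF", "CASSLGETQYF", "CASSLGETQYF", "CASSIRSSYEQYF",
       "CASSPGQGNTEAFF", "CASSLGETQYF"]
blood = ["CASSLGETQYF", "CASSIRSSYEQYF", "CASSDSGNTIYF", "CASSPTSGSYEQYF"]

def clone_freqs(repertoire):
    """Return the frequency of each sequence seen more than once."""
    counts = Counter(repertoire)
    total = len(repertoire)
    return {seq: n / total for seq, n in counts.items() if n > 1}

for seq, freq in clone_freqs(csf).items():
    blood_freq = blood.count(seq) / len(blood)
    print(f"{seq}: {freq:.0%} of CSF cells vs {blood_freq:.0%} of blood cells")
# A clone that dominates the spinal fluid but not the blood suggests
# recruitment and expansion inside the central nervous system.
```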

To identify that target, the research team employed several antigen discovery strategies. One method involved a technique known as yeast display. The researchers created a library of hundreds of millions of yeast cells, each displaying a different protein fragment on its surface. They exposed the T cell receptors from the patients to this library to see which proteins they would bind.

This screening process initially identified synthetic protein fragments that acted as “mimics” for the true target. While these mimics bound to the receptors, they did not necessarily provoke a functional immune response. To find the naturally occurring target, the researchers compared the genetic sequences of the receptors against databases of known viral antigens.

This comparison yielded a match for the Epstein-Barr virus. Specifically, the receptors from the expanded CD8+ T cells matched those known to target proteins produced by the virus. To validate this finding, the team used CRISPR gene-editing technology. They engineered fresh T cells from healthy donors to express the exact receptors found in the multiple sclerosis patients.

When these engineered cells were exposed to Epstein-Barr virus peptides, they became activated and released inflammatory cytokines. This confirmed that the receptors identified in the spinal fluid were indeed specific for the virus. The team found that these virus-specific cells were highly activated and possessed the molecular machinery necessary to migrate into tissues and kill cells.

The researchers also investigated whether the virus itself was present in the central nervous system. They analyzed the cerebrospinal fluid for viral DNA. They detected genetic material from the Epstein-Barr virus in the fluid of both patients and controls. However, the presence of DNA alone only indicates that the virus is there, not necessarily that it is active.

To assess viral activity, the team looked for viral RNA transcripts. These are produced when the virus is reading its own genes to make proteins. They found higher levels of a specific transcript called BamHI-W in the fluid of patients with multiple sclerosis compared to the control group. This transcript is associated with the virus’s lytic phase, a period when it is actively replicating.

The detection of lytic transcripts suggests that the virus is not dormant in these patients. Instead, it appears to be reactivating within the central nervous system or the immune cells trafficking there. This reactivation could be the trigger that causes the immune system to expand its army of CD8+ T cells.

Some theories of autoimmune disease propose a mechanism called molecular mimicry. This occurs when a viral protein looks so similar to a human protein that the immune system attacks both. The researchers tested the Epstein-Barr virus-specific receptors against human proteins that resembled the viral targets. They found no evidence of cross-reactivity. The T cells attacked the virus but ignored the human proteins.

This finding implies that the immune system in multiple sclerosis may not be confused. It may be accurately targeting a viral invader. The collateral damage to the nervous system could be a side effect of this ongoing battle between the immune system and the reactivated virus.

The gene expression profile of these cells supported this idea. The virus-specific T cells expressed high levels of genes associated with migrating to tissues and persisting there. They appeared to be an “effector” population, primed for immediate defense rather than long-term memory storage.

“Looking at these understudied CD8+ T cells connects a lot of different dots and gives us a new window on how EBV is likely contributing to this disease,” said senior author Joe Sabatino in a press release. The study provides a clearer picture of the cellular machinery at work in the disease.

There are limitations to the study that warrant consideration. The sample size was small, involving only 18 participants in total. This is a common challenge in studies requiring invasive spinal fluid collection. While the researchers identified Epstein-Barr virus targets for some of the expanded T cell clones, the targets for the majority of the expanded cells remain unidentified.

It is also not yet clear if the viral reactivation causes the disease or if the disease state allows the virus to reactivate. The immune system is complex, and inflammation in the brain could theoretically create an environment that favors viral replication. Further research will be necessary to establish the direction of causality.

Future studies will likely focus on larger cohorts of patients. Researchers will need to determine if these virus-specific cells are present at all stages of the disease or only during early development. Additionally, understanding where the virus resides within the central nervous system remains a priority. The virus typically infects B cells, another type of immune cell, and their presence in the brain is a hallmark of multiple sclerosis.

The implications for treatment are notable. Current therapies for multiple sclerosis largely function by suppressing the immune system broadly or by trapping immune cells in the lymph nodes so they cannot enter the brain. If the disease is driven by a viral infection, therapies targeting the virus itself could offer a new approach. Antiviral drugs or vaccines designed to suppress the Epstein-Barr virus might help reduce the immune activation that leads to neurological damage.

The study, “Antigen specificity of clonally enriched CD8+ T cells in multiple sclerosis,” was authored by Fumie Hayashi, Kristen Mittl, Ravi Dandekar, Josiah Gerdts, Ebtesam Hassan, Ryan D. Schubert, Lindsay Oshiro, Rita Loudermilk, Ariele Greenfield, Danillo G. Augusto, Gregory Havton, Shriya Anumarlu, Arhan Surapaneni, Akshaya Ramesh, Edwina Tran, Kanishka Koshal, Kerry Kizer, Joanna Dreux, Alaina K. Cagalingan, Florian Schustek, Lena Flood, Tamson Moore, Lisa L. Kirkemo, Isabelle J. Fisher, Tiffany Cooper, Meagan Harms, Refujia Gomez, University of California, San Francisco MS-EPIC Team, Claire D. Clelland, Leah Sibener, Bruce A. C. Cree, Stephen L. Hauser, Jill A. Hollenbach, Marvin Gee, Michael R. Wilson, Scott S. Zamvil & Joseph J. Sabatino Jr.

This behavior explains why emotionally intelligent couples are happier

New research suggests that emotional intelligence improves romantic relationships primarily through a single, specific behavior: making a partner feel valued and appreciated. While emotionally intelligent people employ various strategies to manage their partners’ feelings, the act of valuing stands out as the most consistent driver of relationship quality. This finding implies that the key to a happier partnership may be as simple as regularly expressing that one’s partner is special. The study appears in the Journal of Social and Personal Relationships.

Emotional intelligence is broadly defined as the ability to perceive, understand, and manage emotions. Psychologists have recognized a connection between this skill set and successful romances. People with higher emotional intelligence generally report higher satisfaction with their partners. Despite this established link, the specific mechanisms explaining why these individuals have better relationships have remained unclear.

One theory proposes that the answer lies in how people regulate emotions. This concept encompasses not only how individuals manage their own feelings but also how they influence the feelings of those around them. This latter process is known as extrinsic emotion regulation. In a romantic partnership, this often involves one person trying to cheer up, calm down, or validate the other.

To investigate this theory, a research team led by Hester He Xiao from the University of Sydney in Australia conducted a detailed study. They aimed to identify which specific regulatory behaviors bridge the gap between emotional intelligence and relationship satisfaction. The researchers sought to understand if emotionally intelligent people are simply better at helping their partners navigate difficult feelings.

The study included 175 heterosexual couples, comprising 350 individuals in total. The participants were recruited online and ranged in age from their early 20s to their 80s. The researchers designed a longitudinal study that spanned 14 weeks. This design allowed them to track changes and associations over time rather than just capturing a single snapshot.

Participants completed surveys in three separate waves. In the first wave, they assessed their own emotional intelligence levels. They answered questions about their ability to appraise and use emotions. In the second wave, they reported on the specific strategies they used to make their partners feel better. The researchers focused on three “high-engagement” strategies: cognitive reframing, receptive listening, and valuing.

Cognitive reframing involves helping a partner view a situation from a new, more positive perspective. Receptive listening entails encouraging a partner to vent their emotions while paying close attention to what they say. Valuing consists of actions that make the partner feel special, important, and appreciated. In the final wave, participants rated the overall quality of their relationship, considering factors like trust, closeness, and conflict levels.

The researchers used a statistical approach called the Actor-Partner Interdependence Mediation Model. This method treats the couple as a unit. It allows scientists to see how one person’s emotional intelligence affects their own happiness, known as an actor effect. It also reveals how that same person’s intelligence affects their partner’s happiness, known as a partner effect.
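
As a rough illustration of the actor/partner logic, the sketch below regresses each partner's outcome on both partners' predictors using simulated data. The study itself used a structural equation version of the model, so this is only a simplified stand-in with invented variable names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated dyadic data: one row per couple, with each partner's emotional
# intelligence (ei), valuing behavior (val), and relationship quality (rq).
rng = np.random.default_rng(1)
n = 175
ei_m, ei_f = rng.normal(size=n), rng.normal(size=n)
val_m = 0.5 * ei_m + rng.normal(scale=0.8, size=n)  # EI -> valuing
val_f = 0.5 * ei_f + rng.normal(scale=0.8, size=n)
rq_m = 0.4 * val_m + 0.3 * val_f + rng.normal(scale=0.8, size=n)
rq_f = 0.4 * val_f + 0.3 * val_m + rng.normal(scale=0.8, size=n)
df = pd.DataFrame(dict(ei_m=ei_m, ei_f=ei_f, val_m=val_m,
                       val_f=val_f, rq_m=rq_m, rq_f=rq_f))

# Each partner's outcome is regressed on BOTH partners' variables:
# own-variable coefficients are actor effects, cross-partner coefficients
# are partner effects.
men = smf.ols("rq_m ~ val_m + val_f + ei_m + ei_f", data=df).fit()
women = smf.ols("rq_f ~ val_f + val_m + ei_f + ei_m", data=df).fit()
print(men.params.round(2))
print(women.params.round(2))
```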

The analysis revealed that valuing was the primary mediator for both men and women. Individuals with higher emotional intelligence were more likely to use valuing strategies. In turn, frequent use of valuing was associated with higher relationship quality for both members of the couple. In other words, when one partner regularly conveys appreciation, both the partner on the receiving end and the one expressing it tend to report a better relationship.

This finding was unique because it applied consistently across genders. Whether the high-emotional-intelligence partner was male or female, the pathway was the same. They used their emotional skills to convey appreciation. This action created a positive feedback loop that boosted satisfaction for everyone involved.

The other two strategies showed less consistent results. Cognitive reframing and receptive listening did play roles, but they functioned differently for men and women. For example, men with higher emotional intelligence were more likely to use receptive listening. When men listened attentively, their female partners reported better relationship quality. However, the men themselves did not report a corresponding increase in their own relationship satisfaction from this behavior.

Women’s use of receptive listening showed a different pattern. When women listened attentively, it was linked to better relationship quality for both themselves and their male partners. This suggests a gender difference in how listening is experienced. For women, engaging deeply with a partner’s emotions appears to be mutually rewarding. For men, it primarily benefits the partner.

Cognitive reframing also displayed gendered nuances. Men’s use of reframing—helping a partner see the bright side—predicted higher relationship quality for their female partners. Women’s use of reframing did not show this same strong association in the primary analysis. These variations highlight that while valuing is universally beneficial, other support strategies may depend on who is using them.

The researchers also looked at whether these behaviors predicted changes in relationship quality over time. They ran an analysis controlling for the couples’ initial satisfaction levels. In this stricter test, the mediation effect of valuing disappeared. This result indicates that while emotional intelligence and valuing are linked to high relationship quality in the present, they may not drive long-term improvements.

This distinction is important for understanding the limits of the findings. The behaviors seem to maintain a good relationship rather than transform a bad one. High emotional intelligence helps sustain a high level of functioning, but it does not predict that a relationship will grow happier over the 14-week period once its starting level is taken into account.

There was one unexpected finding in the change-over-time analysis. Men’s emotional intelligence was associated with a decrease in their female partners’ relationship quality relative to the baseline. This hints at a potential “dark side” to emotional intelligence. It is possible that some individuals use their emotional skills for manipulation or self-serving goals, though this interpretation requires further study.

The study had several limitations that affect how the results should be viewed. The sample consisted primarily of White, English-speaking participants from Western countries. Cultural differences in how emotions are expressed and regulated could lead to different results in other populations. Additionally, the study relied on self-reports for all measures. Participants described their own behaviors, which can introduce bias.

People often perceive their own actions differently than their partners do. A person might believe they are listening attentively, while their partner feels ignored. Future research would benefit from asking partners to rate each other’s regulation strategies. This would provide a more objective measure of how well these strategies are actually performed.

The timing of the data collection is another factor to consider. The study took place between August and October 2021. This was a period when many people were still adjusting to life after the peak of the COVID-19 pandemic. The unique stressors of that time may have influenced how couples relied on each other for emotional support.

Future research should also explore the context in which these strategies are used. The current study asked about general attempts to make a partner feel better. It did not distinguish between low-stakes situations and high-conflict arguments. It is possible that cognitive reframing or listening becomes more or less effective depending on the intensity of the distress.

Despite these caveats, the core message offers practical insight. While complex psychological skills help, the most effective behavior is relatively straightforward. Making a partner feel valued acts as a powerful buffer. It connects emotional ability to tangible relationship success. For couples, focusing on simple expressions of appreciation may be the most efficient way to utilize emotional intelligence.

The study, “Valuing your partner more: Linking emotional intelligence to better relationship quality,” was authored by Hester He Xiao, Kit S. Double, Rebecca T. Pinkus, and Carolyn MacCann.

Scientists just mapped the brain architecture that underlies human intelligence

For decades, researchers have attempted to pinpoint the specific areas of the brain responsible for human intelligence. A new analysis suggests that general intelligence involves the coordination of the entire brain rather than the superior function of any single region. By mapping the connections within the human brain, or connectome, scientists found that distinct patterns of global communication predict cognitive ability.

The research indicates that intelligent thought relies on a system-wide architecture optimized for efficiency and flexibility. These findings were published in the journal Nature Communications.

General intelligence represents the capacity to reason, learn, and solve problems across a variety of different contexts. In the past, theories often attributed this capacity to specific networks, such as the areas in the frontal and parietal lobes involved in attention and working memory. While these regions are involved in cognitive tasks, newer perspectives suggest they are part of a larger story.

The Network Neuroscience Theory proposes that intelligence arises from the global topology of the brain. This framework suggests that the physical wiring of the brain and its patterns of activity work in tandem.

Ramsey R. Wilcox, a researcher at the University of Notre Dame, led the study to test the specific predictions of this network theory. Working with senior author Aron K. Barbey and colleagues from the University of Illinois and Stony Brook University, Wilcox sought to move beyond localized models. The team aimed to understand how the brain’s physical structure constrains and directs its functional activity.

To investigate these questions, the research team utilized data from the Human Connectome Project. This massive dataset provided brain imaging and cognitive testing results from 831 healthy young adults. The researchers also validated their findings using an independent sample of 145 participants from a separate study.

The investigators employed a novel method that combined two distinct types of magnetic resonance imaging (MRI) data. They used diffusion-weighted MRI to map the structural white matter tracts, which act as the physical cables connecting brain regions. Simultaneously, they analyzed resting-state functional MRI, which measures the rhythmic activation patterns of brain cells.

By integrating these modalities, Wilcox and his colleagues created a joint model of the brain. This approach allowed them to estimate the capacity of structural connections to transmit information based on observed activity. The model corrected for limitations in traditional scanning, such as the difficulty in detecting crossing fibers within the brain’s white matter.

The team then applied predictive modeling techniques to see if these global network features could estimate a participant’s general intelligence score. The results provided strong support for the idea that intelligence is a distributed phenomenon. Models that incorporated connections across the whole brain successfully predicted intelligence scores.
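
To give a flavor of what such predictive modeling can look like, the sketch below fits a cross-validated ridge regression to simulated connectome features. The data, dimensions, and model choice are all invented for illustration; the paper's actual pipeline is not reproduced here.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_predict

# Simulated data: one row per participant, one column per connection (edge),
# holding that edge's estimated communication capacity.
rng = np.random.default_rng(0)
n_subjects, n_edges = 831, 2000
X = rng.normal(size=(n_subjects, n_edges))
w = rng.normal(size=n_edges) * (rng.random(n_edges) < 0.05)  # sparse signal
g = X @ w + rng.normal(scale=3.0, size=n_subjects)           # "intelligence"

# Whole-brain model: ridge regression over ALL edges, scored on held-out
# participants via cross-validation.
pred = cross_val_predict(Ridge(alpha=100.0), X, g, cv=10)
print("prediction accuracy r =", round(np.corrcoef(pred, g)[0, 1], 2))

# A single-network model would instead restrict X to one subnetwork's
# columns, e.g. X[:, frontoparietal_idx] (hypothetical index), which
# typically predicts less accurately than the whole-brain model.
```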

In contrast, models that relied on single, isolated networks performed with less accuracy. This suggests that while specific networks have roles, the interaction between them is primary. The most predictive connections were not confined to one area but were spread throughout the cortex.

One of the specific predictions the team tested involved the strength and length of neural connections. The researchers found that individuals with higher intelligence scores tended to rely on “weak ties” for long-range communication. In network science, a weak tie represents a connection that is not structurally dense but acts as a bridge between separate communities of neurons.

These long-range, weak connections require less energy to maintain than dense, strong connections. Their weakness allows them to be easily modulated by neural activity. This quality makes the brain more adaptable, enabling it to reconfigure its communication pathways rapidly in response to new problems.

The study showed that in highly intelligent individuals, these predictive weak connections spanned longer physical distances. Conversely, strong connections in these individuals tended to be shorter. This architecture likely balances the high cost of long-distance communication with the need for system-wide integration.

Another key finding concerned “modal control.” This concept refers to the ability of specific brain regions to drive the brain into difficult-to-reach states of activity. Cognitive tasks often require the brain to shift away from its default patterns to process complex information.
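
In network neuroscience, modal control is often quantified with the metric introduced by Gu and colleagues, which scores each region by how strongly it engages the network's fast-decaying activity modes. The sketch below applies that general formula to a toy connectivity matrix; it illustrates the metric itself, not the exact computation used in this study.

```python
import numpy as np

# Toy structural connectome: a symmetric weighted adjacency matrix.
rng = np.random.default_rng(3)
A = rng.random((50, 50))
A = (A + A.T) / 2
np.fill_diagonal(A, 0)

# Scale so the linear dynamics x(t+1) = A x(t) are stable.
A = A / (1 + np.max(np.abs(np.linalg.eigvalsh(A))))

# Modal controllability of node i: phi_i = sum_j (1 - lambda_j**2) * v_ij**2.
# High values mark regions well placed to push the system into
# hard-to-reach, fast-decaying modes of activity.
lam, V = np.linalg.eigh(A)
phi = ((1 - lam**2) * V**2).sum(axis=1)
print("strongest control hubs (node indices):", np.argsort(phi)[-5:][::-1])
```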

Wilcox and his team found that general intelligence was positively associated with the presence of regions exhibiting high modal control. These control hubs were located in areas of the brain associated with executive function and visual processing. The presence of these regulating nodes allows the brain to orchestrate interactions between different networks effectively.

The researchers also examined the overall topology of the brain using a concept known as “small-worldness.” A small-world network is one that features tight-knit local communities of nodes as well as short paths that connect those communities. This organization is efficient because it allows for specialized local processing while maintaining rapid global communication.
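
Small-worldness is commonly summarized by the index sigma = (C / C_rand) / (L / L_rand), which compares a network's clustering C and characteristic path length L to random-graph baselines. The sketch below computes this for a toy network using analytic approximations for the baselines; the study's own estimation details may differ.

```python
import math
import networkx as nx

# Toy stand-in for a brain network: the Watts-Strogatz model combines tight
# local clustering with a few long-range shortcuts.
G = nx.connected_watts_strogatz_graph(n=200, k=8, p=0.1, seed=42)

C = nx.average_clustering(G)              # local clustering
L = nx.average_shortest_path_length(G)    # global path length

# Approximate baselines for a random graph of the same size and density:
# C_rand ~ k/n and L_rand ~ ln(n)/ln(k), with k the mean degree.
n = G.number_of_nodes()
k = 2 * G.number_of_edges() / n
C_rand = k / n
L_rand = math.log(n) / math.log(k)

sigma = (C / C_rand) / (L / L_rand)       # sigma > 1 indicates small-worldness
print(f"C={C:.3f}, L={L:.2f}, sigma={sigma:.1f}")
```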

The analysis revealed that participants with higher intelligence scores possessed brain networks with greater small-world characteristics. Their brains exhibited high levels of local clustering, meaning nearby regions were tightly interconnected. Simultaneously, they maintained short average path lengths across the entire system.

This balance keeps information from getting trapped in local modules while also preventing the brain from drifting toward a disorganized random network. The findings suggest that deviations from this optimal balance may underlie lower cognitive performance.

There are limitations to the current study that warrant consideration. The research relies on correlational data, so it cannot definitively prove that specific network structures cause higher intelligence. It is possible that engaging in intellectual activities alters the brain’s wiring over time.

Additionally, the study focused primarily on young adults. Future research will need to determine if these network patterns hold true across the lifespan, from childhood development through aging. The team also used linear modeling techniques, which may miss more nuanced, non-linear relationships in the data.

These insights into the biological basis of human intelligence have implications for the development of artificial intelligence. Current AI systems often excel at specific tasks but struggle with the broad flexibility characteristic of human thought. Understanding how the human brain achieves general intelligence through global network architecture could inspire new designs for artificial systems.

By mimicking the brain’s balance of local specialization and global integration, engineers might create AI that is more adaptable. The reliance on weak, flexible connections for integrating information could also serve as a model for efficient data processing.

The shift in perspective offered by this study is substantial. It moves the field away from viewing the brain as a collection of isolated tools. Instead, it presents the brain as a unified, dynamic system where the pattern of connections determines cognitive potential.

Wilcox and his colleagues have provided empirical evidence that validates the core tenets of Network Neuroscience Theory. Their work demonstrates that intelligence is not a localized function but a property of the global connectome. As neuroscience continues to map these connections, the definition of what it means to be intelligent will likely continue to evolve.

The study, “The network architecture of general intelligence in the human connectome,” was authored by Ramsey R. Wilcox, Babak Hemmatian, Lav R. Varshney & Aron K. Barbey.

Divorce history is not linked to signs of brain aging or dementia markers

A new study investigating the biological impact of marital dissolution suggests that a history of divorce does not accelerate physical changes in the brain associated with aging or dementia. Researchers analyzed brain scans from a racially and ethnically diverse group of older adults to look for signs of neurodegeneration. They found no robust link between having been divorced and the presence of Alzheimer’s disease markers or reductions in brain volume. These findings were published in Innovation in Aging.

The rising number of older adults globally has made understanding the causes of cognitive decline a priority for medical researchers. Scientists are increasingly looking beyond diet and exercise to understand how social and psychological experiences shape biology. Psychosocial stress is a primary area of interest in this field. Chronic stress can negatively impact the body, potentially increasing inflammation or hormonal imbalances that harm brain cells over time.

Divorce represents one of the most common and intense sources of psychosocial stress in the United States. Approximately 17 percent of adults over the age of 50 reported being divorced in 2023. The experience often involves not just the emotional pain of a relationship ending but also long-term economic strain and the loss of social standing. These secondary effects are often particularly harsh for women.

Previous research into how divorce affects the aging mind has produced conflicting results. Some past studies indicated that divorced or widowed individuals faced higher odds of developing dementia compared to married peers. Other inquiries found that ending a marriage might actually slow cognitive decline in some cases. Most of this prior work relied on memory tests rather than looking at the physical condition of the brain itself.

To address this gap, a team of researchers sought to determine if divorce leaves a physical imprint on brain structure. The study was led by Suhani Amin and Junxian Liu, who are affiliated with the Leonard Davis School of Gerontology at the University of Southern California. They collaborated with senior colleagues from Kaiser Permanente, the University of California, Davis, and Rush University.

The team hypothesized that the accumulated stress of divorce might correlate with worse brain health in later years. They specifically looked for reductions in brain size and the accumulation of harmful proteins. They also aimed to correct a limitation in previous studies that often focused only on White populations. This new analysis prioritized a cohort that included Asian, Black, Latino, and White participants.

The researchers utilized data from two major ongoing health studies. The first was the Kaiser Healthy Aging and Different Life Experiences (KHANDLE) cohort. The second was the Study of Healthy Aging in African Americans (STAR) cohort. Both groups consisted of long-term members of the Kaiser Permanente Northern California healthcare system.

Participants in these cohorts had previously completed detailed health surveys and were invited to undergo neuroimaging. The researchers identified 664 participants who had complete magnetic resonance imaging (MRI) data. They also analyzed a subset of 385 participants who underwent positron emission tomography (PET) scans. The average age of the participants at the time of their MRI scan was approximately 74 years old.

The primary variable the researchers examined was a history of divorce. They classified participants based on whether they answered yes to having a previous marriage end in divorce. They also included individuals who reported their current marital status as divorced. This approach allowed them to capture lifetime exposure to the event rather than just current status.

The MRI scans provided detailed images allowing the measurement of brain volumes. The team looked at the total size of the cerebrum and specific regions like the hippocampus. The hippocampus is a brain structure vital for learning and memory that often shrinks early in the course of Alzheimer’s disease. They also examined the lobes of the brain and the volume of gray matter and white matter.

In addition to volume, the MRI scans measured white matter hyperintensities. These are bright spots on a scan that indicate damage to the brain’s communication cables. High amounts of these hyperintensities are often associated with vascular problems and cognitive slowing.

The PET scans utilized a radioactive tracer to detect amyloid plaques. Amyloid beta is a sticky protein that clumps between nerve cells and is a hallmark characteristic of Alzheimer’s disease. The researchers calculated the density of these plaques to determine if a person crossed the threshold for amyloid positivity.
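
Amyloid positivity is typically derived from a standardized uptake value ratio (SUVR): mean tracer signal across target cortical regions divided by signal in a reference region, compared against a cutoff. The numbers and threshold below are purely illustrative, since real cutoffs depend on the tracer and processing pipeline used.

```python
import numpy as np

# Hypothetical mean tracer uptake per cortical region of interest.
cortical_rois = {"frontal": 1.32, "temporal": 1.18,
                 "parietal": 1.25, "cingulate": 1.40}
reference_uptake = 1.10  # e.g., cerebellar gray matter

# SUVR: target uptake relative to the reference region.
suvr = np.mean(list(cortical_rois.values())) / reference_uptake

THRESHOLD = 1.11  # illustrative cutoff only; tracer- and pipeline-specific
status = "positive" if suvr > THRESHOLD else "negative"
print(f"global SUVR = {suvr:.2f} -> amyloid {status}")
```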

The statistical analysis accounted for various factors that could skew the results. The models adjusted for age, sex, race and ethnicity, and education level. They also controlled for whether the participant was born in the American South and whether their own parents had divorced.

The results showed that individuals with a history of divorce had slightly smaller volumes in the total cerebrum and hippocampus. They also displayed slightly greater volumes of white matter hyperintensities. However, these differences were small and not statistically significant, meaning the estimates were too imprecise to rule out random chance as the explanation.

The PET scan analysis yielded similar results regarding Alzheimer’s pathology. There was no meaningful association between a history of divorce and the total burden of amyloid plaques. The likelihood of being classified as amyloid-positive was effectively the same for divorced and non-divorced participants.

The researchers performed several sensitivity analyses to ensure their findings were robust. They broke the data down by sex to see if men and women experienced different effects. Although the impact of divorce on brain volume seemed to trend in opposite directions for men and women in some brain regions, the confidence intervals overlapped. This suggests there is no strong evidence of a sex-specific difference in this sample.

They also checked if the definition of the sample population affected the outcome. They ran the numbers again excluding people who had never been married. They also adjusted for childhood socioeconomic status, looking at factors like parental education and financial stability. None of these adjustments altered the primary conclusion that divorce was not associated with brain changes.

There are several potential reasons why this study did not find a link between divorce and neurodegeneration. One possibility is that the stress of divorce acts more like an acute, short-term event rather than a chronic condition. Detectable changes in brain structure usually result from sustained exposure to adversity over many years. It is possible that for many people, the stress of divorce resolves before it causes permanent biological damage.

Another factor is the heterogeneity of the divorce experience. For some individuals, ending a marriage is a devastating source of trauma and financial ruin. For others, it is a relief that removes them from an unhealthy or unsafe environment. These opposing experiences might cancel each other out when analyzing a large group, leading to a null result.

The authors noted several limitations to their work. The study relied on a binary measure of whether a divorce occurred. They did not have data on the timing of the divorce or the reasons behind it. They also lacked information on the subjective level of stress the participants felt during the separation.

Future research could benefit from a more nuanced approach. Gathering data on the duration of the marriage and the economic aftermath of the split could provide clearer insights. Understanding the personal context of the divorce might help reveal specific subgroups of people who are more vulnerable to health consequences.

The study provides a reassuring perspective for the millions of older adults who have experienced marital dissolution. While divorce is undoubtedly a major life event, this research suggests it does not automatically dictate the biological health of the brain in late life. It underscores the resilience of the aging brain in the face of common social stressors.

The study, “The Association Between Divorce and Late-life Brain Health in a Racially and Ethnically Diverse Cohort of Older Adults,” was authored by Suhani Amin, Junxian Liu, Paola Gilsanz, Evan Fletcher, Charles DeCarli, Lisa L. Barnes, Rachel A. Whitmer, and Eleanor Hayes-Larson.

Eye contact discomfort does not explain slower emotion recognition in autistic individuals

Recent findings published in the journal Emotion suggest that the discomfort associated with making eye contact is not exclusive to individuals with a clinical autism diagnosis but scales with autistic traits found in the general population. The research team discovered that while this social unease is common among those with higher levels of autistic traits, it does not appear to be the direct cause of difficulties in recognizing facial expressions.

The concept of autism has evolved significantly in recent years. Mental health professionals and researchers increasingly view the condition not as a binary category but as a spectrum of traits that exist throughout the general public. This perspective implies that the distinction between a person with an autism diagnosis and a neurotypical person is often a matter of degree rather than a difference in kind.

Features associated with autism, such as sensory sensitivities or preferences for repetitive behaviors, can be present in anyone to varying extents. One of the most recognizable features associated with autism is a reduction in mutual gaze during social interactions. Autistic individuals frequently report that meeting another person’s eyes causes intense sensory or emotional overarousal.

Despite these self-reports, the scientific community has not fully determined why this avoidance occurs or how it impacts social cognition. Previous theories posited that avoiding eye contact limits the visual information a person receives. If a person does not look at the eyes, they might miss subtle cues required to identify emotions such as fear or happiness.

To investigate this, a team of researchers led by Sara Landberg from the University of Gothenburg in Sweden designed a study to disentangle these factors. The study included co-authors Jakob Åsberg Johnels, Martyna Galazka, and Nouchine Hadjikhani. Their primary goal was to examine how eye gaze discomfort relates to autistic traits, distinct from a formal diagnosis.

They also sought to understand the role of other conditions that often co-occur with autism. One such condition is alexithymia, which is characterized by a difficulty in identifying and describing one’s own emotions. Another is prosopagnosia, often called “face blindness,” which involves an impairment in recognizing facial identity.

The researchers recruited 187 adults from English-speaking countries through an online platform. This method allowed them to access a diverse sample of the general public rather than relying solely on clinical patients. The participants completed a series of standardized questionnaires to measure their levels of autistic traits, alexithymia, and face recognition abilities.

To assess sensory experiences, the group answered questions about their sensitivity to stimuli like noise, light, and touch. The study also utilized a specific “Eye Contact Questionnaire.” This tool asked participants directly if they found eye contact unpleasant and, if so, what strategies they used to manage that feeling.

In addition to the self-reports, the participants completed an objective performance test called the Emotion Labeling Task. On a computer screen, they viewed faces that had been digitally morphed to display emotions at only 40 percent intensity. This low intensity was chosen to make the task sufficiently challenging for a general adult audience.

Participants had to match the emotion shown on the screen—such as fear, anger, or happiness—to one of four label options. The researchers measured both the accuracy of the answers and the reaction time. This setup allowed the team to determine if people with high levels of specific traits were slower or less accurate at reading faces.

The data revealed clear associations between personality traits and social comfort. Participants who scored higher on the scale for autistic traits were more likely to report finding eye contact unpleasant. This supports the idea that social gaze aversion is a continuous trait in the population.

The study also identified an independent link between alexithymia and eye gaze discomfort. Individuals who struggle to understand their own internal emotional states also tend to find mutual gaze difficult. While these two traits often overlap, the statistical analysis showed that alexithymia predicts discomfort on its own.

A particularly revealing finding emerged regarding the coping strategies participants employed. The researchers asked individuals how they handled the discomfort of looking someone in the eye. The responses indicated that people with high autistic traits tend to look at other parts of the face, such as the mouth or nose.

In contrast, those with high levels of alexithymia were more likely to look away from the face entirely. They might look at the floor or in another direction. This suggests that while the symptom of gaze avoidance looks similar from the outside, the internal mechanism or coping strategy differs depending on the underlying trait.

When analyzing the performance on the Emotion Labeling Task, the researchers found no statistically significant difference in accuracy based on autistic traits. Participants with higher levels of these traits were just as capable of correctly identifying the emotions as their peers. This contrasts with some previous literature that found deficits in emotion recognition accuracy.

However, the results did show a difference in processing speed. Participants with higher levels of autistic traits took longer to identify the emotions. Similarly, those with higher levels of prosopagnosia, or difficulty recognizing identities, also demonstrated slower reaction times.

The researchers then performed a mediation analysis to see if the eye gaze discomfort explained this slower processing. The hypothesis was that discomfort might cause people to look away or avoid the eyes, which would then slow down their ability to read the emotion. The data did not support this hypothesis.
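
A standard way to run such a test is to multiply the path from predictor to mediator by the path from mediator to outcome, then bootstrap a confidence interval for that product. The sketch below illustrates the idea on simulated data built so that, echoing the study's result, discomfort contributes nothing to reaction time; it is not the authors' actual analysis code.

```python
import numpy as np

# Simulated scores: autistic traits (x), eye gaze discomfort (m), and
# reaction time on the emotion task (y). Discomfort is built to have no
# effect on reaction time, so the indirect effect should be near zero.
rng = np.random.default_rng(7)
n = 187
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(size=n)     # traits predict discomfort (a path)
y = 0.4 * x + rng.normal(size=n)     # traits predict slower RT; m does not

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                        # x -> m
    design = np.column_stack([np.ones(len(x)), x, m])
    b = np.linalg.lstsq(design, y, rcond=None)[0][2]  # m -> y, controlling x
    return a * b

# Bootstrap a 95% confidence interval for the indirect effect a*b.
boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)  # resample participants with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # spans 0: no mediation
```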

Eye gaze discomfort was not a statistically significant predictor of the reaction time on the emotion task. This implies that the discomfort one feels about eye contact and the cognitive speed of recognizing an emotion are likely separate issues. The slower processing speed associated with autistic traits seems to stem from a different cognitive mechanism than the emotional or sensory aversion to gaze.

The study also explored sensory sensitivity. The researchers hypothesized that general sensory over-responsiveness might drive the discomfort with eye contact. However, the analysis did not find a strong link between general sensory sensitivity scores and the specific report of eye gaze discomfort.

These findings suggest that the difficulty autistic individuals face with emotion recognition may be more about processing efficiency than a lack of visual input due to avoidance. It challenges the assumption that simply training individuals to make more eye contact would automatically improve their ability to read emotions.

There are limitations to this research that must be considered. The data was collected entirely online. While this allows for a large sample, it prevents the researchers from controlling the environment in which participants took the tests. Factors such as screen size, lighting, or distractions at home could influence reaction times.

The sample was also relatively highly educated. A majority of the participants had completed a university degree. This demographic skew might mean the results do not perfectly represent the broader global population. Additionally, the autistic traits in this sample were slightly higher than average, which may reflect a self-selection bias in who chooses to participate in online psychological studies.

The measurement of eye gaze discomfort relied on a binary “yes or no” question followed by strategy selection. This simple metric may not capture the full complexity or intensity of the experience. Future research would benefit from using more granular scales to measure the degree of discomfort.

The researchers note that this study focused on traits rather than diagnostic categories. This approach is beneficial for understanding the continuum of human behavior. However, it means the results might not fully apply to individuals with profound autism who experience high functional impairment.

Future investigations could expand on the distinct coping strategies identified here. Understanding why individuals with alexithymia look away completely, while those with autistic traits look at other facial features, could inform better support strategies. It suggests that interventions should be tailored to the specific underlying profile of the individual.

The study also raises questions about the role of social anxiety. While the team controlled for several factors, they did not specifically measure current anxiety levels. It is possible that general social anxiety plays a role in the strategies people use to avoid eye contact.

The study, “Eye Gaze Discomfort: Associations With Autistic Traits, Alexithymia, Face Recognition, and Emotion Recognition,” was authored by Sara Landberg, Jakob Åsberg Johnels, Martyna Galazka, and Nouchine Hadjikhani.

A high-sugar breakfast may trigger a “rest and digest” state that dampens cognitive focus

Starting the day with a sugary pastry might feel like a treat, but new research suggests it could sabotage your workday before it begins. A study published in the journal Food and Humanity indicates that a high-fat, high-sugar morning meal may dampen cognitive planning abilities and increase sleepiness in young women. The findings imply that nutritional choices at breakfast play a larger role in regulating morning physiological arousal and mental focus than previously realized.

Dietary habits vary widely across populations, yet breakfast is often touted as the foundation for daily energy. Despite this reputation, statistical data indicates that a sizable portion of adult women frequently consume confectionaries or sweet snacks as their first meal of the day. Researchers identify this trend as a potential public health concern, particularly regarding productivity and mental well-being in the workplace.

The autonomic nervous system regulates involuntary body processes, including heart rate and digestion. It functions through two main branches: the sympathetic nervous system and the parasympathetic nervous system. The sympathetic branch prepares the body for action, often described as the “fight or flight” response.

Conversely, the parasympathetic branch promotes a “rest and digest” state, calming the body and conserving energy. Professional work performance typically requires a certain level of alertness and physiological arousal. Fumiaki Hanzawa and colleagues at the University of Hyogo in Japan sought to understand how different breakfast compositions influence this delicate neural balance.

Hanzawa and his team hypothesized that the nutrient density of a meal directly impacts how the nervous system regulates alertness and cognitive processing shortly after eating. To test this, they designed a randomized crossover trial involving 13 healthy female university students. This specific study design ensured that each participant acted as her own control, minimizing the impact of individual biological variations.

On two separate mornings, the women arrived at the laboratory after fasting overnight. They consumed one of two test meals that contained an identical amount of food energy, totaling 497 kilocalories. The researchers allowed for a washout period of at least one week between the two sessions to prevent any lingering effects from the first test.

One meal option was a balanced breakfast modeled after a traditional Japanese meal, known as Washoku. This included boiled rice, salted salmon, an omelet, spinach with sesame sauce, miso soup, and a banana. The nutrient breakdown of this meal favored carbohydrates and protein, with a moderate amount of fat.

The alternative was a high-fat, high-sugar meal designed to mimic a common convenient breakfast of poor nutritional quality. This consisted of sweet doughnut holes and a commercially available strawberry milk drink. This meal derived more than half its total energy from fat and contained very little protein compared to the balanced option.
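
That fat share follows directly from standard Atwater energy factors of roughly 9 kilocalories per gram of fat and 4 per gram of carbohydrate or protein. The gram amounts in the snippet below are invented to land near the reported 497-kilocalorie total, simply to show the arithmetic.

```python
# Atwater factors: fat ~9 kcal/g; carbohydrate and protein ~4 kcal/g.
# Gram amounts are illustrative, chosen to total roughly 497 kcal.
fat_g, carb_g, protein_g = 30.0, 52.0, 4.5
kcal = 9 * fat_g + 4 * carb_g + 4 * protein_g
print(f"total = {kcal:.0f} kcal, energy from fat = {9 * fat_g / kcal:.0%}")
```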

The researchers monitored several physiological markers for two hours following the meal. They measured body temperature inside the ear to track diet-induced thermogenesis, which is the production of heat in the body caused by metabolizing food. They also recorded heart rate variability to assess the activity of the autonomic nervous system.
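
Heart rate variability analyses typically report RMSSD, a time-domain index tied to parasympathetic activity, and the LF/HF power ratio, often read as a rough marker of sympathovagal balance. The sketch below computes both from simulated beat-to-beat intervals; the study's exact metrics and preprocessing are not specified here.

```python
import numpy as np
from scipy.signal import welch

# Simulated RR intervals: seconds between successive heartbeats.
rng = np.random.default_rng(0)
rr = 0.85 + 0.05 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 0.02, 300)

# RMSSD: root mean square of successive differences, a standard index of
# parasympathetic ("rest and digest") activity.
rmssd_ms = np.sqrt(np.mean(np.diff(rr) ** 2)) * 1000

# Frequency domain: resample the RR series onto an even time grid, estimate
# its power spectrum, then integrate the low- and high-frequency bands.
t = np.cumsum(rr)
fs = 4.0
t_even = np.arange(t[0], t[-1], 1 / fs)
rr_even = np.interp(t_even, t, rr)
f, pxx = welch(rr_even - rr_even.mean(), fs=fs, nperseg=256)

lf_band = (f >= 0.04) & (f < 0.15)   # mixed sympathetic/parasympathetic
hf_band = (f >= 0.15) & (f < 0.40)   # mainly parasympathetic
lf = np.trapz(pxx[lf_band], f[lf_band])
hf = np.trapz(pxx[hf_band], f[hf_band])
print(f"RMSSD = {rmssd_ms:.1f} ms, LF/HF = {lf / hf:.2f}")
```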

At specific intervals, the participants completed computerized cognitive tests. These tasks were designed to measure attention and executive function. Specifically, the researchers looked at “task switching,” which assesses the brain’s ability to shift attention between different rule sets.

The participants also rated their subjective feelings on a sliding scale. They reported their current levels of fatigue, vitality, and sleepiness at multiple time points. This allowed the researchers to compare the women’s internal psychological states with their objective physiological data.

The physiological responses showed distinct patterns depending on the food consumed. The balanced breakfast prompted a measurable rise in body temperature and heart rate shortly after eating. This physiological shift suggests an activation of the sympathetic nervous system, preparing the body for the day’s activities.

In contrast, the doughnut and sweetened milk meal failed to raise body temperature to the same degree. Instead, the data revealed a dominant response from the parasympathetic nervous system immediately after consumption. This suggests the sugary meal induced a state of relaxation and digestion rather than physiological readiness.

Subjective reports from the participants mirrored these physical changes. The women reported feeling higher levels of vitality after consuming the balanced meal containing rice and fish. This feeling of energy persisted during the post-meal monitoring period.

Conversely, when the same women ate the high-fat, high-sugar breakfast, they reported increased sleepiness. This sensation of lethargy aligns with the parasympathetic dominance observed in the heart rate data. The anticipated energy boost from the sugar did not translate into a feeling of vitality.

The cognitive testing revealed that the sugary meal led to a decline in planning function. Specifically, the participants struggled more with task switching after the high-fat, high-sugar breakfast compared to the balanced meal. This function is vital for organizing steps to achieve a goal and adapting to changing work requirements.

Unexpectedly, participants performed slightly better on a specific visual attention task after the high-fat, high-sugar meal. The authors suggest this could be due to a temporary dopamine release triggered by the sweet taste. However, this isolated improvement did not extend to the more complex executive functions required for planning.

The researchers propose that the difference in carbohydrate types may explain some of the results. The balanced meal contained rice, which is rich in polysaccharides like amylose and amylopectin. These complex carbohydrates digest differently than the sucrose found in the doughnuts and sweetened milk.

Protein content also likely played a role in the thermal effects observed. The balanced meal contained significantly more protein, which is known to require more energy to metabolize than fat or sugar. This thermogenic effect contributes to the rise in body temperature and the associated feeling of alertness.

The study implies that work performance is not just about caloric intake but the quality of those calories. A breakfast that triggers a “rest and digest” response may be counterproductive for someone attempting to start a workday. The mental fog and sleepiness associated with the high-fat, high-sugar meal could hinder productivity.

While the results provide insight into diet and physiology, the study has limitations that affect broader applications. The sample size was small, involving only 13 participants from a specific age group and gender. This limits the ability to generalize the results to men or older adults with different metabolic profiles.

The study also focused exclusively on young students rather than full-time workers. Actual workplace stress and physical demands might interact with diet in ways this laboratory setting could not replicate. Additionally, the study only examined immediate, short-term effects following a single meal.

It remains unclear how long-term habitual consumption of high-fat, high-sugar breakfasts might alter these responses over months or years. Chronic exposure to such a diet could potentially lead to different adaptations or more severe deficits. The researchers note that habitual poor diet is already linked to cognitive decline in other epidemiological studies.

Hanzawa and the research team suggest that future investigations should expand the demographic pool. Including male participants and older workers would help clarify if these physiological responses are universal. They also recommend examining how these physiological changes translate into actual performance metrics in a real-world office environment.

The study, “High-fat, high-sugar breakfast worsen morning mood, cognitive performance, and cardiac sympathetic nervous system activity in young women,” was authored by Fumiaki Hanzawa, Manaka Hashimoto, Mana Gonda, Miyoko Okuzono, Yumi Takayama, Yukina Yumen, and Narumi Nagai.

A new mouse model links cleared viral infections to ALS-like symptoms

Recent research suggests that a person’s unique genetic makeup may determine whether a temporary viral infection triggers a permanent, debilitating brain disease later in life. A team of scientists found that specific genetic strains of mice developed lasting spinal cord damage resembling amyotrophic lateral sclerosis (ALS) long after their immune systems had successfully cleared the virus. These findings were published in the Journal of Neuropathology & Experimental Neurology.

The origins of neurodegenerative diseases have puzzled medical experts for decades. Conditions such as ALS, often called Lou Gehrig’s disease, involve the progressive death of motor neurons. This leads to muscle weakness, paralysis, and eventually respiratory failure. While a small percentage of cases run in families, the vast majority are sporadic. This means they appear without a clear family history.

Researchers have hypothesized that environmental factors likely initiate these sporadic cases. Viral infections are a primary suspect. The theory suggests a “hit and run” mechanism. A virus enters the body and causes damage or alters the immune system. The body eventually eliminates the virus. However, the pathological process continues long after the pathogen is gone. Proving this connection has been difficult because by the time a patient develops ALS, the triggering virus is no longer detectable.

To investigate this potential link, the research team needed a better animal model. Standard laboratory mice are often genetically identical. This lack of diversity fails to mimic the human population. In humans, one person might catch a cold and recover quickly, while another might develop severe complications. Standard lab mice usually respond to infections in a uniform way.

To overcome this limitation, the researchers utilized the Collaborative Cross. This is a large panel of mouse strains bred to capture immense genetic diversity. The team, led by first author Koedi S. Lawley and senior author Candice Brinkmeyer-Langford from Texas A&M University, selected five distinct strains from this collection. They aimed to see if different genetic backgrounds would result in different disease outcomes following the exact same viral exposure.

The researchers infected these genetically diverse mice with Theiler’s murine encephalomyelitis virus (TMEV). This virus is a well-established tool in neurology research. It is typically used to study conditions like multiple sclerosis and epilepsy. In this context, the scientists used it to examine spinal cord damage. They compared the infected mice to a control group that received a placebo.

The team monitored the animals over a period of three months. They assessed the mice at four days, fourteen days, and ninety days post-infection. These time points represented the acute phase, the transition phase, and the chronic phase of the disease. The researchers utilized a variety of methods to track the health of the animals. They observed clinical signs of motor dysfunction. They also performed detailed microscopic examinations of spinal cord tissues.

In the acute phase, which occurred during the first two weeks, most of the infected mouse strains showed signs of illness. The virus actively replicated within the spinal cord. This triggered a strong immune response. The researchers tracked this response by staining for Iba-1, a marker for microglia and macrophages. These are the immune cells that defend the central nervous system. As expected, inflammation levels spiked as the bodies of the mice fought the invader.

The virus targeted the lumbar region of the spinal cord. This is the lower section of the back that controls the hind legs. Consequently, the mice displayed varying degrees of difficulty walking. Some developed paresis, which is partial weakness. Others developed paralysis. The severity of these early symptoms varied widely depending on the mouse strain. This confirmed that genetics played a major role in the initial susceptibility to the infection.

The most revealing data emerged at the ninety-day mark. By this time, the acute infection had long passed. The researchers used sensitive RNA testing to look for traces of the virus. They found that every single mouse had successfully cleared the infection. There was no detectable viral genetic material left in their spinal cords. In most strains, the inflammation had also subsided.

Despite the absence of the virus, the clinical outcomes diverged sharply. One specific strain, known as CC023, remained severely affected. These mice did not recover. Instead, they exhibited lasting symptoms that mirrored human ALS. They suffered from profound muscle atrophy, or wasting, particularly in the muscles controlled by the lumbar spinal cord. They also displayed kyphosis, a hunching of the back often seen in models of neuromuscular disease.

The microscopic analysis of the CC023 mice revealed the underlying cause of these symptoms. Even though the virus was gone, the damage to the motor neurons persisted. The researchers observed lesions in the ventral horn of the spinal cord. This is the specific area where motor neurons reside. The loss of these neurons disconnected the spinal cord from the muscles, leading to the observed atrophy.

This outcome stood in stark contrast to other strains. For instance, the CC027 strain proved to be highly resistant. These mice showed almost no clinical signs of disease despite being infected with the same amount of virus. Their genetic background seemingly provided a protective shield against the neurological damage that devastated the CC023 strain.

The researchers noted that the inflammation in the spinal cord did not persist at high levels into the chronic phase. At ninety days, the number of active immune cells had returned to near-normal levels. This is a critical observation. It suggests that the ongoing disease in the CC023 mice was not driven by chronic, active inflammation. Instead, the initial viral insult triggered a cascade of damage that continued independently.

These findings support the idea that a person’s genetic background dictates how their body handles the aftermath of an infection. In susceptible individuals, a virus might initiate a neurodegenerative process that outlasts the infection itself. The study provides a concrete example of a virus causing a “hit and run” injury that leads to an ALS-like condition.

Candice Brinkmeyer-Langford, the senior author, highlighted the importance of this discovery in a press release. She noted, “This is exciting because this is the first animal model that affirms the long-standing theory that a virus can trigger permanent neurological damage or disease — like ALS — long after the infection itself occurred.”

The identification of the CC023 mouse strain is a practical advancement for the field. Current mouse models for ALS often rely on artificial genetic mutations found in only a tiny fraction of human patients. The CC023 model represents a different pathway. It models sporadic disease triggered by an environmental event. This could allow scientists to test therapies designed to stop neurodegeneration in a context that more closely resembles the human experience.

There are caveats to the study. While the symptoms in the mice resemble ALS, mice are not humans. The biological pathways may differ. Additionally, the researchers have not yet identified the specific genes responsible for the susceptibility in the CC023 strain. Understanding exactly which genes failed to protect these mice is a necessary next step.

Future research will likely focus on pinpointing these genetic factors. The team plans to investigate why the immune response in the CC023 strain failed to prevent the lasting damage. They also aim to identify biomarkers that appear early in the infection. Such markers could potentially predict which individuals are at risk for developing long-term neurological complications following a viral illness.

The study, “The association between virus-induced spinal cord pathology and the genetic background of the host,” was authored by Koedi S. Lawley, Tae Wook Kang, Raquel R. Rech, Moumita Karmakar, Raymond Carroll, Aracely A. Perez Gomez, Katia Amstalden, Yava Jones-Hall, David W. Threadgill, C. Jane Welsh, Colin R. Young, and Candice Brinkmeyer-Langford.

Psilocybin impacts immunity and behavior differently depending on diet and exercise context

A new study published in the journal Psychedelics reveals that the environment and physiological state of an animal profoundly influence the effects of psilocybin. Researchers found that while the drug altered immune markers in mice that exercised, it did not modify social behaviors in mice modeling anorexia nervosa. These findings suggest that the therapeutic potential of psychedelics may depend heavily on the biological context in which they are administered.

Anorexia nervosa is a severe psychiatric condition characterized by restricted eating and excessive exercise. Many patients also struggle with social interactions and understanding the emotions of others. These social challenges often persist even after weight recovery, and they contribute to the isolation associated with the disorder. Current treatments frequently fail to address these specific interpersonal symptoms.

Claire J. Foldi and her colleagues at Monash University in Australia sought to investigate potential biological causes for these issues. They focused on the connection between brain function and the immune system. Previous research suggests that inflammation may play a role in psychiatric disorders. Specifically, molecules like interleukin-6 often appear at abnormal levels in people with depression and anxiety.

Psilocybin, the active compound in magic mushrooms, is known to affect serotonin receptors and possesses potential anti-inflammatory properties. Foldi’s team wanted to see if psilocybin could improve social behavior and regulate immune responses in a living organism. They hypothesized that the drug might rescue social deficits by lowering inflammation.

To test this, the researchers used a method called the activity-based anorexia model. They housed female mice in cages with running wheels and limited their access to food. This combination typically causes mice to run excessively and lose weight rapidly, mimicking human anorexia. The researchers chose female mice because the condition predominantly affects women.

The team compared these mice to three other groups to isolate specific variables. One group had restricted food but no wheel, which tested the effect of hunger alone. Another group had a wheel but unlimited food, testing the effect of exercise alone. A control group lived in standard housing with no restrictions.
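Laid out explicitly, the four housing conditions cross two factors: food restriction and wheel access. The short Python sketch below encodes that two-by-two design; the group labels are descriptive shorthand rather than terms taken from the paper.

```python
# The four housing groups cross two factors: food restriction and
# running-wheel access. Labels are descriptive shorthand, not the
# paper's terminology.
groups = {
    "activity-based anorexia": {"food_restricted": True,  "running_wheel": True},
    "food restriction only":   {"food_restricted": True,  "running_wheel": False},
    "exercise only":           {"food_restricted": False, "running_wheel": True},
    "standard housing":        {"food_restricted": False, "running_wheel": False},
}

for name, factors in groups.items():
    print(name, factors)
```

Crossing the two factors this way is what lets the researchers attribute any effect to hunger alone, exercise alone, or their combination.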

Once the mice in the anorexia model had lost a predetermined amount of weight, the researchers administered a single dose of psilocybin or a saline placebo. Later that day, they placed the mice in a special testing apparatus. This box contained three connected chambers designed to measure social interest.

The researchers measured how much time the mice spent interacting with a new mouse versus an inanimate object. In a second phase, they tracked whether the mice preferred spending time with a familiar mouse or a stranger. Finally, the team analyzed blood samples to measure levels of interleukin-6.
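The article does not describe how these time measurements were scored. A common convention in three-chamber tests is a normalized preference index, sketched below in Python; the function, the formula, and the sample durations are illustrative assumptions, not details from the study.

```python
# A minimal sketch of one common way to score a three-chamber test.
# The index formula and the example durations are assumptions for
# illustration; the paper's actual scoring is not reproduced here.

def preference_index(time_target: float, time_alternative: float) -> float:
    """Return a score in [-1, 1]: positive values indicate a preference
    for the target stimulus, negative values for the alternative."""
    total = time_target + time_alternative
    if total == 0:
        raise ValueError("no interaction time recorded")
    return (time_target - time_alternative) / total

# Phase 1 (sociability): time near a novel mouse vs. an inanimate object.
sociability = preference_index(time_target=180.0, time_alternative=60.0)

# Phase 2 (social novelty): time near a stranger vs. a familiar mouse.
novelty = preference_index(time_target=150.0, time_alternative=90.0)

print(f"sociability index: {sociability:+.2f}")  # +0.50: prefers the mouse
print(f"novelty index: {novelty:+.2f}")          # +0.25: prefers the stranger
```

Normalizing by total interaction time makes scores comparable across animals that differ in overall activity, which matters here because some groups were far more active than others.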

The results showed distinct behavioral patterns based on the living conditions of the mice. Contrary to the researchers’ expectations, mice in the anorexia model did not withdraw socially. Instead, these mice showed a strong interest in investigating new mice. They preferred novel social interactions over familiar ones.

This intense curiosity was also present in the mice that only had access to running wheels. In contrast, mice that were only food-restricted spent more time investigating the object. This likely indicates a motivation to search for food rather than socialize.

Psilocybin did not alter these social behaviors in the anorexia group, the exercise group, or the food-restricted group. The drug only changed behavior in the healthy control mice. Control mice given psilocybin became less interested in novelty and spent more time with familiar companions. This unexpected outcome set the control group apart from the others.

The physiological results were equally specific to the environment. The researchers found that psilocybin markedly elevated levels of interleukin-6 in the mice that had access to running wheels. This effect was not observed in the anorexia group or the other groups.

In the running wheel group, higher levels of this inflammatory marker correlated with a stronger preference for social novelty. The drug did not reduce inflammation in the anorexia model as originally hypothesized. This suggests that prior exercise primes the immune system to respond differently to the drug.
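As a rough illustration of what such a within-group correlation looks like computationally, the sketch below computes Pearson’s r over invented per-mouse values; both the numbers and the choice of correlation statistic are assumptions for illustration, not the study’s actual data or methods.

```python
# Illustrative only: relate serum interleukin-6 to social novelty
# preference within one group. Values are made up; the paper's data
# and statistical approach are not reproduced here.
from statistics import correlation  # Pearson's r; requires Python 3.10+

il6_pg_per_ml = [2.1, 3.4, 5.0, 4.2, 6.1, 3.8]        # hypothetical IL-6 levels
novelty_index = [0.05, 0.18, 0.32, 0.24, 0.41, 0.20]  # hypothetical test scores

r = correlation(il6_pg_per_ml, novelty_index)
print(f"Pearson r = {r:.2f}")  # a positive r mirrors the reported pattern
```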

The study highlights a limitation in how animal models mimic complex human disorders. While human patients often retreat socially, the mice in this model became hyperactive and explorative. This behavior may represent a foraging instinct triggered by hunger. It complicates the ability to translate these specific social findings directly to human psychology.

The increase in inflammation seen in the exercise group suggests a relationship between physical activity and how psychedelics affect the body. Psilocybin is often cited as an anti-inflammatory agent. However, this study indicates that in certain contexts, it may promote immune signaling.

The researchers note that they only measured inflammation at a single time point. Psilocybin may have transient effects that vary over hours or days. Future studies will need to track these markers over a longer period to capture the full picture.

Fully understanding these mechanisms will require testing additional biological markers and brain regions. The relationship between serotonin signaling and immune function is not uniform. The data indicate that a “one size fits all” approach to psychedelic therapy may be insufficient.

This research implies that clinical trials should account for the patient’s physical state, including their exercise habits and nutritional status. Factors such as metabolic stress could alter how the drug impacts both behavior and the immune system.

The study, “Psilocybin exerts differential effects on social behavior and inflammation in mice in contexts of activity-based anorexia,” was authored by Sheida Shadani, Erika Greaves, Zane B. Andrews, and Claire J. Foldi.
