Scientists achieve full neurological recovery from Alzheimer’s in mice by restoring metabolic balance

Researchers have discovered that Alzheimer’s disease may be reversible in animal models through a treatment that restores the brain’s metabolic balance. This study, published in the journal Cell Reports Medicine, demonstrates that restoring levels of a specific energy molecule allows the brain to repair damage and recover cognitive function even in advanced stages of the illness. The results suggest that the cognitive decline associated with the condition is not an inevitable permanent state but rather a result of a loss of brain resilience.

For more than a century, Alzheimer’s disease has been considered an irreversible illness. Consequently, research has focused on preventing or slowing it rather than on recovery. Despite decades of research and billions of dollars in funding, no drug has ever reached a clinical trial with the goal of reversing the condition and restoring lost function. This new research challenges that long-held dogma.

The study was led by Kalyani Chaubey, a researcher at the Case Western Reserve University School of Medicine. She worked alongside senior author Andrew A. Pieper, who is a professor at Case Western Reserve and director of the Brain Health Medicines Center at Harrington Discovery Institute. The team included scientists from University Hospitals and the Louis Stokes Cleveland VA Medical Center.

The researchers focused on a molecule called nicotinamide adenine dinucleotide, known as NAD+. This molecule is essential for cellular energy and repair across the entire body. Scientists have observed that NAD+ levels decline naturally as people age, but this loss is much more pronounced in those with neurodegenerative conditions. Without proper levels of this metabolic currency, cells become unable to execute the processes required for proper functioning and survival.

Previous research has established a foundation for this approach. A 2018 study in the Proceedings of the National Academy of Sciences showed that supplementing with NAD+ precursors could normalize neuroinflammation and DNA damage in mice. That earlier work suggested that depletion of this molecule sits upstream of other disease features, such as tau protein buildup and synaptic dysfunction.

In 2021, another study published in the same journal found that restoring this energy balance could reduce cell senescence, which is a state where cells stop dividing but do not die. This process is linked to the chronic inflammation seen in aging brains.

Additionally, an international team led by researchers at the University of Oslo recently identified a mechanism where NAD+ helps correct errors in how brain cells process genetic information. That study, published in Science Advances, identified a specific protein called EVA1C as a central player in helping the brain manage damaged proteins.

Despite these promising leads, many existing supplements can push NAD+ to supraphysiologic levels. High levels that exceed what is natural for the body have been linked to an increased risk of cancer in some animal models. The Case Western Reserve team wanted to find a way to restore balance without overshooting the natural range.

They utilized a compound called P7C3-A20, which was originally developed in the Pieper laboratory. This compound is a neuroprotective agent that helps cells maintain their proper balance of NAD+ under conditions of overwhelming stress. It does not elevate the molecule to levels that are unnaturally high.

To test the potential for reversal, the researchers used two distinct mouse models. The first, known as 5xFAD, is designed to develop heavy amyloid plaque buildup and human-like tau changes. The second model, PS19, carries a human mutation in the tau protein that causes toxic tangles and the death of neurons. These models allow scientists to study the major biological hallmarks of the human disease.

The researchers first confirmed that brain energy balance deteriorates as the disease progresses. In mice that were two months old and pre-symptomatic, NAD+ levels were normal. By six months, when the mice showed clear signs of cognitive trouble, their levels had dropped by 30 percent. By twelve months, when the disease was very advanced, the deficit reached 45 percent.

The core of the study involved a group of mice designated as the advanced disease stage cohort. These animals did not begin treatment until they were six months old. At this point, they already possessed established brain pathology and measurable cognitive decline. They received daily injections of the treatment until they reached one year of age.

The results showed a comprehensive recovery of function. In memory tests like the Morris water maze, where mice must remember the location of a submerged platform, the treated animals performed as well as healthy controls. Their spatial learning and memory were restored to normal levels despite their genetic mutations.

The mice also showed improvements in physical coordination. On a rotating rod test, which measures motor learning, the advanced stage mice regained their ability to balance and stay on the device. Their performance was not statistically different from healthy mice by the end of the treatment period.

The biological changes inside the brain were equally notable. The treatment repaired the blood-brain barrier, the protective seal around the brain’s blood vessels. In Alzheimer’s disease, this barrier often develops leaks that allow harmful substances into the brain tissue. Electron microscope images showed that the treatment had sealed these gaps and restored the health of supporting cells called pericytes.

The researchers also tracked a specific marker called p-tau217. This is a form of the tau protein that is now used as a standard clinical biomarker in human patients. The team found that levels of this marker in the blood were reduced by the treatment. This finding provides an objective way to confirm that the disease was being reversed.

Speaking about the discovery, Pieper noted the importance of the results for future medicine. “We were very excited and encouraged by our results,” he said. “Restoring the brain’s energy balance achieved pathological and functional recovery in both lines of mice with advanced Alzheimer’s. Seeing this effect in two very different animal models, each driven by different genetic causes, strengthens the new idea that recovery from advanced disease might be possible in people with AD when the brain’s NAD+ balance is restored.”

The team also performed a proteomic analysis, which is a massive screen of all the proteins in the brain. They identified 46 specific proteins that are altered in the same way in both human patients and the sick mice. These proteins are involved in tasks like waste management, protein folding, and mitochondrial function. The treatment successfully returned these protein levels to their healthy state.

To ensure the mouse findings were relevant to humans, the scientists studied a unique group of people known as nondemented with Alzheimer’s neuropathology. Their brains were full of amyloid plaques, yet they remained cognitively healthy throughout their lives. The researchers found that these resilient individuals naturally possessed higher levels of the enzymes that produce NAD+.

This human data suggests that the brain has an intrinsic ability to resist damage if its energy balance remains intact. The treatment appears to mimic this natural resilience. “The damaged brain can, under some conditions, repair itself and regain function,” Pieper explained. He emphasized that the takeaway from this work is a message of hope.

The study also included tests on human brain microvascular endothelial cells. These are the cells that make up the blood brain barrier in people. When these cells were exposed to oxidative stress in the laboratory, the treatment protected them from damage. It helped their mitochondria continue to produce energy and prevented the cells from dying.

While the results are promising, there are some limitations to the study. The researchers relied on genetic mouse models, which represent the rare inherited forms of the disease. Most people suffer from the sporadic form of the condition, which may have more varied causes. Additionally, human brain samples used for comparison represent a single moment in time, which makes it difficult to establish a clear cause and effect relationship.

Future research will focus on moving this approach into human clinical trials. The scientists want to determine if the efficacy seen in mice will translate to human patients. They also hope to identify which specific aspects of the brain’s energy balance are the most important for starting the recovery process.

The technology is currently being commercialized by a company called Glengary Brain Health. The goal is to develop a therapy that could one day be used to treat patients who already show signs of cognitive loss. As Chaubey noted, “Through our study, we demonstrated one drug-based way to accomplish this in animal models, and also identified candidate proteins in the human AD brain that may relate to the ability to reverse AD.”

The study, “Pharmacologic reversal of advanced Alzheimer’s disease in mice and identification of potential therapeutic nodes in human brain,” was authored by Kalyani Chaubey, Edwin Vázquez-Rosa, Sunil Jamuna Tripathi, Min-Kyoo Shin, Youngmin Yu, Matasha Dhar, Suwarna Chakraborty, Mai Yamakawa, Xinming Wang, Preethy S. Sridharan, Emiko Miller, Zea Bud, Sofia G. Corella, Sarah Barker, Salvatore G. Caradonna, Yeojung Koh, Kathryn Franke, Coral J. Cintrón-Pérez, Sophia Rose, Hua Fang, Adrian A. Cintrón-Pérez, Taylor Tomco, Xiongwei Zhu, Hisashi Fujioka, Tamar Gefen, Margaret E. Flanagan, Noelle S. Williams, Brigid M. Wilson, Lawrence Chen, Lijun Dou, Feixiong Cheng, Jessica E. Rexach, Jung-A Woo, David E. Kang, Bindu D. Paul, and Andrew A. Pieper.

A simple measurement of body shape may predict future mental health

A new study suggests that body shape, specifically the degree of roundness around the abdomen, may help predict the risk of developing depression. Researchers found that individuals with a higher Body Roundness Index faced a higher likelihood of being diagnosed with this mental health condition over time. These findings were published in the Journal of Affective Disorders.

Depression is a widespread mental health challenge that affects roughly 300 million people globally. It often brings severe physical health burdens and economic costs to individuals and society. Medical professionals have identified obesity as a potential risk factor for mental health issues. The standard tool for measuring obesity is the Body Mass Index, or BMI. This metric calculates a score based solely on a person’s weight and height.

However, the Body Mass Index has limitations regarding accuracy in assessing health risks. It cannot distinguish between muscle mass and fat mass. It also fails to indicate where fat is stored on the body. This distinction is vital because fat stored around the abdomen is often more metabolically harmful than fat stored elsewhere. To address these gaps, scientists developed the Body Roundness Index.

This newer metric uses waist circumference in relation to height to estimate the amount of visceral fat a person carries. Visceral fat is the fat stored deep inside the abdomen, wrapping around internal organs. This type of fat is biologically active and linked to various chronic diseases. Previous research hinted at a connection between this type of fat and mental health, but long-term data was limited.
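To make the metric concrete, the formula commonly attributed to Thomas and colleagues, who introduced the index, models the body as an ellipse built from waist circumference and height. Here is a minimal Python sketch of that calculation; whether the study’s authors used this exact formulation is an assumption.

```python
import math

def body_roundness_index(waist_m: float, height_m: float) -> float:
    """Body Roundness Index from waist and height, both in meters.

    Follows the formula attributed to Thomas et al. (2013): the body is
    modeled as an ellipse whose semi-minor axis is the waist radius and
    whose semi-major axis is half the height.
    """
    waist_radius = waist_m / (2 * math.pi)
    eccentricity_sq = 1 - (waist_radius ** 2) / ((0.5 * height_m) ** 2)
    return 364.2 - 365.5 * math.sqrt(eccentricity_sq)

# Example: a 94 cm waist at 1.75 m height yields a score of roughly 4.1
print(round(body_roundness_index(0.94, 1.75), 2))
```

Higher scores indicate a rounder, more abdominally concentrated fat pattern; typical adult values fall roughly in the low single digits to the mid-teens.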

Yinghong Zhai from the Shanghai Jiao Tong University School of Medicine served as a lead author on this new project. Zhai and colleagues sought to clarify if body roundness could predict future depression better than general weight measures. They also wanted to understand if lifestyle choices like smoking or exercise explained the connection.

To investigate this, the team utilized data from the UK Biobank. This is a massive biomedical database containing genetic, health, and lifestyle information from residents of the United Kingdom. The researchers selected records for 201,813 adults who did not have a diagnosis of depression when they joined the biobank. Participants ranged in age from 40 to 69 years old at the start of the data collection.

The researchers calculated the Body Roundness Index for each person using their waist and height measurements. They then tracked these individuals for an average of nearly 13 years to see which participants developed new cases of depression during that period. To ensure accuracy, the analysis accounted for various influencing factors.

These factors included age, biological sex, socioeconomic status, and ethnicity. The team also controlled for existing health conditions like type 2 diabetes and high blood pressure. They further adjusted for lifestyle habits, such as alcohol consumption and sleep duration. The results showed a clear pattern linking body shape to mental health outcomes.

Participants were divided into four groups, or quartiles, based on their body roundness scores. Those in the highest quartile had the largest waist-to-height ratios. The analysis showed that these individuals had a 30 percent higher risk of developing depression compared to those in the lowest quartile. This association held true even after the researchers adjusted for traditional Body Mass Index scores.
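Quartile comparisons of this kind are typically run as Cox proportional hazards models, where a hazard ratio of about 1.30 for the top quartile corresponds to the reported 30 percent higher risk. Below is a minimal sketch using the lifelines library on synthetic data; the column names, covariates, and model details are hypothetical stand-ins, not the authors’ actual pipeline.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Synthetic stand-in for the cohort; all column names are hypothetical
rng = np.random.default_rng(seed=0)
n = 20_000
df = pd.DataFrame({
    "bri": rng.gamma(shape=5.0, scale=1.0, size=n),   # body roundness scores
    "age": rng.uniform(40, 69, size=n),
    "follow_up_years": rng.uniform(1, 13, size=n),    # time to event or censoring
    "depressed": rng.integers(0, 2, size=n),          # 1 = new depression diagnosis
})
df["bri_quartile"] = pd.qcut(df["bri"], q=4, labels=[1, 2, 3, 4])

# Dummy-code quartiles so hazard ratios are expressed relative to quartile 1
design = pd.get_dummies(df.drop(columns="bri"),
                        columns=["bri_quartile"], drop_first=True, dtype=float)

cph = CoxPHFitter()
cph.fit(design, duration_col="follow_up_years", event_col="depressed")
cph.print_summary()  # exp(coef) near 1.30 for quartile 4 would match the report
```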

The relationship appeared to follow a “J-shaped” curve: beyond a low baseline, the probability of a depression diagnosis rose progressively as body roundness increased. The trend was consistent across different subgroups of people, affecting both men and women as well as people older and younger than 60.

The team also investigated the role of lifestyle behaviors in this relationship. They used statistical mediation analysis to see if habits like smoking or drinking explained the link. The question was whether body roundness led to specific behaviors that then caused depression. They found that smoking status did contribute to the increased risk.

Conversely, physical activity offered a protective effect, slightly lowering the risk. Education levels also played a minor mediating role. However, these lifestyle factors only explained a small portion of the overall connection. The direct link between body roundness and depression remained robust regardless of these behaviors.
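Mediation analyses of this kind decompose the total effect into a direct path and an indirect path through the mediator (here, a lifestyle habit). The toy example below illustrates the logic with ordinary least squares and a bootstrap confidence interval; the study’s actual estimator and variable codings are not specified, so every name and coefficient here is illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=0)
n = 5000
x = rng.normal(size=n)                        # exposure: body roundness (z-scored)
m = 0.2 * x + rng.normal(size=n)              # mediator: e.g. a smoking score
y = 0.3 * x + 0.1 * m + rng.normal(size=n)    # outcome: depression score

def indirect_effect(x, m, y):
    # Path a: exposure -> mediator
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]
    # Path b: mediator -> outcome, controlling for the exposure
    b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]
    return a * b                              # product-of-coefficients estimate

boot = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)          # resample with replacement
    boot.append(indirect_effect(x[idx], m[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")  # excludes 0 -> mediation
```

In the study’s terms, an interval excluding zero for the smoking path would correspond to the finding that smoking status carried part of the risk, while the remaining direct path stayed robust.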

The authors discussed potential biological mechanisms that might explain why central obesity correlates with mood disorders. Abdominal fat acts somewhat like an active organ. It releases inflammatory markers, such as cytokines, into the bloodstream. These markers can cross the blood-brain barrier. Once in the brain, they may disrupt the function of neurotransmitters that regulate mood.

Another possibility involves hormonal imbalances. Obesity is often associated with resistance to leptin, a hormone that regulates energy balance. High levels of leptin can interfere with the hypothalamic-pituitary-adrenal axis. This axis is a complex system of neuroendocrine pathways that controls the body’s reaction to stress. Disruption here is a known feature of depression.

The study also considered the social and psychological aspects of body image. While the biological links are strong, the authors noted that societal stigma could play a role. However, the persistence of the link after adjusting for many social factors points toward a physiological connection.

While the study involved a large number of people, it has specific limitations. The majority of participants in the UK Biobank are of white European descent. This lack of diversity means the results might not apply directly to other ethnic groups. The authors advise caution when generalizing these findings to diverse populations.

Additionally, the study is observational rather than experimental. This design means researchers can identify a correlation but cannot definitively prove that body roundness causes depression. There is also the possibility of unmeasured factors influencing the results. For example, changes in body weight or mental health status over the 13-year period were not continuously tracked.

The researchers also noted that they did not directly compare the predictive power of the Body Roundness Index against other metrics. They focused on establishing the link between this specific index and depression. Future research would need to validate how this tool performs against others in clinical settings.

The authors suggest that future research should focus on more diverse populations to confirm these trends. They also recommend investigating the specific biological pathways that connect abdominal fat to brain function more deeply. Understanding the role of inflammation and hormones could lead to better treatments.

If confirmed, these results could help doctors use simple body measurements as a screening tool, highlighting the potential mental health benefits of managing central obesity. By monitoring body roundness, healthcare providers might identify individuals at higher risk for depression earlier, allowing for timely interventions regarding lifestyle or mental health support.

The study, “Body roundness index, depression, and the mediating role of lifestyle: Insights from the UK biobank cohort,” was authored by Yinghong Zhai, Fangyuan Hu, Yang Cao, Run Du, Chao Xue, and Feng Xu.

Scientists identify dynamic brain patterns linked to symptom severity in children with autism

Recent research has identified specific patterns of brain activity that distinguish young children with autism from their typically developing peers. These patterns involve the way different regions of the brain communicate with one another over time and appear to be directly linked to the severity of autism symptoms. The findings suggest that these neural dynamics influence daily adaptive skills, which in turn affect cognitive performance. The study was published in The Journal of Neuroscience.

Diagnosing Autism Spectrum Disorder in young children currently relies heavily on observing behavior. This process can be subjective because symptoms vary widely from one child to another. Scientists have sought to find objective biological markers to improve the accuracy of early diagnosis. They also aim to understand the underlying neural mechanisms that contribute to the social and cognitive challenges associated with the condition.

Most previous research in this area has looked at the brain as a static object. These earlier studies calculated the average connection strength between brain regions over a long period. This approach assumes that brain activity remains constant during the measurement. However, the brain is highly active and constantly reorganizes its networks to process information.

A team of researchers led by Conghui Su and Yaqiong Xiao at the Shenzhen University of Advanced Technology decided to investigate these changing patterns. They focused on a concept known as dynamic functional connectivity. This method treats brain activity like a movie rather than a photograph. It allows scientists to see how functional networks configure and reconfigure themselves from moment to moment.

To measure this activity, the team used a technology called functional near-infrared spectroscopy. This technique involves placing a cap with light sensors on the child’s head. The sensors emit harmless near-infrared light that penetrates the scalp and skull. The light detects changes in blood oxygen levels in the brain, which serves as a proxy for neural activity.

This method is particularly well suited for studying young children. Unlike magnetic resonance imaging scanners, which are loud and require participants to be perfectly still, this optical system is quiet and tolerates some movement. This flexibility allows researchers to collect data in a more natural and comfortable environment.

The study included 44 children between the ages of two and six years old. Approximately half of the participants had been diagnosed with Autism Spectrum Disorder. The other half were typically developing children who served as a control group. The researchers recorded brain activity while the children sat quietly and watched a silent cartoon.

The researchers analyzed the data using a “sliding window” technique. They looked at short segments of the recording to see which brain regions were synchronized at any given second. By applying mathematical clustering algorithms, the team identified four distinct “states” of brain connectivity that recurred throughout the session.
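In code, the sliding-window approach amounts to computing a connectivity matrix for each short segment of the recording and then clustering those matrices into recurring states. The minimal sketch below uses NumPy and scikit-learn with made-up dimensions rather than the study’s actual acquisition parameters.

```python
import numpy as np
from sklearn.cluster import KMeans

# Synthetic stand-in for one child's recording: time points x channels
rng = np.random.default_rng(seed=0)
signal = rng.normal(size=(600, 20))       # e.g. 600 samples, 20 optode channels

window, step = 60, 5                      # window length and stride, in samples
patterns = []
for start in range(0, signal.shape[0] - window + 1, step):
    segment = signal[start:start + window]
    corr = np.corrcoef(segment.T)         # channel-by-channel connectivity matrix
    iu = np.triu_indices_from(corr, k=1)
    patterns.append(corr[iu])             # keep the unique off-diagonal entries

# Cluster the windowed connectivity patterns into recurring "states"
states = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(patterns)

# Fraction of windows spent in each state (the study's key group difference)
print(np.bincount(states) / len(states))
```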

One specific state, referred to as State 4, emerged as a key point of difference between the two groups. This state was characterized by strong connections between the left and right hemispheres of the brain. It specifically involved robust communication between the temporal and parietal regions, which are areas often associated with language and sensory processing.

The data showed that children with autism spent considerably less time in State 4 compared to the typically developing children. They also transitioned into and out of this state less frequently. The reduction in time spent in this high-connectivity state was statistically significant.

The researchers then compared these brain patterns to clinical assessments of the children. They found a correlation between the brain data and the severity of autism symptoms. Children who spent the least amount of time in State 4 tended to have higher scores on standardized measures of autism severity.

The study also looked at adaptive behavior. This term refers to the collection of conceptual, social, and practical skills that people learn to function in their daily lives. The analysis revealed that children who maintained State 4 for longer durations exhibited better adaptive behavior scores.

In addition to watching cartoons, the children performed a visual search task to measure their cognitive abilities. They were asked to find a specific shape on a touchscreen. The researchers found that the brain patterns observed during the cartoon viewing predicted how well the children performed on this separate game.

The team conducted a statistical mediation analysis to understand the relationship between these variables. This type of analysis helps determine if a third variable explains the relationship between an independent and a dependent variable. The results suggested a specific pathway of influence.

The analysis indicated that the dynamic brain patterns directly influenced the child’s adaptive behavior. In turn, the level of adaptive behavior influenced the child’s cognitive performance on the visual search task. This implies that adaptive skills serve as a bridge connecting neural activity to cognitive outcomes.

To test the robustness of their findings, the researchers analyzed data from an independent group of 24 typically developing children. They observed the same brain states in this new group. The relationship between the duration of State 4 and cognitive response time was replicated in this validation sample.

The researchers also explored whether these brain patterns could be used for classification. They fed the connectivity data into a machine learning algorithm. The computer model was able to distinguish between children with autism and typically developing children with an accuracy of roughly 74 percent.
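A classification analysis of this kind typically feeds per-child connectivity features into a cross-validated classifier. The sketch below uses a linear support vector machine purely for illustration; the authors’ actual model and feature set are not specified here, and with random placeholder data the accuracy will hover near chance rather than the reported 74 percent.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# One row per child; columns stand in for dynamic-connectivity features
# such as time spent in each state and transition counts (all made up here)
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(44, 10))              # placeholder feature matrix
y = np.array([0] * 22 + [1] * 22)          # 0 = typically developing, 1 = autism

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(scores.mean())   # near chance here; ~0.74 reported with real features
```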

This accuracy rate suggests that dynamic connectivity features have potential as a diagnostic biomarker. The ability to identify such markers objectively could complement traditional behavioral assessments. It may help clinicians identify the condition earlier or monitor how a child responds to treatment over time.

The study highlights the importance of interhemispheric communication. The reduced connections between the left and right temporal regions in the autism group align with the “underconnectivity” theory of autism. This theory proposes that long-range communication between brain areas is weaker in individuals on the spectrum.

There are limitations to this study that require consideration. The sample size was relatively small. A larger group of participants would be needed to confirm the results and ensure they apply to the broader population.

The demographics of the study participants may also limit generalization. The group with autism was predominantly male, which reflects the general diagnosis rates but leaves the patterns in females less explored. There were also socioeconomic differences between the autism group and the control group in terms of family income.

The technology used in the study has physical limitations. The sensors were placed over the frontal, temporal, and parietal lobes. This placement means the researchers could not analyze activity in the entire brain. Deeper brain structures or other cortical areas might play a role that this study could not detect.

The researchers suggest that future work should focus on longitudinal studies. Tracking children over several years would help scientists understand how these brain dynamics develop as the child grows. It would also clarify whether improvements in adaptive behavior lead to changes in brain connectivity.

The findings point toward potential avenues for intervention. Therapies that target adaptive behaviors might have downstream effects on cognitive performance. Understanding the specific neural deficits could also lead to more targeted treatments designed to enhance connectivity between brain hemispheres.

This research represents a step forward in linking the biology of the brain to the behavioral characteristics of autism. It moves beyond static snapshots of brain activity. Instead, it embraces the dynamic, ever-changing nature of the human mind to find clearer signals of neurodevelopmental differences.

The study, “Linking Connectivity Dynamics to Symptom Severity and Cognitive Abilities in Children with Autism Spectrum Disorder: An FNIRS Study,” was authored by Conghui Su, Yubin Hu, Yifan Liu, Ningxuan Zhang, Liming Tan, Shuiqun Zhang, Aiwen Yi, and Yaqiong Xiao.

Weak muscles linked to higher dementia risk in middle-aged and older adults

A new analysis suggests that physical frailty serves as a robust warning sign for cognitive decline in later life. Researchers found that middle-aged and older adults with weaker muscles faced a much higher likelihood of developing dementia compared to their stronger peers. These findings were published in the Journal of Psychiatric Research.

Dementia rates are climbing globally as life expectancy increases. This condition places a heavy strain on families and healthcare systems. Medical experts are urgently looking for early indicators to identify people at risk before severe memory loss begins. One potential marker is sarcopenia. This is the age-related loss of muscle mass and power. Previous investigations have hinted at a link between physical frailty and brain health. However, many prior attempts to measure this connection did not account for body size differences among individuals.

Wei Jin and colleagues from Xinxiang Medical University in China sought to clarify this relationship. They wanted to see if the connection held true when adjusting for body mass and weight. They also aimed to look at both upper and lower body strength. Most previous work focused only on handgrip strength. The team believed a comprehensive approach could offer better insights into how physical decline might mirror changes in the brain.

The research team utilized data from the English Longitudinal Study of Ageing (ELSA). This is a long-running project that tracks the health and well-being of people living in England. The analysis included nearly 6,000 participants, all of whom were at least 50 years old at the start of the study. The researchers followed these individuals for a median period of about nine years.

To measure upper body strength, the team used a handheld dynamometer. Participants squeezed the device as hard as they could using their dominant hand. The researchers recorded the maximum force exerted during three trials.

Absolute strength is not always the best measure of health. A heavier person typically requires more muscle mass to move their body than a lighter person. To address this, the researchers standardized the grip strength scores. They adjusted the measurements based on the person’s body mass index (BMI) and total weight. This calculation ensured that strength scores were fair comparisons between people of different sizes.
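The adjustment itself is simple arithmetic: dividing raw grip force by BMI or by body weight yields relative-strength scores that can be compared across body sizes. A small illustrative function follows; the exact normalization the authors used is an assumption here.

```python
def standardized_grip(grip_kg: float, bmi: float, weight_kg: float):
    """Return grip strength divided by BMI and by body weight.

    These simple ratios make strength comparable across body sizes; the
    exact normalization used by the study's authors is an assumption.
    """
    return grip_kg / bmi, grip_kg / weight_kg

# Example: 38 kg of grip force, a BMI of 27, and 82 kg of body weight
print(standardized_grip(38, 27, 82))  # -> (1.407..., 0.463...)
```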

The team also needed a reliable way to assess lower body function. They utilized a test involving a chair. Participants had to stand up from a sitting position five times as fast as possible. They were not allowed to use their arms for support. A stopwatch recorded the time it took to complete the five repetitions. Slower times indicated weaker leg muscles.

During the follow-up period, 197 participants developed dementia. This represented about 3.3 percent of the study population. The data revealed a clear pattern connecting muscle weakness to cognitive diagnoses.

Participants with the lowest absolute handgrip strength faced the greatest likelihood of diagnosis. Their risk was roughly 2.8 times higher than that of those with the strongest grip. This relationship remained consistent even after the researchers accounted for differences in body mass.

When looking at BMI-standardized strength, the trend persisted. Those in the lowest tier of strength relative to their size had more than double the risk of dementia. This suggests that low muscle quality is a danger sign regardless of a person’s weight.

The results for leg strength were similarly distinct. People who took the longest to stand up from a chair had a much higher probability of developing dementia. Their risk was approximately 2.75 times higher than those who could stand up quickly.

The researchers checked to see if these trends varied by demographic. They found the pattern was consistent for both men and women. It also held true for middle-aged adults between 50 and 64, as well as for those over 65. The connection appeared to be linear. This means that for every incremental decrease in strength, the estimated risk of dementia rose.

The team performed a sensitivity analysis to check the robustness of their data. They excluded participants who were diagnosed with dementia within the first two years of the study. This step helps rule out the possibility that the muscle weakness was caused by pre-existing, undiagnosed dementia. The results remained largely the same after this exclusion.

There are several biological theories that might explain these results. One theory involves white matter hyperintensities. These are lesions that appear on brain scans. They represent damage to the brain’s communication network. Previous research shows that declines in muscle strength often correlate with an increase in these lesions.

Another potential mechanism involves the nervous system’s interconnectivity. The systems that control movement, senses, and cognition are linked. Damage to the neural pathways that control muscles might occur alongside damage to cognitive pathways.

Inflammation may also play a specific role. Chronic inflammation is known to damage both muscle tissue and neurons. High levels of inflammatory markers in the blood are associated with both sarcopenia and dementia. This creates a cycle where inflammation degrades the body and the brain simultaneously.

The authors noted several limitations to their work. This was an observational study. It can show a relationship between two factors, but it cannot prove that muscle weakness causes dementia directly. It is possible that unmeasured lifestyle factors contribute to both conditions.

The study also relied partly on self-reported medical diagnoses. This method can sometimes lead to inaccuracies if participants do not recall their medical history perfectly. Additionally, the study did not distinguish between different types of dementia. It grouped Alzheimer’s disease and other forms of cognitive decline together.

The study population was specific to the United Kingdom. The participants were predominantly white and over age 50. The results may not apply perfectly to younger populations or different ethnic groups. Cultural and genetic differences could influence the strength-dementia relationship in other parts of the world.

Despite these caveats, the implications for public health are clear. The study highlights the value of maintaining muscle strength as we age. Grip strength and chair-rising speed are simple, non-invasive tests. Doctors could easily use them to screen patients for dementia risk.

Future research should focus on intervention strategies. Scientists need to determine if building muscle can actively delay the onset of dementia. Clinical trials involving strength training exercises would be a logical next step.

The researchers conclude that muscle strength is a key component of healthy aging. Both upper and lower limb strength appear to matter. Interventions that target total body strength could be an effective way to support brain health. Identifying physical decline early provides a window of opportunity for preventative care.

The study, “Association between muscle strength and dementia in middle-aged and older adults: A nationwide longitudinal study,” was authored by Wei Jin, Sheng Liu, Li Huang, Xi Xiong, Huajian Chen, and Zhenzhen Liang.

Common ADHD medications function differently than scientists previously thought

Prescription stimulants are among the most widely used psychiatric medications in the world. For decades, the prevailing medical consensus held that drugs like methylphenidate treat attention deficit hyperactivity disorder by targeting the brain’s executive control centers. A new study challenges this long-held dogma, revealing that these medications act primarily on neural networks responsible for wakefulness and reward rather than attention. The study was published in the journal Cell.

Medical textbooks have traditionally taught that stimulants function by enhancing activity in the prefrontal cortex. This region of the brain is often associated with voluntary control, planning, and the direction of focus. The assumption was that by boosting activity in these circuits, the drugs allowed patients to filter out distractions and maintain concentration on specific tasks. However, the precise neural mechanisms have remained a subject of debate among neuroscientists.

Earlier research into these medications often produced inconsistent results. Some studies suggested that stimulants improved motivation and reaction times rather than higher-level reasoning. Furthermore, behavioral experiments have shown that the drugs do not universally improve performance. They tend to help individuals who are performing poorly but offer little benefit to those who are already performing well.

To resolve these discrepancies, a research team led by neurologist Benjamin P. Kay at Washington University School of Medicine in St. Louis undertook a massive analysis of brain activity. Working with senior author Nico U.F. Dosenbach, Kay aimed to map the effects of stimulants across the entire brain without restricting their focus to pre-determined areas. They sought to understand which specific brain networks were most altered when a child took these medications.

The researchers utilized data from the Adolescent Brain Cognitive Development Study. This large-scale project tracks the biological and psychological development of thousands of children across the United States. The team selected functional magnetic resonance imaging scans from 5,795 children between the ages of eight and eleven.

Kay and his colleagues compared the brain scans of children who had taken prescription stimulants on the day of their MRI against those who had not. They employed a technique known as resting-state functional connectivity. This method measures how different regions of the brain communicate and synchronize with one another when the person is not performing a specific task.

The analysis was not restricted to small, pre-selected regions. The researchers used a data-driven approach to look at the whole connectome, which is the complete map of neural connections in the brain. They controlled for various factors that could skew the results, such as head motion during the scan and socioeconomic status.
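Conceptually, a whole-connectome comparison like this computes one connectivity value per pair of regions for each child and then fits a model at every connection with medication status and nuisance covariates. The toy sketch below illustrates that edge-wise logic; the dimensions, variable names, and simple OLS model are illustrative stand-ins, not the authors’ pipeline.

```python
import numpy as np
import statsmodels.api as sm

def connectivity_vector(ts: np.ndarray) -> np.ndarray:
    """Fisher z-transformed correlations between all pairs of regions."""
    corr = np.corrcoef(ts.T)                  # region-by-region correlations
    iu = np.triu_indices_from(corr, k=1)
    return np.arctanh(corr[iu])

rng = np.random.default_rng(seed=0)
n_children = 200
Z = np.stack([connectivity_vector(rng.normal(size=(300, 30)))
              for _ in range(n_children)])    # children x connections ("edges")
stimulant = rng.integers(0, 2, n_children)    # took medication on scan day
motion = rng.normal(size=n_children)          # mean head motion (nuisance term)

design = sm.add_constant(np.column_stack([stimulant, motion]).astype(float))
# Fit one regression per edge; the stimulant t-values map where in the
# connectome medication status is associated with connectivity differences
tvals = np.array([sm.OLS(Z[:, e], design).fit().tvalues[1]
                  for e in range(Z.shape[1])])
print(tvals.shape)  # one statistic per connection
```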

The findings contradicted the traditional “attention-centric” view of stimulant medication. The researchers observed no statistical difference in the functional connectivity of the dorsal or ventral attention networks. The drugs also did not produce measurable changes in the frontoparietal control network, which is usually linked to complex problem-solving.

Instead, the most substantial changes occurred in the sensorimotor cortex and the salience network. The sensorimotor cortex is traditionally associated with physical movement and sensation. However, recent discoveries suggest this area also plays a major role in regulating the body’s overall arousal and wakefulness levels.

The salience network is responsible for determining what is important in the environment. It helps the brain calculate the value of a task and decides whether an action is worth the effort. The study found that stimulants increased connectivity between these reward-processing regions and the motor systems.

This shift in connectivity suggests that the drugs work by altering the brain’s calculation of effort and reward. By boosting activity in the salience network, the medication makes tedious activities feel more rewarding than they otherwise would. This reduces the urge to switch tasks or seek stimulation elsewhere.

“Essentially, we found that stimulants pre-reward our brains and allow us to keep working at things that wouldn’t normally hold our interest — like our least favorite class in school, for example,” Dosenbach said. This explains the paradox of why a stimulant can help a hyperactive child sit still. The drug removes the biological drive to fidget by satisfying the brain’s need for reward.

To verify that these findings were not an artifact of the pediatric data, the team conducted a separate validation study. They recruited five healthy adults who did not have attention deficits. These volunteers underwent repeated brain scans before and after taking a controlled dose of methylphenidate.

The results from the adult trial mirrored the findings in the children. The medication consistently altered the arousal and reward networks while leaving the attention networks largely unchanged. This replication in a controlled setting provides strong evidence that the drugs act on basic physiological drivers of behavior.

The study also uncovered a distinct relationship between stimulant medication and sleep. The researchers compared the brain patterns of medicated children to those of children who reported getting a full night of sleep. The functional connectivity signatures were remarkably similar.

Stimulants appeared to mimic the neurological effects of being well-rested. Children who were sleep-deprived showed specific disruptions in their sensorimotor and arousal networks. When sleep-deprived children took a stimulant, those disruptions disappeared.

This “rescue” effect extended to cognitive performance as well. The researchers analyzed school grades and test scores for the children in the study. As expected, children with attention deficits performed better when taking medication. However, the data revealed a nuance regarding sleep.

Stimulants improved the grades and test scores of children who did not get enough sleep. In fact, the medication raised the performance of sleep-deprived children to the level of their well-rested peers. Conversely, for children who did not have attention deficits and already got sufficient sleep, the drugs provided no statistical benefit to performance.

“We saw that if a participant didn’t sleep enough, but they took a stimulant, the brain signature of insufficient sleep was erased, as were the associated behavioral and cognitive decrements,” Dosenbach noted. The medication effectively masked the neural and behavioral symptoms of fatigue.

This finding raises important questions about the use of stimulants as performance enhancers. The data suggests that the drugs do not make a well-rested brain smarter or more attentive. They simply counteract the drag of fatigue and lack of motivation.

The authors of the study advise caution regarding this sleep-masking effect. While the drugs can hide the immediate signs of sleep deprivation, they do not replace the biological necessity of sleep. Chronic sleep loss is linked to cellular stress, metabolic issues, and other long-term health consequences that stimulants cannot fix.

Kay highlighted the clinical implications of these findings for doctors and parents. Symptoms of sleep deprivation often mimic the symptoms of attention deficit hyperactivity disorder, including lack of focus and irritability. Treating a sleep-deprived child with stimulants might mask the root cause of their struggles.

“Not getting enough sleep is always bad for you, and it’s especially bad for kids,” Kay said. He suggested that clinicians should screen for sleep disturbances before prescribing these medications. It is possible that some children diagnosed with attention deficits are actually suffering from chronic exhaustion.

The study also provides a new framework for understanding the brain’s motor cortex. The researchers noted that the changes in the motor system align with the recently discovered Somato-Cognitive Action Network. This network integrates body control with planning and arousal, further cementing the link between movement and alertness.

Future research will need to investigate the long-term effects of using stimulants to override sleep signals. The current study looked at a snapshot in time, but the cumulative impact of masking fatigue over years remains unknown. The researchers also hope to explore whether these arousal mechanisms differ in various subtypes of attention disorders.

By shifting the focus from attention to arousal and reward, this research fundamentally alters the understanding of how psychostimulants function. It suggests that these drugs are not “smart pills” that boost intelligence. Instead, they are endurance tools that help the brain maintain effort and wakefulness in the face of boredom or fatigue.

The study, “Stimulant medications affect arousal and reward, not attention,” was authored by Benjamin P. Kay, Muriah D. Wheelock, Joshua S. Siegel, Ryan Raut, Roselyne J. Chauvin, Athanasia Metoki, Aishwarya Rajesh, Andrew Eck, Jim Pollaro, Anxu Wang, Vahdeta Suljic, Babatunde Adeyemo, Noah J. Baden, Kristen M. Scheidter, Julia Monk, Nadeshka Ramirez-Perez, Samuel R. Krimmel, Russel T. Shinohara, Brenden Tervo-Clemmens, Robert J. M. Hermosillo, Steven M. Nelson, Timothy J. Hendrickson, Thomas Madison, Lucille A. Moore, Óscar Miranda-Domínguez, Anita Randolph, Eric Feczko, Jarod L. Roland, Ginger E. Nicol, Timothy O. Laumann, Scott Marek, Evan M. Gordon, Marcus E. Raichle, Deanna M. Barch, Damien A. Fair, and Nico U.F. Dosenbach.

Neuroticism predicts stronger emotional bonds with AI chatbots

As artificial intelligence becomes a staple of modern life, people are increasingly turning to chatbots for companionship and comfort. A new study suggests that while users often rely on these digital entities for stability, the resulting bond is built more on habit and trust than deep emotional connection. These findings on the psychology of human-machine relationships were published in the journal Psychology of Popular Media.

The rise of sophisticated chatbots has created a unique social phenomenon where humans interact with software as if it were a living being. This dynamic draws upon a concept known as social presence theory. This theory describes the psychological sensation that another entity is physically or emotionally present during a mediated interaction.

Designers of these systems often aim to create a sense of social presence to make the user experience more engaging. The goal is for the artificial agent to appear to have a personality and the capacity for a relationship. However, the academic community has not fully reached a consensus on what constitutes intimacy in these synthetic scenarios.

Researchers wanted to understand the mechanics of this perceived intimacy. They sought to determine if personality traits influence how a user connects with a machine. The investigation was led by Yingjia Huang from the Department of Philosophy at Peking University and Jianfeng Lan from the School of Media and Communication at Shanghai Jiao Tong University.

The team recruited 103 participants who actively use AI companion applications such as Doubao and Xingye. These apps are designed to provide emotional interaction through text and voice. The participants completed detailed surveys designed to measure their personality traits and their perceived closeness to the AI.

To measure personality, the researchers utilized the “Big Five” framework. This model assesses individuals based on neuroticism, conscientiousness, agreeableness, openness, and extraversion. The survey also evaluated intimacy through five specific dimensions: trust, attachment, self-disclosure, virtual rapport, and addiction.

In addition to the quantitative survey, the researchers conducted in-depth interviews with eight selected participants. These conversations provided qualitative data regarding why users turn to digital companions. The interview subjects were chosen because they reported higher levels of intimacy in the initial survey.

The study revealed that most users do not experience a profound sense of intimacy with their chatbots. The average scores for emotional closeness were relatively low. This suggests that current technology has not yet bridged the gap required to foster deep interpersonal connections.

When analyzing what the relationship was built on, the authors identified trust and addiction as the primary drivers. Users viewed the AI as a reliable outlet that is always available. The researchers interpreted the “addiction” component not necessarily as a pathology, but as a habit formed through daily routines.

The data showed that specific personality types are more prone to bonding with algorithms. Individuals scoring high in neuroticism reported stronger feelings of intimacy. Neuroticism is a trait often associated with emotional instability and anxiety.

For these users, the predictability of the computer program offers a sense of safety. Humans can be unpredictable or judgmental, but a coded companion provides consistent responses. One participant noted in an interview, “He’s always there, no matter what mood I’m in.”

People with high openness to experience also developed tighter bonds. These users tend to be imaginative and curious about new technologies. They engage with the AI as a form of exploration.

Users with high openness are willing to suspend disbelief to enjoy the interaction. They view the exchange as a form of experimental play rather than a replacement for human contact. They do not require the AI to be “real” to find value in the conversation.

The interviews highlighted that users often engage in emotional projection. They attribute feelings to the bot even while knowing it has no consciousness. This allows them to feel understood without the complexities of reciprocal human relationships.

The researchers identified three distinct ways users engaged with these systems. The first is “objectified companionship.” These users treat the AI like a digital pet, engaging in routine check-ins without deep emotional investment.

The second category is “emotional projection.” Users in this group use the AI as a safe container for their vulnerabilities. They vent their frustrations and anxieties, finding comfort in the machine’s non-judgmental nature.

The third category is “rational support.” These users do not seek emotional warmth. Instead, they value the AI for its logic and objectivity, using it as a counselor or advisor to help regulate their thoughts.

Despite these uses, participants frequently expressed frustration with technological limitations. Many described the AI’s language as too formal or repetitive. One user compared the experience to reading a customer service script.

This lack of spontaneity hinders the development of genuine immersion. Users noted that the AI lacks the warmth and fluidity of human conversation. Consequently, the relationship remains functional rather than truly affective.

The study posits that this form of intimacy relies on a “functional-affective gap.” Users maintain a high frequency of interaction for functional reasons, such as boredom relief or anxiety management. However, this does not translate into high emotional intimacy.

Trust in this context is defined by reliability rather than emotional closeness. Users trust the AI not to leak secrets or judge them. This form of trust acts as a substitute for the intuitive understanding found in human bonds.

The authors reference the philosophical concept of “I–Thou” versus “I–It” relationships. A true intimate bond is usually an “I–Thou” connection involving mutual recognition. Interactions with AI are technically “I–It” relationships because the machine lacks subjectivity.

However, the findings suggest that users psychologically approximate an “I–Thou” dynamic. They project meaning onto the AI’s output. The experience of intimacy is co-constructed by the user’s imagination and needs.

This dynamic creates a new relational paradigm. The line between simulation and reality becomes blurred. The user feels supported, which matters more to them than the ontological reality of the supporter.

The researchers argue that AI serves as a technological mediator of social affect. It functions as a mirror for the user’s emotions. The intimacy is layered and highly dependent on the context of the user’s life.

The study relies on a relatively small sample size of users from a specific cultural context. This focus on Chinese users may limit how well the results apply to other populations. Cultural attitudes toward technology and privacy could influence these results in different regions.

The cross-sectional nature of the survey also limits the ability to determine causality. It is unclear if neuroticism causes users to seek AI, or if the interaction appeals to those traits. Longitudinal studies would be needed to track how these relationships evolve over time.

Future investigations could examine how improved AI memory and emotional mimicry might alter these dynamics. As the technology becomes more lifelike, the distinction between functional and emotional intimacy may narrow. The authors imply that ethical design is essential as these bonds become more common.

The study, “Personality Meets the Machine: Traits and Attributes in Human–Artificial Intelligence Intimate Interactions,” was authored by Yingjia Huang and Jianfeng Lan.

Study finds links between personality, parenting, and moral emotions

A new study suggests that the way young adults process moral emotions is shaped by a combination of their own personality traits and their memories of how they were raised. The research indicates that mothers and fathers may influence a child’s moral development in distinct ways, but these effects depend heavily on the child’s individual temperament. These findings regarding the roots of shame, guilt, and moral identity were published in the journal Psychological Reports.

To understand these findings, it is necessary to first distinguish between two powerful emotions: guilt and shame. While these feelings are often grouped together, psychologists view them as having different functions and outcomes. Guilt is generally considered a helpful moral emotion. It focuses on a specific behavior, such as realizing one has made a mistake or hurt someone.

Because guilt focuses on an action, it often motivates people to apologize or repair the damage. In contrast, shame is viewed as a negative evaluation of the self. Instead of feeling that they did something bad, a person experiencing shame feels that they are bad. This emotion often leads to withdrawal, avoidance, or hiding from others rather than fixing the problem.

Researchers have previously established that family environments play a major role in which of these emotions a person tends to feel. Warm parenting, characterized by affection and structure, generally helps children internalize morality and develop healthy guilt. Conversely, cold parenting, marked by hostility or rejection, is often linked to higher levels of shame.

However, parents are not the only factor in this equation. A theory known as the bidirectional model suggests that children also influence their parents and their own development through their innate personalities. Lead author CaSandra L. Swearingen-Stanbrough and her colleagues at Missouri State University sought to examine this two-way street. They investigated whether a child’s specific personality traits might change the way parenting styles affect their moral identity.

The researchers recruited ninety-nine undergraduate students from a university in the Midwest. The participants provided demographic information and completed a series of standardized psychological questionnaires. Most participants were white and female, with an average age of roughly 20 years.

The first step for the researchers was to assess the participants’ personalities using the “Big Five” model. This model evaluates traits such as agreeableness, which involves kindness and cooperation, and conscientiousness, which involves organization and reliability. It also measures neuroticism, a trait associated with emotional instability and a tendency toward anxiety.

Next, the students reflected on their upbringing. They completed surveys regarding the parenting styles of their mother and father figures. They rated statements to determine if their parents were perceived as “warm,” meaning supportive and affectionate, or “cold,” meaning harsh or chaotic.

Finally, the researchers measured the participants’ moral tendencies. They used the Moral Identity Questionnaire to assess how central morality was to the students’ self-image. They also used the Guilt and Shame Proneness Scale. This tool presents hypothetical scenarios, such as making a mistake at work, and asks how likely the person is to feel bad about the act (guilt) or feel like a bad person (shame).

The results revealed that mothers and fathers appear to influence different aspects of moral development. The study showed that perceiving a mother as warm was strongly linked to a tendency to feel guilt rather than shame. This connection suggests that affectionate maternal figures help children focus on their behavior rather than internalizing failures as character flaws.

However, this effect was not uniform for everyone. The researchers found that the participant’s personality acted as a moderator. The link between a warm mother and the tendency to feel healthy guilt was strongest in participants who scored high on agreeableness. This means that an agreeable child might be more receptive to a warm mother’s influence in developing reparative moral emotions.

The study also examined “shame withdrawal,” which is the urge to hide or pull away from others when one has done something wrong. Generally, having a warm mother reduced this unhealthy reaction. Yet, this relationship was moderated by neuroticism. For individuals with different levels of emotional stability, the protective effect of a warm mother against shame withdrawal manifested differently.

The findings regarding father figures presented a different pattern. The researchers found that fathers had a stronger statistical connection to “moral integrity” than to the emotional processing of guilt or shame. In this specific study, moral integrity referred to behavioral consistency, such as doing the right thing even when no one is watching.

The data indicated that perceiving a father as cold—characterized by rejection or coercion—was actually associated with higher reported moral integrity. This counter-intuitive finding suggests that strict or harsh paternal environments might sometimes prompt young adults to strictly adhere to rules. However, this relationship was also dependent on personality.

Conscientiousness moderated the link between a cold father and moral integrity. While the general trend showed a link between cold fathers and higher reported integrity, this dynamic changed based on how conscientious the student was. The results imply that highly conscientious individuals process harsh parenting differently than those who are less organized or self-disciplined.

The authors note that these distinct roles align with previous theories about family dynamics. Mothers are often viewed as the primary source of emotional warmth and acceptance. Consequently, their parenting style has a greater impact on emotional responses like guilt and shame. Fathers, who may exhibit more variable interactions or rougher play, appear to influence the behavioral enforcement of moral rules.

There are limitations to this research that affect how the results should be interpreted. The study relied entirely on self-reported data from the students. This means the results represent the participants’ perceptions of their parents, which may not match what actually occurred during their childhood.

Additionally, the sample size was relatively small and lacked diversity. The participants were primarily white, female college students. This specific demographic does not represent the broader population. Cultural differences in parenting styles and moral values could lead to different results in other groups.

The study is also correlational, meaning it cannot prove that the parenting styles caused the moral outcomes. It is possible that other unmeasured factors influenced the results. Future research would benefit from observing actual moral behavior rather than relying on hypothetical survey questions.

The researchers suggest that future studies should include the parents’ perspectives as well. Comparing what parents believe they did with what children perceived could offer a more complete picture of the family dynamic. Despite these caveats, the study highlights that moral development is not a one-size-fits-all process.

The authors conclude that children are active participants in their own upbringing. A child’s personality filters the parenting they receive. This helps explain why siblings raised in the same household can grow up to have very different emotional and moral responses to the world.

The study, “Mom, Dad, and Me: Personality Moderates the Relationships Between Parenting Traits, Shame, and Morality,” was authored by CaSandra L. Swearingen-Stanbrough, Lauren Smith, and Olive Baron.

Distinct personality traits found in those who use sex to cope

Recent psychological research has identified distinct personality profiles that shed light on why some individuals turn to sexual behavior to manage emotional distress while others do not. The findings suggest that hostility, impulsivity, and deep-seated self-criticism are key factors that distinguish hypersexual coping mechanisms from other forms of emotional insecurity. This research was published in the journal Sexual Health & Compulsivity.

Psychologists classify hypersexuality as a condition involving excessive sexual fantasies, urges, and behaviors. While high sexual desire is natural for many, hypersexuality becomes a clinical concern when it causes distress or disrupts daily life. Many experts view this behavior not merely as a drive for pleasure but as a coping strategy.

Individuals may engage in sexual activity to escape negative emotions such as anxiety, depression, boredom, or loneliness. This creates a cycle where the temporary relief provided by sexual activity reinforces the behavior. Eventually, this pattern can lead to feelings of guilt or shame, which may trigger further urges to cope through sex.

To understand this dynamic, researchers look to attachment theory. This psychological framework describes how early bonds with caregivers shape the way adults relate to others and regulate their emotions. People with secure attachment styles generally feel comfortable with intimacy and trust others.

Those with insecure attachment styles often struggle with these bonds. Anxious attachment involves a fear of abandonment and a constant need for approval. Avoidant attachment involves a discomfort with closeness and a desire for emotional distance.

Prior studies have linked insecure attachment to difficulties in regulating emotions. When individuals cannot manage their feelings effectively, they may seek external ways to soothe themselves. For some, this external method becomes sexuality.

However, not everyone with an insecure attachment style develops hypersexual behaviors. Camilla Tacchino and her colleagues at Sapienza University of Rome sought to understand what separates these groups. They aimed to identify specific psychological profiles based on attachment, self-criticism, and personality traits.

The researchers recruited 562 participants from the general population in Italy. The group was predominantly female and had an average age of roughly 31 years. The participants completed a series of detailed self-report questionnaires.

One survey measured the tendency to use sex as a coping mechanism to deal with emotional pain. Another assessed attachment styles, looking for signs of anxiety or avoidance in relationships. Additional surveys evaluated pathological personality traits and levels of self-criticism.

The team used a statistical method known as latent profile analysis. This technique allows researchers to group participants based on shared patterns across multiple variables. Instead of looking at averages for the whole group, this method identifies distinct “types” of people within the data.
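
As a rough illustration, latent profile analysis is closely related to fitting a Gaussian mixture model to continuous indicators. The sketch below uses scikit-learn’s GaussianMixture as a stand-in, with simulated data and hypothetical indicators; the authors’ actual software and model specification may differ.

```python
# Rough illustration of a latent-profile-style analysis, using a
# Gaussian mixture model as a stand-in (LPA is a closely related,
# constrained mixture model). Data and indicators are simulated.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Stand-in data: 562 people x 4 standardized indicators, e.g.
# attachment anxiety, avoidance, self-criticism, sexual coping.
X = rng.normal(size=(562, 4))

# Fit one- to five-profile solutions and compare BIC, the usual way
# such studies choose the number of profiles.
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(X)
    print(k, "profiles, BIC =", round(gmm.bic(X), 1))

# Assign each participant to the most likely of, say, three profiles.
best = GaussianMixture(n_components=3, covariance_type="diag",
                       random_state=0).fit(X)
profiles = best.predict(X)
```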

The analysis revealed three specific profiles. The largest group, comprising 50% of the sample, was labeled “Secure without Sexual Coping.” These individuals showed low levels of attachment anxiety and avoidance. They also reported very low reliance on sex to manage their emotions.

Demographically, this secure group tended to be older than the other groups. They were also more likely to be in romantic relationships and to have children. Psychologically, they displayed the highest levels of emotional stability.

The second profile was labeled “Insecure with Sexual Coping.” This group made up about 13% of the sample. These participants exhibited high levels of attachment insecurity, characterized by both a fear of intimacy and a strong need for approval.

The defining feature of this second profile was their high score on using sex to cope. They frequently reported engaging in sexual acts to deal with life problems or negative feelings. This group was generally younger and less likely to be in a committed relationship.

The third profile was labeled “Insecure without Sexual Coping.” Comprising 37% of the sample, these individuals also scored high on attachment insecurity. They experienced significant worries about relationships and discomfort with closeness. However, unlike the second group, they did not use sex as a coping strategy.

The researchers then compared the personality traits and self-criticism levels of these three groups. The “Secure” group scored the lowest on all measures of pathology. They were generally less self-critical and had fewer negative personality traits.

The “Insecure with Sexual Coping” group displayed a specific set of personality markers. They scored highest in the domains of Antagonism and Disinhibition. Antagonism refers to behaviors that put an individual at odds with others, such as hostility or grandiosity.

Disinhibition involves an orientation toward immediate gratification and impulsive behavior. This suggests that for this group, the drive to use sex as a coping mechanism is linked to difficulties in impulse control. They may act on their urges without fully considering the long-term consequences.

This group also reported high levels of self-hatred. They experienced feelings of disgust and aggression toward themselves. The authors suggest that this self-loathing may be both a cause and a result of their compulsive sexual behavior.

The “Insecure without Sexual Coping” group presented a different psychological landscape. While they shared the attachment insecurities of the second group, they did not exhibit the same levels of impulsivity or hostility. Instead, they scored highest on a dimension called “Negative Affect.”

Negative Affect involves the frequent experience of unpleasant emotions like sadness, worry, and anxiety. This group also reported the highest levels of feeling “inadequate.” They viewed themselves as inferior or flawed but did not turn to impulsive behaviors to manage these feelings.

The researchers interpreted this distinction as a difference in how these groups process distress. The group that uses sex to cope appears to “externalize” their pain. They act out through impulsive and potentially risky behaviors.

In contrast, the insecure group that avoids sexual coping appears to “internalize” their distress. They may be more prone to rumination, depression, or self-blame. Their feelings of inadequacy might paralyze them or lead to withdrawal rather than active coping strategies like sex.

The study highlights that attachment insecurity is a vulnerability factor but does not guarantee hypersexuality. The presence of specific personality traits determines the direction that insecurity takes. Impulsivity and antagonism seem to steer individuals toward hypersexual coping.

Conversely, feelings of deep inadequacy and sadness may steer individuals away from sexual coping. It is possible that their low self-esteem inhibits sexual pursuit. They may fear rejection too deeply to engage with others sexually, even for coping purposes.

There are limitations to this study that contextualize the results. The participants were drawn from the general population rather than a clinical setting. This means the findings describe trends in everyday people rather than patients diagnosed with hypersexual disorders.

Additionally, the data relied entirely on self-report questionnaires. Participants may not always assess their own behaviors or feelings accurately. Social desirability bias could lead some to underreport sexual behaviors or negative traits.

The cross-sectional nature of the study is another consideration. The researchers collected data at a single point in time. This prevents them from determining causality. It is unclear if personality traits cause the sexual coping or if the behavior influences personality over time.

Future research could address these gaps by studying clinical populations. Investigating individuals who are seeking treatment for compulsive sexual behavior would provide a clearer picture of severe cases. Longitudinal studies could also track how these profiles develop over time.

The authors also suggest investigating the role of guilt and shame more deeply. While self-criticism was measured, the specific emotions following sexual acts could offer further insight. Understanding the cycle of shame is essential for treating hypersexuality.

These findings have implications for mental health treatment. They suggest that therapy for hypersexuality should not focus solely on the sexual behavior itself. Clinicians should also address the underlying attachment insecurities and personality traits.

For patients fitting the “Insecure with Sexual Coping” profile, interventions might focus on impulse control. Therapies that target emotion regulation and reduce antagonism could be beneficial. Helping patients find healthier ways to soothe themselves is a primary goal.

For those in the “Insecure without Sexual Coping” profile, treatment might differ. Although they do not present with hypersexuality, their high levels of negative affect require attention. Therapy for this group might focus on building self-esteem and combating feelings of inadequacy.

This study provides a nuanced view of the relationship between personality and sexual behavior. It challenges the idea that hypersexuality is simply a matter of high sex drive. Instead, it frames the behavior as a complex response to emotional and relational deficits.

By identifying these distinct profiles, the researchers have offered a roadmap for better assessment. Mental health professionals can use this information to tailor their approaches. Understanding the specific psychological makeup of a patient allows for more precise and effective care.

The study, “Decoding Hypersexuality: A Latent Profile Approach to Attachment, Self-Criticism, and Personality Disorders,” was authored by Camilla Tacchino, Guyonne Rogier, and Patrizia Velotti.

Playing Super Mario Bros. and Yoshi games may reduce burnout risk in young adults, study finds

A new study published in JMIR Serious Games suggests that playing whimsical video games may help young adults manage the symptoms of burnout. The research indicates that titles like Super Mario Bros. can foster a sense of “childlike wonder” that boosts happiness and lowers emotional exhaustion. This effect offers a potential mental health tool for students facing high levels of stress and anxiety.

Young adults today are navigating a developmental period often referred to as “emerging adulthood.” This stage involves identity exploration but also brings specific types of instability and anxiety. Rising costs of living and competitive academic environments contribute to a high risk of burnout among this demographic. The digital world often exacerbates these pressures through constant social media comparisons and an “always-on” work culture.

These cumulative stressors can lead to a state of chronic exhaustion and cynicism. Researchers Winze Tam, Congcong Hou, and Andreas Benedikt Eisingerich sought to understand if specific digital games could offer a solution. They focused on whether the lighthearted nature of Nintendo platformers could provide a necessary mental reset. The team hypothesized that the specific design of these games might counteract the negativity associated with burnout.

The researchers employed a mixed-methods approach to explore this theory. First, they conducted detailed interviews with 41 university students. These participants had experience playing Super Mario Bros. or Yoshi games. The goal was to understand the subjective emotional experience of gameplay in a natural setting. The researchers asked students to reflect on how the games affected their daily lives and emotional states.

During these interviews, students described the bright colors and optimistic music as creating a safe atmosphere. One respondent compared the experience to being “wrapped in a cozy, warm blanket.” Others noted that the games allowed them to appreciate small details, like the animation of clouds or the sounds of jumping. This shift in perspective helped them detach from real-world cynicism. The games offered clear, achievable goals, which stood in contrast to the ambiguous challenges of adult life.

Following the interviews, the team administered a survey to 336 students. This quantitative phase measured three specific variables: burnout risk, overall happiness, and the experience of childlike wonder. The researchers defined childlike wonder as a state of openness, curiosity, and delight in discovery. They used statistical modeling to analyze the relationships between these factors.

The data revealed a positive association between game-induced wonder and general life happiness. The results indicated that happiness fully mediated the relationship between wonder and burnout. This means that the games appear to reduce burnout specifically by increasing happiness through the mechanism of wonder. The findings were consistent across genders.
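
The idea of full mediation can be illustrated with two regressions and the product of their coefficients. The sketch below uses simulated data and hypothetical variable names; it approximates, rather than reproduces, the authors’ statistical modeling.

```python
# Sketch of a simple mediation test (wonder -> happiness -> burnout)
# via two regressions and the product of coefficients. Data and
# variable names are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 336  # survey sample size reported in the study
wonder = rng.normal(size=n)
happiness = 0.5 * wonder + rng.normal(size=n)
burnout = -0.6 * happiness + rng.normal(size=n)
df = pd.DataFrame({"wonder": wonder, "happiness": happiness,
                   "burnout": burnout})

# Path a: wonder -> happiness; paths b and c': burnout on both.
a = smf.ols("happiness ~ wonder", data=df).fit().params["wonder"]
fit = smf.ols("burnout ~ wonder + happiness", data=df).fit()
b = fit.params["happiness"]
c_prime = fit.params["wonder"]  # direct effect; near zero if fully mediated

print("indirect effect a*b =", round(a * b, 3))
print("direct effect c' =", round(c_prime, 3))
```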

Eisingerich noted the implications of these results for mental health strategies. He stated, “This study suggests that the path to combating burnout in young adults may lie not just in traditional wellness but also in reclaiming joy.” The authors argue that these games act as a “vacation for the mind.”

This research adds a new dimension to existing knowledge about how video games affect the brain. Previous studies have largely focused on cognitive skills or physiological stress rather than emotional restoration. For instance, a study published in Experimental Brain Research found that 3D platformers could improve memory and focus in older adults. That work highlighted the cognitive demands of navigating virtual spaces to improve executive function.

The current study also contrasts with research that focuses solely on the physiological effects of gaming. A study in the International Journal of Psychophysiology showed that while gaming generally lowered physiological stress markers, violent sections of a game could increase self-reported aggression. The Super Mario study differs by focusing on non-violent content that promotes positive emotions. It suggests that the aesthetic and tone of the game are vital components of its psychological impact.

Recent work from the University of Oxford challenged the idea that the amount of time spent playing matters most. Published in Royal Society Open Science, that study found that the sheer number of hours played did not predict mental well-being. Instead, the player’s perception of how gaming fit into their life was the deciding factor. The current findings support this by emphasizing the quality of the experience—specifically the feeling of wonder—over the duration of play.

Additionally, a longitudinal analysis of PowerWash Simulator players published in ACM Games found that mood improves slightly within the first 15 minutes of play. This aligns with the idea that games can provide immediate emotional uplift. The Super Mario study extends this by linking that uplift to a reduction in long-term burnout symptoms. It identifies a specific emotional pathway involving wonder, rather than just general relaxation.

While the results are promising, the authors note that video games are not a cure-all for systemic issues like financial hardship or workplace inequity. The study relied on self-reported data, which depends on participants accurately assessing their own feelings. It is also possible that people who are already happier are more prone to experiencing wonder.

The researchers also point out that the benefits are likely contingent on moderate, voluntary play. Compulsive gaming used solely to avoid real-world problems could potentially have negative effects. The study focused specifically on university students, so the results may not apply to all age groups.

Future research is needed to track these effects over a longer period to see if the reduction in burnout is sustained. Scientists also need to determine if other genres of games can produce similar benefits or if this effect is unique to the whimsical style of Nintendo platformers. Exploring how these effects vary across different cultures and demographics would also be beneficial.

The study, “Super Mario Bros. and Yoshi Games’ Affordance of Childlike Wonder and Reduced Burnout Risk in Young Adults: In-Depth Mixed Methods Cross-Sectional Study,” was authored by Winze Tam, Congcong Hou, and Andreas Benedikt Eisingerich.

Inflammation linked to brain reward dysfunction in American Indians with depression

Recent research indicates that bodily inflammation may disrupt the brain’s ability to process rewards and risks in American Indian adults who have experienced depression. The study found that higher levels of specific inflammatory markers in the blood corresponded with reduced activity in brain regions essential for motivation. These findings were published in Biological Psychiatry: Cognitive Neuroscience and Neuroimaging.

Major Depressive Disorder is a complex mental health condition that goes beyond feelings of sadness. One of its hallmark symptoms is anhedonia, which is a reduced ability to experience pleasure or interest in daily activities. This symptom is often linked to dysfunctions in the brain’s reward circuitry. This system governs how the brain anticipates positive outcomes, such as winning a prize, or negative outcomes, like a financial loss.

Scientists are increasingly looking at the immune system to understand these brain changes. Physical inflammation is the body’s natural response to injury or stress. However, chronic stress can lead to persistent, low-grade inflammation that affects the entire body. Over time, the immune system releases signaling proteins called cytokines that can cross into the brain. Once there, these proteins may alter how neural circuits function.

This biological connection is particularly relevant for American Indian populations. Many Indigenous communities face unique and chronic stressors rooted in historical trauma. These stressors include the long-term psychological impacts of colonization and systemic health disparities. Previous research links symptoms of historical loss to higher risks for both depression and physical health issues.

The researchers hypothesized that this unique stress environment might elevate inflammation levels. They proposed that this inflammation could, in turn, impair the brain’s reward system. This pathway might explain why depression prevalence and severity can be higher in these communities. To test this, the study focused on American Indian individuals who had been diagnosed with Major Depressive Disorder at some point in their lives.

Leading the investigation was Lizbeth Rojas from the Department of Psychology at Oklahoma State University. She collaborated with a team of experts from the Laureate Institute for Brain Research and other academic institutions. The team aimed to move beyond simple surveys by looking at direct biological and neurological evidence. They sought to connect blood markers of inflammation with real-time brain activity.

The study included 73 adult participants who identified as American Indian. All participants had a history of clinical depression. To assess their biological state, the researchers collected blood samples from each individual. They analyzed these samples for specific biomarkers related to the immune system.

The team measured levels of proinflammatory cytokines, which promote inflammation. These included tumor necrosis factor (TNF) and interleukin-6 (IL-6). They also measured C-reactive protein (CRP), a general marker of inflammation produced by the liver. Additionally, they looked at interleukin-10 (IL-10), a cytokine that helps reduce inflammation.

To observe brain function, the researchers utilized two advanced imaging technologies simultaneously. Participants entered a functional magnetic resonance imaging (fMRI) scanner. This machine measures brain activity by tracking changes in blood oxygen levels. At the same time, participants wore caps to record electroencephalography (EEG) data. EEG measures the electrical activity of the brain with high time precision.

While inside the scanner, the participants performed a specific psychological test called the Monetary Incentive Delay task. This task is designed to activate the brain’s reward centers. Participants viewed a screen that displayed different visual cues. Some cues indicated a chance to win money, while others indicated a risk of losing money.

After seeing a cue, the participant had to press a button rapidly. If they were fast enough on a “win” trial, they gained a small amount of cash. If they were fast enough on a “loss” trial, they avoided a financial penalty. The researchers focused on the “anticipation phase” of this task. This is the brief moment after seeing the cue but before pressing the button.
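
The trial structure can be summarized in a short schematic. The sketch below is an illustrative simplification: the timings, stakes, and response deadline are placeholder values, not the study’s actual task parameters.

```python
# Illustrative schematic of one Monetary Incentive Delay trial.
# Timings, stakes, and the response deadline are placeholders, not
# the study's actual task parameters.
import random
import time

def run_trial(trial_type):
    # 1. Cue signals a potential win or a potential loss.
    print(f"CUE: potential {trial_type}")
    # 2. Anticipation phase: the brief window the researchers analyzed
    #    (held for roughly two seconds in real tasks; shortened here).
    time.sleep(0.2)
    # 3. Target: respond before the deadline (real versions calibrate
    #    the deadline per participant).
    deadline = 0.3
    reaction = random.uniform(0.2, 0.5)  # stand-in for a button press
    fast_enough = reaction < deadline
    # 4. Feedback: fast responses win cash on "win" trials and avoid
    #    a penalty on "loss" trials.
    if trial_type == "win":
        return 0.50 if fast_enough else 0.0
    return 0.0 if fast_enough else -0.50

earnings = sum(run_trial(random.choice(["win", "loss"])) for _ in range(10))
print("Session earnings:", round(earnings, 2))
```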

During this anticipation phase, a healthy brain typically shows high activity in the basal ganglia. This is a group of structures deep in the brain that includes the striatum. The striatum is essential for processing incentives and generating the motivation to act. In people with depression, this area often shows “blunted” or reduced activity.

The study’s results revealed a clear link between the immune system and this brain activity. The researchers used statistical models to predict brain response based on inflammation levels. They found that higher concentrations of TNF were associated with reduced activation in the basal ganglia during the anticipation of a potential win.

This relationship was notably influenced by the sex of the participant. The negative association between TNF and brain activity was observed specifically in male participants. This suggests that for men in this sample, high inflammation dampened the brain’s excitement about a potential reward.

The researchers also examined how the brain reacted to the threat of losing money. In this context, they looked at the interaction between TNF and CRP. They found that elevated levels of both markers predicted reduced brain activation. The basal ganglia were less responsive even when the participant was trying to avoid a negative outcome.

Another finding involved the nucleus accumbens, a key part of the brain’s reward circuit. The study showed that medication status played a role here. Among participants taking psychotropic medication, higher TNF levels were linked to lower activity in this region during loss anticipation. This highlights the complexity of how treatments and biology interact.

The study also attempted to use EEG to measure a specific brain wave called the P300. The P300 is a spike in electrical activity that relates to attention and updating working memory. Previous studies have suggested that people with depression have a smaller P300 response. The researchers expected inflammation to predict the size of this brain wave.

However, the analysis did not find a statistical link between the inflammatory markers and the P300 amplitude. The electrical signals did not show the same clear pattern as the blood flow changes measured by the fMRI. This suggests that inflammation might affect the metabolic demand of brain regions more than the specific electrical timing measured by this task.

These findings support the idea that the immune system plays a role in the biology of depression. The presence of high inflammation appears to “turn down” the brain’s sensitivity to incentives. When the brain is less responsive to rewards, a person may feel less motivation. This aligns with the clinical experience of patients who feel a lack of drive or pleasure.

The authors described several limitations that provide context for these results. The study relied on a relatively small sample size of 73 people. A larger group would provide more statistical certainty. Additionally, the data came from parent studies that were not designed exclusively for this specific investigation.

Another limitation was the lack of a healthy control group. The study only looked at people with a history of depression. Without a non-depressed comparison group, it is difficult to determine if these patterns are unique to depression. They might also appear in people with high inflammation who are not depressed.

The study also could not fully account for cultural factors. While the background emphasizes the role of historical trauma, the analysis did not measure cultural connectedness. Previous research suggests that connection to one’s culture can protect against stress. It acts as a buffer that might improve mental health outcomes.

Despite these caveats, the research offers a specific biological target for understanding depression in American Indian populations. It moves away from purely psychological explanations. Instead, it frames mental health within a “biopsychosocial” model. This model considers how biological stress and social history combine to affect the brain.

The authors suggest that future research should focus on resilience. Understanding how some individuals maintain low inflammation despite stress could be key. This could lead to better prevention strategies. Interventions might focus on reducing inflammation as a way to help restore normal brain function.

Treating depression in these communities may require addressing physical health alongside mental health. If inflammation drives brain dysfunction, then reducing stress on the body is vital. This reinforces the need for holistic healthcare approaches. Such approaches would respect the unique history and challenges faced by American Indian communities.

The study, “Major Depressive Disorder and Serum Inflammatory Biomarkers as Predictors of Reward-Processing Dysfunction in an American Indian Sample,” was authored by Lizbeth Rojas, Eric Mann, Xi Ren, Danielle Bethel, Nicole Baughman, Kaiping Burrows, Rayus Kuplicki, Leandra K. Figueroa-Hall, Robin L. Aupperle, Jennifer L. Stewart, Salvador M. Guinjoan, Sahib S. Khalsa, Jonathan Savitz, Martin P. Paulus, Ricardo A. Wilhelm, Neha A. John-Henderson, Hung-Wen Yeh, and Evan J. White.

Why scientists are linking mitochondria to the physical toll of loneliness

Chronic stress and social isolation are frequently cited as precursors to physical illness, yet the biological machinery driving this connection has remained partially obscured. A new scientific review proposes that mitochondria, the energy-generating structures within cells, serve as the primary translator between psychological experience and physical health. By altering their function in response to stress, these cellular components may drive conditions ranging from depression to cardiovascular disease. The paper detailing these connections was published in Current Directions in Psychological Science.

For decades, researchers have utilized the biopsychosocial model to understand how social and psychological factors influence the body. This framework links biological processes with social environments, yet it has historically lacked specific details on how feelings physically alter cells. Critics of the model note that it offers limited mechanistic specificity regarding how an experience like loneliness translates into molecular change. Without identifying the precise biological pathways, it is difficult to predict or treat stress-related diseases effectively.

To address this gap, a team of researchers synthesized evidence linking cellular biology with psychology. Christopher P. Fagundes, a professor in the Department of Psychological Sciences at Rice University, led the review. He collaborated with E. Lydia Wu-Chung from the University of Pittsburgh and Cobi J. Heijnen from Rice University. They sought to identify a cellular system sensitive enough to respond to mood but powerful enough to regulate whole-body health.

The researchers conducted their review by examining existing literature from the fields of psychoneuroimmunology and mitochondrial biology. They analyzed data from preclinical animal models and human studies to construct a clearer picture of cellular adaptation. Their analysis focused on how mitochondria function as a hub for stress physiology, immune regulation, and energy balance.

Mitochondria are often called the powerhouses of the cell because they generate adenosine triphosphate, or ATP. This molecule fuels nearly all biological activity, including brain function and muscle movement. The review highlights that these structures do much more than produce fuel.

They serve as sophisticated sensors that detect hormonal signals and environmental shifts. Mitochondria possess the ability to adjust their activity based on the body’s immediate needs. This adaptability is known as metabolic flexibility.

During moments of acute stress, the body releases hormones like cortisol and catecholamines. These hormones prompt mitochondria to increase energy production to handle the immediate challenge. This rapid adjustment supports resilience by providing the resources needed for a “fight or flight” response.

However, the authors note that chronic stress creates a vastly different outcome. Prolonged exposure to stress hormones causes mitochondrial efficiency to plummet. Instead of adapting, the machinery begins to malfunction.

When these structures become overworked, they produce excess reactive oxygen species. These are volatile by-products that function like cellular exhaust fumes. While small amounts are necessary for signaling, an accumulation leads to oxidative stress.

This damage disrupts the balance of energy and leads to cellular dysfunction. The researchers point to this breakdown as a potential root cause of fatigue and cognitive decline. The brain is particularly susceptible to these energy deficits because of its immense fuel requirements.

Even slight mitochondrial impairments can limit the energy available for neurotransmission. This can undermine the neural processes that support mood regulation and memory. Consequently, mitochondrial dysfunction is increasingly linked to psychiatric conditions such as anxiety and depression.

The review also details how mitochondria communicate with the immune system. When mitochondria sustain damage, they can release fragments of their own DNA into the bloodstream. They may also release other internal molecules that are usually contained within the cell.

The immune system perceives these fragments as danger signals. This triggers an inflammatory response similar to how the body reacts to a virus. Chronic inflammation is a well-established risk factor for heart disease, diabetes, and neurodegenerative disorders.

This pathway suggests that psychological stress creates physical inflammation through mitochondrial damage. Fagundes and his colleagues cite studies involving human subjects to illustrate this connection. One highlighted area of research involves caregivers for family members with dementia.

Caregiving is often used as a model for chronic psychological stress. Research indicates that caregivers often display lower mitochondrial health indices compared to non-caregivers. Those with lower mitochondrial efficiency reported worse physical functioning.

Conversely, caregivers with higher mitochondrial capacity appeared more resilient. They were better buffered against the negative emotional effects of their heavy burden. This suggests that cellular health may dictate how well a person withstands psychological pressure.

Social isolation also appears to leave a biological mark on these cellular structures. The review mentions that individuals reporting high levels of loneliness possess lower levels of specific mitochondrial proteins in the brain. This creates a feedback loop where social disconnection degrades physical health.

Fagundes notes the importance of this cellular perspective in understanding disease. He states, “The actual cellular machinery that links these experiences to disease really starts at the level of the mitochondria.” This insight moves the field beyond vague associations to concrete mechanisms.

The authors argue that this helps explain the overlap between mental health disorders and physical ailments. Conditions like anxiety and diabetes may share this common cellular origin. It provides a unified theory for why emotional distress so often accompanies physical illness.

The team also reviewed interventions that might restore mitochondrial health. Exercise provided the most consistent results in the analyzed literature. Endurance training boosts the number of mitochondria and improves their efficiency.

Physical activity stimulates a process called mitochondrial biogenesis. This creates new power plants within the cell to replace old or damaged ones. The authors suggest this is a primary reason why exercise supports both physical and psychological resilience.

Mindfulness and psychotherapy showed potential but lacked robust evidence in the current literature. Some studies indicated biological changes following these interventions. For example, a mindfulness program was associated with altered oxidative metabolism markers.

However, these biological shifts did not always align with reported symptom improvement. In some cases, the studies lacked necessary control groups to confirm causality. The researchers characterize these findings as promising proof of concept rather than definitive proof.

Social support is another theorized intervention. It is believed to protect mitochondrial health by reducing cortisol and dampening inflammatory activity. However, the authors note that very few studies have measured mitochondrial outcomes directly in relation to social support.

The authors acknowledge that much of the current evidence relies on correlations. It remains unclear if mitochondrial dysfunction causes psychological distress or if distress drives the dysfunction. There is likely a bidirectional relationship in which each worsens the other over time.

Most human studies reviewed were cross-sectional, meaning they looked at a single point in time. This limits the ability to determine the direction of the effect. The researchers emphasize the need for longitudinal designs to clarify these pathways.

Future work must integrate mitochondrial measures with broader systems. These include the immune system, the autonomic nervous system, and the brain. Studying these systems in isolation often misses the complexity of the human stress response.

The authors also call for standardized ways to measure mitochondrial health in psychological studies. Current methods vary widely in cost and accessibility. Developing consistent biomarkers will allow for larger studies that reflect diverse populations.

Fagundes emphasizes the potential of this approach for future medicine. He says, “If we focus more at the cellular level, we’ll have a much deeper understanding of underlying processes.” This could lead to new treatments that target the cell to heal the mind.

By establishing mitochondria as a key player, this review refines the biopsychosocial model. It offers a testable biological mechanism for decades of psychological theory. Ultimately, it suggests that resilience is not just a state of mind but a state of cellular energy.

The paper, “Psychological Science at the Cellular Level: Mitochondria’s Role in Health and Behavior,” was authored by Christopher P. Fagundes, E. Lydia Wu-Chung, and Cobi J. Heijnen.

Antibiotic use during pregnancy linked to slightly increased risk of ADHD

A new comprehensive analysis suggests that maternal use of antibiotics during pregnancy is associated with a slightly elevated likelihood of the child receiving a diagnosis of attention-deficit/hyperactivity disorder (ADHD). The research indicates that this statistical link is stronger when antibiotics are administered during the second or third trimesters. These findings were published recently in the Journal of Affective Disorders.

ADHD is a neurodevelopmental condition that has become increasingly common in recent years. It is characterized by symptoms such as difficulty sustaining attention, impulsive actions, and hyperactivity. While genetics play a major role in the development of the disorder, scientists believe that environmental factors also contribute. Researchers have increasingly focused on exposures that occur before birth.

Antibiotics are among the most frequently prescribed medications for pregnant women. They are essential for treating bacterial infections that could otherwise harm the mother or the fetus. However, these drugs do not only target harmful bacteria. They also affect the vast community of helpful microbes living in the human gut, known as the microbiota.

There is a growing body of evidence suggesting a connection between the gut and the brain. This concept is often referred to as the gut-brain axis. The theory posits that the composition of gut bacteria can influence brain development and function. This influence may occur through various biological pathways, such as the production of neurotransmitters or the regulation of inflammation.

Mothers pass aspects of their microbiota to their children. Additionally, the environment within the womb influences the initial development of the fetus’s own biological systems. Consequently, some scientists hypothesize that disrupting the maternal microbiome with antibiotics could have downstream effects on the child’s neurodevelopment. Previous studies on this topic have produced conflicting results, with some finding a risk and others finding none.

To address these inconsistencies, a research team led by Jiali Fan from West China Second University Hospital at Sichuan University initiated a new investigation. They sought to clarify the potential relationship by combining data from many different sources. This approach allows for a more robust statistical analysis than any single study could provide on its own.

The researchers conducted a meta-analysis. This is a scientific method that pools statistical data from multiple independent studies to identify broader trends. The team searched major medical databases for observational cohort studies published up to October 2024. They followed strict guidelines to select high-quality research.
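
The core arithmetic of such a pooled analysis is inverse-variance weighting of each study’s effect estimate on the log scale. The sketch below uses invented hazard ratios and confidence intervals, and shows fixed-effect weighting for simplicity; meta-analyses like this one typically use a random-effects model, which adds a between-study variance term.

```python
# Core arithmetic of a pooled analysis: inverse-variance weighting of
# each study's log hazard ratio. All nine values are invented for
# illustration, not the included studies' data; fixed-effect
# weighting is shown for simplicity.
import numpy as np

hr = np.array([1.10, 1.22, 1.05, 1.18, 1.12, 1.30, 1.08, 1.15, 1.20])
ci_upper = np.array([1.30, 1.50, 1.25, 1.40, 1.35, 1.60, 1.28, 1.38, 1.45])

log_hr = np.log(hr)
se = (np.log(ci_upper) - log_hr) / 1.96  # standard error recovered from CI
w = 1.0 / se**2                          # inverse-variance weights

pooled = np.exp(np.sum(w * log_hr) / np.sum(w))
print("Pooled HR:", round(pooled, 2))  # pools to roughly 1.14-1.15 here
```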

The final analysis included nine major studies. These studies represented a massive combined pool of participants, totaling more than 6.1 million mother-child pairs. The data encompassed populations from several different regions. These included countries in North America, Europe, and Asia.

The researchers used a scoring system called the Newcastle-Ottawa Scale to evaluate the quality of the included research. This scale assesses how well a study selected its participants and how accurately it measured outcomes. The team found that the included studies were generally of moderate to high methodological quality.

The primary finding of the analysis identified a positive association. The overall data showed that children exposed to antibiotics in the womb had a hazard ratio of 1.15 compared to those who were not exposed. This figure represents a 15 percent increase in the relative risk of developing ADHD. Another statistical measure used in the study, the odds ratio, placed this increased likelihood at 28 percent.

The researchers then broke down the data to see if the timing of the exposure mattered. Pregnancy is divided into three distinct periods known as trimesters. The analysis found no statistical connection between antibiotic use in the first trimester and a later ADHD diagnosis. This lack of association in early pregnancy was consistent across the data.

However, a different pattern emerged for the later stages of pregnancy. The study identified a link when antibiotics were used during the mid-pregnancy period. A similar association was observed for antibiotic use during late pregnancy. This suggests that the timing of exposure may be a relevant factor in this potential relationship.

In addition to timing, the team investigated the frequency of antibiotic use. They wanted to know if taking more courses of medication changed the risk profile. The data showed that a single course of antibiotics was not statistically linked to an increased risk of ADHD. The association only became apparent with repeated use.

When mothers received two separate courses of antibiotics, the risk of their children developing ADHD rose. The risk appeared to increase further for those who received three or more courses. This finding hints at a potential cumulative effect. It suggests that more frequent disruptions to the maternal microbiome might correspond to a higher probability of the neurodevelopmental outcome.

The researchers performed sensitivity analyses to test the strength of their conclusions. This process involves removing one study at a time from the calculations to ensure no single dataset is skewing the results. The findings remained stable throughout this process. This consistency suggests that the observed link is robust across the different study populations included.
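
This leave-one-out procedure can be sketched in a few lines: re-pool the estimate repeatedly, omitting one study each time, and check that the result stays within a narrow band. The study values below are hypothetical, matching the pooling sketch above.

```python
# Leave-one-out sensitivity check: re-pool the estimate nine times,
# dropping one study each run, and confirm the pooled value stays in
# a narrow band. Inputs are the same hypothetical values as above.
import numpy as np

log_hr = np.log([1.10, 1.22, 1.05, 1.18, 1.12, 1.30, 1.08, 1.15, 1.20])
se = np.array([0.085, 0.105, 0.089, 0.087, 0.095,
               0.106, 0.087, 0.093, 0.097])
w = 1.0 / se**2

for i in range(len(log_hr)):
    keep = np.arange(len(log_hr)) != i
    loo = np.exp(np.sum(w[keep] * log_hr[keep]) / np.sum(w[keep]))
    print(f"Without study {i + 1}: pooled HR = {loo:.2f}")
```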

Despite these findings, the authors emphasize that the results must be interpreted with caution. The study design is observational. This means it can detect a correlation between two events, but it cannot prove that one caused the other. There are other factors that could explain the association.

The most prominent alternative explanation is the underlying infection itself. Women are prescribed antibiotics because they are sick. Infections trigger immune responses and inflammation in the body. It is possible that the maternal fever or inflammation affects fetal brain development, rather than the medication.

Some of the studies included in the analysis attempted to adjust for this factor. For instance, one study accounted for maternal infection and still found a link to the medication. However, not all studies could fully separate the effects of the illness from the effects of the cure. This remains a primary challenge in this field of research.

Another limitation of the analysis is the lack of detail regarding specific drugs. Antibiotics are a diverse class of medications. Different types of antibiotics target bacteria in different ways and have varying effects on the microbiome. The current data did not allow the researchers to determine if specific classes of drugs carried higher risks than others.

The study also lacked precise information on dosages. Without knowing the exact amount of medication taken, it is difficult to determine a precise biological threshold for risk. The researchers relied on prescription records and medical files. These records confirm a prescription was filled but do not always guarantee it was taken as directed.

The biological mechanisms remain theoretical. While animal studies have shown that antibiotics can alter behavior in mice by changing gut bacteria, this has not been definitively proven in humans. The pathway from maternal gut bacteria to fetal brain development is a subject of ongoing scientific inquiry.

The authors recommend that future research should be prospective in nature. This means designing studies that recruit pregnant women and follow them forward in time. Such studies should meticulously record the specific type, dosage, and duration of antibiotic use. This would allow for a much finer-grained analysis of the risks.

The researchers also suggest using advanced study designs to rule out genetic factors. Sibling comparisons can be a powerful tool. By comparing one sibling who was exposed to antibiotics to another who was not, scientists can control for shared genetics and household environments. This would help isolate the effect of the medication.

In clinical practice, antibiotics remain vital tools. The risks of leaving a bacterial infection untreated during pregnancy are well-documented and can be severe. The authors state that their findings should not discourage necessary treatment. Instead, they suggest the results highlight the need for prudent prescribing.

Physicians should continue to weigh the benefits and risks. The study supports the idea that antibiotics should be used only when clearly indicated. Avoiding unnecessary or repeated courses of these drugs may be beneficial. This aligns with general medical guidance regarding antibiotic stewardship.

The study, “Meta-analysis of the association between prenatal antibiotic exposure and risk of childhood attention-deficit/hyperactivity disorder,” was authored by Jiali Fan, Shanshan Wu, Chengshuang Huang, Dongqiong Xiao, and Fajuan Tang.

Subtle physical traits may hint at the biological roots of gender dysphoria

Researchers in Turkey have identified a potential biological link between early fetal development and the later emergence of gender dysphoria. The study indicates that adults diagnosed with gender dysphoria possess a higher frequency of subtle physical irregularities, known as minor physical anomalies, compared to cisgender individuals.

These physical traits develop during the initial stages of pregnancy and may serve as external markers for variations in brain development that occur during the same prenatal window. The research findings appear in the Journal of Sex & Marital Therapy.

The origins of gender dysphoria remain a subject of persistent scientific inquiry. Current theoretical models often divide potential causes into biological influences and psychosocial factors. A growing subset of neuroscience research examines whether the condition arises from variations in how the brain develops before birth. This perspective suggests that the biological pathways shaping the brain might also leave physical traces elsewhere on the body. This concept relies on the biological reality of fetal development.

During the early weeks of gestation, the human embryo consists of distinct tissue layers. One of these layers, the ectoderm, eventually differentiates to form both the skin and the central nervous system. Because these systems share a common embryological origin, disruptions or variations affecting one system often impact the other. Scientists have previously utilized this connection to study conditions such as schizophrenia and autism spectrum disorder. The presence of minute physical irregularities is often interpreted as a record of developmental stability in the womb.

These irregularities are classified as minor physical anomalies. They are slight deviations in morphology that do not cause medical problems or cosmetic concerns. Examples include low-set ears, specific hair whorl patterns, or a high arch in the palate. These features form primarily during the first and second trimesters of pregnancy. This timeframe overlaps with critical periods of fetal brain architecture formation. By quantifying these traits, researchers attempt to estimate the degree of neurodevelopmental deviation that occurred prior to birth.

Psychiatrist Yasin Kavla and his colleagues at Istanbul University-Cerrahpasa sought to apply this framework to the study of gender identity. They reasoned that if gender dysphoria has a neurodevelopmental basis, individuals with the diagnosis might exhibit these physical markers at higher rates than the general population. The team designed a case-control study to test this hypothesis. They aimed to determine if there is a measurable difference in the prevalence of these anomalies between transgender and cisgender adults.

The investigators recruited 108 adults diagnosed with gender dysphoria. These participants were patients at a university clinic who had not yet undergone hormonal or surgical gender-affirming treatments. The exclusion of individuals on hormone therapy was necessary to ensure that any observed physical traits were congenital rather than acquired. The group included 60 individuals assigned female at birth and 48 assigned male at birth. Most participants in this group reported experiencing gender dysphoria since early childhood.

For comparison, the researchers recruited a control group of 117 cisgender individuals. This group consisted of people who sought administrative health documents from the hospital. The control group included 60 females and 57 males who reported attraction to the opposite sex. The researchers implemented strict exclusion criteria for the control group. They removed any potential candidates who had a personal or family history of neurodevelopmental disorders, such as autism or attention deficit hyperactivity disorder.

Two psychiatrists examined each participant using the Waldrop Minor Physical Anomaly Scale. This assessment tool is a standardized method for evaluating 18 specific physical features across six body regions. The regions include the head, eyes, ears, mouth, hands, and feet. To ensure objectivity, the examiners used precise tools like calipers and tape measures for items requiring specific dimensions. They looked for specific signs such as a curved fifth finger, a gap between the first and second toes, or asymmetrical ears.
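
Conceptually, the scoring amounts to tallying the presence of each item within its body region and summing into regional and total scores. The sketch below is a simplified illustration built from items named in the article; the actual Waldrop scale assigns weighted scores to some items rather than simple presence codes.

```python
# Simplified illustration of tallying a Waldrop-style score: binary
# presence codes summed within each of the six body regions and into
# a total. Items are examples named in the article; the real scale
# weights some items rather than coding simple presence.
from collections import defaultdict

# (region, item) -> 1 if the anomaly was observed, else 0
exam = {
    ("head", "fine electric hair"): 0,
    ("eyes", "epicanthus"): 1,
    ("ears", "low-set ears"): 0,
    ("mouth", "furrowed tongue"): 1,
    ("hands", "curved fifth finger"): 1,
    ("feet", "gap between first and second toes"): 0,
}

regional = defaultdict(int)
for (region, _item), present in exam.items():
    regional[region] += present

total = sum(regional.values())
print(dict(regional), "total =", total)
```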

The analysis revealed distinct differences between the groups regarding the total number of anomalies. Individuals diagnosed with gender dysphoria had higher total scores for physical anomalies compared to the cisgender control group. This trend held true for both those assigned female at birth and those assigned male at birth. The data suggests a generalized increase in these developmental markers among the transgender participants. The researchers then broke down the data by specific body regions to identify patterns.

The disparity was most evident in the craniofacial region. This area includes the head, eyes, ears, and mouth. Both groups of transgender participants showed elevated scores in this region relative to the cisgender participants. Specific anomalies appeared more frequently in the gender dysphoria group. These included a furrowed tongue and skin folds covering the inner corner of the eye, known as epicanthus. The study notes that the face and brain exert reciprocal influences on each other during early embryogenesis.

The researchers also examined peripheral anomalies located on the hands and feet. Participants assigned female at birth showed higher scores in this category than both cisgender males and females. The results for participants assigned male at birth were more nuanced. Their peripheral scores were not statistically distinct from cisgender males. However, their scores were higher than those of the cisgender female control group. This suggests that the distribution of these traits may vary based on biological sex as well as gender identity.

Another measurement taken was head circumference. The study found that individuals assigned male at birth had larger head circumferences than those assigned female at birth, regardless of gender identity. There was no statistical difference in head size between cisgender males and transgender women. Similarly, there was no statistical difference between cisgender females and transgender men. This specific metric appeared to align with biological sex rather than gender identity or developmental instability.

The authors interpret these findings as support for a neurodevelopmental etiology of gender dysphoria. They propose that genetic and environmental factors in the womb likely drive the observed patterns. The presence of craniofacial anomalies specifically points to developmental variations occurring in the first two trimesters. This timing aligns with the period when the brain undergoes sexual differentiation. The findings challenge the notion that gender dysphoria is purely a psychosocial phenomenon.

However, the authors note several limitations that contextualize their results. The control group excluded anyone with a history of neurodevelopmental disorders. This exclusion might have artificially lowered the average anomaly score for the cisgender group. A control group including such histories might have produced different comparisons. Comparing the gender dysphoria group to a clinical psychiatric control group would clarify if these high scores are unique to gender dysphoria.

Additionally, the examiners could not be fully blinded to the participants’ gender presentation. This visibility might have introduced unconscious bias during the physical measurements. The study population also came from a single tertiary care center in Turkey. This sample may not represent the global diversity of gender-diverse individuals. Cultural and genetic background can influence the baseline prevalence of certain minor physical anomalies.

Sexual orientation represents another variable to consider. The majority of the transgender participants in the study were attracted to their same biological sex. The cisgender control group consisted entirely of heterosexual individuals. Future investigations would benefit from including cisgender control groups with same-sex attractions. This would help researchers isolate gender identity from sexual orientation as the primary variable.

The study concludes that minor physical anomalies are more prevalent in this specific cohort of individuals with gender dysphoria. This suggests that the biological roots of the condition may lie in early prenatal development. The authors emphasize that these anomalies are likely markers of underlying genetic or epigenetic processes. They call for future research to integrate genetic analysis to map the specific pathways involved.

The study, “Minor Physical Anomalies as a Gateway to Understanding the Neurodevelopmental Roots of Gender Dysphoria,” was authored by Yasin Kavla, Tuncay Sandıkçı, and Şenol Turan.

Outrage at individual bigotry may undermine support for systemic racial justice

Recent psychological research suggests that for some White Americans, expressing anger at individual acts of racism may actually decrease their motivation to support broader systemic change. The study indicates that voicing outrage at a specific bigot can serve as a psychological release that alleviates feelings of guilt associated with racial privilege, thereby reducing the drive to take further reparative action. These findings were published in the Personality and Social Psychology Bulletin.

The year 2020 saw a global surge in protests following high-profile incidents of police violence against Black individuals. This period introduced many Americans to concepts such as structural racism and White privilege. White privilege refers to the unearned societal advantages that White individuals experience simply due to their racial identity.

Psychological theory posits that acknowledging these unearned advantages can be psychologically threatening to a person’s moral self-image. This awareness often triggers a specific emotion known as White collective guilt. This is not guilt over one’s own personal actions, but rather distress arising from the advantages one receives at the expense of another group.

Psychologists have previously established that this type of guilt can be a powerful motivator. It often drives individuals to support policies or organizations aimed at restoring equity. However, the discomfort of guilt also motivates people to find ways to reduce the negative feeling.

Zachary K. Rothschild, a researcher at Bowdoin College, sought to understand how this dynamic plays out in the age of viral news stories. Rothschild and his colleague, Myles Hugee, investigated whether focusing anger on a specific “bad apple”—an individual acting in a clearly racist manner—might function as a defense mechanism.

The researchers proposed that expressing outrage at a third party could allow individuals to separate themselves from the problem of racism. By condemning a specific bigot, a person reaffirms their own moral standing. This “moral cleansing” might satisfy the internal need to address the threat of racism, leaving the individual with less motivation to contribute to solving systemic issues.

To test this hypothesis, the researchers conducted three separate experiments involving White American adults. The first study involved 896 participants recruited through an online platform. The team first measured the participants’ “justice sensitivity,” which is a personality trait reflecting how strongly a person reacts to unfairness faced by others.

The researchers then manipulated whether the participants felt a sense of racial privilege. Half of the group completed a survey designed to make them think about the unearned advantages they possess as White Americans. The other half completed a control survey about the privileges of being an adult.

Following this, all participants read a news story based on real events. The article described a White woman falsely accusing a Black man of threatening her and then assaulting him. This scenario was chosen to mirror viral incidents that typically spark public anger.

After reading the story, the researchers divided the participants again. One group was asked to write a short paragraph expressing their feelings about the woman in the story. This gave them an opportunity to vent their outrage. The other group was asked to write an objective summary of the events, denying them the chance to express emotion.

Finally, the researchers gave the participants a bonus payment for completing the study and offered them the option to donate some or all of this money to the National Association for the Advancement of Colored People (NAACP). This donation served as a concrete measure of their willingness to address racial inequity.

The results revealed a specific pattern among participants who scored low in justice sensitivity. For these individuals, being reminded of their White privilege increased their feelings of guilt. If they were not given the chance to express outrage, this guilt drove them to donate more money to the NAACP.

However, the dynamic changed for those who were allowed to vent their anger. Among the low justice sensitivity participants, the opportunity to express outrage at the woman in the story completely eliminated the privilege-induced increase in donations. The act of condemning the individual racist appeared to neutralize the motivating power of their collective guilt.

This effect was not present among participants who scored high in justice sensitivity. For those individuals, the motivation to support racial justice appeared to be intrinsic. Their donations were less dependent on momentary feelings of guilt or the opportunity to express outrage.
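To make the analytic logic concrete, the pattern described above is what a three-way interaction in a moderated regression would capture: donation amount predicted by the privilege manipulation, the outrage-expression manipulation, justice sensitivity, and their product terms. The sketch below is only an illustration of that kind of model on simulated data; the variable names, the simulated effect, and the use of statsmodels are assumptions, not the authors' actual analysis.

```python
# Illustrative three-way moderated regression of the kind described above.
# Simulated data and hypothetical variable names; not the authors' code.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 896  # sample size reported for Study 1

df = pd.DataFrame({
    "privilege": rng.integers(0, 2, n),   # 1 = White-privilege salience, 0 = control survey
    "outrage": rng.integers(0, 2, n),     # 1 = vented outrage, 0 = objective summary
    "justice_sens": rng.normal(0, 1, n),  # standardized justice-sensitivity score
})
# Toy outcome built to mimic the reported pattern: privilege salience boosts
# donations for low-sensitivity participants only when outrage was not vented.
df["donation"] = (
    2.0
    + 0.8 * df["privilege"] * (1 - df["outrage"]) * (df["justice_sens"] < 0)
    + rng.normal(0, 1, n)
).clip(lower=0)

# The privilege:outrage:justice_sens term carries the key hypothesis test.
model = smf.ols("donation ~ privilege * outrage * justice_sens", data=df).fit()
print(model.summary().tables[1])
```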

The second study, involving 1,344 participants, aimed to determine if this effect was specific to racial issues. The researchers followed a similar procedure but introduced a variation in the news story. Half the participants read the original story about a White woman and a Black man. The other half read a modified version where both the perpetrator and the victim were White.

The researchers found that expressing outrage reduced donations only when the injustice was racial in nature. When the story involved White-on-White conflict, expressing anger did not lower the donation amounts. This suggests that the “moral cleansing” function of outrage is specific to the domain where the person feels a moral threat.

The third study was designed to address potential limitations of the first two. The researchers recruited 1,133 participants and used a more controlled method to measure outrage. Instead of an open-ended writing task, participants in the “expression” condition completed a survey explicitly rating their anger at the perpetrator’s racism.

The researchers also changed the outcome measure to something more substantial than a small donation. They presented participants with a campaign by the American Civil Liberties Union (ACLU) focused on systemic equality. Participants could choose to sign a pledge and select specific volunteer activities they would commit to over the coming year.

The findings from the third study replicated the earlier results. For participants with low justice sensitivity, being reminded of White privilege increased their willingness to volunteer for the ACLU. However, if these participants were first given the opportunity to report their outrage at the individual racist, their willingness to volunteer dropped significantly.

The study provides evidence for what the authors call “defensive outrage.” It suggests that for some people, participating in the public condemnation of racist individuals may serve a self-serving psychological function. It allows them to feel that they have handled their moral obligation, thereby reducing their engagement with the more difficult work of addressing systemic inequality.

There are several caveats to consider regarding this research. The participants were recruited online, which may not perfectly represent the general population. Additionally, the third study relied on self-reported intentions to volunteer, which does not always guarantee that the participants would follow through with the actions.

The study focused exclusively on White Americans. The psychological dynamics of guilt and outrage may function differently in other racial or ethnic groups. Future research would need to investigate whether similar patterns exist in different cultural contexts or regarding other types of social inequality.

The authors note that these findings should not be interpreted to mean that all outrage is counterproductive. For many people, anger is a genuine fuel for sustained activism. The study specifically highlights a mechanism where outrage replaces, rather than complements, constructive action among those who are less naturally inclined toward justice concerns.

The study, “Demotivating Justice: White Americans’ Outrage at Individual Bigotry May Reduce Action to Address Systematic Racial Inequity,” was authored by Zachary K. Rothschild and Myles Hugee.

Consumption of common mineral associated with lower risk of suicidal thoughts

Increased intake of dietary selenium is associated with a lower likelihood of reporting suicidal thoughts among American adults. A recent analysis of population health data indicates that as consumption of this trace mineral rises, the odds of experiencing suicidal ideation decrease. These findings were published in the Journal of Affective Disorders.

Suicide remains a persistent public health challenge around the world. Public health officials and medical professionals prioritize identifying early warning signs to prevent tragic outcomes. Suicidal ideation, characterized by thinking about self-harm or ending one’s life, is a primary indicator of future suicide attempts.

Most prevention strategies currently focus on psychological and social risk factors. Mental health professionals typically look for signs of depression, anxiety, or social isolation. However, researchers are increasingly investigating how physical health and nutrition influence psychiatric well-being.

Trace minerals play specific roles in brain function and mood regulation. Selenium is one such essential element. It is found naturally in soil and appears in foods such as nuts, seafood, meats, and whole grains.

The body utilizes selenium to create selenoproteins. These proteins help manage oxidative stress and regulate the immune system. Previous research has hinted at a link between low selenium levels and mood disorders like depression.

Haobiao Liu of Xi’an Jiaotong University and Zhuohang Chen of Fudan University sought to explore this connection specifically regarding suicidal ideation. They noted that prior studies on trace elements and suicide yielded inconsistent results. Some earlier investigations were limited by small participant numbers or specific demographic focuses.

Liu and Chen designed their study to analyze a much larger and more representative group of people. They utilized data from the National Health and Nutrition Examination Survey (NHANES). This program collects health and nutritional information from a cross-section of the United States population.

The researchers aggregated data from survey cycles spanning from 2005 to 2016. They applied strict exclusion criteria to ensure the reliability of their dataset. For example, they removed individuals with implausible daily calorie counts to avoid data errors.

The final analysis included 23,942 participants. To assess what these individuals ate, the survey employed a dietary recall interview. Participants described all food and beverages consumed over the preceding 24 hours.

Interviewers conducted two separate recalls for each participant to improve accuracy. The first took place in person, and the second occurred via telephone several days later. The researchers calculated average daily selenium intake from these two reports.

To measure mental health outcomes, the study relied on the Patient Health Questionnaire (PHQ-9). This is a standard screening tool used by doctors to identify depression. The researchers focused specifically on the ninth item of this questionnaire.

This specific question asks participants if they have been bothered by thoughts that they would be “better off dead” or of hurting themselves. Respondents answered based on their experience over the previous two weeks. Those who reported having these thoughts for several days or more were classified as having suicidal ideation.
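To illustrate how such variables might be coded, the sketch below averages two 24-hour recalls into a usual-intake estimate, applies an implausible-calorie filter, and binarizes PHQ-9 item 9 the way the article describes. The column names and calorie thresholds are assumptions for illustration, not actual NHANES variable names or the authors' exact criteria.

```python
# Hypothetical sketch of the dietary and outcome coding described above.
# Column names and thresholds are illustrative, not NHANES conventions.
import pandas as pd

df = pd.DataFrame({
    "selenium_day1_mcg": [95.0, 40.0, 150.0],   # first recall (in person)
    "selenium_day2_mcg": [105.0, 60.0, 130.0],  # second recall (telephone)
    "kcal": [2100, 300, 2600],                  # reported daily energy intake
    "phq9_item9": [0, 2, 0],                    # 0 = not at all .. 3 = nearly every day
})

# Average the two recalls to estimate usual daily selenium intake.
df["selenium_mcg"] = df[["selenium_day1_mcg", "selenium_day2_mcg"]].mean(axis=1)

# Exclude implausible energy intakes (cutoffs here are made up).
df = df[df["kcal"].between(500, 5000)]

# Item 9 asks about thoughts of being "better off dead" or of self-harm;
# an answer of "several days" (1) or more is coded as suicidal ideation.
df["ideation"] = (df["phq9_item9"] >= 1).astype(int)
print(df[["selenium_mcg", "ideation"]])
```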

The researchers used statistical models to look for associations between selenium levels and these reported thoughts. They accounted for various confounding factors that could skew the results. These included age, gender, income, body mass index, and overall diet quality.

The analysis showed an inverse relationship. Participants with higher levels of dietary selenium were less likely to report suicidal ideation. This association persisted even after the researchers adjusted for the other demographic and health variables.

The researchers calculated the change in risk based on units of selenium intake. In their fully adjusted model, they found that a specific unit increase in intake corresponded to a 41 percent decrease in the odds of suicidal ideation. This suggests a strong statistical link between the nutrient and mental health status.
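For readers unfamiliar with odds-based effect sizes, a 41 percent decrease in odds corresponds to an odds ratio of 0.59, which in a logistic regression is a coefficient of ln(0.59) ≈ −0.53 per unit of intake. The sketch below shows what such an adjusted model looks like on simulated data; the covariates and the built-in effect size are assumptions chosen to match the reported figure, not the authors' analysis.

```python
# Minimal sketch of an adjusted logistic regression for suicidal ideation.
# Simulated data with a built-in odds ratio of 0.59; not the authors' model.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 23_942  # analytic sample size reported in the study

df = pd.DataFrame({
    "selenium": rng.normal(0, 1, n),  # standardized intake (one "unit")
    "age": rng.normal(0, 1, n),       # stand-ins for the adjustment variables
    "bmi": rng.normal(0, 1, n),
})
beta = np.log(0.59)  # a 41% drop in odds per unit implies coefficient ln(0.59)
p = 1 / (1 + np.exp(-(-3.0 + beta * df["selenium"])))
df["ideation"] = rng.binomial(1, p)

X = sm.add_constant(df[["selenium", "age", "bmi"]])
fit = sm.Logit(df["ideation"], X).fit(disp=0)
print(np.exp(fit.params["selenium"]))  # recovered odds ratio, close to 0.59
```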

To understand the trend better, the researchers divided participants into four groups based on intake levels. These groups are known as quartiles. The first quartile had the lowest selenium intake, while the fourth had the highest.

Comparing these groups revealed a consistent pattern. Individuals in the top three groups all had a lower risk of suicidal thoughts compared to the bottom group. The risk reduction was most pronounced in the group with the highest consumption.

The study also tested for a “dose-response” relationship. The analysis indicated a linear negative association. As the amount of selenium in the diet went up, the reports of suicidal thinking went down.
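Mechanically, a quartile comparison like this ranks participants by intake, cuts the sample at the 25th, 50th, and 75th percentiles, and compares each upper group's rate of ideation against the lowest group. The snippet below illustrates the procedure on made-up numbers; it is not the study's data.

```python
# Illustrative quartile (Q1-Q4) comparison of ideation rates by intake.
# Simulated values; only the procedure mirrors the study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
selenium = rng.lognormal(mean=4.6, sigma=0.4, size=10_000)  # toy mcg/day values
# Toy dose-response: probability of ideation falls as intake rises.
p = np.clip(0.08 - 0.0002 * (selenium - selenium.mean()), 0.01, 0.2)
ideation = rng.binomial(1, p)

df = pd.DataFrame({"selenium": selenium, "ideation": ideation})
df["quartile"] = pd.qcut(df["selenium"], 4, labels=["Q1", "Q2", "Q3", "Q4"])

# Ideation rate per quartile; Q1 (lowest intake) is the reference group.
print(df.groupby("quartile", observed=True)["ideation"].mean())
```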

The authors propose several biological reasons why this might happen. One theory involves oxidative stress. The brain is sensitive to damage from free radicals, and selenium-based enzymes help neutralize these threats.

Another potential mechanism involves inflammation. High levels of inflammation in the body are often found in people with depression and suicidal behaviors. Selenium has anti-inflammatory properties that might help protect the brain from these effects.

Neurotransmitters may also play a role. These are the chemical messengers that allow nerve cells to communicate. The study authors note that selenium might influence the regulation of serotonin and dopamine, which are critical for mood stability.

Despite these promising findings, the study has several limitations. The research was cross-sectional in design. This means it captured a snapshot of data at a single point in time rather than following people over years.

Because of this design, the study cannot prove that low selenium causes suicidal thoughts; it shows only that the two are statistically associated. It is possible that people who are depressed simply eat fewer nutrient-rich foods.

Another limitation is the reliance on memory for dietary data. It is difficult for people to remember exactly what they ate in the last 24 hours. This can lead to inaccuracies in the estimated nutrient intake.

The assessment of suicidal ideation also had constraints. Using a single question from a depression screener provides a limited view of a complex behavior. It does not capture the severity or duration of the thoughts in detail.

The researchers also acknowledged that individual biology varies. People absorb nutrients differently based on their genetics and gut health. The study could not account for how well each participant’s body utilized the selenium they consumed.

Future research is necessary to confirm these results. The authors suggest that prospective studies are needed. These would follow large groups of people over time to see if baseline selenium levels predict future mental health issues.

Clinical trials could also provide stronger evidence. In such studies, researchers would provide selenium supplements to some participants and placebos to others. This would help determine if increasing intake directly improves mental well-being.

Investigating the biological pathways is another priority. Scientists need to understand exactly how selenium interacts with brain chemistry. This could lead to new treatments or dietary recommendations for people at risk of suicide.

Until then, the findings add to a growing body of evidence linking diet to mental health. They highlight the potential importance of proper nutrition in maintaining psychological resilience. Public health strategies might one day include dietary optimization as part of suicide prevention efforts.

The study, “Does dietary selenium protect against suicidal ideation? Findings from a U.S. population study,” was authored by Haobiao Liu and Zhuohang Chen.
