
Volume reduction in amygdala tracks with depression relief after ketamine infusions

17 December 2025 at 01:00

Researchers have identified a specific structural change in the brain that appears to track with the relief of depressive symptoms following ketamine treatment. In a group of patients with treatment-resistant depression, a reduction in the volume of a specific sub-region of the amygdala was linked to a decrease in feelings of unhappiness and unease. These findings were published in the Journal of Psychiatric Research.

Major depressive disorder is a pervasive condition that affects millions of individuals globally. Standard treatments, such as selective serotonin reuptake inhibitors, are effective for many. However, roughly thirty percent of patients do not experience adequate relief even after trying multiple different medications. This condition is categorized as treatment-resistant depression.

Ketamine has emerged in recent years as a potent alternative for these difficult-to-treat cases. It is an anesthetic drug that functions differently from traditional antidepressants. Clinical trials have repeatedly demonstrated that it can provide rapid relief for severe depression. Despite its proven efficacy, the precise biological mechanisms by which ketamine alters the brain to improve mood remain largely unknown.

The amygdala is a small, almond-shaped structure located deep within the temporal lobes of the brain. It is widely recognized as a central hub for processing emotions, particularly fear and negative stimuli. In people suffering from depression, this brain region often exhibits excessive activity. Neuroscientists have long suspected that this hyperactivity contributes to the persistent negative emotional state associated with the disorder.

Previous research using functional magnetic resonance imaging has supported this idea. Studies have shown that ketamine administration can dampen this excessive activity in the amygdala. However, the relationship between the physical size, or volume, of the amygdala and the therapeutic effects of ketamine has been less clear.

Past attempts to measure amygdalar volume in depressed patients have yielded inconsistent results. Some studies reported shrinkage, while others reported enlargement. A potential reason for these discrepancies is that the amygdala is not a single, uniform object. It is a complex of multiple distinct nuclei, or subfields. These subfields have different cellular structures and connect to different parts of the brain.

The research team was led by Kengo Yonezawa and Shinichiro Nakajima from the Department of Neuropsychiatry at Keio University School of Medicine in Tokyo, Japan. They hypothesized that looking at the amygdala as a whole might obscure important changes occurring within its specific internal structures. They proposed that changes in the volume of specific subfields might correlate with how well a patient responds to ketamine.

To test this hypothesis, the investigators utilized data from a rigorous clinical trial. The study was a double-blind, randomized, placebo-controlled trial. This design is the gold standard in medical research because it minimizes bias. Participants were adults between the ages of 20 and 59 who had failed to respond to at least two different antidepressants.

The study enrolled 34 participants with treatment-resistant depression. These individuals were randomly assigned to receive either intravenous ketamine or a saline placebo. The infusions were administered twice a week for a period of two weeks.

The researchers used high-resolution magnetic resonance imaging to scan the brains of the participants. Scans were taken at two specific time points. The first scan occurred before the treatment began. The second scan took place approximately five to six days after the final infusion.

The team employed advanced software called FreeSurfer to analyze the brain images. This automated tool allowed them to digitally segment the amygdala into three distinct functional sub-regions. These were the laterobasal nuclei, the centromedial nuclei, and the superficial nuclei.

The laterobasal nuclei are considered the primary input centers of the amygdala. They receive sensory information from the cortex and other brain areas. The centromedial nuclei act as the output center, sending signals to the brainstem to trigger behavioral responses. The superficial nuclei are connected to the olfactory cortex.
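Segmentation tools report volumes for individual nuclei, which are then summed into composite sub-regions like the three described above. The sketch below illustrates that aggregation step; the nucleus names and volumes are hypothetical stand-ins, not FreeSurfer's actual output labels.

```python
# Illustrative grouping of per-nucleus amygdala volumes (mm^3) into the
# three functional sub-regions described above. Names and numbers are
# invented stand-ins, not actual FreeSurfer output labels.
nucleus_volumes = {
    "lateral": 680.0, "basal": 450.0, "accessory_basal": 260.0,
    "central": 55.0, "medial": 30.0,
    "cortical": 40.0, "anterior_amygdaloid_area": 60.0,
}

SUBREGIONS = {
    "laterobasal": ["lateral", "basal", "accessory_basal"],
    "centromedial": ["central", "medial"],
    "superficial": ["cortical", "anterior_amygdaloid_area"],
}

def subregion_volumes(volumes):
    """Sum per-nucleus volumes into the three composite sub-regions."""
    return {name: sum(volumes[n] for n in nuclei)
            for name, nuclei in SUBREGIONS.items()}

print(subregion_volumes(nucleus_volumes))
```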

Depressive symptoms were measured using the Montgomery-Åsberg Depression Rating Scale. The researchers looked at the total score, but they also broke the scores down into subdomains. These subdomains included dysphoria, which covers sadness and pessimistic thoughts; retardation, which covers lethargy and lack of feeling; and vegetative symptoms, such as sleep and appetite changes.

The analysis revealed a specific statistical interaction. In the group of patients who received ketamine, there was a positive association between the change in volume of the right laterobasal nuclei and the change in dysphoria scores. Specifically, as the volume of this brain region decreased, the patients’ reported feelings of sadness and unease also decreased.

This correlation was exclusive to the ketamine group. Patients in the placebo group did not show this relationship between brain structure and symptom improvement. This distinction suggests that the observation is not merely a general feature of feeling better. It implies a specific neurobiological effect induced by the drug.

The researchers did not find similar associations in the other subfields of the amygdala. The left side of the amygdala also did not show this specific correlation. The connection was isolated to the right laterobasal nuclei and the improvement of dysphoric symptoms.

These findings align with the theory that the amygdala is overactive in depression. The laterobasal nuclei receive inputs from the prefrontal cortex. This is the part of the brain responsible for higher-order thinking and regulation. In depression, the communication between the prefrontal cortex and the amygdala is often impaired.

The authors suggest that ketamine may help restore normal function to the prefrontal cortex. This restoration allows for better “top-down” control of the amygdala. The observed reduction in volume might represent a physical manifestation of this reduced hyperactivity. Essentially, as the region becomes less overactive, it may undergo subtle structural changes that reflect a more normalized state.

It is worth noting that the study did not find a significant difference in the average volume change between the ketamine group and the placebo group when looking at the participants as a whole. The effect was only visible when looking at the correlation with symptom improvement. This means ketamine did not simply shrink the amygdala in everyone. Rather, the shrinkage tracked with who got better.

There are several limitations to this study that require consideration. The sample size was relatively small. Due to logistical issues and dropouts, the final analysis included eleven patients in the ketamine group and fifteen in the placebo group. Small sample sizes can sometimes lead to results that are not reproducible in larger populations.

Another limitation is that the participants remained on their standard antidepressant medications during the trial. While this reflects real-world clinical practice, it introduces a variable. It is theoretically possible that the background medications influenced the brain structure in some way.

The study was also exploratory in nature. The researchers examined multiple brain regions and symptom scores. While they used statistical methods to validate their findings, larger studies are needed to confirm these results.

Additionally, the study did not include a healthy control group. Without healthy subjects for comparison, it is difficult to know if the amygdalar volumes in these patients were abnormal to begin with. It is also unclear whether the volume reduction represents a return to a “normal” size or a change to a new state.

The duration of the study was short. The second MRI scan was taken less than a week after the final treatment. It remains unknown whether these structural changes persist over time. It is also unclear if the volume of the amygdala would return to its previous state if the depressive symptoms recurred.

Despite these caveats, the research offers a new perspective on how ketamine treats depression. It moves beyond general ideas of “chemical imbalance” to look at specific structural changes in emotional processing centers. The identification of the right laterobasal nuclei as a region of interest provides a target for future investigations.

Understanding these biological markers is essential for the development of personalized medicine in psychiatry. If doctors can identify which brain structures need to change for recovery to occur, they may be able to predict which patients will respond to ketamine. This could spare patients from undergoing treatments that are unlikely to work for their specific biology.

The study, “The association between amygdalar volume changes and depressive symptom improvements after repeated ketamine infusion in treatment-resistant depression: a double-blind, randomized, placebo-controlled trial with the following open-label study,” was authored by Kengo Yonezawa, Shinichiro Nakajima, Nobuaki Hondo, Yohei Ohtani, Kie Nomoto-Takahashi, Taisuke Yatomi, Sota Tomiyama, Nobuhiro Nagai, Keisuke Kusudo, Koki Takahashi, Shiori Honda, Sotaro Moriyama, Takashige Yamada, Shinsuke Koike, Hiroyuki Uchida, and Hideaki Tani.

Couples share a unique form of contagious forgetting, new research suggests

16 December 2025 at 23:00

Couples often finish each other’s sentences. New research suggests they may also help edit each other’s memories. A study published in the Quarterly Journal of Experimental Psychology provides evidence that romantic partners synchronize their brain activity during storytelling. This neural alignment leads to a specific type of shared forgetting that does not occur between strangers.

The research indicates that the closeness of a relationship fundamentally alters how two people process information together. When one partner selectively remembers certain details of an event, the other partner tends to forget related but unmentioned details. This phenomenon suggests that memory is not just an individual archive but a collaborative system shaped by social bonds.

Psychologists have long understood that human memory is reconstructive rather than reproductive. When a person tries to recall a specific piece of information, their brain must actively select that target memory. In doing so, the brain suppresses competing memories that might interfere with the retrieval.

For example, if a person tries to remember “Fruit-Orange,” they may temporarily suppress the memory of “Fruit-Banana” to avoid confusion. This process is known as retrieval-induced forgetting. It is a standard mechanism that helps keep cognitive processes efficient and focused.

However, this pruning process is not confined to a single mind. Previous research has identified a phenomenon called socially shared retrieval-induced forgetting. This occurs when a listener experiences the same memory suppression as the speaker.

If a speaker recalls “Fruit-Orange,” a listener who is paying attention will also involuntarily suppress “Fruit-Banana.” Later, that listener will find it harder to recall “Banana” than if they had never heard the speaker at all. The current study aimed to see if this “contagious” forgetting is stronger between people who love each other.
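In this paradigm, the forgetting effect is scored by comparing recall of unpracticed items from practiced categories (often labeled Rp-) against baseline items from entirely unpracticed categories (Nrp). A minimal scoring sketch, using invented recall data:

```python
# Minimal sketch of scoring retrieval-induced forgetting (RIF).
# Rp- = unpracticed items from practiced categories (e.g. "Banana"),
# Nrp = baseline items from categories never practiced at all.
# The recall data below are invented for illustration.
def recall_rate(items):
    """Proportion of items recalled (True) on the final memory test."""
    return sum(items) / len(items)

rp_minus = [False, False, True, False, True, False]  # suppressed items
nrp      = [True, False, True, True, False, True]    # baseline items

# A positive RIF effect: baseline items recalled better than Rp- items.
rif_effect = recall_rate(nrp) - recall_rate(rp_minus)
print(round(rif_effect, 2))  # 0.33
```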

Huan Zhang, the study’s first author, conducted this research with colleagues at Tianjin Normal University in China. The team posited that romantic partners share a unique cognitive reality. Over time, couples develop aligned patterns of thinking and communicating.

The researchers hypothesized that this deep connection would make partners more susceptible to influencing each other’s memory systems. To test this, they designed a series of experiments involving heterosexual couples and pairs of strangers.

The first experiment focused on whether the type of memory mattered. The researchers recruited 38 adults forming 19 romantic couples. These participants had been in relationships for at least six months.

The researchers used cue words to trigger autobiographical memories. Some cues prompted “joint” memories, which were events the couple experienced together. Other cues prompted “non-joint” memories, which were private events unknown to the partner.

During the learning phase, participants studied these memories. Then, they entered a retrieval practice phase. One partner acted as the speaker and the other as the listener. The speaker practiced recalling specific details of the memories while the listener simply listened.

Later, both participants performed a final recall test individually. They tried to remember all the details associated with the original cues. This included the items the speaker practiced and the related items the speaker skipped.

The results showed a clear pattern of forgetting. Listeners struggled to recall the unmentioned details related to what the speaker had practiced. This happened for both shared memories and private memories. The connection between the partners seemed to facilitate this effect regardless of the content.

The researchers then expanded the study to include strangers. In a second experiment, they recruited 76 participants. These included 20 romantic couples and 18 pairs of strangers who were introduced just before the test.

To ensure a fair comparison, all pairs used non-joint memories. This eliminated the advantage couples might have from knowing their partner’s past. The procedure remained the same, with one person speaking and the other listening.

The findings revealed a divergence between the groups. Romantic partners again exhibited socially shared retrieval-induced forgetting. When the speaker recalled specific details, the listening partner forgot the related, unmentioned details.

In contrast, the pairs of strangers did not show this effect. The listeners in the stranger group did not experience significant memory suppression. This result differs from some prior studies that found effects among strangers, but it highlights the potential power of intimacy.

The researchers propose that romantic partners have a higher motivation to align their thinking. Listeners in a relationship may simulate the speaker’s retrieval process more intensely. This leads to the same suppression mechanisms triggering in the listener’s brain.

To understand the biology behind this, the team conducted a third experiment using functional near-infrared spectroscopy. This is a non-invasive imaging technique. It uses light to measure blood flow and oxygen levels in the brain.

The researchers focused on the prefrontal cortex. This brain region is associated with executive control and memory regulation. They attached sensors to the foreheads of 38 pairs of participants, comprising both couples and strangers.

The brain imaging data showed higher overall activation in the prefrontal cortex for romantic couples compared to strangers. This suggests that couples engaged more cognitive resources during the collaborative task.

More revealing was the analysis of neural synchronization. The researchers examined how the speakers’ and listeners’ brain signals, measured as changes in blood oxygenation, matched up over time. They found a high degree of interpersonal neural synchronization in the romantic pairs.

Specifically, the signals in the lateral prefrontal cortex of the listeners synced with those of the speakers. This synchronization was significantly stronger in couples than in stranger pairs. The brains of the partners effectively began to operate in rhythm.

The researchers then looked for a link between this brain activity and the memory test results. They found a statistical correlation: the stronger the neural synchronization between a couple, the more forgetting the listener experienced.

This suggests that the synchronization is not just a side effect of being together. It appears to be the mechanism that allows one partner’s memory process to reshape the other’s. The brain data explained about 10 percent of the variation in the forgetting effect.
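Explaining about 10 percent of the variance corresponds to a correlation coefficient of roughly 0.32 in magnitude, since variance explained (R squared) is the square of r. A one-line check:

```python
import math

# Variance explained (R^2) is the square of the correlation r, so
# "about 10 percent of the variation" implies |r| of roughly 0.32.
r = math.sqrt(0.10)
print(round(r, 2))  # 0.32
```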

The authors argue that this synchronization helps couples build a “shared reality.” By aligning what they remember and what they forget, partners maintain a coherent shared view of the world. This comes at the cost of losing some individual details.

There are caveats to these findings. The study participants were Chinese university students. Cultural factors regarding relationships and social influence could play a role. The results might differ in Western cultures where independence is often prioritized over social alignment.

The experimental setup was also artificial. Participants had fixed roles as speakers or listeners. Real conversations are dynamic, with partners swapping roles rapidly. Future research needs to examine these effects in naturalistic settings.

The imaging technique also has limitations. Functional near-infrared spectroscopy only measures the surface of the brain. It cannot reach deeper structures that might be involved in memory. It also has lower spatial resolution than MRI scans.

Despite these limitations, the study offers a new perspective on social cognition. It provides biological evidence that romantic love creates a neural link between partners. This link facilitates the updating and pruning of memories across two brains.

The findings imply that our memories are not entirely our own. Who we spend our time with helps determine what we remember and what we forget. In a romantic relationship, the price of harmony may be the loss of specific unshared details.

Future research aims to explore different types of relationships. It remains to be seen if close friends or family members show similar synchronization. The researchers also hope to see if this effect holds true for emotional memories versus neutral ones.

For now, the data suggests that becoming a couple involves a convergence of minds. This convergence is visible in the blood flow of the prefrontal cortex. It manifests as a synchronized reshaping of the past.

The study, “The role of romantic relationships in socially shared retrieval-induced forgetting: Cognitive and neural evidence,” was authored by Huan Zhang, Yuyao Chang, Shamali Ahati, Jiaying Pu and Tour Liu.


The mood-enhancing benefits of caffeine are strongest right after waking up

16 December 2025 at 17:00

Recent research suggests that the consumption of caffeinated beverages is linked to a measurable increase in positive feelings, particularly during the morning hours. While caffeine reliably lifts spirits, its ability to reduce negative emotions appears less consistent and does not depend on the time of day. These findings were detailed in a paper published in the journal Scientific Reports.

Caffeine is the most widely consumed psychoactive substance in the world. Estimates suggest that nearly 80 percent of the global population ingests it in some form. Common sources include coffee, tea, soda, and chocolate. Consumers often rely on these products to combat fatigue or improve their focus. Many also anecdotally report that a cup of coffee improves their general disposition.

Researchers have studied the effects of caffeine extensively in laboratory settings. These controlled environments have confirmed that the substance acts as a stimulant for the central nervous system. However, laboratories are artificial environments. They strip away the messy variables of daily life. They cannot easily account for social interactions, work stress, or the natural fluctuations of the biological clock.

Justin Hachenberger, a researcher at Bielefeld University in Germany, led a team to investigate these effects in the real world. The team sought to understand how caffeine interacts with an individual’s emotional state outside of the laboratory. They also wanted to see if factors like the time of day or social setting changed the outcome.

To understand the study, it is helpful to distinguish between “mood” and “affect.” In psychology, mood typically refers to a sustained emotional state that lasts for a long period. Affect refers to short-term, reactive emotional states. These are the immediate feelings a person experiences in response to a stimulus. The researchers focused specifically on momentary affect.

The biological mechanism behind caffeine is well understood. The substance acts as an adenosine antagonist. Adenosine is a chemical that accumulates in the brain throughout the day. It binds to specific receptors and slows down nerve cell activity. This process creates the sensation of drowsiness.

Caffeine mimics the shape of adenosine. It binds to the same receptors but does not activate them. This blocks the real adenosine from doing its job. By preventing this slowdown, caffeine allows stimulating neurotransmitters like dopamine to remain active. This leads to increased alertness and potentially improved feelings of well-being.

The researchers employed a technique known as the Experience Sampling Method. This approach involves asking participants to report on their experiences repeatedly throughout the day in their natural environments. This method reduces memory errors. Participants report what they are feeling right now rather than what they remember feeling yesterday.

The investigation consisted of two separate studies involving young adults. The first study tracked 115 participants for two weeks. The second tracked 121 participants for four weeks. Participants ranged in age from 18 to 29. They used smartphones to answer short surveys seven times a day.

In each survey, participants reported whether they had consumed any caffeinated beverages in the past 90 minutes. They also rated their current feelings. They used a sliding scale to indicate how enthusiastic, happy, or content they felt. These items combined to form a score for positive affect. They also rated how sad, upset, or worried they felt. These items formed a score for negative affect.
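Experience-sampling studies along these lines typically average the item ratings into composite scores per survey. A minimal sketch with invented slider values; the item names and the 0-100 scale are assumptions for illustration:

```python
# Composite momentary-affect scores from a single survey response.
# Item names and the 0-100 slider scale are illustrative assumptions.
POSITIVE_ITEMS = ["enthusiastic", "happy", "content"]
NEGATIVE_ITEMS = ["sad", "upset", "worried"]

def affect_scores(ratings):
    """Average positive and negative items into two composite scores."""
    pa = sum(ratings[i] for i in POSITIVE_ITEMS) / len(POSITIVE_ITEMS)
    na = sum(ratings[i] for i in NEGATIVE_ITEMS) / len(NEGATIVE_ITEMS)
    return pa, na

survey = {"enthusiastic": 70, "happy": 80, "content": 60,
          "sad": 10, "upset": 20, "worried": 30}
print(affect_scores(survey))  # (70.0, 20.0)
```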

The data showed a clear association between caffeine and positive feelings. In both studies, participants reported higher levels of enthusiasm and happiness after consuming caffeine. The statistical analysis accounted for sleep duration and sleep quality. This suggests the mood boost was not simply a result of being well-rested.

The timing of consumption played a major role in the intensity of this effect. The association between caffeine and positive affect was strongest in the first few hours after waking up. Specifically, the boost was most pronounced within 2.5 hours of rising.

This morning peak aligns with the concept of sleep inertia. This is the groggy transition period between sleep and full wakefulness. The researchers propose that caffeine may help individuals overcome this state more effectively. It helps jump-start the sympathetic nervous system. As the day progressed, the link between caffeine and positive feelings weakened.

The results regarding negative affect were different. The researchers hypothesized that caffeine would reduce feelings of sadness or worry. The data only partially supported this. A reduction in negative affect was observed in the second, longer study. It was not observed in the first study.

Unlike positive feelings, the reduction in negative feelings did not change based on the time of day. If caffeine helped mitigate sadness, it did so regardless of whether it was morning or evening. This suggests that the mechanisms driving positive and negative affect may differ.

The study also examined whether the context of consumption mattered. The researchers looked at whether participants were alone or with others. They also asked about levels of tiredness.

Tiredness acted as a moderator for the effect. Participants who felt more tired than usual experienced a greater increase in positive affect after consuming caffeine. This supports the common use of caffeine as a countermeasure against fatigue.

Social context also influenced the results. The link between caffeine and positive affect was weaker when participants were around other people. This finding is somewhat counterintuitive. One might expect socializing over coffee to boost mood further.

The authors suggest a “ceiling effect” might be at play. Social interaction often increases positive affect on its own. If a person is already feeling good because they are with friends, caffeine may not be able to push their positive feelings much higher. The chemical effect becomes less noticeable amidst the social stimulation.

The researchers also looked for differences based on individual traits. They collected data on participants’ habitual caffeine intake. They also screened for symptoms of anxiety and depression using standardized questionnaires.

Surprisingly, these individual differences did not alter the results. The relationship between caffeine and mood remained consistent across the board. Frequent consumers did not show a different pattern of emotional response compared to lighter users.

This challenges the “withdrawal reversal” hypothesis. Some scientists argue that caffeine only makes people feel better because it cures withdrawal symptoms. If that were the only factor, heavy users would experience a massive boost while light users would feel little. The consistency across groups suggests there may be a direct mood-enhancing effect beyond just fixing withdrawal.

Hachenberger noted this consistency in the press materials. He stated, “We were somewhat surprised to find no differences between individuals with varying levels of caffeine consumption or differing degrees of depressive symptoms, anxiety, or sleep problems. The links between caffeine intake and positive or negative emotions were fairly consistent across all groups.”

However, there are caveats to consider. The study relied on self-reports. While the sampling method is robust, it still depends on participant honesty and accuracy. The sample consisted entirely of young adults. The way an 18-year-old metabolizes caffeine may differ from that of an older adult.

Additionally, the study is observational. It shows a correlation but cannot prove causation. It is possible that people who are already in a good mood are more likely to seek out coffee. However, the use of within-person analysis helps control for this to some degree.
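Within-person analysis of this kind centers each variable on a participant's own mean, so associations reflect deviations from that person's typical state rather than differences between people. A minimal centering sketch, with invented observations:

```python
from statistics import mean

# Person-mean centering: subtract each participant's own average so that
# analyses compare moments within a person, not between people.
def center_within_person(observations):
    """observations: {participant_id: [values]} -> centered deviations."""
    return {pid: [v - mean(vals) for v in vals]
            for pid, vals in observations.items()}

# Invented caffeine indicators (1 = consumed in the last window).
caffeine = {"p1": [0, 1, 1, 0], "p2": [1, 1, 1, 1]}
print(center_within_person(caffeine))
```

Note that a participant who always consumes caffeine (like "p2" here) contributes no within-person variation, which is exactly why this approach separates habitual use from momentary effects.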

There is also the question of anxiety. High doses of caffeine can induce jitteriness and anxiety. The study did not find a link between caffeine and increased worry. However, the researchers note that individuals prone to caffeine-induced anxiety might avoid the substance entirely. These people would naturally exclude themselves from a study on caffeine consumption.

The researchers recommend future studies use more objective measures. Wearable technology could track heart rate and skin temperature. This would provide precise physiological data to match the psychological reports. Tracking the exact moment of consumption, rather than a 90-minute window, would also improve precision.

Understanding these daily fluctuations helps paint a clearer picture of human behavior. It moves the science of nutrition and psychology out of the lab and into the rhythm of daily life. For now, the data supports the habit of the morning coffee. It appears to be an effective tool for boosting positive engagement with the day, particularly in those first groggy hours.

The study, “The association of caffeine consumption with positive affect but not with negative affect changes across the day,” was authored by Justin Hachenberger, Yu-Mei Li, Anu Realo, and Sakari Lemola.

Formal schooling boosts executive functions beyond natural maturation

16 December 2025 at 03:00

Going to school helps children learn how to read and solve math problems, but it also appears to upgrade the fundamental operating system of their brains. A new analysis suggests that the structured environment of formal education leads to improvements in executive functions, which are the cognitive skills required to control behavior and achieve goals. These findings were published in the Journal of Experimental Child Psychology.

To understand why this research matters, one must first understand what executive functions are. Psychologists use this term to describe a specific set of mental abilities that allow people to manage their thoughts and actions. These skills act like an air traffic control system for the brain. They help a person pay attention, switch focus between tasks, and remember instructions.

There are three main components to this system. The first is working memory, which is the ability to hold information in your mind and use it over a short period. The second is inhibitory control. This is the ability to ignore distractions and resist the urge to do something impulsive. The third is cognitive flexibility. This allows a person to shift their thinking when the rules change or when a new problem arises.

Researchers have known for a long time that these skills get better as children get older. A seven-year-old is almost always better at sitting still and following directions than a four-year-old. The difficult question for scientists has been determining what causes this change. It is hard to tell if children improve simply because their brains are biologically maturing or if the experience of going to school actually speeds up the process.

This is the question that Jamie Donenfeld and her colleagues sought to answer. Donenfeld is a researcher at the University of Massachusetts Boston. She worked alongside Mahita Mudundi, Erik Blaser, and Zsuzsa Kaldy, who are also affiliated with the Department of Psychology at the same university. The team wanted to isolate the specific impact of the classroom environment from the natural effects of aging.

To do this, the researchers relied on a clever quirk of the educational system known as the school entry cutoff date. In many school districts, a child must turn five by a specific date, such as September 1, to enter kindergarten. This creates a natural experiment.

Consider two children who are practically the same age. One was born on August 31, and the other was born on September 2. The child born in August enters kindergarten. The child born in September must wait another year. By comparing these two groups, scientists can look at children who are virtually identical in biological maturity but have vastly different experiences with formal schooling.

The research team did not conduct a single new experiment with a specific group of children. Instead, they performed a meta-analysis. This is a statistical method that allows scientists to combine the results of many previous studies to find a common trend. They searched through databases for studies published between 1995 and 2023.

They started with over 400 potential studies. They screened these records to find ones that met strict criteria. The studies had to compare children of similar ages who had different levels of schooling. They also had to use objective measures of executive function.

The team ultimately identified 12 studies that fit all their requirements. These studies included data from 1,611 children in total. The participants ranged in age from about four and a half to nine years old. The studies covered various locations, including the United States, Germany, Israel, and Scotland.

By pooling the data from these different sources, the researchers calculated a standardized mean difference. This number represents the size of the “schooling effect.” The analysis revealed a small but consistent positive effect. The data showed that attending school does improve a child’s executive functions.

The improvement was not massive, but it was reliable. The researchers described the effect as modest. It suggests that the experience of school provides a unique boost to cognitive development that goes beyond just getting older.

The researchers also conducted a secondary analysis using the longitudinal studies in their set. These were studies that followed children over time. They compared two types of groups. The first group consisted of children who did not advance a grade level during the study period, such as those remaining in preschool. This group provided a baseline for how much executive function improves due to natural maturation alone.

The second group consisted of children who completed a grade, such as first grade, during the same timeframe. This group represented the combined effect of biological maturation plus the experience of schooling.

The results showed a clear difference. The children who experienced a year of schooling showed greater gains in executive functions than those who only grew a year older. The estimated effect size for the schooling group was higher than for the maturation-only group. This supports the idea that the classroom environment acts as a training ground for the brain.

It is important to consider why school has this effect. The authors argue that formal education places heavy demands on a child. Students must sit still for extended periods. They must listen to instructions from teachers. They have to wait their turn to speak. They must remember rules and complete tasks even when they are tired or bored.

This daily routine serves as an intense practice session for inhibitory control and working memory. The state of Massachusetts, for example, requires 900 hours of structured learning time per year. That is a massive amount of practice.

The authors compared this to commercial “brain training” games. Many companies sell video games that claim to improve cognitive skills. However, research has largely shown that these games do not work very well. Players get better at the specific game, but the skills do not transfer to real life.

The researchers suggest that school succeeds where these games fail because of the intensity and duration of the experience. A few hours of gaming cannot compare to hundreds of hours of managing one’s behavior in a social classroom setting. The context of school is immersive. It requires children to use their executive functions in real-world situations to achieve social and academic goals.

There are limitations to this study that should be noted. The number of studies included in the final analysis was relatively small. Finding research that strictly followed the cutoff-date design is difficult. This means the total pool of participants was not as large as it is in some medical meta-analyses.

The studies also used a wide variety of tasks to measure executive functions. Some used memory games involving numbers. Others used tasks where children had to sort cards by changing rules. Some tested inhibitory control by asking children to touch their toes when told to touch their head.

This variety makes it harder to compare results perfectly across different papers. The educational systems in the different countries also vary. Kindergarten in Switzerland might focus more on play than kindergarten in the United States. This could influence how much “training” the children actually receive.

The authors also noted that they could not examine specific transitions in detail. It is possible that the jump from preschool to kindergarten has a bigger impact than the move from first to second grade. The current data did not allow them to break down the results by specific grade levels with high precision.

Future research is needed to understand which parts of schooling are the most effective. It might be the structured curriculum. It might be the social interaction with peers. It might be the relationship with the teacher. Understanding the specific mechanisms could help educators design classrooms that better support cognitive development.

The researchers also point out that the tests used in these studies are laboratory tasks. They are artificial by nature. Future studies should try to measure how children use these skills in real-world scenarios. We need to know if better scores on a memory test translate to better behavior on the playground or at home.

The study, “School changes minds: A meta-analysis shows that schooling modestly improves children’s executive functions,” was authored by Jamie Donenfeld, Mahita Mudundi, Erik Blaser, and Zsuzsa Kaldy.

Recent LSD use linked to lower odds of alcohol use disorder

16 December 2025 at 00:00

Recent analysis of federal health data suggests that the recreational use of LSD is associated with a lower likelihood of alcohol use disorder. This finding stands in contrast to other psychedelic substances, whose use in the past year did not show a similar protective link. The results were published recently in the Journal of Psychoactive Drugs.

Alcohol use disorder affects millions of adults and stands as one of the most persistent public health challenges in the United States. The condition involves a pattern of alcohol consumption that leads to clinically significant distress or impairment. Individuals with this disorder often find themselves unable to control their intake despite knowing it causes physical or social harm. Standard treatments exist, but relapse rates remain high. Consequently, medical researchers are exploring alternative therapeutic avenues.

In recent years, attention has shifted toward the potential utility of psychedelic compounds. Substances such as psilocybin and MDMA have shown promise in controlled clinical trials for treating various psychiatric conditions. However, there is a substantial distinction between administering a drug in a hospital with trained therapists and taking a drug recreationally. James M. Zech, a researcher at Florida State University, sought to investigate this difference. Zech collaborated with Jérémie Richard from Johns Hopkins School of Medicine and Grant M. Jones from Harvard University.

The team aimed to determine if the therapeutic signals seen in small clinical trials would appear in the general population. They utilized data from the National Survey on Drug Use and Health. This government project recruits a representative group of American residents to answer detailed questions about their lifestyle and health. The researchers pooled data collected from 2021 through 2023. The final dataset included responses from 139,524 adults.

To ensure accuracy, the investigators did not simply look at who used drugs and who drank alcohol. They employed statistical models designed to account for confounding factors. They adjusted their calculations for variables such as age, biological sex, income, and education level. They also controlled for the use of other substances, including tobacco and cannabis. This process helped them isolate the specific relationship between psychedelics and alcohol problems.

The researchers assessed whether participants met the diagnostic criteria for alcohol use disorder within the past year. They also looked at the severity of the disorder by counting the number of symptoms reported. These symptoms range from experiencing cravings to neglecting responsibilities due to drinking.

The analysis revealed a distinct association regarding lysergic acid diethylamide, better known as LSD. Adults who reported using LSD in the past year were significantly less likely to meet the criteria for alcohol use disorder. The adjusted odds ratio indicated a 30 percent reduction in the odds of the disorder compared to non-users. Among those who did have the disorder, LSD users reported approximately 15 percent fewer symptoms.
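A 30 percent reduction in odds corresponds to an adjusted odds ratio of roughly 0.70. As a hedged illustration of what an odds ratio measures, here is the raw 2x2-table version with made-up counts (the study's actual estimate came from a regression adjusting for covariates, not a simple table):

```python
def odds_ratio(exposed_cases, exposed_noncases,
               unexposed_cases, unexposed_noncases):
    """Odds of the outcome among the exposed divided by the odds
    among the unexposed."""
    return ((exposed_cases / exposed_noncases) /
            (unexposed_cases / unexposed_noncases))

# Hypothetical counts: 70 of 1,000 LSD users vs. 100 of 1,000
# non-users meeting AUD criteria (illustrative, not the study's data)
or_value = odds_ratio(70, 930, 100, 900)
print(round(or_value, 2))  # 0.68, i.e. roughly 30% lower odds
```

In the study's models, the analogous quantity comes from exponentiating the logistic-regression coefficient for past-year LSD use after adjusting for age, sex, income, education, and other substance use.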

The study did not find the same pattern for other popular substances. The researchers analyzed the use of MDMA and ketamine over the same twelve-month period. Neither of these drugs showed a statistical association with the presence or absence of alcohol use disorder. This suggests that the potential protective effect observed with LSD might be specific to that compound or the context in which it is typically used.

A more complex picture emerged when the team examined lifetime usage histories. The survey asked participants if they had ever used certain drugs, even if they had not done so recently. Individuals who had used psilocybin or MDMA at any point in their lives were actually more likely to meet the criteria for alcohol use disorder in the past year. In contrast, lifetime use of DMT was linked to a lower probability of having the disorder.

These contradictory findings highlight the difficulty of interpreting observational data. The researchers propose several theories to explain why lifetime psilocybin use might track with higher alcohol problems while past-year LSD use tracks with lower ones. It is possible that individuals with existing substance use issues are more inclined to experiment with psilocybin.

Another possibility involves the nature of the psychedelic experience itself. While clinical trials optimize the setting to ensure a positive outcome, recreational use carries risks. The authors note that unsupervised trips can sometimes be distressing or psychologically destabilizing. If a person has a negative experience, they might increase their alcohol consumption as a way to cope with the resulting stress.

Conversely, the potential benefits of LSD could stem from psychological shifts often reported by users. Previous studies indicate that psychedelics can alter personality traits. Users often report increased “openness” and decreased “neuroticism” after a profound experience. If LSD facilitates such changes more reliably in naturalistic settings, it could theoretically reduce the psychological drivers of heavy drinking.

These results contribute to a growing body of literature that often points in different directions. For example, a survey of Canadian adults previously found that people self-reported large reductions in alcohol use after taking psychedelics. In that study, respondents specifically cited psilocybin as the most effective agent for change. The discrepancy between that survey and the current findings underscores the difference between self-perception and objective diagnostic criteria.

Clinical research has also provided evidence for the efficacy of psilocybin, provided it is administered professionally. A small trial conducted in Denmark tested a single high dose of psilocybin on patients with severe alcohol use disorder. In that experiment, patients received psychological support before and after the session. The clinicians observed a reduction in heavy drinking days and cravings.

The contrast between the clinical success of psilocybin and the negative association found in the general population data is noteworthy. It suggests that the element of therapy and professional guidance may be essential for achieving therapeutic outcomes. Without the safety net of a clinical setting, the risks of using these powerful substances may outweigh the benefits for some individuals.

There are some limitations to the current study that affect how the results should be viewed. The analysis is cross-sectional, meaning it captures a snapshot in time rather than following people forward. As a result, the researchers cannot prove that LSD causes a reduction in drinking. It is equally possible that people who choose to use LSD simply have different lifestyle patterns that protect them from alcohol addiction.

The study also faced constraints regarding the data available. The federal survey only asked about past-year use for a subset of drugs. For psilocybin, the survey only asked about lifetime use. This prevented the researchers from seeing whether recent psilocybin use might show a protective association similar to that of LSD. Additionally, the data relies on self-reporting. Participants may not always be truthful about their involvement with illegal substances or the extent of their alcohol consumption.

The researchers emphasize the need for longitudinal studies in the future. Tracking individuals over many years would clarify the order of events. It would show whether psychedelic use typically precedes a change in drinking behavior. The authors also suggest that future research should measure the dosage and frequency of use. Understanding whether a person took a substance once or heavily and repeatedly is necessary to fully understand the risks and benefits.

The study, “The Relationship Between Psychedelic Use and Alcohol Use Disorder in a Nationally Representative Sample,” was authored by James M. Zech, Jérémie Richard, and Grant M. Jones.


Authoritarian leadership linked to higher innovation in family-owned companies

15 December 2025 at 03:00

Top-down, commanding leadership is frequently viewed with skepticism in the modern business world. Management experts typically champion collaborative environments where employees feel free to share ideas without fear of retribution. A new study challenges the universality of this view. The findings suggest that in family-owned businesses, a strict, authoritarian leadership style can actually boost innovation.

This positive effect is particularly strong when family members feel a deep emotional connection to the company and when the business operates in an emerging economy. The research was published in the Journal of Small Business Management.

Family businesses face a unique set of challenges compared to their non-family counterparts. They must balance professional goals with personal relationships. Previous research into how these firms innovate has produced conflicting results. Some observers argue that family firms are too conservative and risk-averse to innovate effectively. Others contend that their long-term focus allows them to be more efficient with resources.

Chelsea Sherlock from Mississippi State University led the research team. Her co-authors included David R. Marshall, Clay Dibrell, and Eric Clinton. The team sought to resolve existing debates by looking at leadership styles. They specifically examined authoritarian leadership. This style is characterized by a leader who exerts absolute control over decisions and demands unquestioning obedience from subordinates.

In a general corporate setting, such heavy-handed management often crushes creativity. Employees may feel stifled or resentful. Sherlock and her colleagues proposed that family firms operate under a different psychological contract. In these organizations, the leader is often a matriarch or patriarch. Their authority is derived not just from a job title but from their position within the family unit.

The researchers hypothesized that this unique context changes how leadership impacts innovation. Innovation requires the rapid mobilization of resources. It often demands quick, decisive action. An authoritarian leader can cut through bureaucratic red tape. They can allocate funds and personnel without engaging in lengthy debates. The team believed this efficiency could drive new product development and service improvements.

To test this theory, the researchers utilized data from the Successful Transgenerational Entrepreneurship Project (STEP). This is a global survey of family business leaders. The final sample included 1,267 family firms from 56 different countries. The businesses were small to medium-sized enterprises with fewer than 500 employees. The study covered a diverse range of nations, separating them into emerging economies and advanced economies.

The survey asked CEOs to rate their firm’s innovativeness. Questions focused on their emphasis on research and development and their history of introducing new product lines. They also rated the level of authoritarian leadership within the firm. These questions assessed how much the leader retained decision-making authority and expected strict compliance.

A third key variable was emotional attachment. The researchers measured how strongly family members identified with the business. This concept reflects a sense of psychological ownership. In firms with high emotional attachment, the business is not just a source of income. It is a central part of the family’s identity and legacy.

The analysis revealed a positive relationship between authoritarian leadership and firm innovativeness. Contrary to popular management theories that favor flat hierarchies, the data showed that strict family leaders often drove their companies to be more innovative. The researchers suggest this is because authoritarian leaders in family firms are deeply committed to the business’s survival. They possess the power to force the organization to adapt and evolve.

This relationship was not uniform across all companies. The study found that emotional attachment played a vital moderating role. The positive effect of authoritarian leadership was significantly stronger in firms where the family felt a deep emotional bond.

When family members are emotionally invested, they are more likely to trust the leader’s intentions. They view the leader’s strict commands as necessary for protecting the family legacy. This trust reduces resistance. Family employees interpret top-down directives as focused decision-making rather than oppression. This alignment allows the firm to move quickly and cohesively toward innovative goals.

Conversely, in firms where emotional attachment was low, the benefits of authoritarian leadership were less apparent. Without that emotional buffer, strict control is more likely to breed resentment. If the family does not care deeply about the business, they may view an authoritarian leader as a tyrant rather than a guardian. This friction can stall progress and hinder the creative process.

The researchers also investigated how the economic environment influenced these dynamics. They distinguished between advanced economies, such as Germany and the United States, and emerging economies, such as Brazil and China. Emerging economies often lack robust institutional support structures. In these environments, the rule of law may be weaker, and resources may be scarcer.

The study found a specific “three-way interaction” between leadership, emotion, and economy. The combination of authoritarian leadership and high emotional attachment was most effective for innovation in emerging economies. In these unpredictable markets, a strong hand at the helm is often necessary to navigate external chaos.
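In regression terms, a "three-way interaction" means the product of the three predictors enters the model with its own coefficient. A toy sketch with entirely hypothetical coefficients, showing how a positive three-way term makes the leadership slope steepest when emotional attachment and an emerging-economy context are both present:

```python
def predicted_innovativeness(authoritarian, attachment, emerging,
                             b0=0.0, b1=0.2, b2=0.1, b3=0.05,
                             b12=0.15, b13=0.1, b23=0.05, b123=0.2):
    """Linear model with two- and three-way interaction terms.
    All coefficients are invented for illustration; a positive b123
    mirrors the study's finding that the leadership effect is
    amplified when attachment and emerging-economy context co-occur."""
    return (b0 + b1 * authoritarian + b2 * attachment + b3 * emerging
            + b12 * authoritarian * attachment
            + b13 * authoritarian * emerging
            + b23 * attachment * emerging
            + b123 * authoritarian * attachment * emerging)

# Slope of authoritarian leadership (0 -> 1) in an emerging economy,
# at high vs. low emotional attachment
high = predicted_innovativeness(1, 1, 1) - predicted_innovativeness(0, 1, 1)
low = predicted_innovativeness(1, 0, 1) - predicted_innovativeness(0, 0, 1)
print(high > low)  # True: the leadership effect is stronger with attachment
```

The actual study estimates such coefficients from survey data; the sketch only shows the structure of the model, not its fitted values.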

In an emerging economy, a family firm cannot always rely on external institutions for stability. They must rely on themselves. A strict leader provides direction. When that leadership is backed by a family united by strong emotional ties, the firm becomes a resilient, innovative unit. The family accepts the hierarchy because it ensures their collective survival and prosperity.

The results were different for firms in advanced economies with low emotional attachment. In countries with stable markets and strong institutions, the need for a “strongman” leader is less pronounced. If a family in an advanced economy lacks an emotional connection to the business, an authoritarian leader may actually hurt innovation. The rigidity of the leadership style conflicts with the cultural norms of autonomy common in these regions.

These findings suggest that there is no “one size fits all” approach to leading a family business. The effectiveness of a leadership style depends heavily on the internal culture of the family and the external economic reality. What works for a tight-knit family business in an emerging market might fail for a disconnected family firm in a developed nation.

Sherlock and her team noted several caveats to their work. The study relied on cross-sectional data. This means it captured a snapshot of these firms at a single point in time. It is impossible to definitively prove that authoritarian leadership caused the innovation. It is possible that innovative firms simply tend to adopt stricter leadership structures to manage their growth.

Additionally, the data relied on self-reports from CEOs. While this is common in management research, it introduces the possibility of bias. Leaders may perceive themselves or their firms more favorably than an objective observer would. The study also focused on small and medium-sized firms. The dynamics in massive, publicly traded family conglomerates could be entirely different.

The authors recommend that future research look at these relationships over time. A longitudinal study could track how changes in leadership style affect innovation rates in subsequent years. They also suggest exploring other leadership styles, such as servant leadership or participative leadership, to see how they interact with family dynamics.

This research offers a practical message for family business owners. It indicates that consolidating power is not inherently bad for business growth. However, this authority must be exercised in a way that resonates with the family. Leaders who wish to drive innovation through strict control must ensure they also cultivate the family’s emotional bond to the firm. Without that emotional buy-in, the strategy is likely to fail.

The study, “The bright side of authoritarian leadership in family firms: An emotional attachment perspective on innovativeness,” was authored by Chelsea Sherlock, David R. Marshall, Clay Dibrell, and Eric Clinton.

Sexual difficulties in eating disorders may stem from different causes in men and women

15 December 2025 at 01:00

The underlying causes of sexual difficulties may differ between men and women who experience symptoms of eating disorders, according to new research. While depression appears to be the primary driver of sexual challenges among women with these symptoms, eating disorder behaviors themselves play a more direct role for men. These findings were published in the International Journal of Sexual Health.

Sexual functioning is a fundamental aspect of human health and quality of life. It encompasses desire, arousal, and the ability to achieve orgasm. Problems in these areas can lead to lower psychological well-being and relationship dissatisfaction.

Previous research has established a clear link between eating disorders and sexual dysfunction. Individuals struggling with disordered eating often report higher rates of sexual dissatisfaction and physiological difficulties. This connection makes intuitive sense given that eating disorders involve severe disturbances in body image and physical health.

Hormonal imbalances caused by malnutrition can physically impede sexual response. Simultaneously, psychological factors such as body shame and anxiety about appearance can create mental barriers to intimacy. However, the exact nature of this relationship remains a subject of scientific inquiry.

A complicating factor is the presence of other mental health conditions. Anxiety and depression are highly common among people with eating disorders. These conditions are also well-known causes of sexual dysfunction on their own.

It has been difficult for researchers to determine if sexual problems are caused specifically by the eating disorder or by co-occurring depression and anxiety. Additionally, the vast majority of research on this topic has focused on women. There is a lack of data regarding how these dynamics play out in men.

To address these gaps, a team of researchers led by Maegan B. Nation undertook a comprehensive investigation. Nation is affiliated with the Department of Psychology at the University of Nevada Las Vegas. The team aimed to disentangle the effects of eating pathology from the effects of general distress.

The researchers sought to understand if eating disorder symptoms predict sexual problems when the influence of anxiety and depression is mathematically removed. They also aimed to compare these patterns across genders. This approach allows for a more precise understanding of which symptoms should be targeted in treatment.

The study recruited a large sample of undergraduate students from two public universities in the United States. The final analysis included 1,488 cisgender women and 646 cisgender men. Cisgender refers to individuals whose gender identity matches the sex they were assigned at birth.

Participants completed a series of online questionnaires. To assess eating disorder symptoms, the researchers used the Eating Disorder Examination Questionnaire. This tool measures behaviors such as dietary restraint and concerns regarding body shape and weight.

To evaluate sexual health, the team utilized the Medical Outcomes Study Sexual Functioning Scale. This measure asks participants to rate the severity of various problems. These issues include a lack of sexual interest, difficulty becoming aroused, inability to relax during sex, and difficulty reaching orgasm.

The researchers also administered a standard assessment for anxiety and depression. This allowed them to control for these variables in their statistical models. By doing so, they could isolate the unique contribution of eating disorder symptoms to sexual functioning.

The results revealed distinct patterns for men and women. Among the female participants, sexual functioning problems were quite common. Approximately 73 percent of women reported some level of difficulty.

The most frequent complaints among women were difficulty reaching orgasm and an inability to relax and enjoy sex. When the researchers ran their statistical models, they found an association between eating disorder symptoms and sexual problems.

However, once the researchers adjusted for anxiety and depression, the picture changed. For women, the direct link between eating disorder symptoms and sexual dysfunction became very weak. The effect sizes were small enough that they might not be clinically meaningful.

Instead, depression symptoms emerged as the stronger predictor of sexual difficulties in women. This suggests that the sexual problems often seen in women with disordered eating may actually be a byproduct of depressive symptoms. The eating disorder itself may not be the primary culprit for the sexual dysfunction.

The findings for men told a different story. About half of the male participants reported sexual functioning problems. The most common issues for men were a lack of sexual interest and an inability to relax.

For men, eating disorder symptoms continued to predict sexual dysfunction even after controlling for anxiety and depression. While the effect was small, it remained statistically relevant. This implies that for men, there is a unique pathway between disordered eating and sexual health that is independent of general mood.

The authors propose several explanations for this gender disparity. One possibility involves the drive for muscularity. Men with body image issues often strive for a hyper-muscular physique rather than thinness.

This specific drive might influence sexual self-esteem and functioning in ways that differ from the drive for thinness typically seen in women. It is also possible that men experience unique sociocultural pressures regarding sexual performance and body image. These pressures could interact with eating pathology to disrupt sexual function.

The results for women align with existing theories about the heavy impact of depression on libido and arousal. It reinforces the idea that treating depression could alleviate sexual side effects in women with eating disorders.

For men, the results suggest that clinicians should look specifically at eating behaviors and body image cognitions. Addressing depression alone might not fully resolve sexual issues for male patients.

The study also examined sexual attraction as a variable. The researchers found that sexual orientation was linked to different levels of functioning. Men who reported attraction to the same gender or multiple genders reported higher levels of sexual problems compared to heterosexual men.

Conversely, women who were exclusively attracted to women reported fewer sexual functioning problems than those attracted to men. This adds nuance to the understanding of how sexual orientation interacts with sexual health.

There are limitations to this study that warrant consideration. The sample consisted of undergraduate students rather than a clinical population. People with diagnosed, severe eating disorders might show different patterns.

The study was also cross-sectional. This means the data represents a single snapshot in time. Researchers cannot definitively say that one factor causes another, only that they are related.

It is possible that the relationship is bidirectional. Sexual problems could contribute to body dissatisfaction, or vice versa. Longitudinal research, which follows participants over time, would be needed to establish causality.

The researchers also noted that the study focused on cisgender individuals. The experiences of transgender and gender-diverse individuals were not analyzed due to sample size constraints. Given that gender-diverse people often face higher rates of eating disorders, this is an area for future investigation.

Despite these limitations, the study offers new insights. It challenges the assumption that the relationship between eating disorders and sex is the same for everyone. It highlights the importance of considering gender when assessing and treating these co-occurring issues.

Maegan Nation and her colleagues suggest that screening for sexual functioning problems should be a routine part of mental health care. For women, this might involve a closer look at depressive symptoms. For men, it might require a specific focus on body image and eating behaviors.

Future research should aim to replicate these findings in clinical settings. Studies involving older adults or community samples would also be beneficial. Understanding the mechanisms behind these associations could lead to more effective interventions.

This research underscores the complexity of human sexuality and its relationship to mental health. It serves as a reminder that broad assumptions often fail to capture individual experiences. By breaking down these associations by gender and accounting for mood disorders, scientists can develop more targeted treatments.

The study, “Sexual Functioning and Eating Disorder Symptoms: Examining the Role of Gender and Internalizing Symptoms in an Undergraduate Population,” was authored by Maegan B. Nation, Shane W. Kraus, Melanie Garcia, Nicholas C. Borgogna, and Kara A. Christensen Pacella.

Dim morning light triggers biological markers of depression in healthy adults

14 December 2025 at 15:00

Spending the morning hours in dim indoor lighting may cause healthy individuals to exhibit biological changes typically seen in people with depression. A study published in the Journal of Psychiatric Research indicates that a lack of bright light before noon can disrupt sleep cycles and hormonal rhythms. These physiological shifts suggest that dimly lit environments could increase a person’s vulnerability to mood disorders.

The human body relies on environmental cues to regulate its internal clock. This system is known as the circadian rhythm. It dictates when we feel alert and when we feel ready for sleep. The most powerful of these cues is light. When sunlight enters the eye, it signals a region of the brain called the suprachiasmatic nucleus. This brain region then coordinates hormone production and body temperature. In a natural setting, humans would experience bright light in the morning and darkness at night.

Modern life has altered this natural pattern. Many people spend the vast majority of their waking hours inside buildings. The artificial light in these spaces is often far less intense than natural daylight.

Jan de Zeeuw, Dieter Kunz, and their colleagues at St. Hedwig Hospital and Charité–Universitätsmedizin Berlin have spent years investigating this phenomenon. They describe this lifestyle as “Living in Biological Darkness.” Their previous research found that urban residents spend approximately half of their daytime hours in light levels lower than 25 lux. For comparison, a cloudy day outside might measure over 1,000 lux.

The researchers wanted to understand the specific consequences of this low-light lifestyle. They were particularly interested in how it affects the hypothalamic-pituitary-adrenal axis. This system controls the release of cortisol. Cortisol is often called the stress hormone. In a healthy person, cortisol levels peak early in the morning to help wake the body. These levels then gradually decline throughout the day and reach their lowest point in the evening. This rhythm allows the body to wind down for sleep.

In patients diagnosed with depression, this rhythm often malfunctions. Their cortisol levels frequently remain elevated throughout the day and into the evening. Another biological marker of depression involves specific changes in sleep architecture. Sleep is composed of different stages, including rapid eye movement, or REM, and deep slow-wave sleep.

Depressed patients often experience a shift in deep sleep from the beginning of the night to later cycles. The researchers aimed to see if dim light alone could induce these depression-like symptoms in healthy volunteers.

The study recruited twenty healthy young adults to participate in a controlled experiment. The group consisted of ten men and ten women with an average age of about twenty-four. To ensure accuracy, the participants maintained a consistent sleep schedule for a week before the testing began. The researchers monitored their adherence using wrist-worn activity trackers.

The participants were randomly divided into two groups. The experiment focused on the morning hours between 8:00 AM and 12:00 PM. For five days, one group spent these hours in a room with low-intensity incandescent lighting. This light measured 55 lux and had a warm, yellowish color temperature. This environment simulated a dimly lit living room or a workspace with poor lighting.

The second group spent the same morning hours in a room with higher-intensity fluorescent lighting. This light measured 800 lux and had a cooler, bluish tone. This intensity mimics a brightly lit office or classroom and served as the control condition. During the afternoons and evenings, participants left the laboratory and went about their normal lives. They returned to the lab for specific testing sessions.
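For context, the light levels quoted across the article can be compared directly. This minimal sketch uses only the lux figures mentioned in the text (the overcast-day value is the rough benchmark cited earlier, and "typical urban indoor exposure" is the team's previously reported figure):

```python
# Illuminance levels quoted in the article, in lux.
conditions = {
    "dim incandescent (experimental)": 55,
    "bright fluorescent (control)": 800,
    "typical urban indoor exposure": 25,
    "overcast day outdoors (approx.)": 1000,
}

dim = conditions["dim incandescent (experimental)"]
bright = conditions["bright fluorescent (control)"]

# The control condition was roughly 15 times brighter than the dim one,
# yet still well below outdoor daylight on a cloudy day.
print(f"bright/dim ratio: {bright / dim:.1f}")
```

Even the "bright" laboratory condition sits below the outdoor benchmark, which is the mismatch the researchers describe as "Living in Biological Darkness."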

The research team used several methods to track biological changes. They collected urine and saliva samples to measure hormone concentrations. They focused on cortisol and melatonin. They also utilized polysomnography to record sleep patterns. This involves placing sensors on the head to measure brain waves during the night. The team also assessed the participants’ mood and reaction times using standard psychological tests.

The findings revealed distinct differences between the two groups. The participants exposed to the dim incandescent light showed a disruption in their cortisol rhythms. Their cortisol levels were elevated in the late afternoon and evening. This elevation occurred at a time when the hormone should ideally be decreasing. Statistical analysis confirmed that the increase was significant rather than a random fluctuation. The result mirrors the blunted circadian rhythm often observed in depressive illnesses.

Sleep patterns in the dim light group also deteriorated. After repeated exposure to low morning light, these individuals slept for a shorter duration. On average, their total sleep time decreased by about twenty-five minutes. The internal structure of their sleep changed as well. Deep sleep is characterized by slow-wave activity in the brain. Typically, the bulk of this restorative sleep occurs in the first few cycles of the night.

In the dim light group, this slow-wave activity shifted. It decreased in the earlier part of the night and appeared more frequently in later sleep cycles. This delay in deep sleep is a known characteristic of sleep architecture in patients with depression. The participants in this group also reported feeling subjectively worse. They rated themselves as sleepier and sadder after days of low light exposure compared to the bright light group.

The group exposed to the brighter fluorescent light did not show these negative markers. Their cortisol levels followed a more standard daily curve. Their deep sleep remained anchored in the early part of the night. The researchers did note one specific change in this group. The bright light appeared to increase the amount of REM sleep they experienced toward the end of the night.

The study suggests that light intensity affects more than just vision. It serves as a biological signal that keeps the body’s systems synchronized. The “master clock” in the brain requires sufficient light input to function correctly. This input comes largely from specialized cells in the retina that are sensitive to blue light. Incandescent bulbs, like those used for the dim group, emit very little blue light. Fluorescent bulbs emit more of these wavelengths.

When the brain does not receive a strong morning light signal, the circadian system may weaken. This weakening can lead to a misalignment of internal rhythms. The researchers note that the suprachiasmatic nucleus has direct neural pathways to the adrenal glands. This connection explains how light—or the lack of it—can directly influence cortisol production.

The authors propose that the observed changes could represent a “vulnerability” to depression. The participants were healthy and did not develop clinical depression during the short study. However, their bodies began to mimic the physiological state of a depressed person. The combination of high evening cortisol and disrupted sleep creates a physical environment where mood disorders might more easily take root.

The researchers stated, “In healthy subjects repetitive exposure to low-intensity lighting during pre-midday hours was associated with increased cortisol levels over the day and delayed slow-wave-activity within nighttime sleep, changes known to occur in patients with depressive illnesses.”

They continued by noting the implications of these sleep changes. “Insomnia-like changes in sleep architecture shown here may pave the avenue to more vulnerability to depression and contribute to the understanding of pathophysiology in depressive illnesses.”

There are limitations to this study that should be considered. The sample size was relatively small, with only ten people in each group. A larger pool of participants would provide more robust data. The design compared two different groups of people rather than testing the same people under both conditions. This introduces the possibility that individual differences influenced the results.

Additionally, the researchers could not control the light exposure participants received after leaving the lab at noon. While the participants wore activity monitors, these devices cannot perfectly track light exposure. However, previous studies by the same team suggest that urban residents generally encounter low light levels throughout the day. It is plausible that the participants did not receive enough bright light in the afternoons to counteract the morning dimness.

Future research should investigate these effects over longer periods. A study lasting weeks or months could determine if these biological changes eventually lead to psychological symptoms. It would also be beneficial to test different light sources, such as LED lighting, which is now common. Understanding the specific wavelengths of light that best support the circadian rhythm is an ongoing area of scientific inquiry.

The findings carry practical implications for building design and public health. They suggest that the standard lighting found in many homes and offices may be insufficient for biological health. Increasing light levels during the morning could serve as a simple preventative measure. This might involve using brighter artificial lights or designing spaces that admit more daylight.

The concept of “Living in Biological Darkness” highlights a mismatch between human biology and the modern environment. Our bodies evolved to expect bright mornings. Depriving the brain of this signal appears to set off a chain reaction of hormonal and neurological disruptions. While a few days of dim light may not cause immediate harm, chronic exposure could erode mental resilience.

Jan de Zeeuw and his co-authors argue that it is time to reconsider how we light our indoor spaces. They suggest that integrating bright light into schools, workplaces, and nursing homes could improve overall health. By mimicking the natural rising of the sun, we may be able to stabilize our internal rhythms. This stabilization could protect against the physiological precursors of depression.

The study, “Living in biological darkness III: Effects of low-level pre-midday lighting on markers of depression in healthy subjects,” was authored by Jan de Zeeuw, Claudia Nowozin, Martin Haberecht, Sven Hädel, Frederik Bes, and Dieter Kunz.

Amphetamine overrides brain signals associated with sexual rejection

14 December 2025 at 03:00

Recent experimental findings suggest that d-amphetamine, a potent central nervous system stimulant, can override learned sexual inhibitions in male rats. The research demonstrates that the drug causes animals to pursue sexual partners they had previously learned to avoid due to negative reinforcement. These results, which highlight a disruption in the brain’s reward and inhibition circuitry, were published in the journal Psychopharmacology.

To understand the specific nature of this study, one must first look at how animals learn to navigate sexual environments. In the wild, animals must determine when it is appropriate to engage in mating behavior and when it is not. A male rat that attempts to mate with a female that is not sexually receptive will be rejected.

Over time, the animal learns to associate certain cues, such as scents or locations, with this rejection. This learning process is known as conditioned sexual inhibition. It serves an evolutionary purpose by preventing the male from wasting energy on mating attempts that will not result in reproduction.

Researchers have long sought to understand how recreational drugs alter this specific type of decision-making. While it is well documented that stimulants can physically enable or enhance sexual behavior, less is understood about how they affect the psychological choice to engage in sex when an individual knows they should not. Previous work has established that alcohol can dismantle this learned inhibition. The current research aimed to see if d-amphetamine, a drug with a very different chemical mechanism, would produce a similar result.

The research team was led by Katuschia Germé from the Centre for Studies in Behavioral Neurobiology at Concordia University in Montreal. The team also included Dhillon Persad, Justine Petit-Robinson, Shimon Amir, and James G. Pfaus. They designed an experiment to create a strong mental association in the subjects. They used male Long-Evans rats as the subjects for the experiment.

The researchers began by training the rats over the course of twenty sessions. This training took place in specific testing chambers. During these sessions, the males were exposed to two different types of female rats. Some females were sexually receptive and carried no added scent. Other females were not sexually receptive and were scented with an almond extract.

The male rats quickly learned the difference. They associated the neutral, unscented females with sexual reward. Conversely, they associated the almond scent with rejection and a lack of reward. After the training phase, the males would reliably ignore females that smelled like almond, even if those females were actually receptive. The almond smell had become a “stop” signal. This state represents the conditioned sexual inhibition that the study sought to investigate.

Once this inhibition was established, the researchers moved to the testing phase. They divided the rats into groups and administered varying doses of d-amphetamine. Some rats received a saline solution which served as a control group with no drug effect. Others received doses of 0.5, 1.0, or 2.0 milligrams per kilogram of body weight.
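Weight-based dosing like this scales the absolute amount of drug to each animal's size. The sketch below illustrates the arithmetic; the 400 g body weight is a hypothetical example chosen for illustration, not a figure reported in the study:

```python
# Doses used in the study, in mg of d-amphetamine per kg of body weight.
doses_mg_per_kg = [0.5, 1.0, 2.0]

# Hypothetical adult male rat weight (kg); the article does not report weights.
rat_weight_kg = 0.4

# Absolute dose = dose rate * body weight.
for dose in doses_mg_per_kg:
    absolute_mg = dose * rat_weight_kg
    print(f"{dose} mg/kg -> {absolute_mg:.2f} mg for a {rat_weight_kg * 1000:.0f} g rat")
```

Expressing doses per kilogram is what lets researchers compare effects across animals of different sizes, and across studies.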

The researchers then placed the male rats in a large open arena. This environment was different from the training cages to ensure the rats were reacting to the females and not the room itself. Two sexually receptive females were placed in the arena with the male. One female was unscented. The other female was scented with the almond extract.

Under normal circumstances, a trained rat would ignore the almond-scented female. This is exactly what the researchers observed in the group given the saline solution. These sober rats directed their attention almost exclusively toward the unscented female. They adhered to their training and avoided the scent associated with past rejection.

The behavior of the rats treated with d-amphetamine was distinct. Regardless of the dose administered, the drug-treated rats copulated with both the unscented and the almond-scented females. The drug had completely eroded the learned inhibition. The almond scent, which previously acted as a deterrent, no longer stopped the males from initiating copulation.

It is important to note that the drug did not simply make the rats hyperactive or indiscriminate due to confusion. The researchers tracked the total amount of sexual activity. They found that while the choice of partner changed, the overall mechanics of the sexual behavior remained competent. The drug did not create a chaotic frenzy. It specifically removed the psychological barrier that had been built during training.

Following the behavioral tests, the researchers investigated what was happening inside the brains of these animals. They utilized a technique that stains for the Fos protein. This protein is produced within neurons shortly after they have been active. By counting the cells containing Fos, scientists can create a map of which brain regions were working during a specific event.

To do this, the researchers re-exposed the rats to the almond odor while they were under the influence of the drug or saline. They did not include females in this phase. This allowed the team to see how the brain processed the cue of the almond scent in isolation.

The analysis revealed distinct patterns of brain activation. In the rats that received saline, the almond odor triggered activity in the piriform cortex. This is a region of the brain involved in processing the sense of smell. However, these sober rats showed lower activity in the medial preoptic area. This area is critical for male sexual behavior. This pattern suggests that the sober brain registered the smell and dampened the sexual control center in response.

The rats treated with d-amphetamine showed a reversal of this pattern. When exposed to the almond scent, these rats displayed increased activity in the nucleus accumbens. The nucleus accumbens is a central component of the brain’s reward system. It is heavily involved in processing motivation and pleasure.

The drug also increased activity in the ventral tegmental area. This region produces dopamine and sends it to the nucleus accumbens. The presence of the drug appeared to hijack the processing of the inhibitory cue. Instead of the almond smell triggering a “stop” signal, the drug caused the brain to treat the smell as a neutral or potentially positive stimulus.

The researchers noted that the activation in the nucleus accumbens was particularly telling. This region lights up in response to rewards. By chemically stimulating this area with d-amphetamine, the drug may have overridden the negative memory associated with the almond scent. The cue for rejection was seemingly transformed into a cue for potential reward.

The team also observed changes in the amygdala. This part of the brain is often associated with emotional processing and fear. The drug-treated rats showed different activity levels in the central and basolateral nuclei of the amygdala compared to the control group. This suggests that the drug alters the emotional weight of the memory.

These findings align with previous research conducted by this laboratory regarding alcohol. In prior studies, the researchers found that alcohol also disrupted conditioned sexual inhibition. The fact that two very different drugs—one a depressant and one a stimulant—produce the same behavioral outcome suggests they may act on a shared neural pathway.

The authors propose that this shared pathway likely involves the mesolimbic dopamine system. This is the circuit connecting the ventral tegmental area to the nucleus accumbens. Both alcohol and amphetamines are known to increase dopamine release in this system. This surge in dopamine appears to be strong enough to wash out the learned signals that tell an individual to stop or refrain from a behavior.

There are limitations to how these findings can be interpreted. The study was conducted on rats, and animal models do not perfectly replicate human psychology. The complexity of human sexual decision-making involves social and cultural factors that cannot be simulated in a rodent model. Additionally, the study looked at acute administration of the drug. The effects of chronic, long-term use might result in different behavioral adaptations.

The researchers also point out that while the inhibition was broken, the drug did not strictly enhance sexual performance. In fact, at the highest doses, some rats failed to reach ejaculation despite engaging in the behavior. This distinction separates the concept of sexual arousal from sexual execution. The drug increased the drive to engage but did not necessarily improve the physical conclusion of the act.

Future research will likely focus on pinpointing the exact chemical interactions within the amygdala and nucleus accumbens. Understanding the precise receptors involved could shed light on how addiction affects risk assessment. If a drug can chemically overwrite a learned warning signal, it explains why individuals under the influence often engage in risky behaviors they would logically avoid when sober.

The study provides a neurobiological framework for understanding drug-induced disinhibition. It suggests that drugs like d-amphetamine do not merely lower inhibitions in a vague sense. Rather, they actively reconfigure how the brain perceives specific cues. A stimulus that once meant “danger” or “rejection” is reprocessed through the reward system. This chemical deception allows the behavior to proceed unchecked.

The study, “Disruptive effects of d-amphetamine on conditioned sexual inhibition in the male rat,” was authored by Katuschia Germé, Dhillon Persad, Justine Petit-Robinson, Shimon Amir, and James G. Pfaus.

Survey reveals rapid adoption of AI tools in mental health care despite safety concerns

14 December 2025 at 01:00

The integration of artificial intelligence into mental health care has accelerated rapidly, with more than half of psychologists now utilizing these tools to assist with their daily professional duties. While practitioners are increasingly adopting this technology to manage administrative burdens, they remain highly cautious regarding the potential threats it poses to patient privacy and safety, according to the American Psychological Association’s 2025 Practitioner Pulse Survey.

The American Psychological Association represents the largest scientific and professional organization of psychologists in the United States. Its leadership monitors the evolving landscape of mental health practice to understand how professionals navigate changes in technology and patient needs.

In recent years, the field has faced a dual challenge of high demand for services and increasing bureaucratic requirements from insurance providers. These pressures have created an environment where digital tools promise relief from time-consuming paperwork.

However, the introduction of automated systems into sensitive therapeutic environments raises ethical questions regarding confidentiality and the human element of care. To gauge how these tensions are playing out in real-world offices, the association commissioned its annual inquiry into the state of the profession.

The 2025 Practitioner Pulse Survey targeted doctoral-level psychologists who held active licenses to practice in at least one U.S. state. To ensure the results accurately reflected the profession, the research team utilized a probability-based random sampling method. They generated a list of more than 126,000 licensed psychologists using state board data and randomly selected 30,000 individuals to receive invitations.

This approach allowed the researchers to minimize selection bias. Ultimately, 1,742 psychologists completed the survey, providing a snapshot of the workforce. The respondents were primarily female and White, which aligns with historical demographic trends in the field. The majority worked full-time, with private practice being the most common setting.

The survey results revealed a sharp increase in the adoption of artificial intelligence compared to the previous year. In 2024, only 29% of psychologists reported using AI tools. By 2025, that figure had climbed to 56%. The frequency of use also intensified. Nearly three out of 10 psychologists reported using these tools on at least a monthly basis. This represents a substantial shift from 2024, when only about one in 10 reported such frequent usage.

Detailed analysis of the data shows that psychologists are primarily using these tools to handle logistics rather than patient care. Among those who utilized AI, more than half used it to assist with writing emails and other materials. About one-third used it to generate content or summarize clinical notes. These functions address the administrative workload that often detracts from face-to-face time with clients.

Arthur C. Evans Jr., PhD, the CEO of the association, commented on this trend.

“Psychologists are drawn to this field because they’re passionate about improving people’s lives, but they can lose hours each day on paperwork and managing the often byzantine requirements of insurance companies,” said Evans. “Leveraging safe and ethical AI tools can increase psychologists’ efficiency, allowing them to reach more people and better serve them.”

Despite the utility of these tools for office management, the survey highlighted deep reservations about their safety. An overwhelming 92% of psychologists cited concerns regarding the use of AI in their field. The most prevalent worry, cited by 67% of respondents, was the potential for data breaches. This is a particularly acute issue in mental health care, where maintaining the confidentiality of patient disclosures is foundational to the therapeutic relationship.

Other concerns focused on the reliability and social impact of the technology. Unanticipated social harms were cited by 64% of respondents. Biases in the input and output of AI models worried 63% of the psychologists surveyed. There is a documented risk that AI models trained on unrepresentative data may perpetuate stereotypes or offer unequal quality of care to marginalized groups.

Additionally, 60% of practitioners expressed concern over inaccurate output or “hallucinations.” This term refers to the tendency of generative AI models to confidently present false or fabricated information as fact. In a clinical setting, such errors could lead to misdiagnosis or inappropriate treatment plans if not caught by a human supervisor.

“Artificial intelligence can help ease some of the pressures that psychologists are facing—for instance, by increasing efficiency and improving access to care—but human oversight remains essential,” said Evans. “Patients need to know they can trust their provider to identify and mitigate risks or biases that arise from using these technologies in their treatment.”

The survey data suggests that psychologists are heeding this need for oversight by keeping AI largely separate from direct clinical tasks. Only 8% of those who used the technology employed it to assist with clinical diagnosis. Furthermore, only 5% utilized chatbot assistance for direct patient interaction. This indicates that while practitioners are willing to delegate paperwork to algorithms, they are hesitant to trust them with the nuances of human psychology.

This hesitation correlates with fears about the future of the profession. The survey found that 38% of psychologists worried that AI might eventually make some of their job duties obsolete. However, the current low rates of clinical adoption suggest that the core functions of therapy remain firmly in human hands for the time being.

The context for this technological shift is a workforce that remains under immense pressure. The survey explored factors beyond technology, painting a picture of a profession straining to meet demand. Nearly half of all psychologists reported that they had no openings for new patients.

Simultaneously, practitioners observed that the mental health crisis has not abated. About 45% of respondents indicated that the severity of their patients’ symptoms is increasing. This rising acuity requires more intensive care and energy from providers, further limiting the number of patients they can effectively treat.

Economic factors also complicate the landscape. The survey revealed that fewer than two-thirds of psychologists accept some form of insurance. Respondents pointed to insufficient reimbursement rates as a primary driver for this decision. They also cited struggles with pre-authorization requirements and audits. These administrative hurdles consume time that could otherwise be spent on treatment.

The association has issued recommendations for psychologists considering the use of AI to ensure ethical practice. They advise obtaining informed consent from patients by clearly communicating how AI tools are used. Practitioners are encouraged to evaluate tools for potential biases that could worsen health disparities.

Compliance with data privacy laws is another priority. The recommendations urge psychologists to understand exactly how patient data is used, stored, or shared by the third-party companies that provide AI services. This due diligence is intended to protect the sanctity of the doctor-patient privilege in a digital age.

The methodology of the 2025 survey differed slightly from previous years to improve accuracy. In prior iterations, the survey screened out ineligible participants. In 2025, the instrument included a section for those who did not meet the criteria, allowing the organization to gather internal data on who was receiving the invitations.

The response rate for the survey was 6.6%. While this may appear low to a layperson, it is a typical rate for this type of professional survey and provided a robust sample size for analysis. The demographic breakdown of the sample showed slight shifts toward a younger workforce. The 2025 sample had the highest proportion of early-career practitioners in the history of the survey.

This influx of younger psychologists may influence the adoption rates of new technologies. Early-career professionals are often more accustomed to integrating digital solutions into their workflows. However, the high levels of concern across the board suggest that skepticism of AI is not limited to older generations of practitioners.

The findings from the 2025 Practitioner Pulse Survey illustrate a profession at a crossroads. Psychologists are actively seeking ways to manage an unsustainable workload. AI offers a potential solution to the administrative bottleneck. Yet, the ethical mandates of the profession demand a cautious approach.

The data indicates that while the tools are entering the office, they have not yet entered the therapy room in a meaningful way. Practitioners are balancing the need for efficiency with the imperative to do no harm. As the technology evolves, the field will likely continue to grapple with how to harness the benefits of automation without compromising the human connection that defines psychological care.

New research maps how the brain processes different aspects of life satisfaction

13 December 2025 at 23:00

A new study suggests that the brain uses distinct neural pathways to process different aspects of personal well-being. The research indicates that evaluating family relationships activates specific memory-related brain regions, while assessing how one handles stress engages areas responsible for cognitive control. These findings were published recently in the journal Emotion.

Psychologists and neuroscientists have struggled to define exactly what constitutes a sense of well-being. Historically, many experts viewed well-being as a single, general concept. It was often equated simply with happiness or life satisfaction. This approach assumes that feeling good about life is a uniform experience. However, more recent scholarship argues that well-being is multidimensional. It is likely composed of various distinct facets that contribute to overall mental health.

To understand how we can improve mental health, it is necessary to identify the mechanisms behind these different components. A team of researchers set out to map the brain activity associated with specific types of life satisfaction. The study was conducted by Kayla H. Green, Suzanne van de Groep, Renske van der Cruijsen, Esther A. H. Warnert, and Eveline A. Crone. These scientists are affiliated with Erasmus University Rotterdam and Radboud University in the Netherlands.

The researchers based their work on the idea that young adults face unique challenges in the modern world. They utilized a measurement tool called the Multidimensional Well-being in Youth Scale. This scale was previously developed in collaboration with panels of young people. It divides well-being into five specific domains.

The first domain is family relationships. The second is the ability to deal with stress. The third domain covers self-confidence. The fourth involves having impact, purpose, and meaning in life. The final domain is the feeling of being loved, appreciated, and respected. The researchers hypothesized that the brain would respond differently depending on which of these domains a person was considering.

To test this hypothesis, the team recruited 34 young adults. The participants ranged in age from 20 to 25 years old. This age group is often referred to as emerging adulthood. It is a period characterized by identity exploration and significant life changes. The researchers used functional magnetic resonance imaging, or fMRI, to observe brain activity. This technology tracks blood flow to different parts of the brain to determine which areas are working hardest at any given moment.

While inside the MRI scanner, the participants completed a specific self-evaluation task. They viewed a series of sentences related to the five domains of well-being. For example, a statement might ask them to evaluate if they accept themselves for who they are. The participants rated how much the statement applied to them on a scale of one to four.

The task did not stop at a simple evaluation of the present. After rating their current feelings, the participants answered a follow-up question. They rated the extent to which they wanted that specific aspect of their life to change in the future. This allowed the researchers to measure both current satisfaction and the desire for personal growth.

In addition to the brain scans, the participants completed standardized surveys outside of the scanner. One survey measured symptoms of depression. Another survey assessed symptoms of burnout. The researchers also asked about feelings of uncertainty regarding the future. These measures helped the team connect the immediate brain responses to the participants’ broader mental health.

The behavioral results from the study showed clear patterns in how young adults view their lives. The participants gave the lowest positivity ratings to the domain of dealing with stress. This suggests that managing stress is a primary struggle for this demographic. Consequently, the participants reported the highest desire for future change in this same domain.

The researchers analyzed the relationship between these ratings and the mental health surveys. They found that higher positivity ratings in all five domains were associated with fewer burnout symptoms. This suggests that feeling good about any area of life may offer some protection against burnout.

A different pattern emerged regarding the desire for change. Participants who reported more burnout symptoms expressed a stronger desire to change how they felt about having an impact. They also wanted to change their levels of self-confidence and their feelings of being loved. This suggests that burnout is not just about exhaustion. It is also linked to a desire to alter one’s sense of purpose and social connection.

Depressive symptoms showed a broad association with the desire for change. Higher levels of depression were linked to a wish for future changes in almost every domain. The only exception was self-confidence. This implies that young adults with depressive symptoms are generally dissatisfied with their external circumstances and relationships.

The brain imaging data revealed that the mind does indeed separate these domains. When participants evaluated sentences about positive family relationships, a specific region called the precuneus became highly active. The precuneus is located in the parietal lobe of the brain. It is known to play a role in thinking about oneself and recalling personal memories.

This finding aligns with previous research on social cognition. Thinking about family likely requires accessing autobiographical memories. It involves reflecting on one’s history with close relatives. The activity in the precuneus suggests that family well-being is deeply rooted in memory and self-referential thought.

A completely different neural pattern appeared when participants thought about dealing with stress. For these items, the researchers observed increased activity in the dorsolateral prefrontal cortex. This region is located near the front of the brain. It is widely recognized as a center for executive function.

The dorsolateral prefrontal cortex helps regulate emotions and manage cognitive control. Its involvement suggests that thinking about stress is an active cognitive process. It is not just a passive feeling. Instead, it requires the brain to engage in appraisal and regulation. This makes sense given that the participants also expressed the greatest desire to change how they handle stress.

The study did not find distinct, unique neural patterns for the other three domains. Self-confidence, having impact, and feeling loved did not activate specific regions to the exclusion of others. They likely rely on more general networks that overlap with other types of thinking.

However, the distinction between family and stress is notable. It provides physical evidence that well-being is not a single state of mind. The brain recruits different resources depending on whether a person is focusing on their social roots or their emotional management.

The researchers also noted a general pattern involving the medial prefrontal cortex. This area was active during the instruction phase of the task. It was also active when participants considered their desire for future changes. This region is often associated with thinking about the future and self-improvement.

There are limitations to this study that should be considered. The final sample size included only 34 participants. This is a relatively small number for an fMRI study. Small groups can make it difficult to detect subtle effects or generalize the findings to the entire population.

The researchers also noted that the number of trials for each condition was limited. Participants only saw a few sentences for each of the five domains. A higher number of trials would provide more data points for analysis. This would increase the statistical reliability of the results.

Additionally, the study design was correlational. This means the researchers can see that certain brain patterns and survey answers go together. However, they cannot say for certain that one causes the other. For instance, it is not clear if desiring change leads to burnout, or if burnout leads to a desire for change.

Future research could address these issues by recruiting larger and more diverse groups of people. It would be beneficial to include individuals from different cultural backgrounds. Different cultures may prioritize family or stress management differently. This could lead to different patterns of brain activity.

Longitudinal studies would also be a logical next step. Following participants over several years would allow scientists to see how these brain patterns develop. It is possible that the neural correlates of well-being shift as young adults mature into their thirties and forties.

Despite these caveats, the study offers a new perspective on mental health. It supports the idea that well-being is a multifaceted construct. By treating well-being as a collection of specific domains, clinicians may be better able to help patients.

The study, “Neural Correlates of Well-Being in Young Adults,” was authored by Kayla H. Green, Suzanne van de Groep, Renske van der Cruijsen, Esther A. H. Warnert, and Eveline A. Crone.

What are legislators hiding when they scrub their social media history?

13 December 2025 at 05:00

Federal legislators in the United States actively curate their digital footprints to project a specific professional identity. A new analysis reveals that these officials frequently remove social media posts that mention their private lives or name specific colleagues, while tending to preserve posts that criticize policies or opponents. The research was published in the journal Computers in Human Behavior.

The digital age has transformed how elected officials communicate with voters. Social media platforms allow politicians to broadcast their views instantly. However, this speed also blurs the traditional boundaries between public performance and private thought.

Sociologist Erving Goffman described this dynamic as impression management. This concept suggests that individuals constantly perform to control how others perceive them. They attempt to keep their visible “front-stage” behavior consistent with a desired public image.

In the political arena, maintaining a consistent image is essential for securing votes and support. A single misstep on a platform like X, formerly known as Twitter, can damage a reputation instantly. Researchers wanted to understand how this pressure influences what politicians choose to hide. They sought to identify which specific characteristics prompt a legislator to hit the delete button.

The study was led by Siyuan Ma from the Department of Communication at the University of Macau. Ma worked alongside Junyi Han from the Leibniz-Institut für Wissensmedien in Germany and Wanrong Li from the University of Macau. They aimed to quantify the effort legislators put into managing their online impressions. They also wanted to see if the deletion of content followed a predictable pattern based on political strategy.

To investigate this, the team collected a massive dataset covering the 116th United States Congress. This session ran from January 2019 to September 2020. The researchers utilized a tool called Politwoops to retrieve data on deleted posts. This third-party platform archives tweets removed by public officials to ensure transparency. The dataset included nearly 30,000 deleted tweets and over 800,000 publicly available tweets from the same timeframe.
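The dataset's own approximate counts give a rough sense of how common deletion is. A quick back-of-envelope calculation (using the rounded figures quoted above, not the paper's exact totals) looks like this:

```python
# Rough share of deleted tweets in the 116th Congress dataset,
# using the approximate counts reported in the article.
deleted = 30_000   # "nearly 30,000 deleted tweets"
public = 800_000   # "over 800,000 publicly available tweets"

total = deleted + public
deletion_share = deleted / total  # fraction of collected tweets later removed

print(f"{deletion_share:.1%} of collected tweets were deleted")
```

By this estimate, only a few percent of legislators' tweets are ever scrubbed, which makes the systematic patterns in *which* tweets get deleted all the more telling.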

The researchers analyzed a random sample of these messages to ensure accuracy. Human coders reviewed the content to categorize the topics discussed. They looked for specific variables such as mentions of private life or policy statements. They also tracked mentions of other politicians and instances of criticism. This allowed the team to compare the content of deleted messages against those that remained online.

The timing of deletions offered early insights into political behavior. The data showed a sharp rise in the number of deleted tweets beginning in late 2019. This increase coincided with the start of the presidential impeachment inquiry. The high-stakes environment likely prompted legislators to be more cautious about their digital history.

The onset of the COVID-19 pandemic also shifted online behavior. As the health crisis unfolded, the total volume of tweets from legislators increased dramatically. Despite the higher volume of posts, the proportion of deleted messages remained elevated. This suggests that during periods of national crisis, the pressure to manage one’s public image intensifies.

When the researchers examined the content of the tweets, distinct patterns emerged. One of the strongest predictors for deletion was the mention of private life. Legislators were statistically more likely to remove posts about their families, hobbies, or vacations. This contradicts some political theories that suggest showing a “human side” helps build connections with voters.

Instead, the findings point toward a strategy of strict professionalism. By scrubbing personal details, politicians appear to be focusing the public’s attention on their official duties. They seem to use the platform as a space for serious legislative work rather than social intimacy. The data indicates that looking professional is prioritized over looking relatable.

Another major trigger for deletion was the mention of specific colleagues. Tweets that named other politicians were frequently removed from the public record. This behavior may be a strategic move to minimize liability. Mentioning a colleague who later becomes involved in a scandal can be damaging by association. Deleting these mentions keeps a legislator’s timeline clean of potential future embarrassments.

In contrast, the study found that criticism is rarely deleted. Legislators tended to leave tweets that attacked opposing policies or ideologies on the public record. This suggests that being critical is viewed as a standard and acceptable part of a politician's role. It signals to voters that the official is actively fighting for their interests.

The study also evaluated the accuracy of the information shared by these officials. Popular narratives often suggest that social media is flooded with false information from all sides. However, the analysis showed that legislators rarely posted demonstrably false claims. This adherence to factual information was consistent across both deleted and public tweets.

Party loyalty acted as a powerful constraint on behavior. The researchers found almost no instances of legislators posting content that violated their party’s stance. This was true even among the deleted tweets. The lack of dissent suggests an intense pressure to maintain a united front. Deviating from the party line appears to be a risk that few elected officials are willing to take.

The status of the legislator also influenced their deletion habits. The study compared members of the House of Representatives with members of the Senate. The results showed that Representatives were more likely to delete tweets than Senators. This difference likely stems from the varying political pressures they face.

Senators serve six-year terms and represent entire states. They typically have greater name recognition and more secure political resources. This security may give them the confidence to leave their statements on the public record. They feel less need to constantly micromanage their online presence.

Representatives, however, face re-election every two years. They often represent smaller, more volatile districts where a small shift in opinion can cost them their seat. This constant campaign mode creates a higher sensitivity to public perception. Consequently, they appear to scrub their social media accounts more aggressively to avoid potential controversies.

The findings illustrate that social media management is not random. It is a calculated extension of a politician’s broader communication strategy. The platform is used to construct an image that is professional, critical of opponents, and fiercely loyal to the party. The removal of personal content serves to harden this professional shell.

There are limitations to the study that the authors acknowledge. The analysis relied on a random sample rather than the full set of nearly one million tweets. While statistically valid, this approach might miss rare but important deviations in behavior. Funding constraints prevented the use of more expensive analysis methods on the full dataset.

The study also did not account for the specific political geography of each legislator. Factors such as gerrymandering could influence how safe a politician feels in their seat. A representative in a heavily gerrymandered district might behave differently than one in a swing district. The current study did not measure how these external pressures impact deletion rates.

Future research could address these gaps by using advanced technology. The authors propose using machine learning algorithms to classify the entire dataset of tweets. This would allow for a more granular analysis of political behavior on a massive scale. It would also help researchers understand if these patterns hold true over longer periods.

Understanding these behaviors is important for the voting public. The curated nature of social media means that voters are seeing a filtered version of their representatives. The emphasis on criticism and the removal of personal nuance contributes to a polarized online environment. By recognizing these strategies, citizens can better evaluate the digital performance of the people they elect.

The study, “More criticisms, less mention of politicians, and rare party violations: A comparison of deleted tweets and publicly available tweets of U.S. legislators,” was authored by Siyuan Ma, Junyi Han, and Wanrong Li.

Pre-workout supplements linked to dangerously short sleep in young people

13 December 2025 at 01:00

Adolescents and young adults who consume pre-workout dietary supplements may be sacrificing essential rest for their fitness goals. A recent analysis indicates that individuals in this age group who use these performance-enhancing products are more likely to report sleeping fewer than five hours per night. These findings were published recently in the journal Sleep Epidemiology.

The pressure to achieve an ideal physique or enhance athletic performance drives many young people toward dietary aids. Pre-workout supplements, often sold as powders or drinks, are designed to deliver an acute boost in energy and endurance. These products have gained popularity in fitness communities and on social media platforms.

Despite their widespread use, the potential side effects of these multi-ingredient formulations are not always clear to consumers. The primary active ingredient in most pre-workout blends is caffeine, often in concentrations far exceeding that of a standard cup of coffee or soda. While caffeine is a known performance enhancer, its stimulant properties can linger in the body for many hours.

Kyle T. Ganson, an assistant professor at the Factor-Inwentash Faculty of Social Work at the University of Toronto, led the investigation into how these products affect sleep. Ganson and his colleagues sought to address a gap in current public health knowledge regarding the specific relationship between these supplements and sleep duration in younger populations.

The researchers drew data from the Canadian Study of Adolescent Health Behaviors. This large-scale survey collects information on the physical, mental, and social well-being of young people across Canada. The team focused on a specific wave of data collected in late 2022.

The analysis included 912 participants ranging in age from 16 to 30 years old. The researchers recruited these individuals through advertisements on popular social media platforms, specifically Instagram and Snapchat. This recruitment method allowed the team to reach a broad demographic of digital natives who are often the target audience for fitness supplement marketing.

Participants answered questions regarding their use of appearance- and performance-enhancing substances over the previous twelve months. They specifically indicated whether they had used pre-workout drinks or powders. Additionally, the survey asked participants to report their average nightly sleep duration over the preceding two weeks.

To ensure the results were robust, the researchers accounted for various factors that might influence sleep independently of supplement use. They adjusted their statistical models for variables such as age, gender, and exercise habits. They also controlled for symptoms of depression and anxiety, as mental health struggles frequently disrupt sleep patterns.

The results showed a clear distinction between users and non-users of these supplements. Approximately 22 percent of the participants reported using pre-workout products in the past year. Those who did were substantially more likely to report very short sleep durations.

Specifically, the study found that pre-workout users were more than 2.5 times as likely to sleep five hours or less per night compared to those who did not use the supplements. This comparison used eight hours of sleep as the healthy baseline. The association remained strong even after the researchers adjusted for the sociodemographic and mental health variables.

The researchers did not find a statistically significant link between pre-workout use and sleeping six or seven hours compared to eight. The strongest signal in the data was specifically for the most severe category of sleep deprivation. This suggests that the supplements may be contributing to extreme sleep deficits rather than minor reductions in rest.
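For readers unfamiliar with odds ratios, the "2.5 times as likely" figure compares the odds of very short sleep between users and non-users, with eight hours as the reference category. The sketch below uses invented counts (the article does not report the raw cell counts) purely to show how such a ratio is computed:

```python
# Illustration of the kind of odds ratio reported in the study.
# These counts are hypothetical; the article gives only the final ratio.
# Columns: <=5 h sleep vs 8 h sleep (the reference category).
users_short, users_eight = 20, 40          # hypothetical pre-workout users
nonusers_short, nonusers_eight = 30, 150   # hypothetical non-users

odds_users = users_short / users_eight           # odds of very short sleep, users
odds_nonusers = nonusers_short / nonusers_eight  # odds of very short sleep, non-users
odds_ratio = odds_users / odds_nonusers

print(f"odds ratio = {odds_ratio:.1f}")
```

In the actual study, this ratio came from a regression model adjusted for age, gender, exercise habits, and mental health symptoms rather than a simple two-by-two table.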

Biology offers a clear explanation for this phenomenon. Caffeine functions by blocking adenosine receptors in the brain. Adenosine is a chemical that accumulates throughout the day and promotes sleepiness; by blocking it, caffeine induces a state of alertness.

This mechanism helps during a workout but becomes a liability when trying to rest. Ganson highlights the dosage as a primary concern.

“These products commonly contain large doses of caffeine, anywhere between 90 to over 350 mg of caffeine, more than a can of Coke, which has roughly 35 mg, and a cup of coffee with about 100 mg,” said Ganson. “Our results suggest that pre-workout use may contribute to inadequate sleep, which is critical for healthy development, mental well-being, and academic functioning.”

Beyond simple wakefulness, caffeine also delays the body’s internal release of melatonin. This hormone signals to the body that it is time to sleep. Disrupting this rhythm can make it difficult to fall asleep at a reasonable hour.

Additionally, high doses of stimulants activate the sympathetic nervous system. This biological response increases heart rate and blood pressure. A body in this heightened state of physiological arousal is ill-equipped for the relaxation necessary for deep sleep.

The timing of consumption plays a major role in these effects. Young adults often exercise in the afternoon or evening after school or work. Consuming a high-stimulant beverage at this time means the caffeine is likely still active in their system when they attempt to go to bed.

This sleep disruption is particularly concerning for the age group studied. Adolescents generally require between 8 and 10 hours of sleep for optimal development. Young adults typically need between 7 and 9 hours.

Chronic sleep deprivation in this developmental window is linked to a host of negative outcomes. These include impaired cognitive function, emotional instability, and compromised physical health. The authors note that the very products used to improve health and fitness might be undermining recovery and overall well-being.

“Pre-workout supplements, which often contain high levels of caffeine and stimulant-like ingredients, have become increasingly popular among teenagers and young adults seeking to improve exercise performance and boost energy,” said Ganson. “However, the study’s findings point to potential risks to the well-being of young people who use these supplements.”

The study does have limitations that readers should consider. The data is cross-sectional, meaning it captures a snapshot in time rather than tracking individuals over years. As a result, the researchers cannot definitively prove that the supplements caused the sleep loss.

It is possible that the relationship works in the opposite direction. Individuals who are chronically tired due to poor sleep habits may turn to pre-workout supplements to power through their exercise routines. This could create a cycle of dependency and fatigue.

Furthermore, the study relied on self-reported data. Participants had to recall their sleep habits and supplement use, which introduces the possibility of memory errors. The survey also did not ask about the specific dosage or timing of the supplement intake.

Despite these limitations, the authors argue the association is strong enough to warrant attention from healthcare providers. They suggest that pediatricians and social workers should ask young patients about their supplement use. Open conversations could help identify potential causes of insomnia or fatigue.

Harm reduction strategies could allow young people to exercise safely without compromising their rest. The most effective approach involves timing. Experts generally recommend avoiding high doses of caffeine 12 to 14 hours before bedtime to ensure the substance is fully metabolized.
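The logic behind such long lead times follows from simple exponential decay. A minimal sketch, assuming a typical adult caffeine half-life of about five hours and a high-end 300 mg serving (both figures are illustrative assumptions, not from the study):

```python
# Sketch of caffeine clearance as exponential decay.
# The 5-hour half-life and 300 mg dose are illustrative assumptions.
HALF_LIFE_H = 5.0
DOSE_MG = 300.0  # near the top of the pre-workout doses quoted above

def remaining(dose: float, hours: float, half_life: float = HALF_LIFE_H) -> float:
    """Caffeine (mg) still circulating `hours` after ingestion."""
    return dose * 0.5 ** (hours / half_life)

for h in (4, 8, 12, 14):
    print(f"after {h:>2} h: {remaining(DOSE_MG, h):6.1f} mg remaining")
```

Under these assumptions, a late-afternoon dose still leaves tens of milligrams circulating at bedtime, which is why the recommended buffer stretches to half a day or more.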

“Young people often view pre-workout supplements as harmless fitness products,” Ganson noted. “But these findings underscore the importance of educating them and their families about how these supplements can disrupt sleep and potentially affect overall health.”

Future research will need to examine the nuances of this relationship. Longitudinal studies could track users over time to establish a clearer causal link. Researchers also hope to investigate how specific ingredients beyond caffeine might interact to affect sleep quality.

The study, “Use of pre-workout dietary supplements is associated with lower sleep duration among adolescents and young adults,” was authored by Kyle T. Ganson, Alexander Testa, and Jason M. Nagata.

Oxytocin curbs men’s desire for luxury goods when partners are ovulating

12 December 2025 at 21:00

Recent research suggests that biological rhythms may exert a subtle yet powerful influence on male consumer behavior. A study published in Psychopharmacology has found that men in committed relationships exhibit a reduced desire to purchase status-signaling goods when their female partners are in the fertile phase of their menstrual cycle. This shift in preference appears to be driven by an unconscious evolutionary mechanism that prioritizes relationship maintenance over the attraction of new mates.

To understand these findings, it is necessary to examine the evolutionary roots of consumerism. Evolutionary psychologists posit that spending money is rarely just about acquiring goods. In many instances, it serves as a signal to others in the social group. Specifically, “conspicuous consumption” involves purchasing lavish items to display wealth and social standing.

This behavior is often compared to the peacock’s tail. Just as the bird displays its feathers to attract a mate, men may purchase luxury cars or expensive watches to signal their resourcefulness to potential partners. This is generally considered a strategy for attracting short-term mates. However, this strategy requires a significant investment of resources.

For men in committed relationships, there is a theoretical trade-off between attracting new partners and maintaining their current bond. This is described by sexual selection and parental investment theories. When a female partner is capable of conceiving, the reproductive stakes are at their highest.

During this fertile window, it may be maladaptive for a male to focus his energy on signaling to other women. Doing so could risk his current relationship. Instead, evolutionary logic suggests he should focus on “mate retention.” This involves guarding the relationship and ensuring his investment in potential offspring is secure.

The researchers hypothesized that this shift in focus would manifest in consumer choices. They predicted that men would be less inclined to buy flashier items when their partners were ovulating. To test this, they also looked at the role of oxytocin.

Oxytocin is a neuropeptide produced in the hypothalamus. It is often referred to as the “hormone of love” because of its role in social bonding and trust. It facilitates attachment between couples and between parents and children.

The research team included Honghong Tang, Hongyu Fu, Song Su, Luqiong Tong, Yina Ma, and Chao Liu. They are affiliated primarily with Beijing Normal University in China. Their investigation sought to determine if oxytocin reinforces the evolutionary drive to stop signaling status during a partner’s ovulation.

The investigation began with a preliminary pilot study to categorize consumer products. The team needed to distinguish between items that signal status and items that are merely functional. They presented a list of goods to a group of 110 participants.

These participants rated items based on dimensions such as social status, wealth, and novelty. Based on these ratings, the researchers selected specific “status products” and “functional products.” Status products included items that clearly projected wealth and prestige. Functional products were items of equal utility but without the social signaling component.

The first major experiment, titled Study 1a, involved 373 male participants. All these men were in committed heterosexual relationships. The study was conducted online.

Participants were asked to rate their attitude toward various status and functional products. They indicated how much they liked each item and how likely they were to buy it. Following this task, the men provided detailed information about their partners’ menstrual cycles.

The researchers categorized the men based on whether their partner was in the menstrual, ovulatory, or luteal phase. The results revealed a distinct pattern. Men whose partners were in the ovulatory phase expressed less interest in status products compared to men in the other groups.

This reduction in preference was specific to status items. The men’s interest in functional products remained stable regardless of their partner’s cycle phase. This suggests the effect is not a general loss of interest in shopping. Rather, it is a specific withdrawal from status signaling.

To ensure this effect was specific to men, the researchers conducted Study 1b. They recruited 416 women who were also in committed relationships. These participants performed the same rating tasks for the same products.

The women provided data on their own menstrual cycles. The analysis showed no variation in their preference for status products across the month. The researchers concluded that the fluctuation in status consumption is a male-specific phenomenon within the context of heterosexual relationships.

The team then designed Study 2 to investigate the causal role of oxytocin. They recruited 60 healthy heterosexual couples. These couples attended laboratory sessions together.

The experiment used a double-blind, placebo-controlled design. The couples visited the lab twice. One visit was scheduled during the woman’s ovulatory phase, and the other during the menstrual phase.

During these visits, the male participants were given a nasal spray. In one session, the spray contained oxytocin. In the other session, it contained a saline solution. Neither the participants nor the experimenters knew which spray was being administered.

After receiving the treatment, the men rated their preferences for the status and functional products. The researchers also measured the men’s “intuitive inclination.” This trait refers to how much a person relies on gut feelings versus calculated reasoning in decision-making.

The results from the placebo condition replicated the findings from the first study. Men liked status products less when their partners were ovulating. However, the administration of oxytocin amplified this effect.

When men received oxytocin during their partner’s fertile window, their desire for status products dropped even further. This suggests that oxytocin heightens a man’s sensitivity to his partner’s reproductive cues. It appears to reinforce the biological imperative to focus on the current relationship.

The study found that this effect was not uniform across all men. It was most pronounced in men who scored high on intuitive inclination. For men who rely heavily on intuition, oxytocin acted as a strong modulator of their consumer preferences.

The authors interpret these findings through the lens of mate-guarding. When a partner is fertile, the male’s biological priority shifts. He unconsciously moves away from behaviors that attract outside attention.

Instead, he focuses inward on the dyadic bond. Status consumption is effectively a broadcast signal to the mating market. Turning off this signal during ovulation serves to protect the exclusivity of the current pair bond.

There are some limitations to this research that warrant mention. The study relied on participants reporting their “possibility to buy” rather than observing actual spending. People’s stated intentions do not always align with their real-world financial behavior.

Additionally, the mechanism by which men detect ovulation is not fully understood. The study assumes men perceive these cues unconsciously. While previous literature suggests men can detect changes in scent or behavior, the current study did not explicitly test for this detection.

The study focused solely on couples in committed relationships. It remains to be seen how single men might respond to similar hormonal or environmental cues. It is possible that the presence of a committed partner is required to trigger this specific suppression of status seeking.

Future research could address these gaps by analyzing real-world consumer data. Comparing purchasing patterns of single men versus committed men would also provide greater clarity. Additionally, measuring oxytocin levels naturally occurring in the blood could validate the findings from the nasal spray experiment.

Despite these caveats, the research offers a new perspective on the biological underpinnings of economic behavior. It challenges the view of consumption as a purely social or rational choice. Instead, it highlights the role of ancient reproductive strategies in modern shopping aisles.

The findings indicate that marketing strategies might affect consumers differently depending on their biological context. Men in relationships may be less responsive to status-based advertising at certain times of the month. Conversely, campaigns focusing on relationship solidity might be more effective during those same windows.

This study adds to a growing body of work linking physiology to psychology. It demonstrates that the drive to reproduce and protect offspring continues to shape human behavior in subtle ways. Even the decision to buy a luxury watch may be influenced by the invisible tick of a partner’s biological clock.

The study, “Modulation of strategic status signaling: oxytocin changes men’s fluctuations of status products preferences in their female partners’ menstrual cycle,” was authored by Honghong Tang, Hongyu Fu, Song Su, Luqiong Tong, Yina Ma, and Chao Liu.
