
Today — 16 February 2026

Gender-affirming hormone therapy linked to shifts in personality traits

16 February 2026 at 03:00

A new study published in Comprehensive Psychoneuroendocrinology suggests that gender-affirming hormone therapy may influence specific personality traits in transgender individuals. The findings indicate that medical transition can shift certain emotional and behavioral patterns toward those typically associated with the individual’s identified gender. While personality is often viewed as a static set of characteristics, this research provides evidence that sex hormones might play a role in shaping how people think, feel, and behave.

The relationship between hormone levels and personality traits remains a complex area of study. Previous research on cisgender populations (people whose gender identity matches their sex assigned at birth) has documented average differences in personality traits between men and women. For instance, women tend to score higher on traits related to agreeableness and neuroticism compared to men. The researchers wanted to determine if altering hormone levels through medical treatment would cause personality shifts in transgender individuals.

“This investigation was part of a larger study regarding possible effects on the brain from gender-affirming hormonal treatment. For us, who are clinically active in transgender care, it is obvious that sex hormones have effects on the brain to an extent not totally acknowledged. So the deeper aim was to investigate the effects of sex hormones on personality traits, something usually believed to be rather static,” explained study author Mats Holmberg of the Karolinska Institutet.

The research team conducted a prospective study involving adults referred for gender-affirming hormone therapy at the Karolinska University Hospital in Stockholm, Sweden. To ensure the results specifically reflected hormonal changes rather than other factors, the scientists excluded individuals with known psychiatric disorders, autism spectrum disorder, or those taking antidepressant medications. This helped minimize variables that could skew the personality assessments.

The final group of participants consisted of 58 individuals. This included 34 people assigned female at birth who were prescribed testosterone and 24 people assigned male at birth who received anti-androgens and estradiol. Anti-androgens are medications that block the effects of testosterone.

The researchers used the NEO-PI-R inventory to assess personality. This is a comprehensive questionnaire based on the Five-Factor Model, often called the “Big Five.” This model categorizes personality into five main dimensions: Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Neuroticism refers to the tendency to experience negative emotions like anxiety or sadness.

Extraversion describes sociability and enthusiasm. Openness involves curiosity and a willingness to try new things. Agreeableness relates to how cooperative and compassionate a person is. Conscientiousness reflects organization and dependability. Participants completed this assessment twice: once before starting hormones and again after at least six months of treatment.

Before treatment began, the researchers observed specific differences between the two groups. Participants assigned female at birth scored higher in the dimension of Agreeableness compared to those assigned male at birth. They also scored higher in specific sub-categories, known as facets, such as excitement seeking and straightforwardness. These baseline differences suggested that even prior to hormonal intervention, the groups displayed distinct personality profiles.

After six months of testosterone therapy, the participants assigned female at birth showed distinct changes. Their scores for Neuroticism decreased significantly. Within this dimension, they reported lower levels of depression and vulnerability. Simultaneously, this group showed an increase in the facet of “Actions,” which falls under the Openness dimension. This suggests a greater willingness to try different activities or behaviors. The reduction in Neuroticism aligns with patterns seen in cisgender men, who generally score lower in this trait than cisgender women.

The group assigned male at birth, who received feminizing hormones, experienced different shifts. These participants showed an increase in the “Feelings” facet of the Openness dimension. This indicates a greater receptivity to one’s own inner emotional states. Unlike the testosterone group, they did not show significant changes in the broad dimensions of Neuroticism or Extraversion. The increase in emotional receptivity mirrors findings in cisgender women, who typically score higher in this specific facet.

The scientists also looked for relationships between the amount of hormone change in the blood and the degree of personality change. In the group assigned male at birth, higher increases in estradiol levels correlated with lower scores in several traits, including Openness and Agreeableness. This finding was unexpected and somewhat contradictory to general sex differences, indicating a complex relationship between estrogen and personality that requires further study.

The comparison between the two groups after treatment revealed a divergence in the trait of vulnerability. Following six months of therapy, the group treated with testosterone showed a significant reduction in vulnerability. The group treated with estrogen did not show a corresponding increase or decrease. This resulted in a larger gap between the two groups post-treatment than existed beforehand.

While the study offers new insights, the researchers caution against drawing broad conclusions due to several limitations. The sample size was relatively small, which makes it difficult to generalize the findings to the entire transgender population. Additionally, the study did not include a control group of individuals not receiving hormone therapy. This makes it impossible to distinguish clearly between the biological effects of the hormones and the psychological effects of social transition.

The relief of treating gender dysphoria—the distress caused by a mismatch between gender identity and physical sex—could naturally improve mood and reduce neuroticism, regardless of specific chemical changes. The act of living authentically and being perceived correctly by others likely impacts personality expression as well.

The researchers also noted that the study only covered the first six months of treatment. It remains unknown if these personality changes persist, evolve, or revert over a longer period. Personality is generally considered stable in adulthood, so observing changes within such a short timeframe is notable. However, longer-term data is necessary to see if these shifts are permanent.

Future research requires larger groups of participants followed over several years to confirm these initial observations. The scientists emphasize that these results do not validate conservative gender roles but rather highlight that sex hormones may influence brain function and personality formation more than previously understood. They suggest that understanding these potential changes can help patients better anticipate the effects of medical transition.

The study, “The effect of gender-affirming hormonal treatment on personality traits – a NEO-PI-R study,” was authored by Mats Holmberg, Alex Wallen, and Ivanka Savic.

Targeting toxic protein chains could slow neurodegenerative disease

16 February 2026 at 01:00

For decades, researchers have worked to untangle the biological causes of neurodegenerative conditions such as Alzheimer’s disease. A primary focus has been the accumulation of misfolded proteins that clump together in the brain and damage neurons. A new study reveals that specific repetitive chains of amino acids, known as polyserine domains, can damage brain cells and worsen the accumulation of toxic protein clumps associated with these diseases.

The findings suggest that these repetitive chains may be a driver of neurological decline. The research was published in the Proceedings of the National Academy of Sciences.

To understand this study, it is necessary to understand a protein called tau. In healthy brains, tau serves as a stabilizer for the internal skeleton of nerve cells. It helps maintain the tracks used to transport nutrients and molecules within the cell. In diseases collectively known as tauopathies, which include Alzheimer’s, tau molecules detach from this structure. They then chemically change and stick together. These sticky clumps, or aggregates, form tangles that choke the cell and eventually kill it.

Researchers are working to identify what causes tau to transition from a helpful stabilizer to a toxic clump. Previous investigations have observed that certain other proteins often appear alongside tau tangles in the brains of patients. These accompanying proteins often contain long, repetitive strings of the amino acid serine. Scientists call these strings polyserine domains.

Additionally, these polyserine chains are produced in specific genetic disorders. Diseases such as Huntington’s disease and spinocerebellar ataxia type 8 are caused by errors in the genetic code where a small segment of DNA repeats itself too many times. These genetic stutters can result in the production of toxic repetitive proteins, including those rich in serine.

Meaghan Van Alstyne, a researcher at the University of Colorado Boulder, led the study to determine if these polyserine domains are merely bystanders or active participants in brain disease. She worked with senior author Roy Parker, a distinguished professor of biochemistry at the same university. The team sought to answer whether the presence of polyserine alone is enough to harm a mammalian brain. They also wanted to know if it accelerates the problems caused by tau.

To investigate this, the team used a common laboratory tool known as an adeno-associated virus serotype 9. This virus is modified so that it cannot cause disease. Instead, it acts as a delivery vehicle to transport specific genetic instructions into cells. The researchers injected newborn mice with this viral carrier. The virus delivered instructions to brain cells to produce a protein containing a long tail of 42 serine molecules.

The researchers first observed the effects of this polyserine on normal, wild-type mice. As the mice aged, those producing the polyserine developed clear physical and behavioral problems. They weighed less than the control group. They also displayed difficulties with movement and coordination.

The team tested the motor skills of the mice using a rotarod assay. This test involves placing a mouse on a horizontal rotating rod that spins faster over time. The mice must keep walking to avoid falling off. It is similar to a lumberjack balancing on a rolling log. From four to six months of age, the mice expressing polyserine fell off the rod much sooner than the control mice.

Behavioral changes also emerged. The researchers placed the mice in a maze that is elevated above the ground. The maze has two enclosed arms and two open arms. Mice naturally prefer enclosed spaces because they feel safer. The mice with polyserine spent more time in the open arms. This behavior suggests a reduction in anxiety or a lack of natural caution.

The team also tested memory using a fear conditioning assay. In this test, mice learn to associate a specific sound or environment with a mild foot shock. When placed back in that environment later, a mouse with normal memory will freeze in anticipation. The polyserine mice froze much less often. This indicates they had severe deficits in learning and memory.

To find the biological cause of these behaviors, Van Alstyne and her colleagues examined the brains of the mice. They found a dramatic loss of a specific type of neuron called a Purkinje cell. These are large, distinctively shaped neurons located in the cerebellum. The cerebellum is the part of the brain responsible for coordinating voluntary movements.

The viral delivery system used in the study is known to be particularly effective at targeting Purkinje cells. In the mice receiving the polyserine gene, these cells were largely wiped out. The loss of these cells likely explains the coordination problems observed in the rotarod test.

Alongside the cell death, the researchers observed signs of gliosis. This is a reaction where support cells in the brain, known as glia, become overactive. It is a sign of inflammation and damage. The brain was reacting to the polyserine as a toxic presence.

The researchers then investigated where the polyserine went inside the surviving neurons. They found that the protein did not stay in the main body of the cell. Instead, it accumulated inside the nucleus. The nucleus is the control center of the cell that holds its DNA. The polyserine formed large clumps within the nucleus. These clumps were tagged with ubiquitin, a small molecule the cell uses to mark garbage for disposal. This suggests the cells were trying, and failing, to clear the toxic protein.

After establishing that polyserine is toxic on its own, the researchers tested its effect on tau. They used a specific strain of mice genetically engineered to produce a mutant form of human tau. These mice naturally develop tau tangles and neurodegeneration as they age.

The team injected these tau-prone mice with the polyserine-producing virus. The results showed that polyserine acts like fuel for the fire. The mice expressing both the mutant tau and the polyserine died significantly younger than those expressing only the mutant tau.

When the researchers analyzed the brain tissue of these mice, they found elevated levels of disease markers. There was an increase in phosphorylated tau. Phosphorylation is a chemical change that promotes aggregation. The study also found more insoluble tau, which refers to the hardened tangles that cannot be dissolved.

Furthermore, the team measured the “seeding” capacity of the tau. In disease states, misfolded tau can act like a template. It corrupts normal tau and causes it to misfold, spreading the pathology from cell to cell. Brain extracts from the mice with polyserine showed a higher ability to induce clumping in test cells. This indicates that polyserine makes the tau pathology more aggressive and transmissible.

Finally, the researchers asked if this effect was unique to serine. They compared it to other repetitive amino acid chains often found in genetic diseases, such as polyglutamine and polyalanine. They introduced these different chains into human neurons grown in a dish.

The results showed a high level of specificity. Only the polyserine chains recruited tau molecules into their clusters. The polyglutamine and polyalanine chains did not. This physical interaction between polyserine and tau appears to be the mechanism that accelerates the formation of toxic tau seeds.

There are caveats to consider in this research. The study used a virus to force the cells to make high levels of polyserine. This might result in higher concentrations of the protein than would naturally occur in a human disease. Future research will need to determine if lower, natural levels of polyserine cause the same degree of harm over a human lifespan.

The authors also noted that while they saw massive cell death in the cerebellum, other brain areas like the hippocampus seemed more resistant to cell loss, despite containing the protein. Understanding why some neurons die while others survive could offer clues for protection.

This study provides evidence that polyserine is not just a passive marker of disease. It suggests that these repetitive domains are active toxins that can kill neurons and worsen tauopathies. This opens a potential new avenue for therapy. If scientists can block the interaction between polyserine and tau, they might be able to slow the progression of diseases like Alzheimer’s.

“If we really want to treat Alzheimer’s and many of these other diseases, we have to block tau as early as possible,” said Parker. “These studies are an important step forward in understanding why tau aggregates in cells and how we can intervene.”

The study, “Polyserine domains are toxic and exacerbate tau pathology in mice,” was authored by Meaghan Van Alstyne, Vanessa L. Nguyen, Charles A. Hoeffer, and Roy Parker.

Scientists confirm non-genitally stimulated orgasms are biologically real

15 February 2026 at 23:00

A new case study provides biological evidence that a post-menopausal woman can induce orgasms solely through the use of pelvic floor muscle exercises, without any direct genital stimulation. The findings indicate that these non-genitally stimulated orgasms trigger a surge in the hormone prolactin, mirroring the physiological response seen in typical sexual climaxes. This research was published in the International Journal of Sexual Health.

Orgasms typically result from direct physical stimulation of the genitals, but evidence indicates they can also occur through mental imagery or specific muscle movements. Previous research demonstrated that a premenopausal woman could induce orgasms using tantric techniques, a practice involving deep breathing and mental focus to control bodily sensations and sexual energy.

This earlier case was confirmed by a rise in plasma prolactin, a hormone released during sexual climax. However, it remained unclear whether this ability relied on the higher levels of ovarian hormones found in younger women or if it could occur after menopause. Consequently, the researchers aimed to determine if a post-menopausal woman could achieve these outcomes using a systematic routine targeting the pelvic floor.

The pelvic floor is a hammock-like group of muscles at the base of the pelvis that supports internal organs like the bladder and uterus, and plays a primary role in sexual response and control. The team sought to validate the experience using objective biological markers rather than relying solely on the participant’s description. Confirming the physiological reality of these experiences provides evidence for potential new therapeutic avenues for women facing difficulties with orgasm.

“I am generally interested in the neurobiology of sexual function, and in particular how the brain is organized for sexual arousal, desire, orgasm, sexual pleasure, and sexual inhibition. Recently, I started studying people who can have orgasms without genital stimulation (Non-Genitally Stimulated Orgasms, or NGSOs),” said study author James G. Pfaus, an assistant professor at Charles University in Prague and the director of research for the Center for Sexual Health and Interventions at the Czech National Institute of Mental Health.

“Women seem to be able to do this better than men, and the ability seems to come from training of the pelvic floor muscles and breathing exercises, either through tantra or pelvic floor therapy. An obvious question is whether these orgasms are ‘real,’ meaning whether they are accompanied by objective markers similar to those found during genitally stimulated orgasms (GSOs). We used the hormone prolactin as our objective measure, since it increases at orgasm (and the only other reasons it would go up would be a pituitary tumor, nursing, or extreme stress).”

“This occurs because at orgasm the neurotransmitter dopamine is instantly inhibited by both opioid and serotonin release. Dopamine in the hypothalamus keeps prolactin inhibited, so when it is inhibited, prolactin is released from inhibition. Prolactin increases reliably in both men and women during orgasm and stays elevated for at least an hour after.”

The new study focused on a 55-year-old woman who had undergone a hysterectomy and was not taking hormone replacement therapy. She had trained in a specific method called the “Wave Technique,” which involves rhythmic flexing and relaxing of the pelvic floor muscles. This training originally involved using a small jade egg to sensitize the muscles, but the participant had advanced to performing the movements without any device.

The experiment took place in a private hospital room where the participant remained fully clothed. The participant engaged in three distinct testing sessions, each separated by 48-hour intervals to ensure her hormone levels returned to baseline. These sessions included a 2.5-minute orgasm induction, a 10-minute orgasm induction, and a 10-minute Pilates workout, which served as a control condition.

To measure physiological changes, a registered nurse drew blood samples fifteen minutes before each session, immediately afterward, and fifteen minutes post-session. The scientists analyzed the blood for prolactin to see if the muscle-induced orgasms triggered the expected hormonal release. They also measured levels of luteinizing hormone, follicle-stimulating hormone, and testosterone to track other potential endocrine changes.

In a separate session, the participant used a Bluetooth-enabled biofeedback device called the Lioness 2.0 to record muscle activity. The researchers modified the device to prevent any vibration or direct clitoral stimulation. This ensured the device only recorded pressure changes inside the vagina generated by the participant’s muscle movements.

The blood analysis revealed hormonal shifts following the muscle-induced orgasms. After the 2.5-minute session, prolactin levels rose to 110 percent of the baseline measurement. Following the 10-minute session, prolactin levels increased even further, reaching 141 percent of the baseline.

The findings indicate that “NGSOs are real from a physiological and psychological standpoint, and that probably all women can be trained to induce them, regardless of their hormonal status (pre- versus post-menopausal),” Pfaus told PsyPost.

In contrast, the Pilates workout resulted in a 12 percent decrease in prolactin levels. This differentiation suggests that the hormonal spike was specific to the sexual release and not merely a result of physical exertion. While exercise can affect hormones, it did not mimic the prolactin surge associated with orgasm in this context.

The researchers also tracked testosterone levels during the sessions. Testosterone increased slightly after the 10-minute orgasm session and the Pilates workout. This aligns with known data suggesting that acute physical exercise can elevate androgen levels in women.

Data from the Lioness sensor provided a visual representation of the physical activity during the orgasms. The device recorded rhythmic contractions occurring at intervals of roughly 7 to 15 seconds throughout the session. These contractions appeared as spikes in muscle tension that matched the participant’s subjective experience of climax.

During the Lioness session, the participant reportedly experienced over thirty distinct peaks within ten minutes. The sensor data showed a pattern of “push and pull” contractions that built up tension leading to each spike. The researchers noted that the participant vocalized during these peaks, signaling the moment of release.

“It is likely that the pelvic floor muscles are tensing around the nerves that carry information from the clitoris, vagina, and cervix into the spinal cord, and that women who learn this are sensitizing the nerve fibers to the abdominal and pelvic floor stimulation,” Pfaus explained. “So it is a very real phenomenon, and one that offers new vistas for women with orgasm difficulties.”

“Likewise, we have recently conducted a similar experiment on hypnotically induced orgasms, which show the same increase in prolactin. These orgasms are more likely to be ‘top down’ than ‘bottom up,’ although all women and men who show them have abdominal and pelvic floor reactions as the orgasm occurs.”

“The practical significance is that probably all women have this ability and it is just a matter of learning how to control the abdominal and pelvic floor musculature. It means that orgasm is not something your partner ‘gives’ you, but something you control in your own body and brain.”

A primary limitation of this research is that it is a case study involving a single participant. While the results provide strong biological evidence for this specific individual, they may not universally apply to all women. The participant was highly trained in a specific technique, which may be difficult for the average person to replicate without instruction.

Despite the small sample size, the study challenges the common misconception that orgasms without genital touch are fake. “It is common to disbelieve women who can have NGSOs induced by fantasy or the kind of pelvic floor movements we observed here. Likewise, it is common to think that orgasms induced by hypnosis are a party trick, and that people having them are simply faking it for the hypnotist. You cannot increase your prolactin at will. It is an objective marker of orgasm, so it is not faked,” Pfaus said.

The scientists suggest that future research should involve a larger group of participants to verify these findings across a broader population. The researchers express an interest in studying men and women who can induce orgasms through other non-contact methods, such as hypnosis. Expanding the participant pool would help determine if this ability is a general human trait or specific to certain individuals.

Another goal for future study is to use functional magnetic resonance imaging, or fMRI, to observe brain activity during these non-genitally stimulated orgasms. Comparing brain scans of these experiences with those of standard orgasms could reveal how the brain processes different types of sexual pleasure. Such imaging could map the neural pathways involved in generating orgasm through muscle movement alone.

Ultimately, the researchers hope to investigate whether teaching these pelvic floor techniques could help women who suffer from lifelong difficulties achieving orgasm. If women can learn to sensitize their pelvic nerves through exercise, it might offer a non-pharmaceutical treatment for sexual dysfunction.

“Studies on orgasm are very difficult to get approved by institutional research ethics boards,” Pfaus noted. “There is a general fear that bad things could happen studying something so personal and intimate. And this is true even if, for example, the person’s partner is stimulating their genitals in a totally private space. NGSOs of course do not require direct genital stimulation and occur when the participant is fully clothed. So, in addition to their clinical significance, NGSOs may open the door for more study of orgasm function (e.g., in fertility) and the neurobiology of orgasm in general.”

The study, “Non-Genitally Stimulated Orgasms Increase Plasma Prolactin in a Menopausal Woman,” was authored by James G. Pfaus, Roni Erez, Nitsan Erez, and Jan Novák.

Yesterday — 15 February 2026

Exercise rivals therapy and medication for treating depression and anxiety

15 February 2026 at 21:00

A new, comprehensive analysis confirms that physical activity is a highly effective treatment for depression and anxiety, offering benefits comparable to therapy or medication. The research suggests that specific types of exercise, such as group activities for depression or short-term programs for anxiety, can be tailored to maximize mental health benefits for different people. These findings were recently published in the British Journal of Sports Medicine.

Mental health disorders are a growing concern across the globe. Depression and anxiety affect a vast number of people, disrupting daily life and physical health. While antidepressants and psychotherapy are standard treatments, they are not always sufficient for every patient. Rates of these conditions continue to rise despite the availability of traditional care.

Health experts have explored exercise as an alternative or add-on treatment for many years. However, previous attempts to summarize the evidence have faced challenges. Earlier reviews often mixed data from healthy individuals with data from patients suffering from chronic physical illnesses. This made it difficult to determine if mental improvements were due to exercise itself or simply a result of better physical health.

To address this uncertainty, a team of researchers conducted a “meta-meta-analysis,” also known as an umbrella review. This is a highly rigorous study design that sits at the top of the evidence hierarchy. Instead of running a new experiment on people, the researchers analyzed data from existing meta-analyses.

A meta-analysis pools the results of many individual scientific experiments to find a common truth. This umbrella review went a step further by pooling the results of those pools. The goal was to provide the most precise estimate possible of how exercise impacts mental health.
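To make the pooling step concrete, the basic operation a meta-analysis performs is inverse-variance weighting: each study's effect size is weighted by the precision of its estimate. The Python sketch below illustrates this with a handful of hypothetical effect sizes and standard errors; it is not the authors' actual statistical pipeline, only the underlying arithmetic.

```python
# Minimal sketch of inverse-variance (fixed-effect) pooling, the basic
# operation a meta-analysis performs. The standardized mean differences
# and standard errors below are hypothetical, for illustration only;
# they are not values from the study.
import math

effects = [-0.45, -0.60, -0.30, -0.52]   # hypothetical per-study effect sizes
ses     = [0.15, 0.20, 0.12, 0.18]       # hypothetical standard errors

weights = [1 / se**2 for se in ses]      # more precise studies weigh more
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```

An umbrella review applies this same logic one level up, treating already-pooled estimates as its inputs.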

The research team was led by Neil Richard Munro from James Cook University in Queensland, Australia. He collaborated with colleagues from institutions in Australia and the United States. Their primary aim was to isolate the effect of exercise on mental health by excluding studies involving participants with pre-existing chronic physiological conditions.

This exclusion was a key part of their methodology. By removing data related to conditions like heart disease or cancer, the team removed potential confounding factors. They wanted to ensure that any observed benefits were due to the direct impact of exercise on the brain and psychological state.

The researchers searched five major electronic databases for relevant literature. They gathered data from studies published up to July 2025. The scope of their search was massive, covering children, adults, and older adults.

The final dataset included 63 umbrella reviews. These reviews encompassed 81 specific meta-analyses. In total, the analysis represented data from 1,079 individual studies and involved 79,551 participants.

The sheer volume of data allowed the researchers to look for subtle patterns. They examined different types of exercise, such as aerobic activities, resistance training, and mind-body practices like yoga. They also analyzed variables like intensity, duration, and whether the exercise was performed alone or in a group.

The overarching finding was clear and positive. Exercise reduced symptoms of both depression and anxiety across all population groups. The magnitude of the benefit was described as medium for depression and small-to-medium for anxiety.

For depression, the study found that all types of exercise were beneficial. However, aerobic exercise—activities that get the heart rate up, like running or cycling—showed the strongest impact. This suggests that cardiovascular engagement may trigger biological pathways that fight depressive symptoms.

The social context of the physical activity also appeared to matter greatly for depression. The data indicated that exercising in a group setting was more effective than exercising alone. Similarly, programs that were supervised by a professional yielded better results than unsupervised routines.

These findings regarding group and supervised settings point to the importance of social support. The shared experience of a class or team environment may provide a psychological sense of belonging. This social connection likely acts as an additional antidepressant mechanism alongside the physical exertion.

The study identified specific demographic groups that responded particularly well to exercise. “Emerging adults,” defined as individuals aged 18 to 30, saw the greatest benefits for depression. This is a critical age range, as it often coincides with the onset of many mental health challenges.

Another group that saw substantial benefits was women in the postnatal period. Postpartum depression is a severe and common condition. The finding that exercise is a highly effective intervention for this group offers a promising, non-pharmaceutical tool for maternal mental health.

When analyzing anxiety, the researchers found slightly different patterns. While aerobic exercise was still the most effective mode, all forms of movement helped reduce symptoms. This included resistance training and mind-body exercises like yoga or tai chi.

The optimal parameters for anxiety relief were notably different than for depression. The data suggested that shorter programs were highly effective. Interventions lasting up to eight weeks showed the strongest impact on anxiety symptoms.

Regarding intensity, the findings for anxiety were somewhat counterintuitive. Lower intensity exercise appeared to be more effective than high-intensity workouts. This could be because high-intensity exertion mimics some physical symptoms of anxiety, such as a racing heart, which might be uncomfortable for some patients.

The researchers compared the effects of exercise to traditional treatments. They found that the benefits of physical activity were comparable to those provided by psychotherapy and medications. This positions exercise not just as a lifestyle choice, but as a legitimate clinical intervention.

Despite the strength of these findings, the authors noted several caveats. The definitions of exercise intensity varied across the original studies, making it hard to set precise boundaries. What one study considers “moderate” might be “vigorous” in another.

There was also a potential sign of publication bias in the anxiety studies. This refers to the tendency for scientific journals to publish positive results more often than negative ones. However, the sheer number of studies analyzed provides a buffer against this potential distortion.

Another limitation was the overlap of participants in some of the underlying reviews. The researchers used a statistical method to check for this duplication. While some overlap existed, particularly in studies of youth and perinatal women, the overall quality of the evidence remained high.

The authors emphasized that motivation remains a hurdle. Knowing exercise helps is different from actually doing it. Future research needs to focus on how to help people with depression and anxiety stick to an exercise routine.

The study supports a shift in how mental health is treated clinically. The authors argue that health professionals should prescribe exercise with the same confidence as they prescribe pills. It is a cost-effective, accessible option with few side effects.

For public health policy, the implications are broad. The study suggests that guidelines should explicitly recommend exercise as a first-line treatment. This is especially relevant for young adults and new mothers, who showed the strongest responses.

Tailoring the prescription is key. A “one size fits all” approach does not apply to mental health. A depressed patient might benefit most from a running group, while an anxious patient might prefer a gentle, short-term yoga program.

The authors concluded that the evidence is now undeniable. Exercise is a potent medicine for the mind. The challenge now lies in integration and implementation within healthcare systems.

Mental health professionals can use these findings to offer evidence-based advice. They can move beyond vague recommendations to “be more active.” Instead, they can suggest specific formats, like group classes for depression, based on rigorous data.

Ultimately, this study serves as a comprehensive validation of movement as therapy. It strips away the noise of co-occurring physical diseases to show that exercise heals the brain. It offers a hopeful, empowering path for millions struggling with mental health issues.

The study, “Effect of exercise on depression and anxiety symptoms: systematic umbrella review with meta-meta-analysis,” was authored by Neil Richard Munro, Samantha Teague, Klaire Somoray, Aaron Simpson, Timothy Budden, Ben Jackson, Amanda Rebar, and James Dimmock.

Genetic risk for anhedonia linked to altered brain activity during reward processing

15 February 2026 at 19:00

A study in Germany found that individuals with higher polygenic risk scores for anhedonia showed specific patterns of brain activity when processing anticipated monetary rewards. More specifically, they showed decreased activation in the bilateral putamen and left middle frontal gyrus during anticipation of rewards and decreased activation in the right caudate while receiving feedback. The research was published in the Journal of Affective Disorders.

Anhedonia is the reduced ability to experience pleasure or interest in activities that are normally rewarding. It is a core symptom of major depressive disorder but also appears in other conditions such as schizophrenia, substance use disorders, and bipolar disorder.

Anhedonia can involve diminished pleasure during activities (consummatory anhedonia) or reduced motivation and anticipation for rewards (anticipatory anhedonia). People with anhedonia may withdraw from social interactions, hobbies, or goals they once enjoyed. Neurobiologically, it is linked to dysfunction in brain reward systems, particularly pathways involving dopamine.

Psychological factors such as chronic stress, trauma, and negative cognitive patterns can contribute to its development. Anhedonia is associated with poorer quality of life and worse clinical outcomes when it persists. It can make treatment more challenging because reduced motivation may limit engagement in therapy or daily activities.

Study author Nicholas Schäfer and his colleagues investigated the role of a polygenic risk score for anhedonia in functional brain activity during the monetary incentive delay (MID) task. The MID task is a paradigm that requires participants to respond quickly to cues signaling potential monetary gains or losses. A polygenic risk score is an estimate of an individual’s genetic predisposition to a trait or disorder created by aggregating the effects of many genetic variants across the genome.
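In practice, a polygenic risk score is a weighted sum: for each genetic variant, a person's count of risk alleles (0, 1, or 2) is multiplied by that variant's effect size from a genome-wide association study, and the products are added up. A minimal Python sketch with hypothetical variants and weights illustrates the idea; real scores aggregate thousands to millions of variants.

```python
# Minimal sketch of a polygenic risk score: a weighted sum of risk-allele
# counts across variants. The variant IDs, effect weights, and genotype
# below are hypothetical, for illustration only.
gwas_weights = {          # per-allele effect sizes from a (hypothetical) GWAS
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.08,
}
genotype = {              # risk-allele counts (0, 1, or 2) for one person
    "rs0001": 2,
    "rs0002": 1,
    "rs0003": 0,
}

prs = sum(gwas_weights[v] * genotype[v] for v in gwas_weights)
print(f"polygenic risk score = {prs:.3f}")   # higher = more genetic liability
```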

Participants were drawn from the MooDs and IntegraMent studies, two multisite neuroimaging projects that recruited a total of 974 individuals; this specific analysis used data from 517 of them. The sample included 57 patients with major depressive disorder, 39 with schizophrenia, and 48 with bipolar disorder. The remaining 373 participants were healthy controls (a group that included 243 healthy individuals and 130 healthy first-degree relatives of patients).

Study authors calculated participants’ polygenic risk scores for anhedonia using their genotype data. They also assessed participants’ anhedonia scores using a questionnaire (derived from the SCL-90). Participants completed the monetary incentive delay task while undergoing functional magnetic resonance imaging (fMRI) of their brains. In this task, participants were presented with arrows that indicated either a potential monetary reward, a potential loss, no reward, or a cue for verbal trials. This was the anticipation phase.

Participants then had to react to a visual target by pressing a button (except in the neutral trials where no action was required). After this, they received feedback about whether they lost or won 2 EUR, or received neutral or verbal feedback (e.g., “You reacted slow”). This was the feedback phase of the task.

Results showed that individuals with higher polygenic risk scores for anhedonia tended to show decreased activation in the putamen in both hemispheres and in the left middle frontal gyrus during the anticipation phase of the task. They also showed lower activation in the right caudate during the feedback phase (specifically during reward feedback).

Participants with higher polygenic risk scores for anhedonia also tended to show lower activity in the left middle frontal gyrus while anticipating financial loss and during salience processing (deciding how important the events at hand are).

However, while participants were receiving feedback about losing 2 EUR, individuals with higher polygenic risk scores for anhedonia tended to show heightened activity in the bilateral putamen and right caudate regions.

The right caudate nucleus of the brain is involved in goal-directed behavior, reward-based learning, and the integration of motivation with action selection, while the left middle frontal gyrus supports executive functions such as working memory, planning, and top-down cognitive control. The putamen primarily contributes to motor control and habit formation, and it also plays a role in reinforcing learned actions through reward processing.

“Our results highlight the importance of the striatum and prefrontal cortex in the context of a genetic risk for anhedonia,” the study authors concluded.

The study contributes to the scientific understanding of the neural basis of anhedonia. However, it should be noted that studies of neural correlates of psychological characteristics often yield inconsistent results. There are often pronounced individual differences in brain activities associated with specific psychological characteristics. Further studies are needed to verify and corroborate the reported findings.

The paper, “Associations between polygenic risk for anhedonia and functional brain activity during reward processing,” was authored by Nicholas Schäfer, Swapnil Awasthi, Stephan Ripke, Anna Daniels, Andreas Meyer-Lindenberg, Heike Tost, Andreas Heinz, Henrik Walter, and Susanne Erk.

Daily soda consumption linked to cognitive difficulties in teens

15 February 2026 at 17:00

New research indicates that daily consumption of sodas and sports drinks may hinder the cognitive abilities of adolescents. A recent analysis suggests that these sugary beverages disrupt sleep patterns, which in turn leads to difficulties with memory, concentration, and decision-making. These findings were published in the journal Nutritional Neuroscience.

The adolescent brain undergoes a period of rapid development and reorganization. This phase is characterized by changes in the prefrontal cortex, the area of the brain responsible for planning and impulse control. Because the brain is still maturing, it is particularly sensitive to dietary inputs and environmental factors.

Researchers have previously identified links between high sugar intake and various health issues. However, the specific relationship between different types of sugary drinks and mental clarity in teenagers has remained less defined. Shuo Feng, a researcher at the Department of Health Behavior at Texas A&M University, sought to clarify this connection.

Feng designed the study to look beyond a simple direct link between sugar and brain function. The investigation aimed to determine if sleep duration acts as a “mediator.” A mediator is a variable that explains the process through which two other variables are related. In this case, the question was whether sugary drinks cause poor sleep, which then causes cognitive trouble.
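In its simplest form, a mediation analysis fits two regressions and multiplies the resulting paths: the exposure's effect on the mediator (a) and the mediator's effect on the outcome while controlling for the exposure (b); the product of a and b estimates the indirect effect. The Python sketch below illustrates this logic on simulated data. It is not the study's actual model, which used survey data, a binary cognitive-difficulties outcome, and covariates such as age, gender, and physical activity.

```python
# Minimal sketch of a product-of-coefficients mediation analysis on
# simulated data (illustrative only; not the study's model or results).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
soda  = rng.integers(0, 8, n).astype(float)               # simulated drinks/week
sleep = 8.0 - 0.15 * soda + rng.normal(0, 1, n)           # mediator: hours of sleep
cogn  = 2.0 - 0.30 * sleep + 0.05 * soda + rng.normal(0, 1, n)  # outcome score

# Path a: exposure -> mediator
a = sm.OLS(sleep, sm.add_constant(soda)).fit().params[1]

# Path b: mediator -> outcome, controlling for the exposure
X = sm.add_constant(np.column_stack([soda, sleep]))
b = sm.OLS(cogn, X).fit().params[2]

print(f"indirect (mediated) effect a*b = {a * b:.3f}")
```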

The study utilized data from the 2021 Youth Risk Behavior Surveillance Survey (YRBS). This is a large-scale, national survey administered by the Centers for Disease Control and Prevention (CDC). It monitors health behaviors contributing to the leading causes of death and disability among youth.

The final dataset included responses from 8,229 high school students across the United States. The survey asked students to report how often they consumed soda and sports drinks over the past week. It also asked them to estimate their average nightly sleep duration.

To measure cognitive difficulties, the survey included a specific question regarding mental clarity. Students were asked if physical, mental, or emotional problems caused them “serious difficulty concentrating, remembering, or making decisions.” Feng used statistical models to analyze the relationships between these variables while accounting for factors like age, gender, and physical activity.

The analysis revealed distinct patterns based on the type of beverage and the sex of the student. Daily consumption of soda showed a strong association with cognitive difficulties for both boys and girls. Compared to non-drinkers, adolescents who drank soda every day had higher odds of reporting serious trouble with memory and concentration.

The results for sports drinks appeared slightly different. Daily consumption of sports drinks was linked to cognitive difficulties in girls. This association was not statistically clear for boys in the same daily consumption category.

A major component of the findings focused on the role of sleep. The data showed that higher intake of sugar-sweetened beverages correlated with fewer hours of rest. This reduction in sleep served as a pathway linking the drinks to cognitive struggles.

For both boys and girls, sleep duration mediated the relationship between soda intake and cognitive difficulties. This means that part of the reason soda drinkers struggle with focus is likely because they are not sleeping enough. A similar mediation effect was found regarding sports drinks.

The biological mechanisms behind these findings involve the brain’s chemical signaling systems. Many sugar-sweetened beverages contain caffeine. Caffeine acts as an antagonist to adenosine, a brain chemical that promotes sleepiness. By blocking adenosine receptors, caffeine increases alertness temporarily but disrupts the body’s natural drive for sleep.

Sugar itself also impacts the brain’s reward system. Consuming high amounts of sugar stimulates the release of dopamine. This is a neurotransmitter associated with pleasure and motivation.

Chronic overstimulation of this reward system during adolescence can alter gene expression in the hypothalamus. This brain region regulates various bodily functions, including sleep cycles and memory. Over time, these chemical changes may increase vulnerability to cognitive dysregulation.

The study also touched upon the concept of synaptic plasticity. This term refers to the brain’s ability to strengthen or weaken connections between neurons. Estrogens, particularly estradiol, play a role in enhancing this plasticity and promoting blood flow in the brain.

Biological differences in how males and females process these chemicals may explain the variation in results. For instance, the study notes that sex-specific mechanisms could influence how sugar affects the brain. This might shed light on why sports drinks showed a stronger negative association with cognitive function in girls than in boys.

The sugar content in sports drinks is generally lower than that of sodas. A typical 20-ounce sports drink contains about 34 grams of sugar. In contrast, a similar amount of soda may contain nearly double that amount.

This difference in sugar load might result in less stimulation of the dopamine reward system for sports drink consumers. Additionally, sports drinks are often consumed in the context of physical exercise. Exercise is known to improve metabolism and hormonal regulation.

Improved metabolism from exercise might help the body process unhealthy ingredients more rapidly. This could potentially buffer some of the negative effects on the brain. However, the study suggests that for girls consuming these drinks daily, the negative cognitive outcomes persist.

The researcher pointed out that socioeconomic factors often influence dietary choices. Marketing for sugary beverages frequently targets younger demographics. The availability of these drinks in schools and communities remains high.

There are limitations to this study that require consideration. The data comes from a cross-sectional survey. This means it captures a snapshot in time rather than following individuals over years.

Because of this design, the study cannot definitively prove that sugary drinks cause cognitive decline. It can only show that the two are statistically linked. It is possible that students with cognitive difficulties are more prone to drinking sugary beverages, rather than the other way around.

Another limitation is the reliance on self-reported data. Students might not accurately remember how many drinks they consumed in the past week. They might also struggle to estimate their average sleep duration precisely.

The measurement of cognitive difficulties relied on a single, broad question. This question combined memory, concentration, and decision-making into one category. Future research would benefit from using more granular tests to measure these specific mental functions separately.

The study also had to exclude a number of participants due to missing data. A sensitivity analysis showed that the final group of students was slightly older and more racially diverse than those excluded. This could potentially introduce selection bias into the final results.

Despite these caveats, the research offers evidence supporting public health interventions. Reducing the intake of sugar-sweetened beverages could be a practical strategy to improve youth health. Such a reduction may lead to better sleep duration and improved academic performance.

Educators and health professionals might consider emphasizing sleep hygiene as part of nutritional counseling. Addressing the consumption of caffeine and sugar, particularly in the evening, could help restore natural sleep cycles. This is vital for the developing adolescent brain.

Future studies should aim to replicate these findings using objective measures. Wearable technology could provide more accurate data on sleep duration and quality. Controlled trials could also help isolate the effects of specific ingredients like high-fructose corn syrup or caffeine.

The study highlights a clear intersection between diet, rest, and mental function. It suggests that what teenagers drink has consequences that extend beyond physical weight or dental health. The impact reaches into the classroom and their daily ability to process information.

The study, “The association of sugar-sweetened beverages consumption with cognitive difficulties among U.S. adolescents: a mediation effect of sleep using Youth Risk Behavior Surveillance Survey 2021,” was authored by Shuo Feng.

A specific mental strategy appears to boost relationship problem-solving in a big way

15 February 2026 at 15:00

New research published in the Journal of Social and Personal Relationships provides evidence that a specific mental exercise can help couples resolve conflicts more effectively than simple positive thinking. The study indicates that a self-regulation strategy known as “mental contrasting” encourages partners to engage with the internal obstacles preventing them from solving their problems.

Romantic relationships inevitably involve conflict. How couples navigate these disagreements is a strong predictor of whether the relationship will last and how satisfied the partners will feel. Effective problem-solving usually involves constructive communication and emotional responsiveness, while ineffective management is characterized by defensiveness or avoidance.

While counseling is a traditional route for improving these skills, it can be time-consuming and expensive. As a result, psychologists have sought to identify effective, self-administered strategies that couples can use on their own to navigate difficulties.

“Almost every couple faces problems sooner or later. Sadly, most couples, especially those whose satisfaction is not (yet) critically affected, are unlikely to participate in couple intervention programs due to the substantial time and money investment required. That’s why we wanted to test whether a brief, scalable, and self-guided exercise can have a meaningful impact on couples’ problem-solving behavior,” said study author Henrik Jöhnk, a research associate at Zeppelin University.

The researchers focused on a strategy called mental contrasting. This technique is distinct from positive thinking, or “indulging.” When people indulge, they imagine a desired future without considering the reality that stands in the way. In mental contrasting, an individual identifies a wish and the best outcome of fulfilling that wish, but then immediately reflects on the main inner obstacle—such as an emotion, habit, or belief—that prevents them from realizing that future.

Prior studies have shown that mental contrasting helps individuals regulate their behavior by creating a strong mental link between the desired future and the obstacle that must be overcome. The researchers in this study wanted to determine if this internal cognitive process could translate into better interpersonal communication between two partners.

The study involved 105 mixed-gender couples living in Germany. The participants ranged in age from 19 to 60, with an average age of roughly 27 years. Most were in committed relationships, with an average duration of three and a half years. The study was conducted remotely using video conferencing software.

To begin the experiment, both partners in a couple independently listed topics that caused disagreements in their relationship. They then came together to agree on one specific problem they wanted to solve. Once a problem was selected, the partners separated into different virtual rooms to complete the experimental task.

The couples were randomly assigned to one of two conditions. In the mental contrasting condition, each partner was asked to imagine the most positive aspect of resolving their chosen problem. Following this, they were asked to identify and imagine their main inner obstacle that was holding them back from resolving it. In the indulging condition, participants also imagined the most positive aspect of the resolution, but instead of focusing on an obstacle, they were asked to imagine a second positive aspect. This condition mimicked standard positive thinking or daydreaming.

After these individual mental exercises, the partners rejoined in the same physical room and were recorded having a ten-minute discussion about their problem via Zoom. Researchers later coded these interactions, looking for specific behaviors. They measured “self-disclosure,” which is the act of revealing personal feelings, attitudes, and needs. They also measured “solution suggestions,” counting how often partners proposed specific ways to fix the problem. Two weeks after the experiment, the couples completed a follow-up survey to report on whether they had made progress in resolving the conflict.

The results showed that mental contrasting had a measurable impact on how couples interacted and how successful they were at solving their problems. Regarding the long-term outcome, couples who used mental contrasting reported greater problem resolution two weeks later compared to those who used indulging. This benefit was specifically observed for problems that the partners perceived as highly important. When the issue was of low importance, the type of mental exercise made less of a difference.

“For a brief, self-guided exercise, the effects are surprisingly strong,” Jöhnk told PsyPost. “In particular, couples who are still relatively satisfied may benefit from trying mental contrasting in order to identify new ways forward. At the same time, these effects should not be seen as comparable to those of established couple therapies, which typically involve multiple sessions over months or years. Mental contrasting is best understood as a tool and a complement—not an alternative—to existing interventions.”

The video analysis revealed that the intervention changed the behavior of men and women in distinct ways. Men in the mental contrasting condition engaged in significantly more self-disclosure than men in the indulging condition. Specifically, they were more likely to verbalize their feelings and explain the attitudes driving their behavior. In the indulging condition, men showed typical patterns of disclosing less than women. However, in the mental contrasting condition, men’s level of self-disclosure rose to match that of the women.

This suggests that reflecting on internal obstacles helped men overcome barriers to vulnerability. By recognizing that an emotion like anger or insecurity was the obstacle, they became more likely to express that emotion to their partner. This is significant because self-disclosure is a key component of intimacy and helps partners understand the root causes of a conflict.

Women responded to the intervention differently. Women in the mental contrasting condition suggested fewer solutions than those in the indulging group. This reduction in solution suggestions was particularly evident when the problem was rated as important. While offering fewer solutions might sound negative, the researchers interpret this as a positive shift toward quality over quantity.

“What surprised me most was that mental contrasting didn’t increase the number of solutions people suggested for their problems,” Jöhnk said. “Instead, it appeared to slow the process down: people (especially women in our study) were less likely to offer quick or premature fixes, which may actually support effective problem-solving.”

In many conflicts, rushing to offer solutions can be a way to bypass necessary emotional processing. By suggesting fewer solutions, the women may have been more selective and thoughtful, avoiding premature fixes that would not address the underlying issue. The data showed that in the mental contrasting condition, participants were more likely to suggest a solution immediately after engaging in self-disclosure, implying that the solutions offered were more grounded in the reality of their feelings.

The study provides evidence that focusing on obstacles, rather than ignoring them, fosters a more realistic and grounded approach to relationship maintenance. Indulging in positive fantasies can sometimes drain the energy needed for action or lead to disappointment when reality does not match the fantasy. Mental contrasting appears to mobilize individuals to tackle the hard work required for resolving serious issues.

“To resolve relationship problems, it’s not enough to just hope things will get better,” Jöhnk explained. “Our research shows that people benefit from also facing their own inner obstacles like anger, fear, or insecurity that often get in the way of constructive conversations and actual change.”

But there are some limitations to this study. The sample consisted largely of young, educated couples who were relatively satisfied with their relationships. The dynamics of problem-solving might look very different in couples who are highly distressed or on the brink of separation. In those cases, the problems might be perceived as insurmountable, and mental contrasting might lead to disengagement rather than engagement.

Additionally, the study relied on a specific experimental setup using Zoom. While this allowed the researchers to observe couples in their own homes, the presence of a recording device and the structured nature of the task might have influenced behavior. The researchers also only analyzed verbal communication. Non-verbal cues, such as tone of voice, facial expressions, and body language, play a massive role in conflict and were not part of the behavioral coding.

“We are still in the middle of investigating the role of mental contrasting in romantic relationships, but this line of research is now expanding, supported by funding from the German Research Foundation,” Jöhnk noted. “A next step is to examine whether and how mental contrasting may benefit highly distressed couples, whose problems are often difficult or even impossible to fully resolve. In particular, we aim to study how mental contrasting shapes the way couples think about and engage with their problems when quick solutions are unlikely.”

“Readers who are curious to learn more about mental contrasting can visit https://woopmylife.org, which offers free, evidence-based resources on mental contrasting and WOOP, a practical self-regulation strategy based on this research. For a deeper introduction, I also recommend Rethinking Positive Thinking by Gabriele Oettingen, who supervised this project and holds senior professorships at both New York University and Zeppelin University.”

The study, “Mental contrasting and problem-solving in romantic relationships: A dyadic behavioral observation study,” was authored by Henrik Jöhnk, Gabriele Oettingen, Kay Brauer, and A. Timur Sevincer.

Psychology professor challenges the idea that dating is a marketplace

15 February 2026 at 07:00

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

On Saturday, February 7, the Modern Wisdom podcast, hosted by Chris Williamson, released episode 1056 featuring Dr. Paul Eastwick, a psychology professor who specializes in attraction and close relationships. The episode explores whether traditional evolutionary theories about dating are accurate in the modern world.

At roughly the 30-minute mark, Dr. Eastwick challenges the popular idea that dating operates like a marketplace where everyone has an objective “mate value.” He argues that viewing people as a number, such as a “seven” or a “ten,” fails to account for human history. Instead of seeking the absolute highest status partner, humans evolved to prioritize compatibility and interdependence. This shift occurred because human children are born relatively helpless and require an immense amount of care to survive.

To support this, Eastwick points to physical changes in human evolution, such as the reduction in size of male canine teeth. This physical shift suggests a move away from aggression and toward “male parental investment,” where fathers play an active role in child-rearing. In the ancient past, being a supportive and cooperative partner was often more effective for survival than simply being the strongest or most dominant hunter.

The conversation then shifts to highlight the difference between “stated preferences” and “revealed preferences.” Stated preferences are the traits people say they want when asked, such as men requesting youth or women requesting wealth. However, revealed preferences are what people actually choose during interactions like speed dating. Eastwick’s research shows that when people meet face-to-face, gender differences often disappear, and both sexes weigh traits like ambition and attractiveness similarly.

This distinction explains why online dating can be so frustrating for many users. Dating apps encourage users to filter potential partners based on rigid demographic “boxes” like height or education level. This prevents people from meeting in person, where subjective chemistry and personality often override those initial checklists. The researcher suggests that loneliness is often a result of screening people out before a real human connection can form.

Eastwick also addresses fears regarding changing gender roles, specifically the rise in women’s education and income levels. Contrary to some cultural narratives, current data indicates that relationships where the woman is more educated than the man are not at higher risk of divorce. The “crisis” of men needing to improve their status may be exaggerated by the lack of in-person socialization.

You can listen to the full interview here.

Scientists use machine learning to control specific brain circuits

15 February 2026 at 05:00

A team of researchers in Japan has developed an artificial intelligence tool called YORU that can identify specific animal behaviors in real time and immediately interact with the animals’ brain circuits. This open-source software, described in a study published in Science Advances, allows biologists to study social interactions with greater speed and precision than previously possible. By treating complex actions as distinct visual objects, the system enables computers to “watch” behaviors like courtship or food sharing and respond within milliseconds.

Biologists have struggled for years to automate the analysis of how animals interact. Social behaviors such as courtship or aggression involve dynamic movements where individuals often touch or obscure one another from the camera’s view. Previous software solutions typically relied on a method called pose estimation. This technique tracks specific body points like a joint, a knee, or a wing tip across many video frames to calculate movement.

These older methods often fail when animals get too close to one another. When two insects overlap, the computer frequently loses track of which leg belongs to which individual. This confusion makes it difficult to trigger experiments at the exact moment a behavior occurs. To solve this, a team including Hayato M. Yamanouchi and Ryosuke F. Takeuchi sought a different approach. They worked under the guidance of senior author Azusa Kamikouchi at Nagoya University.

The group aimed to build a system capable of “closed-loop” feedback. This term refers to an experimental setup where a computer watches an animal and instantly creates a stimulus in response. For example, a computer might turn on a light the moment a fly extends its wing. Achieving this requires software that processes video data faster than the animal moves.
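
To make the closed-loop idea concrete, here is a minimal Python sketch under stated assumptions: a generic per-frame object detector passed in as `model`, and a hypothetical `trigger_light` stimulus function. These names are illustrative inventions, not YORU’s actual interface.

```python
# Minimal closed-loop sketch (illustrative; not YORU's actual API).
# Assumes `model` is a per-frame object detector returning
# (label, confidence, bounding_box) tuples.
import time
import cv2  # OpenCV, used here only for camera capture

def trigger_light():
    """Hypothetical stimulus call; a real rig would drive an LED or laser."""
    print(f"stimulus fired at {time.time():.3f}")

def run_closed_loop(model, target_label="wing_extension", threshold=0.8):
    cap = cv2.VideoCapture(0)  # webcam stand-in for a lab camera
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            detections = model(frame)
            if any(label == target_label and conf >= threshold
                   for label, conf, box in detections):
                trigger_light()  # stimulus lands within the same frame cycle
    finally:
        cap.release()
```

The defining property is that detection and stimulus delivery sit inside a single per-frame loop, so the total reaction time is bounded by how quickly one frame can be processed.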

The researchers built their system using a deep learning algorithm known as object detection. Unlike pose estimation, this method analyzes the entire shape of an animal in a single video frame. The team named their software YORU. This acronym stands for Your Optimal Recognition Utility.

YORU identifies a specific action as a distinct “behavior object.” The software recognizes the visual pattern of two ants sharing food or a male fly vibrating its wing. This approach allows the computer to classify social interactions even when the animals are touching. By viewing the behavior as a unified object rather than a collection of points, the system bypasses the confusion caused by overlapping limbs.

The team tested YORU on several different species to verify its versatility. They recorded videos of fruit flies courting, ants engaging in mouth-to-mouth food transfer—a behavior known as trophallaxis—and zebrafish orienting toward one another. The system achieved detection accuracy rates ranging from roughly 90 to 98 percent compared to human observation.

The software also proved effective at analyzing brain activity in mice. The researchers placed mice on a treadmill within a virtual reality setup. YORU accurately identified behaviors such as running, grooming, and whisker movements. The system matched these physical actions with simultaneous recordings of neural activity in the mouse cortex. This confirmed that the AI could reliably link visible movements to the invisible firing of neurons.

The most advanced test involved a technique called optogenetics. This method allows scientists to switch specific neurons on or off using light. The team genetically modified male fruit flies so that the neurons responsible for their courtship song would be silenced by green light. These neurons are known as pIP10 descending neurons.

YORU watched the flies in real time. When the system detected a male extending his wing to sing, it triggered a green light within milliseconds. The male fly immediately stopped his courtship song. This interruption caused a decrease in mating success that was statistically significant.

Hayato M. Yamanouchi, co-first author from Nagoya University’s Graduate School of Science, highlighted the difference in their approach. He noted, “Instead of tracking body points over time, YORU recognizes entire behaviors from their appearance in a single video frame. It spotted behaviors in flies, ants, and zebrafish with 90-98% accuracy and ran 30% faster than competing tools.”

The researchers then took the experiment a step further by using a projector. They wanted to manipulate only one animal in a pair without affecting the other. They genetically modified female flies to have light-sensitive hearing neurons. Specifically, they targeted neurons in the Johnston’s organ, which is the fly’s equivalent of an ear.

When the male fly extended his wing, YORU calculated the female’s exact position. The system then projected a small circle of light onto her thorax. This light silenced her hearing neurons exactly when the male tried to sing. The female ignored the male’s advances because she could not hear him.

This experiment confirmed the software’s ability to target individuals in a group. Azusa Kamikouchi explained the significance of this precision. “We can silence fly courtship neurons the instant YORU detects wing extension. In a separate experiment, we used targeted light that followed individual flies and blocked just one fly’s hearing neurons while others moved freely nearby.”

The speed of the system was a primary focus for the researchers. They benchmarked YORU against SLEAP, a popular pose-estimation tool. YORU exhibited a mean latency—the delay between seeing an action and reacting to it—of approximately 31 milliseconds. This was roughly 30 percent faster than the alternative method. Such speed is necessary for studying neural circuits, which operate on extremely fast timescales.
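
A latency figure like this is typically obtained by timing the gap between a frame becoming available and the stimulus command being issued, averaged over many frames. The sketch below outlines such a benchmark in generic terms; it is an assumption-laden stand-in, not the authors’ measurement code.

```python
# Generic latency benchmark sketch (not the authors' measurement code):
# time the gap between frame availability and the stimulus command.
import time
import statistics

def mean_latency_ms(model, frames, trigger):
    latencies = []
    for frame in frames:
        t0 = time.perf_counter()
        if model(frame):   # any behavior object detected in this frame
            trigger()
        latencies.append((time.perf_counter() - t0) * 1000.0)
    return statistics.mean(latencies)
```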

The system is also designed to be user-friendly for biologists who may not be experts in computer programming. It includes a graphical user interface that allows researchers to label behaviors and train the AI without writing code. The team has made the software open-source, allowing laboratories worldwide to download and adapt it for their own specific animal models.

While the system offers speed and precision, it relies on the appearance of behavior in a single frame. This design means YORU cannot easily identify behaviors that depend on a sequence of events over time. For example, distinguishing between the beginning and end of a foraging run might require additional analysis. The software excels at spotting “states” of being rather than complex narratives.

The current version also does not automatically track the identity of individual animals over long periods. If two animals look identical and swap places, the software might not distinguish between them without supplementary tools. Researchers may need to combine YORU with other tracking software for studies requiring long-term individual histories.

Hardware limitations present another challenge for the projector-based system. Fast-moving animals might exit the illuminated area before the light pulses if the projector has a slight delay. Future updates could incorporate predictive algorithms to anticipate where an animal will be millisecond by millisecond.

Despite these limitations, YORU represents a new way to interrogate the brain. By allowing computers to recognize social behaviors as they happen, scientists can now ask questions about how the brain navigates the complex social world. The ability to turn specific senses on and off during social exchanges opens new avenues for understanding the neural basis of communication.

The study, “YORU: Animal behavior detection with object-based approach for real-time closed-loop feedback,” was authored by Hayato M. Yamanouchi, Ryosuke F. Takeuchi, Naoya Chiba, Koichi Hashimoto, Takashi Shimizu, Fumitaka Osakada, Ryoya Tanaka, and Azusa Kamikouchi.

One holiday sees a massive spike in emergency contraception sales, and it isn’t Valentine’s Day

15 February 2026 at 03:00

Research published in the BMJ has revealed that retail sales of emergency contraception rise sharply after the New Year holiday in the United States. The surge in demand suggests higher rates of unprotected sex around the New Year than at any other time of the year, including Valentine’s Day.

Unprotected sex has long been recognised as a public health concern due to unintended pregnancies and sexually transmitted infections. Previous research shows that alcohol use, increased social activity, and limited access to contraception can all influence whether people practice safe sex. These factors tend to fluctuate throughout the year, and certain holidays may bring them together. Emergency contraception, which can prevent pregnancy after unprotected sex, offers researchers a way to indirectly measure when these risks may be higher.

Brandon Wagner (Texas Tech University) and Kelly Cleland (American Society for Emergency Contraception) sought to investigate if New Year’s Eve—a holiday commonly associated with parties, alcohol consumption, and romantic expectations—was followed by an increase in emergency contraception sales. They also wanted to compare this with other holidays that might share similar features, such as Valentine’s Day or St. Patrick’s Day.

To investigate, the team analysed weekly sales data for over-the-counter emergency contraception pills (levonorgestrel) across the United States from 2016 to 2022. The dataset covered 362 weeks of sales from traditional “brick and mortar” retailers, including grocery stores, drug stores, and mass merchandisers. Using statistical models, they compared sales in weeks following holidays with sales during non-holiday weeks.

The results showed a clear pattern. In the week after the New Year holiday, sales of emergency contraception rose by about 0.63 units per 1,000 women of reproductive age (15–44). Based on US population estimates, this equated to roughly 41,000 extra units sold in the first week of 2022 alone, compared with what would be expected without the holiday.
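
That total is straightforward to sanity-check. Assuming roughly 65 million US women aged 15 to 44 (a census-based ballpark, not a figure taken from the paper), the per-1,000 rate scales to about 41,000 units:

```python
# Back-of-envelope check of the reported figure. The population size is
# an assumption (~65 million US women aged 15-44 around 2022), not a
# number from the study itself.
women_15_44 = 65_000_000
extra_per_1000 = 0.63
extra_units = extra_per_1000 / 1000 * women_15_44
print(round(extra_units))  # ~40,950, in line with the reported ~41,000
```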

While other holidays also showed increases, none matched the New Year. Sales rose after Valentine’s Day, but only about half as much as after the New Year holiday. Smaller increases followed St. Patrick’s Day and Independence Day. By contrast, holidays not typically associated with heightened sexual activity or alcohol consumption—such as Easter, Mother’s Day, and Father’s Day—showed no significant change in emergency contraception sales.

The researchers say the spike after the New Year likely reflects a combination of factors: more sexual activity, lower contraception vigilance due to alcohol consumption, increased risk of sexual assault, and limited access to contraceptives due to holiday retail closures. Together, these conditions may make New Year’s Eve uniquely associated with unprotected sex.

“More than ever, emergency contraception is a critically important option for people in the US, particularly those living in regions with bans or severe restrictions on abortion. Although this annual spike in sales might seem humorous, it is indicative of unmet contraceptive need that calls for further attention,” Wagner and Cleland noted.

However, the study has important limitations. Sales data do not necessarily reflect actual use of emergency contraception, and the figures exclude purchases made online, in independent pharmacies, or through clinics. The findings also apply only to the United States, meaning patterns could differ in countries with different healthcare systems or cultural practices.

The study, “Retail demand for emergency contraception in United States following New Year holiday: time series study,” was authored by Brandon Wagner and Kelly Cleland.

Religiosity may protect against depression and stress by fostering gratitude and social support

15 February 2026 at 01:00

An analysis of data from the Midlife in the United States (MIDUS) study found that religiosity may protect against depression and stress by fostering feelings of gratitude and social support. The research was published in the Journal of Affective Disorders.

Religiosity refers to the extent to which individuals hold religious beliefs, engage in religious practices, and integrate religion into their daily lives. It encompasses beliefs, behaviors (such as prayer or worship attendance), personal commitment, and identification with a religious community.

A substantial body of research shows that religiosity is positively associated with psychological outcomes, such as higher life satisfaction and greater subjective well-being. Longitudinal and cross-cultural evidence indicates that these associations are modest but robust across different populations and cultural contexts.

In psychology and public health, religiosity tends to be viewed as a potential protective factor for mental health. One key reason for this is that religious involvement can help individuals cope with stressful life events and derive meaning from adversity. According to some models of stress and coping, religiosity may influence well-being by shaping how stressors are appraised and managed. Rather than exerting a direct effect, religiosity appears to provide psychological and social coping resources.

Study authors Ethan D. Lantz and Danielle K. Nadorff sought to explore the mechanisms through which religiosity affects psychological well-being. They hypothesized that higher levels of religiosity would be associated with higher levels of gratitude and social support. In turn, individuals experiencing stronger feelings of gratitude and better social support would tend to report better psychological well-being—defined as lower depressive symptoms and perceived stress, and higher life satisfaction.

The authors analyzed data from the Midlife in the United States (MIDUS) study. MIDUS is a large, long-running national research program that examines how psychological, social, behavioral, and biological factors influence health and well-being as people age.

Specifically, the researchers used data from 1,052 participants in the MIDUS 2 dataset, collected between 2004 and 2006, and 625 participants from the MIDUS Refresher dataset, collected between 2011 and 2014. The average age of participants was 55 years in the MIDUS 2 dataset and approximately 52 years in the MIDUS Refresher. Females made up 55% and 51% of the participants in the two datasets, respectively.

The authors utilized data on participants’ religiosity (collected using the MIDUS Religiosity Questionnaire), depressive symptoms (Center for Epidemiologic Studies Depression Scale), perceived stress (Perceived Stress Scale), life satisfaction (Satisfaction with Life Scale), gratitude (Gratitude Questionnaire), and social support (Support and Strain from Partners, Family, and Friends scale).

They tested a statistical model proposing that religiosity leads to higher feelings of gratitude and greater social support, and that these resources in turn lead to improved psychological well-being. The results supported a “full mediation” model across both datasets, indicating that the relationship between religiosity and well-being was fully accounted for by gratitude and social support.
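
For readers unfamiliar with the term, “full mediation” can be illustrated with a simple regression-based check on synthetic data. The sketch below is a simplified stand-in for the authors’ structural equation model; all variable names and effect sizes are invented for the example.

```python
# Simplified regression-based illustration of "full mediation" on
# synthetic data. The published analysis used a structural equation
# model; all variable names and effect sizes here are invented.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
religiosity = rng.normal(size=n)
gratitude = 0.4 * religiosity + rng.normal(size=n)  # mediator 1
support = 0.3 * religiosity + rng.normal(size=n)    # mediator 2
wellbeing = 0.5 * gratitude + 0.5 * support + rng.normal(size=n)

# Total effect of religiosity on well-being (no mediators in the model)
total = sm.OLS(wellbeing, sm.add_constant(religiosity)).fit()

# Direct effect once the mediators are included; under full mediation
# the religiosity coefficient shrinks toward zero.
X = sm.add_constant(np.column_stack([religiosity, gratitude, support]))
direct = sm.OLS(wellbeing, X).fit()
print(total.params[1], direct.params[1])  # sizable vs. near-zero
```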

“Religiosity may confer protection against affective distress by fostering key psychological and social coping resources. These findings highlight the potential clinical utility of interventions designed to cultivate gratitude and strengthen social support networks as a strategy to improve well-being and reduce symptoms of affective disorders,” the study authors concluded.

The study contributes to the scientific understanding of the psychological correlates of religiosity. However, it should be noted that the cross-sectional design of this specific analysis does not allow for causal inferences to be derived from the results.

The paper, “An attitude of gratitude: How psychological and social resources mediate the protective effect of religiosity on depressive symptoms,” was authored by Ethan D. Lantz and Danielle K. Nadorff.

Virtual parenting games may boost desire for real children, study finds

14 February 2026 at 23:30

Declining birth rates present a demographic challenge for nations across the globe, particularly in East Asia. A new study published in Frontiers in Psychology suggests that playing life simulation video games may influence a player’s desire to have children in the real world. The research indicates that the emotional bonds formed with virtual characters can serve as a psychological pathway that shapes reproductive attitudes.

Societies such as China are currently experiencing a transition marked by persistently low fertility rates. Young adults aged 18 to 35 often report a reluctance to marry and bear children. This hesitation is frequently attributed to high economic costs associated with housing and education. It is also linked to a phenomenon researchers call “risk consciousness.” This mindset involves anxiety regarding the potential loss of personal freedom and the financial burdens of parenthood.

In this environment, digital entertainment has become a primary venue for social interaction and relaxation. Some scholars have argued that online activities might replace real-world relationships. This substitution could theoretically weaken the motivation to start a family. However, other researchers contend that specific types of games might offer a different outcome.

The researchers leading this study are Yuan Qi of Anhui Normal University and Gao Jie of Nanjing University. They collaborated with colleagues to investigate the psychological impact of life simulation games. They focused specifically on a popular game titled Chinese Parents. This game allows players to simulate the experience of raising a child from birth to adulthood. It incorporates culturally specific elements such as academic pressure and intergenerational expectations.

The team sought to understand if the virtual experience of raising a digital child could translate into a real-world desire for parenthood. To do this, they relied on two primary psychological concepts. The first is attachment theory, which typically describes the bonds between humans. The second is the concept of parasocial relationships.

Parasocial relationships refer to one-sided psychological connections that media users form with characters. While the user knows the character is fictional, the feelings of friendship, empathy, or affection feel real. The researchers hypothesized that these virtual bonds might act as a buffer against real-world anxieties. They proposed an “Emotional Compensation Hypothesis.” This hypothesis suggests that the safety of a virtual environment allows young people to experience the emotional rewards of parenting without the immediate financial or social risks.

To test their model, the researchers conducted a survey of 612 gamers who played Chinese Parents. The participants ranged in age from 18 to 35 years old. This age bracket represents the primary demographic for marriage and childbearing decisions. The group was recruited from online gaming communities and university campuses in China.

The survey utilized a statistical approach known as Partial Least Squares Structural Equation Modeling. This method allows researchers to estimate a network of hypothesized relationships among several variables at once. The researchers measured several specific psychological factors.

The first factor was game concentration. This refers to the depth of immersion a player feels. It is a state of flow where the player becomes absorbed in the virtual world. The second factor was identification friendship. This measures the degree to which a player views the virtual character as a friend or an extension of themselves.

The researchers then looked at parasocial relationships, which they divided into two distinct categories. The first category is parasocial cognition. This involves thinking about the character’s motivations and understanding their perspective intellectually. The second category is parasocial emotions. This involves feeling empathy, warmth, and affection toward the character. Finally, the researchers measured fertility desire, which is the self-reported intention to have children in the real world.

The analysis revealed a specific psychological pathway. The researchers found that game concentration did not directly change a player’s desire to have children. Simply being immersed in the game was not enough to alter real-world life planning.

Instead, the results showed that immersion acted as a catalyst for other feelings. High levels of concentration led players to develop a sense of identification friendship with their virtual characters. Players began to see these digital figures as distinct social entities worthy of care.

This sense of friendship then triggered the critical component of the model: parasocial emotions. Players reported feeling genuine empathy and support for their virtual children. The data showed that these emotional connections were the bridge to real-world attitudes. When players formed strong emotional attachments to their in-game characters, they reported a higher desire to have children in real life.

The researchers found that the emotional pathway was the only successful route to influencing fertility desire. The study examined a cognitive pathway, where players intellectually analyzed the character’s situation. The results for this path were not statistically significant regarding the final outcome. Understanding the logic of the character did not correlate with a desire for parenthood. Only the emotional experience of caring for the character had an association with real-world reproductive goals.

The findings support the researchers’ “Emotional Compensation Hypothesis.” In a high-pressure society, simulation games provide a low-stakes environment. Players can satisfy their innate need for caregiving and intimacy through the game. Rather than replacing the desire for real children, this virtual fulfillment appears to keep the positive idea of parenthood alive. The game functions as a “secure base.” It allows individuals to practice the emotions of parenting without the fear of real-world consequences.

There are several limitations to this study that contextualize the findings. The research used a cross-sectional design. This means the data represents a snapshot in time. It shows a correlation between playing the game and wanting children, but it cannot definitively prove that playing the game caused the desire. It is possible that people who already want children are more likely to play parenting simulation games.

The data relied on self-reported questionnaires. This method depends on the honesty and self-awareness of the participants. Additionally, the study focused on a specific game within a specific cultural context. Chinese Parents is deeply rooted in Chinese social norms. The results might not apply to gamers in other countries or players of different genres of simulation games.

The researchers suggest that future studies should employ longitudinal designs. Tracking players over a long period would help determine if these virtual desires translate into actual decisions to have children years later. They also recommend expanding the research to include different cultural backgrounds.

Future investigations could also explore the potential of using such games as psychological tools. If these simulations can provide a safe space for emotional expression, they might help individuals with anxiety regarding family planning. The study opens a conversation about how digital experiences in the modern age intersect with fundamental biological and social motivations.

The study, “From virtual attachments to real-world fertility desires: emotional pathways in game character attachment and parasocial relationships,” was authored by Yuan Qi, Gao Jie, Du Yun, and Ding Yi Zhuo.

Before yesterday

Donald Trump is fueling a surprising shift in gun culture, new research suggests

14 February 2026 at 22:30

A new study published in Injury Epidemiology provides evidence that the 2024 United States presidential election prompted specific groups of Americans to change their behaviors regarding firearms. The findings suggest that individuals who feel threatened by the policies of the current administration, specifically Black adults and those with liberal political views, are reporting stronger urges to carry weapons and keep them easily accessible. This research highlights a potential shift in gun culture where decision-making is increasingly driven by political anxiety and a desire for protection.

Social scientists have previously observed that firearm purchasing patterns often fluctuate in response to major societal events, such as the onset of the COVID-19 pandemic or periods of civil unrest. However, there has been less research into how specific election results influence not just the buying of guns, but also daily habits like carrying a weapon or how it is stored within the home.

To understand these dynamics better, a team led by Michael Anestis from the New Jersey Gun Violence Research Center at Rutgers University sought to track these changes directly. The researchers aimed to determine if the intense rhetoric surrounding the 2024 election altered firearm safety practices among different demographics.

The researchers surveyed a nationally representative group of adults at two different points in time to capture a “before and after” snapshot. The first survey included 1,530 participants and took place between October 22 and November 3, 2024, immediately preceding the election. The team then followed up with 1,359 of the same individuals between January 7 and January 22, 2025. By maintaining the same group of participants, the scientists could directly compare intentions expressed before the election with reported behaviors and urges felt in the weeks following the results.

The data indicated that identifying as Black was associated with an increase in the urge to carry firearms specifically because of the election results. Black participants were also more likely than White participants to express an intention to purchase a firearm in the coming year or to remain undecided, rather than rejecting the idea of ownership. This aligns with broader trends suggesting that the demographics of gun ownership are diversifying.

Similarly, participants who identified with liberal political beliefs reported a stronger urge to carry firearms outside the home as a direct result of the election outcome. The study found that as political views became more liberal, individuals were over two times more likely to change their storage practices to make guns more quickly accessible. This suggests that for some, the perceived need for immediate defense has overridden standard safety recommendations regarding secure storage.
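
A phrase like “over two times more likely” typically refers to an odds ratio from a logistic regression. The toy example below, run entirely on synthetic data, shows how a regression coefficient translates into such a ratio; it is not the study’s model, data, or variables.

```python
# Toy illustration of reading an odds ratio; synthetic data only.
# In logistic regression, a coefficient b on a predictor corresponds
# to an odds ratio of exp(b) per one-unit increase in that predictor.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 1500
liberalism = rng.integers(1, 8, size=n).astype(float)  # 1-7 scale
logit = -3.0 + np.log(2.0) * liberalism                # true OR of 2 per point
changed_storage = rng.random(n) < 1 / (1 + np.exp(-logit))

model = sm.Logit(changed_storage.astype(float),
                 sm.add_constant(liberalism)).fit(disp=0)
print(np.exp(model.params[1]))  # ~2: odds double per point of liberalism
```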

The researchers also examined how participants viewed the stability of the country. Those who perceived a serious threat to American democracy were more likely to store their guns in a way that allowed for quicker access. Individuals who expressed support for political violence showed a complex pattern. They were more likely to intend to buy guns but reported a decreased urge to carry them. This might imply that those who support such violence feel more secure in the current political environment, reducing their perceived need for constant protection outside the home.

Anestis, the executive director of the New Jersey Gun Violence Research Center and lead researcher, noted that the motivation for these changes is clear but potentially perilous.

“These findings highlight that communities that feel directly threatened by the policies and actions of the second Trump administration are reporting a greater drive to purchase firearms, carry them outside their home, and store them in a way that allows quick access and that these urges are a direct result of the presidential election,” Anestis said. “It may be that individuals feel that the government will not protect them or – worse yet – represents a direct threat to their safety, so they are trying to prepare themselves for self-defense.”

These findings appear to align with recent press reports describing a surge in firearm interest among groups not historically associated with gun culture. An NPR report from late 2025 featured accounts from individuals like “Charles,” a doctor who began training with a handgun due to fears for his family’s safety under the Trump administration.

A story from NBC News published earlier this week highlighted a sharp rise in requests for firearm training from women and people of color. Trainers across the country, including organizations like the Liberal Gun Club and Grassroots Defense, have reported that their classes are fully booked. This heightened interest often correlates with specific fears regarding federal law enforcement.

For example, recent news coverage mentions the high-profile shooting of Alex Pretti, a concealed carry permit holder in Minneapolis, by federal agents. Reports indicate that such incidents have stoked fears about constitutional rights violations. Both the academic study and these journalistic accounts paint a picture of defensive gun ownership rising among those who feel politically marginalized.

While the study provides evidence of shifting behaviors, there are limitations to consider. The number of people who actually purchased a gun during the short window between the two surveys was low, which limits the ability of the researchers to draw broad statistical conclusions about immediate purchasing habits.

Additionally, the study relied on self-reported data. This means the results depend on participants answering honestly about sensitive topics like weapon storage and their willingness to use force. Future research will need to examine whether these shifts in behavior result in long-term changes in injury rates or accidental shootings.

“Ultimately, it seems that groups less typically associated with firearm ownership – Black adults and those with liberal political beliefs, for instance – are feeling unsafe in the current environment and trying to find ways to protect themselves and their loved ones,” Anestis said.

However, he cautioned that the method of protection chosen could lead to unintended consequences.

“Although those beliefs are rooted in a drive for safety, firearm acquisition, carrying, and unsecure storage are all associated with the risk for suicide and unintentional injury, so I fear that the current environment is actually increasing the risk of harm,” he said. “Indeed, recent events in Minneapolis make me nervous that the environment fostered by the federal government is putting the safety of Americans in peril.”

The study, “Changes in firearm intentions and behaviors after the 2024 United States presidential election,” was authored by Michael D. Anestis, Allison E. Bond, Kimberly C. Burke, Sultan Altikriti, and Daniel C. Semenza.

This mental trait predicts individual differences in kissing preferences

14 February 2026 at 21:30

A new study published in Sexual and Relationship Therapy provides evidence that a person’s tendency to engage in sexual fantasy influences what they prioritize in a romantic kiss. The findings suggest that the mental act of imagining intimate scenarios is strongly linked to placing a higher value on physical arousal and contact during kissing. This research helps explain the psychological connection between cognitive states and physical intimacy.

From an evolutionary perspective, researchers have proposed three main reasons for romantic kissing. The first is “mate assessment,” which means kissing helps individuals subconsciously judge a potential partner’s health and genetic compatibility. The second is “pair bonding,” where kissing serves to maintain an emotional connection and commitment between partners in a long-term relationship.

The third proposed function is the “arousal hypothesis.” This theory suggests that the primary biological purpose of kissing is to initiate sexual arousal and prepare the body for intercourse. While this seems intuitive, previous scientific attempts to prove this hypothesis have failed to find a strong link. Past data did not show that kissing consistently acts as a catalyst for sexual arousal.

The researchers behind the current study argued that these previous attempts were looking at the problem too narrowly. Earlier work focused almost exclusively on the physical sensation of kissing, such as the sensitivity of the lips or the exchange of saliva. This approach largely ignored the mental and emotional state of the person doing the kissing. The researchers hypothesized that the physical act of kissing might not be arousing on its own without a specific cognitive component. They proposed that sexual fantasy serves as this missing link.

“People have tested three separate hypotheses to explain why we engage in romantic kissing as a species,” said study author Christopher D. Watkins, a senior lecturer in psychology at Abertay University. “At the time there had been no evidence supporting the arousal hypothesis for kissing – that kissing may act as an important catalyst for sex. This may be because these studies focussed on the sensation of kissing as the catalyst, when psychological explanations are also important (e.g., the mental motives for kissing which in turn makes intimacy feel pleasurable/desirable).”

To test this idea, the researchers designed an online study to measure the relationship between fantasy proneness and kissing preferences. They recruited a sample of 412 adults, primarily from the United Kingdom and Italy. After removing participants who did not complete all sections or meet the age requirements, the final analysis focused on 212 individuals. This group was diverse in terms of relationship status, with about half of the participants reporting that they were in a long-term relationship.

Participants completed a series of standardized questionnaires. The first was the “Good Kiss Questionnaire,” which asks individuals to rate the importance of various factors when deciding if someone is a good kisser. These factors included sensory details like the taste of the partner’s lips, the pleasantness of their breath, and the “wetness” of the kiss. The questionnaire also included items related to “contact and arousal,” asking how important physical touching and the feeling of sexual excitement were to the experience.

The scientists also administered the “Sexual Fantasy Questionnaire.” They specifically focused on the “intimacy” subscale, which measures how often a person engages in daytime fantasies about romantic interactions with a partner. This measure was distinct from fantasies that occur during sexual acts or while dreaming. It focused on the mental habit of imagining intimacy during everyday life.

To ensure their results were precise, the researchers included control measures. They measured “general creative experiences” to assess whether a person was simply imaginative in general. This allowed the scientists to determine if the results were driven specifically by sexual fantasy rather than just a vivid imagination. They also measured general sexual desire to see if the effects were independent of a person’s overall sex drive.

The results supported the researchers’ primary prediction. The analysis showed a positive correlation between daytime intimate fantasy and the importance placed on arousal and contact in a good kiss. Individuals who reported a higher tendency to fantasize about intimacy were much more likely to define a “good kiss” as one that includes high levels of physical contact and sexual arousal.

“Your tendency to think and fantasise about intimacy during the day is related to the qualities you associate with a good-quality kiss,” Watkins told PsyPost. “Specifically, the importance we attach to contact and arousal while kissing. As such, our mental preoccupations could facilitate arousal when in close contact with an intimate partner – explaining personal differences in how we approach partners during intimate encounters.”

This relationship held true even after the researchers statistically controlled for other variables. The link between fantasy and kissing preferences remained significant regardless of the participant’s general creativity levels. This suggests that the connection is specific to sexual and romantic cognition, not just a byproduct of having a creative mind.

Additionally, the finding was independent of general sexual desire. While people with higher sex drives did generally value arousal more, the specific habit of fantasizing contributed to this preference over and above general desire. This implies that the mental act of simulating intimacy creates a specific psychological context. This context appears to shape what a person expects and desires from the physical act of kissing.
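
“Statistically controlling” for a variable can be pictured as a partial correlation: remove the covariate’s influence from both measures, then correlate what remains. The sketch below demonstrates this logic on synthetic data with informal variable names; it is an illustration of the general technique, not the authors’ analysis.

```python
# Partial correlation sketch: regress both variables on the covariate
# and correlate the residuals. Synthetic data; names are illustrative.
import numpy as np

def partial_corr(x, y, covar):
    X = np.column_stack([np.ones_like(covar), covar])
    rx = x - X @ np.linalg.lstsq(X, x, rcond=None)[0]
    ry = y - X @ np.linalg.lstsq(X, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

rng = np.random.default_rng(2)
n = 212
desire = rng.normal(size=n)                    # general sexual desire
fantasy = 0.5 * desire + rng.normal(size=n)    # intimate fantasy
arousal_pref = 0.4 * fantasy + 0.3 * desire + rng.normal(size=n)
print(partial_corr(fantasy, arousal_pref, desire))  # link survives control
```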

The study also yielded secondary findings regarding kissing styles. The researchers looked at “reproductive potential,” which they measured by asking participants about their history of sexual partners relative to their peers. This is often used in evolutionary psychology as a proxy for mating strategy. The data showed that individuals with a history of more sexual partners placed greater importance on “technique” in a good kiss. Specifically, they valued synchronization, or whether the partner’s kissing style matched their own.

“One unplanned relationship found in the data was between the importance people placed on technique (e.g., synchronicity) in a good kiss and the extent to which people reported tending to have sex with different people across their relationship history (compared to average peer behavior),” Watkins said. “This may suggest that people who seek sexual variety also seek some form of similarity in partners while intimate (kissing style). This was a small effect though that we would like others to examine/replicate independently in their own studies.”

As with all research, there are some limitations. The research used a cross-sectional design, meaning it captured data from participants at a single point in time. As a result, the researchers cannot prove that fantasizing causes a change in kissing preferences. It is entirely possible that the relationship works in the reverse direction, or that a third factor influences both.

The sample was also heavily skewed toward Western cultures, specifically the UK and Italy. Romantic kissing is not a universal human behavior and is observed in less than half of known cultures. Consequently, these findings may not apply to cultures where kissing is not a standard part of romantic or sexual rituals.

Future research could address these issues by using longitudinal designs. Scientists could follow couples over time to see how the relationship between fantasy and physical intimacy evolves. This would help clarify whether increasing intimate fantasy can revitalize a couple’s physical connection.

“We are looking to develop our testing instruments to explore other experiences related to kissing, and expand our studies on this topic – for example, by establishing clear cause and effect between our thoughts/fantasies and later kissing behaviors or other behaviors reported during close contact with romantic partners,” Watkins said.

The study, “Proclivity for sexual fantasy accounts for differences in the perceived components of a ‘good kiss’,” was authored by Milena V. Rota and Christopher D. Watkins.

Strong ADHD symptoms may boost creative problem-solving through sudden insight

14 February 2026 at 20:30

New research suggests that the distinctive cognitive traits associated with Attention-Deficit/Hyperactivity Disorder, or ADHD, may provide a specific advantage in how people tackle creative challenges. A study conducted by psychologists found that individuals reporting high levels of ADHD symptoms are more likely to solve problems through sudden bursts of insight rather than through methodical analysis.

These findings indicate that while ADHD is often defined by its deficits, the condition may also facilitate a unique style of thinking that bypasses conscious logic to reach a solution. The results were published in the journal Personality and Individual Differences.

Attention-Deficit/Hyperactivity Disorder is a neurodevelopmental condition typically characterized by difficulty maintaining focus, impulsive behavior, and hyperactivity. These symptoms are often viewed through the lens of executive function deficits. Executive function refers to the brain’s management system. It acts like an air traffic controller that directs attention, filters out distractions, and keeps mental processes organized.

When this system works efficiently, a person can focus on a specific task and block out irrelevant information. However, researchers have long hypothesized that a “leaky” attention filter might have a hidden upside. If the brain does not filter out irrelevant information efficiently, it may allow remote ideas and associations to enter conscious awareness. This broader associative net could theoretically help a person connect seemingly unrelated concepts.

To test this theory, a team of researchers led by Hannah Maisano and Christine Chesebrough, along with senior author John Kounios, designed an experiment to measure problem-solving styles. Maisano is a doctoral student at Drexel University, and Chesebrough is a researcher at the Feinstein Institutes for Biomedical Research. They collaborated with Fengqing Zhang and Brian Daly of Drexel University and Mark Beeman of Northwestern University.

The researchers recruited 299 undergraduate students to participate in an online study. The team did not limit the study to individuals with a formal medical diagnosis. Instead, they asked all participants to complete the Adult ADHD Self-Report Scale. This is a standard survey used to measure the frequency and severity of symptoms such as inattention and hyperactivity. This approach allowed the scientists to examine the effects of these traits across a full spectrum of severity.

The core of the experiment involved a test known as the Compound Remote Associates task. Psychologists frequently use this task to measure convergent thinking, which is the ability to find a single correct answer to a problem. In this test, participants view three words that appear unrelated. Their goal is to find a fourth word that creates a familiar compound word or phrase with each of the three.

For example, a participant might see the words “pine,” “crab,” and “sauce.” The correct answer is “apple,” which forms “pineapple,” “crabapple,” and “applesauce.” The participants attempted to solve sixty of these puzzles.
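
Mechanically, checking a candidate answer against a puzzle like this is simple, as the toy snippet below shows. The small compound set is a hypothetical stand-in for a real compound-word dictionary.

```python
# Toy checker for a compound remote associates item. The word set here
# is a hypothetical stand-in for a full compound-word dictionary.
COMPOUNDS = {"pineapple", "crabapple", "applesauce",
             "pinecone", "crabgrass", "soysauce"}

def solves(candidate: str, cues: tuple[str, str, str]) -> bool:
    """True if candidate forms a known compound with every cue word."""
    return all(cue + candidate in COMPOUNDS or candidate + cue in COMPOUNDS
               for cue in cues)

print(solves("apple", ("pine", "crab", "sauce")))  # True
print(solves("cone", ("pine", "crab", "sauce")))   # False
```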

After each successful solution, the participants reported how they arrived at the answer. They had to choose between two distinct cognitive styles. The first style is analysis. This involves a deliberate, step-by-step search for the answer. It is a conscious and slow process. The second style is insight. This is often described as an “Aha!” moment. It occurs when the solution pops into awareness suddenly and surprisingly, often after the person has stopped actively trying to force a result.

The data revealed a distinct pattern in how different groups approached the puzzles. Participants who reported the highest levels of ADHD symptoms relied heavily on insight. They were significantly more likely to solve the problems through sudden realization than through step-by-step logic.

In contrast, the participants with the lowest levels of ADHD symptoms displayed a different profile. This group used a balance of both insight and analysis to find the answers. They did not favor one method overwhelmingly over the other.

“We found that individuals reporting the strongest ADHD symptoms relied significantly more on insight to solve problems,” said Maisano. “They appear to favor unconscious, associative processing that can produce sudden creative breakthroughs.”

The researchers also analyzed the total number of problems solved correctly by each group. This analysis produced an unexpected U-shaped curve. The group with the highest symptoms and the group with the lowest symptoms both performed very well. They solved the most puzzles overall. However, the participants in the middle of the spectrum performed the worst.

This U-shaped result suggests that high and low levels of executive control lead to success through different routes. People with high executive control can effectively use analytical strategies. They can systematically test words until they find a match. People with low executive control, such as those with high ADHD symptoms, struggle with that systematic approach. However, their tendency toward unfocused thought allows their brains to stumble upon the answer unconsciously.

The individuals in the middle appear to be at a disadvantage in this specific context. They may not have enough executive control to be highly effective at analysis. Simultaneously, they may have too much control to allow their minds to wander freely enough for frequent insight.
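
In statistical terms, a U-shaped relationship like this is usually detected by adding a squared term to a regression model: a positive quadratic coefficient with a trough inside the observed range signals the U. The sketch below shows the idea on synthetic data; it is not the authors’ analysis.

```python
# Detecting a U-shaped curve: regress accuracy on symptom score plus
# its square. Synthetic data; coefficients are invented for the demo.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 299
symptoms = rng.uniform(0, 4, size=n)
accuracy = 0.2 * (symptoms - 2.0) ** 2 + rng.normal(scale=0.3, size=n)

X = sm.add_constant(np.column_stack([symptoms, symptoms ** 2]))
fit = sm.OLS(accuracy, X).fit()
b1, b2 = fit.params[1], fit.params[2]
print(b2 > 0, -b1 / (2 * b2))  # positive curvature; trough near mid-range
```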

Kounios explained the implication of this finding. “Our results show that having strong ADHD symptoms can mean being a better creative problem-solver than most people, that is, than people who have low to moderate ADHD symptoms.”

The study aligns with the concept of dual-process theories of thought. Psychologists often distinguish between Type 1 and Type 2 processing. Type 1 processing is fast, automatic, and unconscious. It is the engine behind intuitive insight. Type 2 processing is slow, effortful, and conscious. It drives analytical reasoning.

ADHD symptoms are generally associated with a weakness in Type 2 processing. The effort required to maintain focus and manipulate information in working memory is often impaired. The researchers argue that this deficit in Type 2 processing forces—or perhaps allows—individuals with ADHD symptoms to rely on Type 1 processing.

This reliance on Type 1 processing is not merely a compensation strategy. It appears to be a robust pathway to solution in its own right. The high-symptom group did not just fail to analyze; they succeeded through insight. The regression analyses performed by the team showed that as ADHD symptoms increased, the probability of using analysis dropped, while the probability of using insight rose.

“Being both very high or very low in executive control can be beneficial for creative problem-solving, but you get to the right answer in different ways,” said Chesebrough.

Kounios and his colleagues emphasize that these findings challenge the traditional view of ADHD as purely a disorder of deficits. While the condition certainly presents challenges in environments that require rigid focus and organization, it offers advantages in situations that demand creative connections.

The study does have limitations. It relied on a sample of university students rather than a broader slice of the general population. Additionally, the study used self-reported symptoms rather than clinical diagnoses confirmed by a physician. It is possible that other undiagnosed conditions could have influenced the results.

The researchers also note that they excluded participants who reported poor sleep or substance use, as these factors can impair cognitive performance. Future research will need to replicate these findings with larger groups and formally diagnosed clinical populations to confirm the robustness of the U-shaped performance curve.

Despite these caveats, the research offers a new perspective on neurodiversity in the context of problem-solving. It suggests that the cognitive profile associated with ADHD is not simply a broken version of “normal” cognition. Instead, it represents a different functional organization of the brain. This organization favors spontaneous processing over deliberate control.

Understanding this strength could help educators and employers create environments that harness the natural abilities of individuals with ADHD. Rather than forcing these individuals to adopt analytical strategies that do not fit their cognitive style, it may be more effective to encourage their intuitive approaches.

“Understanding these strengths could help people harness their natural problem-solving style in school, work and everyday life,” said Kounios.

The study, “ADHD symptom magnitude predicts creative problem-solving performance and insight versus analysis solving modes,” was authored by Hannah Maisano, Christine Chesebrough, Fengqing Zhang, Brian Daly, Mark Beeman, and John Kounios.

Who lives a good single life? New data highlights the role of autonomy and attachment

14 February 2026 at 19:15

A new study published in the journal Personal Relationships suggests that single people who feel their basic psychological needs are met tend to experience higher life satisfaction and fewer depressive symptoms. The findings indicate that beyond these universal needs, having a secure attachment style and viewing singlehood as a personal choice rather than a result of external barriers are significant predictors of a satisfying single life.

The number of single adults has increased significantly in recent years, prompting psychologists to investigate what factors contribute to a high quality of life for this demographic. Historically, relationship research has focused heavily on the dynamics of couples, often treating singlehood merely as a transitional stage or a deficit. When researchers did study singles, they typically categorized them simply as those who chose to be single versus those who did not. This binary perspective fails to capture the complexity of the single experience.

The researchers behind the new study sought to understand the specific psychological characteristics that explain why some individuals thrive in singlehood while others struggle. By examining factors ranging from broad human needs to specific attitudes about relationships, the team aimed to clarify the internal and external forces that shape single well-being.

“Much of the research on single people has focused on deficits—that singles are less happy or more lonely compared to partnered people,” said study author Jeewon Oh, an assistant professor at Syracuse University.

“We wanted to ask instead: When do single people thrive? We wanted to identify what actually predicts a good single life from understanding their individual differences. We know that people need to feel autonomous, competent, and related to others to flourish, but it wasn’t clear whether relationship-specific factors like attachment style or reasons for being single play an important role beyond satisfying these more basic needs.”

To investigate these questions, the scientists conducted two separate analyses. The first sample consisted of 445 adults recruited through Qualtrics Panels. These participants were older, with an average age of approximately 53 years, and were long-term singles who had been without a partner for an average of 20 years. This demographic provided a window into the experiences of those who have navigated singlehood for a significant portion of their adulthood.

The second sample was gathered to see if the findings would hold true for a different age group. This group included 545 undergraduate students from a university in the northeastern United States. These participants were much younger, with an average age of roughly 19 years. By using two distinct samples, the researchers hoped to distinguish between findings that might be unique to a specific life stage and those that apply to singles more generally.

The researchers used a series of surveys to assess several psychological constructs. First, they measured the satisfaction of basic psychological needs based on Self-Determination Theory. This theory posits that three core needs are essential for human well-being: autonomy, competence, and relatedness. Autonomy refers to a sense of volition and control over one’s own life choices. Competence involves feeling capable and effective in one’s activities. Relatedness is the feeling of being connected to and cared for by others.

In addition to basic needs, the study assessed attachment orientation. Attachment theory describes how people relate to close others, often based on early life experiences. The researchers looked at two dimensions: attachment anxiety and attachment avoidance. Attachment anxiety is characterized by a fear of rejection and a strong need for reassurance. Attachment avoidance involves a discomfort with intimacy and a preference for emotional distance.

The team also measured sociosexuality and reasons for being single. Sociosexuality refers to an individual’s openness to uncommitted sexual experiences, including their desires, attitudes, and behaviors regarding casual sex. For the reasons for being single, participants rated their agreement with statements categorized into domains such as valuing freedom, perceiving personal constraints, or feeling a lack of courtship ability.

The most consistent finding across both samples was the importance of basic psychological need satisfaction. Single individuals who felt their needs for autonomy, competence, and relatedness were being met reported significantly higher life satisfaction and satisfaction with their relationship status. They also reported fewer symptoms of depression.

This suggests that the foundation of a good life for singles is largely the same as it is for everyone else. It relies on feeling in control of one’s life, feeling capable, and having meaningful social connections, which for singles are often found in friendships and family rather than romantic partnerships.

Attachment style also emerged as a significant predictor of well-being. The data showed that higher levels of attachment anxiety were associated with more depressive symptoms. In the combined analysis of both samples, attachment anxiety also predicted lower satisfaction with singlehood. People with high attachment anxiety often crave intimacy and fear abandonment. This orientation may make singlehood particularly challenging, as the lack of a romantic partner might act as a constant source of distress.

The study found that the specific reasons a person attributes to their singlehood matter for their mental health. Participants who viewed their singlehood as a means to maintain their freedom and independence reported higher levels of satisfaction. These individuals appeared to be single because they valued the autonomy it provided.

In contrast, those who felt they were single due to constraints experienced worse outcomes. Constraints included factors such as lingering feelings for a past partner, a fear of being hurt, or perceived personal deficits. Viewing singlehood as a forced circumstance rather than a choice was linked to higher levels of depressive symptoms.

The researchers examined whether sociosexuality would predict well-being, hypothesizing that singles who are open to casual sex might enjoy singlehood more. However, the results indicated that sociosexuality did not provide additional explanatory power once basic needs and attachment were taken into account. While the desire for uncommitted sex was correlated with some outcomes in isolation, it was not a primary driver of well-being in the comprehensive models.

These findings suggest that a “sense of choice” is a multi-layered concept. It is not just about a simple decision to be single or not. Instead, it is reflected in how much autonomy a person feels generally, whether their attachment style allows them to feel secure without a partner, and whether they interpret their single status as an alignment with their values.

“The most important takeaway is that single people’s well-being consistently depends on having their basic psychological needs met—feeling autonomous, competent, and connected to others,” Oh told PsyPost. “However, beyond that, it also matters whether someone has an anxious attachment style, and whether they feel like they are single because it fits their values (vs. due to constraints). These individual differences are aligned with having a sense of choice over being single, which may be one key to a satisfying singlehood.”

The study has some limitations. The research relied on self-reported data collected at a single point in time. This cross-sectional design means that scientists cannot determine the direction of cause and effect. For example, it is possible that people who are already depressed are more likely to perceive their singlehood as a result of constraints, rather than the constraints causing the depression.

The demographic composition of the samples also limits generalizability. The participants were predominantly White and, in the older sample, mostly women. The experience of singlehood can vary greatly depending on gender, race, cultural background, and sexual orientation. The researchers noted that future studies should aim to include more diverse groups to see if these psychological patterns hold true across different populations.

Another limitation involved the measurement of reasons for being single. The scale used to assess these reasons had some statistical weaknesses, which suggests that the specific categories of “freedom” and “constraints” might need further refinement in future research. Despite this, the general pattern—that voluntary reasons link to happiness and involuntary reasons link to distress—aligns with previous scientific literature.

Future research could benefit from following single people over time. A longitudinal approach would allow scientists to observe how changes in need satisfaction or attachment security influence feelings about singlehood as people age. It would also be valuable to explore how other personality traits, such as extraversion or neuroticism, interact with these factors to shape the single experience.

The study, “Who Lives a Good Single Life? From Basic Need Satisfaction to Attachment, Sociosexuality, and Reasons for Being Single,” was authored by Jeewon Oh, Arina Stoianova, Tara Marie Bello, and Ashley De La Cruz.

Waist-to-hip ratio, not depression, predicts faster telomere shortening

14 February 2026 at 03:00

A new study published in the Journal of Affective Disorders has found that depression itself may not directly speed up biological aging. Instead, body fat distribution, particularly around the waist, appears more strongly linked to faster cellular aging.

Depression is common and known to raise the risk of heart disease, diabetes, and other age-related illnesses. One possible explanation has been its connection to telomeres, tiny protective caps on DNA that naturally shorten as we age. Shorter telomeres are often viewed as a sign that the body is aging faster at a cellular level.

Previous research has suggested that people with depression tend to have shorter telomeres, but most studies only looked at individuals at a single point in time. This makes it difficult to know whether depression causes faster aging, or whether other factors linked to depression—such as lifestyle or physical health—play a bigger role.

Researchers behind the study sought to clarify this relationship. The team, led by Tsz Yan Wong from King’s College London, analyzed data from 958 women enrolled in the UK-based “TwinsUK” study. Included were 89 identical twin pairs, 215 fraternal twin pairs, and 350 unrelated individuals, ranging from 29 to 83 years old.

The participants had their telomere length measured from blood samples up to four times over roughly six years. The study also included information on depression diagnoses, antidepressant use, lifestyle habits, body measurements, and genetic risk scores for depression and several age-related diseases.

Over the follow-up period, telomeres shortened gradually in most participants, declining by about 1.3 percent per year on average. Women who reported having depression tended to have slightly shorter telomeres, but this link was weak and did not reach statistical significance. Importantly, depression was not associated with faster telomere shortening over time.
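
For context, a rate like this compounds over the follow-up period. A minimal sketch in Python, using an assumed starting length (the article does not report one):

```python
# Sketch: compounding a 1.3% average annual telomere decline over the
# roughly six-year follow-up. The 7.0 kilobase starting length is an
# illustrative assumption, not a value reported in the study.
start_kb = 7.0
rate = 0.013  # average annual shortening reported in the study

length = start_kb
for year in range(1, 7):
    length *= 1 - rate
    print(f"Year {year}: {length:.3f} kb")

# Cumulative loss is about 7.6%, slightly less than a naive 6 x 1.3% = 7.8%,
# because each year's decline applies to an already-shortened telomere.
print(f"Total loss: {(1 - length / start_kb) * 100:.1f}%")
```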

Antidepressant use showed a small association with shorter telomere length. The researchers noted this could be “potentially via biological pathways such as increased cellular turnover or metabolic side effects.” However, there was no clear evidence that it sped up the rate of telomere loss.

Genetic risk for depression also showed no meaningful connection with telomere length or how quickly telomeres shortened. “Our study is the first to assess whether genetic risk influences telomere [shortening], providing a novel longitudinal perspective on potential dynamic effects,” Wong and colleagues noted.

Instead, the most notable finding involved body fat distribution. Women with a higher waist-to-hip ratio, which is a measure of central body fat, experienced faster telomere shortening over time. This suggests that carrying more fat around the abdomen may play a larger role in cellular aging than depression itself.

“[Internal body fat] is linked to chronic inflammation and oxidative stress, both of which are suggested mechanisms driving… telomere shortening,” Wong’s team explained.

Other factors often linked to depression, including smoking, alcohol use, physical activity, education level, and early-life experiences, showed no clear relationship with telomere length in this study.

The researchers emphasize that the findings suggest depression alone may not directly accelerate biological aging in women, despite its known links to physical illness. Instead, modifiable health factors such as central body fat may be more important targets for improving both physical and mental health outcomes.

However, the study has limitations. It included mostly older White women, so the results may not apply to men or more diverse populations. Depression was self-reported rather than clinically diagnosed, and the observational design cannot prove cause and effect.

The study, “Genetic and environmental risk factors for major depression in UK women and their association with telomere length longitudinally,” was authored by Tsz Yan Wong, Alexandra C. Gillett, Leena Habiballa, Rodrigo R.R. Duarte, Ajda Pristavec, Pirro Hysi, Claire J. Steves, Veryan Codd, and Timothy R. Powell.

New research links childhood inactivity to depression in a vicious cycle

14 February 2026 at 01:00

New research suggests a bidirectional relationship exists between how much time children spend sitting and their mental health, creating a cycle where inactivity feeds feelings of depression and vice versa. This dynamic appears to extend beyond the individual child, as a child’s mood and inactivity levels can eventually influence their parent’s mental well-being. These results were published in the journal Mental Health and Physical Activity.

For decades, health experts have recognized that humans spend a large portion of their waking hours in sedentary behaviors. This term refers to any waking behavior characterized by an energy expenditure of 1.5 metabolic equivalents or less while in a sitting, reclining, or lying posture. Common examples include watching television, playing video games while seated, or sitting in a classroom. While the physical health consequences of this inactivity are well documented, the impact on mental health is a growing area of concern.

In recent years, screen time has risen considerably among adolescents. This increase has prompted researchers to question how these behaviors interact with mood disorders such as depression. Most prior studies examining this link have focused on adults. When studies do involve younger populations, they often rely on the participants to report their own activity levels. Self-reported data is frequently inaccurate, as people struggle to recall exactly how many minutes they spent sitting days or weeks ago.

There is also a gap in understanding how these behaviors function within a family unit. Parents and children do not exist in isolation. They form a “dyad,” or a two-person group wherein the behavior and emotions of one person can impact the other. To address these gaps, a team of researchers led by Maria Siwa from the SWPS University in Poland investigated these associations using objective measurement tools. The researchers aimed to see if depression leads to more sitting, or if sitting leads to more depression. They also sought to understand if these effects spill over from child to parent.

The research team recruited 203 parent-child dyads to participate in the study. The children ranged in age from 9 to 15 years old. The parents involved were predominantly mothers, accounting for nearly 87 percent of the adult participants. The study was longitudinal, meaning the researchers tracked the participants over an extended period to observe changes. Data collection occurred at three specific points: the beginning of the study (Time 1), an eight-month follow-up (Time 2), and a 14-month follow-up (Time 3).

To ensure accuracy, the researchers did not rely solely on questionnaires for activity data. Instead, they asked participants to wear accelerometers. These are small devices worn on the hip that measure movement intensity and frequency. Participants wore these devices for six consecutive days during waking hours. This provided a precise, objective record of how much time each parent and child spent being sedentary versus being active.
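
Raw accelerometer output is typically reduced to activity counts per minute and classified against a cut-point. The sketch below illustrates the general approach; the sub-100-counts-per-minute threshold is a widely used convention assumed here for illustration, since the article does not state the study's exact classification rules.

```python
# Sketch: tallying one day of sedentary minutes from accelerometer counts.
# The <100 counts-per-minute cut-point is an assumed, commonly used
# convention; the study's actual thresholds may differ.
import random

SEDENTARY_CUTPOINT = 100  # counts per minute (assumed)

def sedentary_minutes(counts_per_minute):
    """Number of worn minutes classified as sedentary."""
    return sum(1 for c in counts_per_minute if c < SEDENTARY_CUTPOINT)

# Example: a simulated day of 840 waking minutes (14 hours of wear time).
random.seed(0)
day = [random.randint(0, 3000) for _ in range(840)]
print(f"Sedentary minutes: {sedentary_minutes(day)} of {len(day)}")
```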

For the assessment of mental health, the researchers used the Patient Health Questionnaire. This is a standard screening tool used to identify the presence and severity of depressive symptoms. It asks individuals to rate the frequency of specific symptoms over the past two weeks. The study took place in the context of a healthy lifestyle education program. Between the first and second measurement points, all families received education on the health consequences of sedentary behaviors and strategies to interrupt long periods of sitting.

The analysis of the data revealed a reciprocal relationship within the children. Children who spent more time being sedentary at the start of the study displayed higher levels of depressive symptoms eight months later. This supports the theory that physical inactivity can contribute to the development of poor mood. Proposed biological mechanisms for this include changes in inflammation markers or neurobiological pathways that affect how the brain regulates emotion.

However, the reverse was also true. Children who exhibited higher levels of depressive symptoms at the start of the study spent more time being sedentary at the eight-month mark. This suggests a “vicious cycle” where symptoms of depression, such as low energy or withdrawal, lead to less movement. The lack of movement then potentially exacerbates the depressive symptoms. This bidirectional pattern highlights how difficult it can be to break the cycle of inactivity and low mood.
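
Bidirectional links of this kind are commonly tested with cross-lagged regressions, in which each variable at a later wave is predicted from both variables at the earlier wave. A minimal sketch with fabricated data; the published analysis was more complete, spanning three waves and covariates such as physical activity.

```python
# Sketch: cross-lagged regressions for a bidirectional association.
# All data here are simulated for illustration only.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(7)
n = 200
sed_t1 = rng.normal(size=n)                        # sedentary time, wave 1
dep_t1 = 0.3 * sed_t1 + rng.normal(size=n)         # depressive symptoms, wave 1
dep_t2 = 0.2 * sed_t1 + 0.5 * dep_t1 + rng.normal(size=n)
sed_t2 = 0.2 * dep_t1 + 0.5 * sed_t1 + rng.normal(size=n)

# Path A: earlier sitting -> later symptoms, controlling for earlier symptoms.
a = LinearRegression().fit(np.column_stack([sed_t1, dep_t1]), dep_t2)
# Path B: earlier symptoms -> later sitting, controlling for earlier sitting.
b = LinearRegression().fit(np.column_stack([dep_t1, sed_t1]), sed_t2)

print("sitting -> later depression:", round(a.coef_[0], 2))
print("depression -> later sitting:", round(b.coef_[0], 2))
```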

The study also identified an effect that crossed from one person to the other. High levels of depressive symptoms in a child at the start of the study predicted increased sedentary time for that child eight months later. This increase in the child’s sedentary behavior was then linked to higher levels of depressive symptoms in the parent at the 14-month mark.

This “across-person” finding suggests a domino effect within the family. A child’s mental health struggles may lead them to withdraw into sedentary activities. Observing this behavior and potentially feeling ineffective in helping the child change their habits may then take a toll on the parent’s mental health. This aligns with psychological theories regarding parental stress. Parents often feel distress when they perceive their parenting strategies as ineffective, especially when trying to manage a child’s health behaviors.

One particular finding was unexpected. Children who reported lower levels of depressive symptoms at the eight-month mark actually spent more time sitting at the final 14-month check-in. The researchers hypothesize that this might be due to a sense of complacency. If adolescents feel mentally well, they may not feel a pressing need to follow the program’s advice to reduce sitting time. They might associate their current well-being with their current lifestyle, leading to less motivation to become more active.

The researchers controlled for moderate-to-vigorous physical activity in their statistical models. This ensures that the results specifically reflect the impact of sedentary time, rather than just a lack of exercise. Even when accounting for exercise, the links between sitting and depression remained relevant in specific pathways.

There are caveats to consider when interpreting these results. The sample consisted largely of families with higher education levels and average or above-average economic status. This limits how well the findings apply to the general population or to families facing economic hardship. Additionally, the study was conducted in Poland, and cultural factors regarding parenting and leisure time could influence the results.

Another limitation is the nature of the device used. While accelerometers are excellent for measuring stillness versus movement, they cannot distinguish between different types of sedentary behavior. They cannot tell the difference between sitting while doing homework, reading a book, or mindlessly scrolling through social media. Different types of sedentary behavior might have different psychological impacts.

The study also focused on a community sample rather than a clinical one. Most participants reported mild to moderate symptoms rather than severe clinical depression. The associations might look different in a population with diagnosed major depressive disorder. Furthermore, while the study found links over time, the observed effects were relatively small. Many other factors likely contribute to both depression and sedentary behavior that were not measured in this specific analysis.

Despite these limitations, the implications for public health are clear. Interventions aimed at improving youth mental health should not ignore physical behavior. Conversely, programs designed to get kids moving should address mental health barriers. The findings support the use of family-based interventions. Treating the child in isolation may miss the important dynamic where the child’s behavior impacts the parent’s well-being.

Future research should investigate the specific mechanisms that drive these connections. For example, it would be beneficial to study whether parental beliefs about their own efficacy mediate the link between a child’s inactivity and the parent’s mood. Researchers should also look at different types of sedentary behavior to see if screen time is more harmful than other forms of sitting. Understanding these nuances could lead to better guidance for families trying to navigate the complex relationship between physical habits and emotional health.

The study, “Associations between depressive symptoms and sedentary behaviors in parent-child dyads: Longitudinal effects within- and across-person,” was authored by Maria Siwa, Dominika Wietrzykowska, Zofia Szczuka, Ewa Kulis, Monika Boberska, Anna Banik, Hanna Zaleskiewicz, Paulina Krzywicka, Nina Knoll, Anita DeLongis, Bärbel Knäuper, and Aleksandra Luszczynska.

Feelings of entrapment and powerlessness link job uncertainty to suicidality

13 February 2026 at 23:00

A qualitative study in Scotland examined the links between financial instability, employment insecurity, and suicidality. Results indicated that financial stressors create a cycle of unmet basic needs, powerlessness, and social isolation. Job precarity and lack of support further exacerbate these relationships, contributing to suicidal ideation. The research was published in Death Studies.

Suicide is the act of intentionally causing one’s own death. World Health Organization statistics indicate that 700,000 people die by suicide every year worldwide, making it a significant global public health issue. Although major religions have historically condemned suicide, contemporary public health and psychological perspectives view it as a preventable outcome arising from complex interactions rather than a moral failing. Suicide rarely has a single cause; instead, it reflects the intersection of personal, relational, community, and societal factors.

Economic instability, job insecurity, and financial distress are consistently linked to higher suicide risk, with those in insecure employment disproportionately affected. Evidence from the U.K. and Scotland shows particularly high vulnerability among working-age adults, even as poverty increasingly affects households where someone is employed.

Precarious work conditions—such as low income, unpredictable hours, limited rights, and low job autonomy—contribute to chronic stress and poorer mental health. Furthermore, stigma surrounding financial hardship and job insecurity can deter help-seeking, increasing isolation and risk.

Study author Nicola Cogan and her colleagues wanted to explore how insecure employment and financial instability are perceived to contribute toward suicidal thoughts and behaviors among adults living in Scotland. They also sought to identify risk and protective factors associated with the mental health impacts of economic insecurity and offer policy recommendations for improving mental health support for people facing economic precarity.

The study included 24 individuals from Scotland who reported being paid less than the living wage or below the minimum income standard, were on zero-hours contracts, working in the gig economy, were job-seeking long term, or had experience with Universal Credit (the UK’s main welfare benefit system). Sixteen participants were men. The participants’ average age was 30 years. On average, participants reported that their last suicidal thoughts or behaviors occurred more than six months prior. Individuals who were currently suicidal were not included in the study.

Participants took part in semi-structured interviews focusing on the interplay between employment status, financial instability, and experiences of suicidal ideation or behavior. They received a £20 voucher for their participation. The researchers transcribed the interviews and conducted reflexive thematic analysis with the goal of identifying the key themes within the narratives.

Analysis of the interviews identified six key themes. The first theme was the “struggle to meet basic needs and the vicious cycle.” When participants experienced financial instability, it created a struggle to meet basic needs like food, housing, and healthcare. This battle degraded their mental health. Diminished mental health, in turn, reduced their ability to improve their financial situation, creating a vicious cycle.

The second theme was “feeling trapped and powerless.” Participants reported that feelings of entrapment intersected with suicidal thoughts and behaviors, as they struggled to envision any escape from the situation. Theme three was the “stigma of financial instability.” Feeling financially unstable negatively impacted participants’ self-worth and self-esteem, making them feel inadequate and helpless. Theme four was “thinking about suicide and acting on such thoughts.” During these times, many of them imagined suicide to be the only way out of their struggles.

The fifth theme was the “need for hope and support from supportive others.” For many participants, hope and support from friends, family, and other individuals fostered resilience and prevented them from acting on suicidal thoughts.

The sixth theme was “active help-seeking and gaining a sense of control.” For many participants, actively seeking help was a turning point in managing the intersecting challenges of financial instability and mental health distress. This enabled them to regain a sense of control over their circumstances.

“Reflexive thematic analysis identified key themes, highlighting how financial stressors create a cycle of unmet basic needs, powerlessness, and social isolation, which exacerbates suicidal distress. Workplace conditions including job precarity and lack of support, further intensified these experiences, while protective factors included supportive relationships and proactive help-seeking,” the study authors concluded.

The study contributes to the scientific understanding of the mental health effects of financial instability. However, the study deliberately excluded prospective participants currently experiencing suicidality. Because of this, it did not fully capture the perspectives of individuals at the highest risk of suicide. Additionally, the collected data were based on the recall of past hardships, leaving room for recall and reporting biases to have affected the results.

The paper, “‘It feels like the world is falling on your head’: Exploring the link between financial instability, employment insecurity, and suicidality,” was authored by Nicola Cogan, Susan Rasmussen, Kirsten Russell, Dan Heap, Heather Archbold, Lucy Milligan, Scott Thomson, Spence Whittaker, Dave Morris, and Danielle Rowley.

No association found between COVID-19 shots during pregnancy and autism or behavioral issues

13 February 2026 at 21:00

Recent research provides new evidence regarding the safety of COVID-19 vaccinations during pregnancy. The study, presented at the Society for Maternal-Fetal Medicine (SMFM) 2026 Pregnancy Meeting, indicates that receiving an mRNA vaccine while pregnant does not negatively impact a toddler’s brain development. The findings suggest that children born to vaccinated mothers show no difference in reaching developmental milestones compared to those born to unvaccinated mothers.

The question of vaccine safety during pregnancy has been a primary concern for expectant parents since the introduction of COVID-19 immunizations. Messenger RNA, or mRNA, vaccines function by introducing a genetic sequence that instructs the body’s cells to produce a specific protein. This protein triggers the immune system to create antibodies against the virus.

While health organizations have recommended these vaccines to prevent severe maternal illness, data regarding the longer-term effects on infants has been accumulating slowly. Parents often worry that the immune activation in the mother could theoretically alter the delicate process of fetal brain formation.

To address these specific concerns, a team of researchers investigated the neurodevelopmental outcomes of children aged 18 to 30 months. The study was led by George R. Saade from Eastern Virginia Medical School at Old Dominion University and Brenna L. Hughes from Duke University School of Medicine. They conducted this work as part of the Maternal-Fetal Medicine Units Network. This network is a collaboration of research centers funded by the Eunice Kennedy Shriver National Institute of Child Health and Human Development.

The researchers designed a prospective observational study. This type of study follows a group of participants over time to observe outcomes rather than intervening or experimenting on them. The team identified women who had received at least one dose of an mRNA SARS-CoV-2 vaccine. To be included in the exposed group, the mothers must have received the vaccine either during their pregnancy or within the 30 days prior to becoming pregnant.

The research team compared these women to a control group of mothers who did not receive the vaccine during that same period. To ensure the comparison was scientifically valid, the researchers used a technique called matching. Each vaccinated mother was paired with an unvaccinated mother who shared key characteristics.

These characteristics included the specific medical site where they delivered the baby and the date of the delivery. They also matched participants based on their insurance status and their race. This matching process is essential in observational research. It helps rule out other variables, such as access to healthcare or socioeconomic status, which could independently influence a child’s development.
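
In computational terms, matching of this kind amounts to grouping potential controls by the matching variables and pairing each exposed participant with the closest available candidate. A minimal sketch with hypothetical field names, assuming exact matching on site, insurance, and race plus nearest delivery date:

```python
# Sketch: pairing each vaccinated mother with an unvaccinated control who
# shares site, insurance status, and race, choosing the control whose
# delivery date is closest. All field names and records are hypothetical.
from datetime import date

vaccinated = [{"id": 1, "site": "A", "insurance": "private", "race": "White",
               "delivery": date(2021, 6, 1)}]
controls = [
    {"id": 10, "site": "A", "insurance": "private", "race": "White",
     "delivery": date(2021, 6, 20)},
    {"id": 11, "site": "A", "insurance": "private", "race": "White",
     "delivery": date(2021, 9, 1)},
]

def key(person):
    return (person["site"], person["insurance"], person["race"])

pairs = []
used = set()
for v in vaccinated:
    candidates = [c for c in controls
                  if key(c) == key(v) and c["id"] not in used]
    if candidates:
        best = min(candidates,
                   key=lambda c: abs((c["delivery"] - v["delivery"]).days))
        used.add(best["id"])
        pairs.append((v["id"], best["id"]))

print(pairs)  # [(1, 10)] -- the control delivering 19 days later is chosen
```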

The study applied strict exclusion criteria to isolate the effect of the vaccine. The researchers did not include women who delivered their babies before 37 weeks of gestation. This decision was necessary because preterm birth is a known cause of developmental delays. Including premature infants could have obscured the results. The team also excluded multifetal pregnancies, such as twins or triplets, and children born with major congenital malformations.

Ultimately, the study analyzed 217 matched pairs, resulting in a total of 434 children. The primary tool used to measure development was the Ages and Stages Questionnaire, Third Edition, often referred to as the ASQ-3. This is a standardized screening tool widely used in pediatrics. It relies on parents to observe and report their child’s abilities in five distinct developmental areas.

The first area is communication, which looks at how a child understands language and speaks. The second is gross motor skills, involving large movements like walking or jumping. The third is fine motor skills, which involves smaller movements like using fingers to pick up tiny objects. The fourth is problem-solving, and the fifth is personal-social interaction, covering how the child plays and interacts with others.

The researchers analyzed the data by looking for statistical equivalence. They established a specific margin of 10 points on the ASQ-3 scale. If the difference between the average scores of the vaccinated and unvaccinated groups was less than 10 points, the outcomes were considered practically identical.

The results demonstrated that the neurodevelopmental outcomes were indeed equivalent. The median total ASQ-3 score for the vaccinated group was 255. The median score for the unvaccinated group was 260. After adjusting for other factors, the difference was calculated to be -3.4 points. This falls well within the 10-point margin of equivalence, meaning there was no meaningful difference in development between the two groups.
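
Equivalence against a prespecified margin is commonly judged by whether the confidence interval for the group difference falls entirely inside that margin. A minimal sketch; the interval bounds below are illustrative assumptions, since the article reports only the adjusted point estimate of -3.4:

```python
# Sketch: declaring equivalence if the confidence interval for the
# adjusted group difference falls entirely within +/-10 ASQ-3 points.
# The interval bounds are illustrative assumptions, not study values.
MARGIN = 10.0

def equivalent(ci_low: float, ci_high: float, margin: float = MARGIN) -> bool:
    """True if the whole interval sits inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

# Hypothetical 95% CI around the reported -3.4 point difference:
print(equivalent(-8.1, 1.3))   # True -> outcomes treated as equivalent
print(equivalent(-12.0, 1.3))  # False -> cannot rule out a 10-point gap
```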

Beyond the general developmental scores, the researchers utilized several secondary screening tools to check for specific conditions. They employed the Modified Checklist for Autism in Toddlers to assess the risk of autism spectrum disorder. The findings showed no statistical difference in risk levels.

Approximately 5 percent of the children in the vaccinated group screened positive for potential autism risk. This was comparable to the 6 percent observed in the unvaccinated group. These percentages suggest that vaccination status did not influence the likelihood of screening positive for autism risk.

The team also used the Child Behavior Checklist. This tool evaluates various behavioral and emotional challenges. It looks at internalizing behaviors, such as anxiety, withdrawal, or sadness. It also examines externalizing behaviors, such as aggression or rule-breaking.

The scores for both internalizing and externalizing behaviors were nearly identical between the two groups. For example, 93 percent of children in the vaccinated group fell within the normal range for total behavioral problems. This was the exact same percentage found in the unvaccinated group.

Finally, the researchers assessed temperament using the Early Childhood Behavior Questionnaire. This measures traits such as “surgency,” which relates to positive emotional reactivity and high energy. It also measures “effortful control,” which is the ability to focus attention and inhibit impulses. Across all these psychological domains, the study found no association between maternal vaccination and negative outcomes.

The demographics of the two groups were largely similar due to the matching process. However, one difference remained. Mothers in the vaccinated group were more likely to be nulliparous. This is a medical term indicating that the woman had never given birth before the pregnancy in question.

Additionally, the children in the vaccinated group were slightly younger at the time of the assessment. Their median age was 25.4 months, compared to 25.9 months for the unvaccinated group. The researchers used statistical models to adjust for these slight variations. Even after these adjustments, the conclusion remained that the developmental outcomes were equivalent.

“Neurodevelopment outcomes in children born to mothers who received the COVID-19 vaccine during or shortly before pregnancy did not differ from those born to mothers who did not receive the vaccine,” said Saade.

While the findings are positive, there are context and limitations to consider. The study was observational, meaning it cannot prove causation as definitively as a randomized controlled trial. However, randomized trials are rarely feasible for widely recommended vaccines due to ethical considerations.

Another factor is the reliance on parent-reported data. Tools like the ASQ-3 depend on the accuracy of the parents’ observations, which can introduce some subjectivity. Furthermore, the study followed children only up to 30 months of age. Some subtle neurodevelopmental issues may not manifest until children are older and face the demands of school.

Despite these limitations, the rigorous matching and the use of multiple standardized screening tools provide a high level of confidence in the results for the toddler age group. The study fills a knowledge gap regarding the safety of mRNA technology for the next generation.

“This study, conducted through a rigorous scientific process in an NIH clinical trials network, demonstrates reassuring findings regarding the long-term health of children whose mothers received COVID-19 vaccination during pregnancy,” said Hughes.

The study, “Association Between SARS-CoV-2 Vaccine in Pregnancy and Child Neurodevelopment at 18–30 Months,” was authored by George R. Saade and Brenna L. Hughes, and will be published in the February 2026 issue of PREGNANCY.

Your attachment style predicts which activities boost romantic satisfaction

13 February 2026 at 19:00

New research provides evidence that the best way to spend time with a romantic partner depends on their specific emotional needs. A study published in Social Psychological and Personality Science suggests that people with avoidant attachment styles feel more satisfied when engaging in novel and exciting activities, while those with anxious attachment styles benefit more from familiar and comfortable shared experiences.

Psychological science identifies attachment insecurity as a significant barrier to relationship satisfaction. Individuals high in attachment avoidance often fear intimacy and prioritize independence, while those high in attachment anxiety fear abandonment and frequently seek reassurance.

Previous studies have shown that partners can mitigate these insecurities by adjusting their behavior, such as offering autonomy to avoidant partners or reassurance to anxious ones. However, less is known about how specific types of shared leisure activities function in this dynamic.

“This study was motivated by two main gaps. One was a gap in the attachment literature. Although attachment insecurity reliably predicts lower relationship satisfaction, these effects can be buffered, and most prior work has focused on partner behaviors. We wanted to know whether shared, everyday experiences could play a similar role,” said study author Amy Muise, a professor and York Research Chair in the Department of Psychology and director of the Sexual Health and Relationships (SHaRe) Lab at York University.

“We were also interested in testing the idea that novelty and excitement are universally good for relationships. Instead, we asked whether different types of shared experiences are more or less beneficial depending on people’s attachment-related needs.”

To explore these dynamics, the scientists conducted a meta-analysis across three separate daily diary studies. The total sample consisted of 390 couples from Canada and the United States. Participants were required to be in a committed relationship and living together or seeing each other frequently. The average relationship length varied slightly by study but ranged generally from seven to eight years.

For a period of 21 days, each partner independently completed nightly surveys. They reported their daily relationship satisfaction and the types of activities they shared with their partner that day. The researchers measured two distinct types of shared experiences. “Novel and exciting” experiences were defined as activities that felt new, challenging, or expanding, such as learning a skill or trying a new restaurant.

“Familiar and comfortable” experiences involved routine, calming, and predictable activities. Examples included watching a favorite TV show, cooking a standard meal together, or simply relaxing at home. The participants also rated their levels of attachment avoidance and anxiety at the beginning of the study. This design allowed the researchers to track how fluctuations in daily activities related to fluctuations in relationship satisfaction.
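
Analyses of daily diary data like these typically separate within-person fluctuation from between-person differences by person-mean centering each predictor. A minimal sketch of that centering step, with hypothetical column names:

```python
# Sketch: person-mean centering of daily novelty ratings so that models
# capture within-person fluctuation ("more novelty than usual for this
# person") rather than between-person differences. Column names are
# hypothetical, not taken from the study's materials.
import pandas as pd

diary = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 2],
    "day":       [1, 2, 3, 1, 2, 3],
    "novelty":   [2.0, 4.0, 3.0, 1.0, 1.0, 4.0],
})

# Each person's own average across their diary days...
diary["novelty_mean"] = diary.groupby("person_id")["novelty"].transform("mean")
# ...and each day's deviation from that personal average.
diary["novelty_within"] = diary["novelty"] - diary["novelty_mean"]

print(diary)
```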

The data revealed that, in general, both types of shared experiences were linked to higher daily relationship satisfaction. “The effects are modest in size, which is typical for daily experience research because they reflect within-person changes in everyday life,” Muise told PsyPost. “These are not dramatic shifts in relationship quality, but small day-to-day effects that may accumulate over time.”

“Overall, both novel and familiar shared experiences were linked to greater relationship satisfaction, but the effect of familiar, comfortable experiences was larger (roughly two to three times larger) than that of novel experiences overall.”

Importantly, the benefits differed depending on a person’s attachment style. For individuals high in attachment avoidance, engaging in novel and exciting activities provided a specific benefit.

On days when avoidant individuals reported more novelty and excitement than usual, the typical link between their avoidant style and lower relationship satisfaction was weakened. The researchers found that these exciting activities increased perceptions of “relational reward.” This means the avoidant partners felt a sense of intimacy and connection that did not feel threatening or smothering. Familiar and comfortable activities did not provide this same buffering effect for avoidant individuals.

In contrast, individuals high in attachment anxiety derived the most benefit from familiar and comfortable experiences. On days marked by high levels of familiarity and comfort, the usual association between attachment anxiety and lower relationship satisfaction disappeared entirely. The study suggests that these low-stakes, comforting interactions help reduce negative emotions for anxiously attached people.

Novel and exciting activities did not consistently buffer the relationship satisfaction of anxiously attached individuals. The researchers noted that while novelty is generally positive, it does not address the specific need for security that defines attachment anxiety. The calming nature of routine appears to be the key ingredient for soothing these specific fears.

“One thing that surprised us was how familiar and comfortable activities seemed to help people who are more anxiously attached,” Muise said. “We expected these experiences to work by lowering worries about rejection or judgment, but that wasn’t what we found. Instead, they seemed to help by lowering people’s overall negative mood.”

“This made us think more carefully about what comfort and routine might actually be doing emotionally. It’s possible that for people higher in attachment anxiety, familiar and comfortable time together helps them feel more secure, and that sense of security is what supports relationship satisfaction. We weren’t able to test that directly in this study, but it’s an important direction for future work.”

The researchers also examined how one person’s attachment style affected their partner’s satisfaction. The results showed that when a person had a highly avoidant partner, they reported higher satisfaction on days they shared novel and exciting experiences. Conversely, when a person had a highly anxious partner, they reported higher satisfaction on days filled with familiar and comfortable activities. This indicates that tailoring activities benefits both the insecure individual and their romantic partner.

“The main takeaway is that there is no single ‘right’ way to spend time together that works for all couples,” Muise explained. “What matters is whether shared experiences align with people’s emotional needs. For people who are more avoidantly attached, doing something novel or exciting together (something that feels new and fun rather than overtly intimate) can make the relationship feel more rewarding and satisfying.”

“For people who are more anxiously attached, familiar and comfortable time together seems especially important for maintaining satisfaction. These findings suggest that tailoring shared time, rather than maximizing novelty or excitement per se, may be a more effective way to support relationship well-being.”

While the findings offer practical insights, the study has certain limitations. The research relied on daily diary entries, which are correlational. This means that while the researchers can observe a link between specific activities and higher satisfaction, they cannot definitively prove that the activities caused the satisfaction. It is possible that feeling satisfied makes a couple more likely to engage in fun or comfortable activities.

“Another potential misinterpretation is that novelty is ‘bad’ for anxiously attached people or that comfort is ‘bad’ for avoidantly attached people,” Muise clarified. “That is not what we found. Both types of experiences were generally associated with higher satisfaction; the difference lies in when they are most helpful for buffering insecurity, not whether they are beneficial at all.”

Future research is needed to determine if these daily buffering effects lead to long-term improvements in attachment security. The scientists also hope to investigate who initiates these activities and whether the motivation behind them impacts their effectiveness. For now, the data suggests that checking in on a partner’s emotional needs might be the best guide for planning the next date night.

“One long-term goal is to understand whether these day-to-day buffering effects can lead to longer-term changes in attachment security,” Muise said. “If couples repeatedly engage in the ‘right’ kinds of shared experiences, could that have implications for how attachment insecurity evolves over time?”

“Another direction is to examine how these experiences are initiated. Who suggests the activity, and whether it feels voluntary or pressured, might matter for whether certain experiences are associated with satisfaction.”

“One thing I really appreciate about this study is that it allowed us to look at both partners’ experiences,” Muise added. “The partner effects suggest that tailoring shared experiences doesn’t only benefit the person who is more insecure; it is also associated with how their partner feels about the relationship. Overall, engaging in shared experiences aligned with one partner’s attachment needs has benefits for both partners.”

The study, “Novel and Exciting or Tried and True? Tailoring Shared Relationship Experiences to Insecurely Attached Partners,” was authored by Kristina M. Schrage, Emily A. Impett, Mustafa Anil Topal, Cheryl Harasymchuk, and Amy Muise.

Ultra-processed foods in early childhood linked to lower IQ scores

13 February 2026 at 17:00

Toddlers who consume a diet high in processed meats, sugary snacks, and soft drinks may have lower intelligence scores by the time they reach early school age. A new study published in the British Journal of Nutrition suggests that this negative association is even stronger for children who faced physical growth delays in infancy. These findings add to the growing body of evidence linking early childhood nutrition to long-term brain development.

The first few years of human life represent a biological window of rapid change. The brain grows quickly during this time and builds the neural connections necessary for learning and memory. This process requires a steady supply of specific nutrients to work correctly. Without enough iron, zinc, or healthy fats, the brain might not develop to its full capacity.

Recent trends in global nutrition show that families are increasingly relying on ultra-processed foods. These are industrial products that often contain high levels of sugar, fat, and artificial additives but very few essential vitamins. Researchers are concerned that these foods might displace nutrient-rich options. They also worry that the additives or high sugar content could directly harm biological systems.

Researchers from the Federal University of Pelotas in Brazil and the University of Illinois Urbana-Champaign investigated this issue. The lead author is Glaucia Treichel Heller, a researcher in the Postgraduate Program in Epidemiology in Pelotas. She worked alongside colleagues including Thaynã Ramos Flores and Pedro Hallal to analyze data from thousands of children. The team wanted to determine if eating habits established at age two could predict cognitive abilities years later.

The researchers used data from the 2015 Pelotas Birth Cohort. This is a large, long-term project that tracks the health of children born in the city of Pelotas, Brazil. The team analyzed information from more than 3,400 children. When the children were two years old, their parents answered questions about what the toddlers usually ate.

The scientists did not just look at single foods like apples or candy. Instead, they used a statistical method called principal component analysis. This technique allows researchers to find general dietary patterns based on which foods are typically eaten together. They identified two main types of eating habits in this population.

One pattern was labeled “healthy” by the researchers. This diet included regular consumption of beans, fruits, vegetables, and natural fruit juices. The other pattern was labeled “unhealthy.” This diet featured instant noodles, sausages, soft drinks, packaged snacks, and sweets.
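
As a rough illustration of how principal component analysis extracts such patterns, the sketch below applies it to fabricated food-frequency data; foods that load together on a component define a “pattern”:

```python
# Sketch: deriving dietary patterns from food-frequency data with PCA.
# The data are simulated for illustration; in the study, patterns were
# derived from parent-reported intake at age two.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

foods = ["beans", "fruit", "vegetables", "soft_drinks", "snacks", "sweets"]
rng = np.random.default_rng(42)

# Two latent tendencies (healthy vs. ultra-processed) generate the intakes.
healthy = rng.normal(size=(300, 1))
processed = rng.normal(size=(300, 1))
X = np.hstack([
    healthy + rng.normal(scale=0.5, size=(300, 3)),    # beans, fruit, veg
    processed + rng.normal(scale=0.5, size=(300, 3)),  # drinks, snacks, sweets
])

pca = PCA(n_components=2)
pca.fit(StandardScaler().fit_transform(X))

# Loadings show which foods define each component, i.e., each pattern.
for i, comp in enumerate(pca.components_):
    top = sorted(zip(foods, comp), key=lambda t: abs(t[1]), reverse=True)
    print(f"Component {i + 1}:", [(f, round(w, 2)) for f, w in top])
```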

When the children reached six or seven years of age, trained psychologists assessed their intelligence. They used a standard test called the Wechsler Intelligence Scale for Children. This test measures different mental skills to generate an IQ score. The researchers then looked for a statistical link between the diet at age two and the test results four years later.

The analysis showed a clear connection between the unhealthy dietary pattern and lower cognitive scores. Children who frequently ate processed and sugary foods at age two tended to have lower IQ scores at school age. This link remained even when the researchers accounted for other factors that influence intelligence. They adjusted the data for the mother’s education, family income, and how much mental stimulation the child received at home.

The researchers faced a challenge in isolating the effect of diet. Many factors can shape a child’s development. For example, a family with more money might buy healthier food and also buy more books. To manage this, the team identified potential confounding factors. Thaynã Ramos Flores, one of the study authors, noted, “The covariates were identified as potential confounding factors based on a literature review and the construction of a directed acyclic graph.”

The team used these adjustments to ensure the results were not simply reflecting the family’s socioeconomic status. Even with these controls, the negative association between processed foods and IQ persisted. The findings suggest that diet quality itself plays a specific role.

The negative impact appeared to be worse for children who were already biologically vulnerable. The study looked at children who had early-life deficits. These were defined as having low weight, height, or head circumference for their age during their first two years.

For these children, a diet high in processed foods was linked to a drop of nearly 5 points in IQ. This is a substantial difference that could affect school performance. For children without these early physical growth problems, the decline was smaller but still present. In those cases, the reduction was about 2 points.

This finding points to a concept known as cumulative disadvantage. It appears that biological vulnerability and environmental exposures like poor diet interact with each other. A child who is already struggling physically may be less resilient to the harms of a poor diet.

The researchers also looked at the impact of the healthy dietary pattern. They did not find a statistical link between eating healthy foods and higher IQ scores. This result might seem counterintuitive, as fruits and vegetables are known to be good for the brain. However, the authors explain that this result is likely due to the specific population studied.

Most children in the Pelotas cohort ate beans, fruits, and vegetables regularly. Because almost everyone ate the healthy foods, there was not enough difference between the children to show a statistical effect. Flores explained, “The lack of association observed for the healthy dietary pattern can be largely explained by its lower variability.” She added that “approximately 92% of children habitually consumed four or more of the foods that characterize the healthy pattern.”

The study suggests potential biological mechanisms that could explain the association between an unhealthy diet and lower IQ. One theory involves the gut-brain axis. The human gut contains trillions of bacteria that communicate with the brain. Diets high in sugar and processed additives can alter this bacterial community. These changes might lead to systemic inflammation that affects brain function.

Another possibility involves oxidative stress. Ultra-processed foods often lack the antioxidants found in fresh produce. Without these protective compounds, brain cells might be more susceptible to damage during development. The rapid growth of the brain in early childhood makes it highly sensitive to these physiological stressors.

There are limitations to this type of research. The study is observational, which means it cannot prove that the food directly caused the lower scores. Other factors that the researchers could not measure might explain the difference. For example, the study relied on parents to report what their children ate. Parents might not always remember or report this accurately.

Additionally, the study did not measure the parents’ IQ scores. Parental intelligence is a strong predictor of a child’s intelligence. However, the researchers used maternal education and home stimulation scores as proxies. These measures help account for the intellectual environment of the home.

The findings have implications for public health policy. The results suggest that officials need to focus on reducing the intake of processed foods in early childhood. Merely encouraging fruit and vegetable intake may not be enough if children are still consuming high amounts of processed items. This is particularly important for children who have already shown signs of growth delays.

Future studies could look at how these dietary habits change as children become teenagers. It would also be helpful to see if these results are similar in countries with different food cultures. The team notes that early nutrition is a specific window of opportunity for supporting brain health.

The study, “Dietary patterns at age 2 and cognitive performance at ages 6-7: an analysis of the 2015 Pelotas Birth Cohort (Brazil),” was authored by Glaucia Treichel Heller, Thaynã Ramos Flores, Marina Xavier Carpena, Pedro Curi Hallal, Marlos Rodrigues Domingues, and Andréa Dâmaso Bertoldi.

Bias against AI art is so deep it changes how viewers perceive color and brightness

13 February 2026 at 15:00

New research suggests that simply labeling an artwork as created by artificial intelligence can reduce how much people enjoy and value it. This bias appears to affect not just how viewers interpret the meaning of the art, but even how they process basic visual features like color and brightness. The findings were published in the Psychology of Aesthetics, Creativity, and the Arts.

Artificial intelligence has rapidly become a common tool for visual artists. Artists use technologies ranging from text-to-image generators to robotic arms to produce new forms of imagery. Despite this widespread adoption, audiences often react negatively when they learn technology was involved in the creative process.

Alwin de Rooij, an assistant professor at Tilburg University and associate professor at Avans University of Applied Sciences, sought to understand the consistency of this negative reaction. De Rooij aimed to determine if this bias occurs across different psychological systems involved in viewing art. The researcher also wanted to see if this negative reaction is a permanent structural phenomenon or if it varies by context.

“AI-generated images can now be nearly indistinguishable from art made without AI, yet both public debate and scientific studies suggest that people may respond differently once they are told AI was involved,” de Rooij told PsyPost. “These reactions resemble earlier anxieties around new technologies in art, such as the introduction of photography in the nineteenth century, which is now a fully established art form. This raised the question of how consistent bias against AI in visual art is, and whether it might already be changing.”

To examine this, de Rooij conducted a meta-analysis. This statistical technique combines data from multiple independent studies to find overall trends that a single experiment might miss. The researcher performed a systematic search for experiments published between January 2017 and September 2024.

The analysis included studies where participants viewed visual art and were told it was made by AI. These responses were compared to responses for art labeled as human-made or art presented with no label. The researcher extracted 191 distinct effect sizes from the selected studies.
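
At its core, a meta-analysis weights each effect size by its precision and pools them. A minimal sketch of simple inverse-variance pooling with fabricated numbers; a real analysis of 191 partly dependent effect sizes would need methods that account for that dependence:

```python
# Sketch: inverse-variance pooling of standardized effect sizes.
# The effects and variances below are fabricated for illustration.
import math

effects   = [-0.30, -0.15, -0.42, -0.05, -0.22]  # e.g., Hedges' g per study
variances = [0.010, 0.020, 0.015, 0.008, 0.012]

weights = [1 / v for v in variances]             # more precise = more weight
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se = math.sqrt(1 / sum(weights))                 # standard error of the pool

print(f"Pooled effect: {pooled:.3f}")
print(f"95% CI: [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```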

De Rooij categorized these measurements using a framework known as the Aesthetic Triad model. This model organizes the art experience into three specific systems. The first is the sensory-motor system, which deals with basic visual processing. The second is the knowledge-meaning system, which involves interpretation and context. The third is the emotion-valuation system, which covers subjective feelings and personal preferences.

The investigation revealed that knowing AI was used generally diminishes the aesthetic experience. A small but significant negative effect appeared within the sensory-motor system. This system involves the initial processing of visual features such as color, shape, and spatial relationships. When viewers believed an image was AI-generated, they tended to perceive these basic qualities less favorably.

A moderate negative effect appeared in the knowledge-meaning system. This aspect of the aesthetic experience relates to how people interpret an artwork’s intent. It also includes judgments about the skill required to make the piece. Participants consistently attributed less profundity and creativity to works labeled as artificial intelligence.

The researcher also found a small negative effect in the emotion-valuation system. This system governs subjective feelings of beauty, awe, and liking. Viewers tended to report lower emotional connection when they thought AI was responsible for the work. They also rated these works as less beautiful compared to identical works labeled as human-made.

“The main takeaway is that knowing AI was involved in making an artwork can change how we experience it, even when the artwork itself is identical,” de Rooij explained. “People tend to attribute less meaning and value to art once it is labeled as AI-made, not because it looks worse, but because it is interpreted differently. In some cases, this bias even feeds into basic visual judgments, such as how colorful or vivid an image appears. This shows that bias against AI is not just an abstract opinion about technology. It can deeply shape the aesthetic experience itself.”

But these negative responses were not uniform across all people. The researcher identified age as a significant factor in the severity of the bias. Older participants demonstrated a stronger negative reaction to AI art. Younger audiences showed much weaker negative effects.

This difference suggests a possible generational shift in how people perceive technology in art. Younger viewers may be less troubled by the integration of algorithms in the creative process. The style of the artwork also influenced viewer reactions.

Representational art, which depicts recognizable objects, reduced the negative bias regarding meaning compared to abstract art. However, representational art worsened the bias regarding emotional connection. The setting of the study mattered as well. Experiments conducted online produced stronger evidence of bias than those conducted in laboratories or real-world galleries.

“Another surprising finding was how unstable the bias is,” de Rooij said. “Rather than being a fixed reaction, it varies across audiences and contexts. As mentioned earlier, the bias tends to be stronger among older populations, but the results show it is also influenced by the style of the artworks and by how and where they are presented. In some settings, the bias becomes very weak or nearly disappears. This further supports the observation that, much like earlier reactions to new technologies in art, resistance to AI may be transitional rather than permanent.”

A key limitation involves how previous experiments presented artificial intelligence. Many studies framed the technology as an autonomous agent that created art independently. This description often conflicts with real-world artistic practice.

“The practical significance of these findings needs to be critically examined,” de Rooij noted. “Many of the studies included in the meta-analysis frame AI as if it were an autonomous artist, which does not reflect artistic practice, where AI is typically used as a responsive material. The AI-as-artist framing evokes dystopian imaginaries about AI replacing human artists or threatening the humanity in art. As a result, some studies may elicit stronger negative responses to AI, but in a way that has no clear real-world counterpart.”

Future research should investigate the role of invisible human involvement in AI art. De Rooij plans to conduct follow-up studies.

“The next step is to study bias against AI in art in more realistic settings, such as galleries or museums, and in ways that better reflect how artists actually use AI in their creative practice,” de Rooij said. “This is a reaction to the finding that bias against AI seemed particularly strong in online studies, which merits verification of the bias in real-world settings. This proposed follow-up research has recently received funding from the Dutch Research Council, and the first results are expected in late 2026. We are excited about moving this work forward!”

The study, “Bias against artificial intelligence in visual art: A meta-analysis,” was authored by Alwin de Rooij.

Why oversharing might be the smartest move for your career and relationships

13 February 2026 at 06:15

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

In a recent episode of the Hidden Brain podcast titled “Coming Clean,” released on Monday, February 9, experts discussed the surprising power of vulnerability. Between the five- and fifteen-minute marks of the broadcast, host Shankar Vedantam spoke with Harvard Business School psychologist Leslie John. They examined why admitting to our failures often yields better results than hiding them.

John described a common psychological phenomenon she calls the “disclosure hangover.” This is the sinking feeling of regret or anxiety that settles in the morning after you share a personal, embarrassing, or vulnerable story with colleagues. While many people worry that these moments destroy their professional image, John argues that these fears are often misplaced.

Research conducted by John indicates that calculated vulnerability can actually boost a leader’s standing. In one study involving a Google executive, the leader recorded a video introduction where he admitted he applied for roughly twenty jobs before landing his current role. Viewers trusted him more and expressed a greater willingness to work for him compared to when he hid this past failure.

The most significant finding from this experiment was that the executive’s perceived competence remained stable. Employees did not think he was less capable of doing his job simply because he struggled in the past. This evidence challenges the common belief that leaders must appear perfect to command respect.

The episode also highlighted the experience of Dr. Anna Lembke, a psychiatrist at Stanford University who treats addiction. Lembke publicly shared her own personal struggle with a compulsive habit of reading graphic romance novels. Despite her fears that this would ruin her reputation, the admission made her appear more confident and relatable to her audience.

Beyond social benefits, there is a biological reason humans feel the urge to share personal details. John cited research by scientist Diana Tamir showing that self-disclosure activates the brain’s reward centers. Talking about oneself generates a neurological response similar to the pleasure derived from eating good food.

This biological drive aligns with a deep psychological need to be truly understood by others. The discussion noted that individuals, particularly those with low self-esteem, feel more secure when partners see them accurately rather than through an overly positive lens. Being known for who you really are provides a profound sense of relief.

While society often warns against sharing “too much information,” John suggests we should worry more about sharing too little. Authentic self-expression acts as a powerful tool for building trust. By letting down their guard, professionals and partners alike can foster stronger connections.

Younger women find men with beards less attractive than older women do

13 February 2026 at 05:00

A new study published in Adaptive Human Behavior and Physiology suggests that a woman’s age and reproductive status may influence her preferences for male physical traits. The research indicates that postmenopausal women perceive certain masculine characteristics, such as body shape and facial features, differently than women who are still in their reproductive years. These findings offer evidence that biological shifts associated with menopause might alter the criteria women use to evaluate potential partners.

Scientists have recognized that physical features act as powerful biological signals in human communication. Secondary sexual characteristics are traits that appear during puberty and visually distinguish men from women. These include features such as broad shoulders, facial hair, jawline definition, and muscle mass.

Evolutionary psychology suggests that these traits serve as indicators of health and genetic quality. For instance, a muscular physique or a strong jawline often signals high testosterone levels and physical strength. Women of reproductive age typically prioritize these markers because they imply that a potential partner possesses “good genes” that could be passed to offspring.

However, researchers have historically focused most of their attention on the preferences of young women. Less is known about how these preferences might change as women age and lose their reproductive capability. The biological transition of menopause involves significant hormonal changes, including a decrease in estrogen levels.

This hormonal shift may correspond to a change in mating strategies. The “Grandmother Hypothesis” proposes that older women shift their focus from reproduction to investing in their existing family line. Consequently, they may no longer prioritize high-testosterone traits, which can be associated with aggression or short-term mating.

Instead, older women might prioritize traits that signal cooperation, reliability, and long-term companionship. To test this theory, a team of researchers from Poland designed a study to compare the preferences of women at different stages of life. The research team included Aurelia Starzyńska and Łukasz Pawelec from the Wroclaw University of Environmental and Life Sciences and the University of Warsaw, alongside Maja Pietras from Wroclaw Medical University and the University of Wroclaw.

The researchers recruited 122 Polish women to participate in an online survey. The participants ranged in age from 19 to 70 years old. Based on their survey responses regarding menstrual regularity and history, the researchers categorized the women into three groups.

The first group was premenopausal, consisting of women with regular reproductive functions. The second group was perimenopausal, including women experiencing the onset of menopausal symptoms and irregular cycles. The third group was postmenopausal, defined as women whose menstrual cycles had ceased for at least one year.

To assess preferences, the researchers created a specific set of visual stimuli. They started with photographs of a single 22-year-old male model. Using photo-editing applications, they digitally manipulated the images to create distinct variations in appearance.

The researchers modified the model’s face to appear either more feminized, intermediate, or heavily masculinized. They also altered the model’s facial hair to show a clean-shaven look, light stubble, or a full beard.

Body shape was another variable manipulated in the study. The scientists adjusted the hip-to-shoulder ratio to create three silhouette types: V-shaped, H-shaped, and A-shaped. Finally, they modified the model’s musculature to display non-muscular, moderately muscular, or strongly muscular builds.

Participants viewed these twelve modified images and rated them on a scale from one to ten. They evaluated the man in the photos based on three specific criteria. The first criterion was physical attractiveness.

The second and third criteria involved personality assessments. The women rated how aggressive they perceived the man to be. They also rated the man’s perceived level of social dominance.

The results showed that a woman’s reproductive status does influence her perception of attractiveness. One significant finding related to the shape of the male torso. Postmenopausal women rated the V-shaped body, which is typically characterized by broad shoulders and narrow hips, as less attractive than other shapes.

This contrasts with general evolutionary expectations where the V-shape is a classic indicator of male fitness. The data suggests that as women exit their reproductive years, the appeal of this strong biological signal may diminish.

Age also played a distinct role in how women viewed facial hair. The study found that older women rated men with medium to full beards as more attractive than younger women did, and this preference for beards increased with the age of the participant.

The researchers suggest that beards might signal maturity and social status rather than just raw genetic fitness. Younger women in the study showed a lower preference for beards. This might occur because facial hair can mask other facial features that young women use to assess mate quality.

The study produced complex results regarding facial masculinity. Chronological age showed a slight positive association with finding feminized faces attractive. This aligns with the idea that older women might prefer “softer” features associated with cooperation.

However, when isolating the specific biological factor of menopause, the results shifted. Postmenopausal women rated feminized faces as less attractive than premenopausal women did. This indicates that the relationship between aging and facial preference is not entirely linear.

Perceptions of aggression also varied by group. Postmenopausal women rated men with medium muscularity as more aggressive than men with other body types. This association was not present in the younger groups.

The researchers propose that older women might view visible musculature as a signal of potential threat rather than protection. Younger women, who are more likely to seek a partner for reproduction, may view muscles as a positive sign of health and defense.

Interestingly, the study found no significant connection between the physical traits and perceived social dominance. Neither the age of the women nor their menopausal status affected how they rated a man’s dominance. This suggests that while attractiveness and aggression are linked to physical cues, dominance might be evaluated through other means not captured in static photos.

The study, like all research, has limitations. One issue involved the method used to find participants, known as snowball sampling. In this process, existing participants recruit future subjects from among their own acquaintances. This method may have resulted in a sample that is not fully representative of the general population.

Reliance on online surveys also introduces a technology bias. Older women who are less comfortable with the internet may have been excluded from the study. This could skew the results for the postmenopausal group.

Another limitation involved the stimuli used. The photographs were all based on a single 22-year-old male model. This young age might not be relevant or appealing to women in their 50s, 60s, or 70s. Postmenopausal women might naturally prefer older men, and evaluating a man in his early twenties could introduce an age-appropriateness bias. The researchers acknowledge that future studies should use models of various ages to ensure more accurate ratings.

Despite these limitations, the study provides evidence that biological changes in women influence social perception. The findings support the concept that mating psychology evolves across the lifespan. As the biological need for “good genes” fades, women appear to adjust their criteria for what makes a man attractive.

The study, “The Perception of Women of Different Ages of Men’s Physical attractiveness, Aggression and Social Dominance Based on Male Secondary Sexual Characteristics,” was authored by Aurelia Starzyńska, Maja Pietras, and Łukasz Pawelec.

Genetic risk for depression predicts financial struggles, but the cause isn’t what scientists thought

13 February 2026 at 05:00

A new study published in the Journal of Psychopathology and Clinical Science offers a nuanced look at how genetic risk for depression interacts with social and economic life circumstances to influence mental health over time. The findings indicate that while people with a higher genetic liability for depression often experience financial and educational challenges, these challenges may not be directly caused by the genetic risk itself.

Scientists conducted the study to better understand the developmental pathways that lead to depressive symptoms. A major theory in psychology, known as the bioecological model, proposes that genetic predispositions do not operate in a vacuum. Instead, this model suggests that a person’s genetic makeup might shape the environments they select or experience. For example, a genetic tendency toward low mood or low energy might make it harder for an individual to complete higher education or maintain steady employment.

If this theory holds true, those missed opportunities could lead to financial strain or a lack of social resources. These environmental stressors would then feed back into the person’s life, potentially worsening their mental health. The researchers aimed to test whether this specific chain of events is supported by data. They sought to determine if genetic risk for depression predicts changes in depressive symptoms specifically by influencing socioeconomic factors like wealth, debt, and education.

To investigate these questions, the researchers utilized data from two massive, long-term projects in the United States. The first dataset came from the National Longitudinal Study of Adolescent Health, also known as Add Health. This sample included 5,690 participants who provided DNA samples. The researchers tracked these individuals from adolescence, starting around age 16, into early adulthood, ending around age 29.

The second dataset served as a replication effort to see if the findings would hold up in a different group. This sample came from the Wisconsin Longitudinal Study, or WLS, which included 8,964 participants. Unlike the younger cohort in Add Health, the WLS participants were tracked across a decade in mid-to-late life, roughly from age 53 to 64. Using two different age groups allowed the scientists to see if these patterns persisted across the lifespan.

For both groups, the researchers calculated a “polygenic index” for each participant. This is a personalized score that summarizes thousands of tiny genetic variations across the entire genome that are statistically associated with depressive symptoms. A higher score indicates a higher genetic probability of experiencing depression. The researchers then measured four specific socioeconomic resources: educational attainment, total financial assets, total debt, and access to health insurance.
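
Conceptually, a polygenic index is just a weighted sum across many genetic variants. The sketch below illustrates that arithmetic with random numbers; the genotype matrix, weights, and standardization step are assumptions standing in for a real pipeline, which would also handle quality control and linkage disequilibrium.

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_snps = 5, 1000

# 0, 1, or 2 copies of each "risk" allele per person (invented genotypes).
genotypes = rng.integers(0, 3, size=(n_people, n_snps))
# Per-variant weights, as if estimated in a prior genome-wide study (invented).
weights = rng.normal(0.0, 0.01, size=n_snps)

raw = genotypes @ weights                          # weighted allele sum per person
polygenic_index = (raw - raw.mean()) / raw.std()   # standardized, as such scores usually are
print(polygenic_index)
```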

In the initial phase of the analysis, the researchers looked at the population as a whole. This is called a “between-family” analysis because it compares unrelated individuals against one another. In the Add Health sample, they found that higher genetic risk for depression was indeed associated with increases in depressive symptoms over the 12-year period.

The data showed that this link was partially explained by the socioeconomic variables. Participants with higher genetic risk tended to have lower educational attainment, fewer assets, more debt, and more difficulty maintaining health insurance. These difficult life circumstances, in turn, were associated with rising levels of depression.

The researchers then repeated this between-family analysis in the older Wisconsin cohort. The results were largely consistent. Higher genetic risk predicted increases in depression symptoms over the decade. Once again, this association appeared to be mediated by the same social factors. Specifically, participants with higher genetic risk reported lower net worth and were more likely to have gone deeply into debt or experienced healthcare difficulties.

These results initially seemed to support the idea that depression genes cause real-world problems that then cause more depression. However, the researchers took a significant additional step to test for causality. They performed a “within-family” analysis using siblings included in the Wisconsin study.

Comparing siblings provides a much stricter test of cause and effect. Siblings share roughly 50 percent of their DNA and grow up in the same household, which controls for many environmental factors like parenting style and childhood socioeconomic status. If the genetic risk for depression truly causes a person to acquire more debt or achieve less education, the sibling with the higher polygenic score should have worse economic outcomes than the sibling with the lower score.
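
The logic of such a sibling comparison can be sketched as a within-family regression, in which each variable is demeaned within family so that anything siblings share drops out. The toy example below uses invented numbers and hypothetical variable names; it illustrates the design, not the authors' model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Invented sibling pairs: "pgi" is a depression polygenic index,
# "symptoms" a depressive-symptom score.
df = pd.DataFrame({
    "family":   [1, 1, 2, 2, 3, 3],
    "pgi":      [0.4, -0.1, 1.2, 0.8, -0.5, -0.9],
    "symptoms": [12, 9, 18, 15, 7, 6],
})

# Demean within family: everything siblings share drops out of the model.
for col in ["pgi", "symptoms"]:
    df[col + "_w"] = df[col] - df.groupby("family")[col].transform("mean")

within = smf.ols("symptoms_w ~ pgi_w", data=df).fit()
print(within.params)  # slope = within-family link between genetic risk and symptoms
```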

When the researchers applied this sibling-comparison model, the findings changed. Within families, the sibling with higher genetic risk did report more depressive symptoms. This confirms that the genetic score is picking up on a real biological vulnerability. However, the link between the depression genetic score and the socioeconomic factors largely disappeared.

The sibling with higher genetic risk for depression was not significantly more likely to have lower education, less wealth, or more debt than their co-sibling. This lack of association in the sibling model suggests that the genetic risk for depression does not directly cause these negative socioeconomic outcomes. Instead, the correlation seen in the general population is likely due to other shared factors.

One potential explanation for the discrepancy involves a concept called pleiotropy, where the same genes influence multiple traits. The researchers conducted sensitivity analyses that accounted for genetic scores related to educational attainment. They found that once they controlled for the genetics of education, the apparent link between depression genes and socioeconomic status vanished.

This suggests that the same genetic variations that influence how far someone goes in school might also be correlated with depression risk. It implies that low education or financial struggle is not necessarily a downstream consequence of depression risk, but rather that both depression and socioeconomic struggles may share common genetic roots or be influenced by broader family environments.

The study has some limitations. Both datasets were composed almost entirely of individuals of European ancestry. This lack of diversity means the results may not apply to people of other racial or ethnic backgrounds. Additionally, the measures of debt and insurance were limited to the questions available in these pre-existing surveys. They may not have captured the full nuance of financial stress.

Furthermore, while sibling models help rule out family-wide environmental factors, they cannot account for every unique experience a person has. Future research is needed to explore how these genetic risks interact with specific life events, such as trauma or job loss, which were not the primary focus of this investigation. The researchers also note that debt and medical insurance difficulties are understudied in this field and deserve more detailed attention in future work.

The study, “Genotypic and Socioeconomic Risks for Depressive Symptoms in Two U.S. Cohorts Spanning Early to Older Adulthood,” was authored by David A. Sbarra, Sam Trejo, K. Paige Harden, Jeffrey C. Oliver, and Yann C. Klimentidis.

The biology of bonding: Andrew Huberman explains attachment and desire

13 February 2026 at 04:17

PsyPost’s PodWatch highlights interesting clips from recent podcasts related to psychology and neuroscience.

In a recent episode of the Huberman Lab podcast, released on Thursday, February 12, Dr. Andrew Huberman explores the biological and psychological roots of human connection. The episode, titled “Essentials: The Science of Love, Desire & Attachment,” examines how early life experiences and specific brain functions create the feelings of romance. Huberman breaks down the complex science behind why humans bond with certain people and how relationships either succeed or fail over time.

During the first five minutes of the broadcast, Huberman explains that adult romantic styles often mirror the emotional bond a person had with their caregivers as a toddler. He references the famous “Strange Situation” procedure developed by psychologist Mary Ainsworth in the 1970s. In this experiment, researchers observed how children reacted when their parents left a room and subsequently returned.

Based on these reactions, researchers categorized children into groups such as securely attached or anxious-avoidant. Huberman notes that these early classifications are strong predictors of how individuals will behave in romantic partnerships later in life. However, he emphasizes that these emotional templates are not permanent and can change once a person understands them.

The discussion moves beyond psychology to look at the physical brain. Huberman clarifies that there is no single area in the brain responsible for creating the feeling of love. Instead, multiple brain regions work together in a coordinated sequence to produce the states of desire and attachment.

Around the ten-minute mark, the host details the specific chemical and electrical systems involved in bonding. He corrects a common misconception about dopamine, explaining that it is primarily a chemical for motivation and craving rather than just pleasure. This chemical acts as a currency in the brain that drives the pursuit of a partner.

A major component of connection is the neural circuit for empathy, which involves the prefrontal cortex and the insula. The insula is a region of the brain that helps people sense their own internal body state, a process known as interoception. This area allows individuals to pay attention to their own feelings while simultaneously reading the emotions of others.

Huberman introduces the concept of “positive delusion” as a requirement for long-term stability. This describes a mental state where a person believes that only their specific partner can make them feel a certain way. This unique biological bias helps maintain the bond between two people over time.

Huberman reviews research from the Gottman Lab at the University of Washington regarding relationship breakdown. The researchers identified four negative behaviors that predict failure: criticism, defensiveness, stonewalling, and contempt. Stonewalling occurs when a listener withdraws from an interaction and stops responding to their partner.

Among these negative behaviors, contempt is identified as the most destructive force in a partnership. Huberman cites the researchers who describe contempt as the “sulfuric acid” of a relationship because it erodes the emotional bond. This hostility completely shuts down the empathy circuits required for connection.

Evening screen use may be more relaxing than stimulating for teenagers

13 February 2026 at 03:00

A recent study published in the Journal of Sleep Research suggests that evening screen use might not be as physically stimulating for teenagers as many parents and experts have assumed. The findings provide evidence that most digital activities actually coincide with lower heart rates compared to non-screen activities like moving around the house or playing. This indicates that the common connection between screens and poor sleep is likely driven by the timing of device use rather than a state of high physical arousal.

Adolescence is a time when establishing healthy sleep patterns is essential for mental health and growth, yet many young people fall short of the recommended eight to ten hours of sleep. While screen use has been linked to shorter sleep times, the specific reasons why this happens are not yet fully understood.

Existing research has looked at several possibilities, such as the light from screens affecting hormones or the simple fact that screens take up time that could be spent sleeping. Some experts have also worried that the excitement from social media or gaming could keep the body in an active state that prevents relaxation. The new study was designed to investigate the physical arousal theory by looking at heart rate in real-world settings rather than in a laboratory.

“In our previous research, we found that screen use in bed was linked with shorter sleep, largely because teens were falling asleep later. But that left an open question: were screens simply delaying bedtime, or were they physiologically stimulating adolescents in a way that made it harder to fall asleep?” said study author Kim Meredith-Jones, a research associate professor at the University of Otago.

“In this study, we wanted to test whether evening screen use actually increased heart rate — a marker of physiological arousal — and whether that arousal explained delays in falling asleep. In other words, is it what teens are doing on screens that matters, or just the fact that screens are replacing sleep time?”

By using objective tools to track both what teens do on their screens and how their hearts respond, the team hoped to fill gaps in existing knowledge. They aimed to see if different types of digital content, such as texting versus scrolling, had different effects on the heart. Understanding these connections is important for creating better guidelines for digital health in young people.

The research team recruited a group of 70 adolescents from Dunedin, New Zealand, who were between 11 and nearly 15 years old. This sample was designed to be diverse, featuring 31 girls and 39 boys from various backgrounds. Approximately 33 percent of the participants identified as indigenous Māori, while others came from Pacific, Asian, or European backgrounds.

To capture a detailed look at their evening habits, the researchers used a combination of wearable technology and video recordings over four different nights. Each participant wore a high-resolution camera attached to a chest harness starting three hours before their usual bedtime. This camera recorded exactly what they were doing and what screens they were viewing until they entered their beds.

Once the participants were in bed, a stationary camera continued to record their activities until they fell asleep. This allowed the researchers to see if they used devices while under the covers and exactly when they closed their eyes. The video data was then analyzed by trained coders who categorized screen use into ten specific behaviors, such as watching videos, gaming, or using social media.

The researchers also categorized activities as either passive or interactive. Passive activities included watching, listening, reading, or browsing, while interactive activities included gaming, communication, and multitasking. Social media use was analyzed separately to see its specific impact on heart rate compared to other activities.
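
The passive/interactive coding scheme can be summarized as a simple lookup. In the sketch below, the category assignments follow the article's description, but the behavior labels are illustrative rather than the coders' exact ten-behavior taxonomy.

```python
# Category assignments per the article; labels are illustrative stand-ins.
CATEGORY = {
    "watching": "passive", "listening": "passive",
    "reading": "passive", "browsing": "passive",
    "gaming": "interactive", "communication": "interactive",
    "multitasking": "interactive",
    "social media": "analyzed separately",
}

def classify(behavior: str) -> str:
    return CATEGORY.get(behavior, "uncoded")

print(classify("gaming"))  # -> "interactive"
```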

At the same time, the participants wore a Fitbit Inspire 2 on their dominant wrist to track their heart rate every few seconds. The researchers used this information to see how the heart reacted to each specific screen activity in real time. This objective measurement provided a more accurate picture than asking the teenagers to remember how they felt or what they did.

To measure sleep quality and duration, each youth also wore a motion-sensing device on their other wrist for seven consecutive days. This tool, known as an accelerometer, provided data on when they actually fell asleep and how many times they woke up. The researchers then used statistical models to see if heart rate patterns during screen time could predict these sleep outcomes.

The data revealed that heart rates were consistently higher during periods when the teenagers were not using screens. The average heart rate during non-screen activities was approximately 93 beats per minute, which likely reflects the physical effort of moving around or doing chores. In contrast, when the participants were using their devices, their average heart rate dropped to about 83 beats per minute.

This suggests that screen use is often a sedentary behavior that allows the body to stay relatively calm. When the participants were in bed, the difference was less extreme, but screen use still tended to accompany lower heart rates than other in-bed activities. These findings indicate that digital engagement may function as a way for teenagers to wind down after a long day.

The researchers also looked at how specific types of digital content affected the heart. Social media use was associated with the lowest heart rates, especially when the teenagers were already in bed. Gaming and multitasking between different apps also showed lower heart rate readings compared to other screen-based tasks.

“We were surprised to find that heart rates were lower during social media use,” Meredith-Jones told PsyPost. “Previous research has suggested that social media can be stressful or emotionally intense for adolescents, so we expected to see higher arousal. Instead, our findings suggest that in this context, teens may have been using social media as a way to unwind or switch off. That said, how we define and measure ‘social media use’ matters, and we’re now working on more refined ways to capture the context and type of engagement.”

On the other hand, activities involving communication, such as texting or messaging, were linked to higher heart rates. This type of interaction seemed to be less conducive to relaxation than scrolling through feeds or watching videos. Even so, the heart rate differences between these various digital activities were relatively small.

When examining sleep patterns, the researchers found that heart rate earlier in the evening had a different relationship with sleep than heart rate closer to bedtime. Higher heart rates occurring more than two hours before bed were linked to falling asleep earlier in the night. This may be because higher activity levels in the early evening help the body build up a need for rest.

However, heart rate in the two hours before bed and while in bed showed the opposite association with falling asleep. For every increase of 10 beats per minute during this window, the participants took about nine minutes longer to drift off. This provides evidence that physiological arousal right before bed can delay the start of sleep.
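
To put that slope in concrete terms, the snippet below simply applies the roughly nine-minutes-per-ten-beats figure to a couple of heart rate differences. The constant is read off the reported numbers and is an approximation, not the study's fitted model.

```python
# Rough illustration of the reported association, not the study's model:
# about 9 extra minutes of sleep-onset delay per 10 bpm of pre-bed heart rate.
SLOPE_MIN_PER_BPM = 9 / 10

for delta_bpm in (10, 30):
    delay = delta_bpm * SLOPE_MIN_PER_BPM
    print(f"+{delta_bpm} bpm -> roughly {delay:.0f} minutes later sleep onset")
```

A 30 bpm increase works out to roughly 27 minutes, in line with the "about 30 minutes" figure Meredith-Jones cites below.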

Notably, while a higher heart rate made it harder to fall asleep, it did not seem to reduce the total amount of sleep the teenagers got. It also did not affect how often they woke up during the night or the general quality of their rest. The researchers noted that a person would likely need a very large increase in heart rate to see a major impact on their sleep schedule.

“The effects were relatively small,” Meredith-Jones explained. “For example, our data suggest heart rate would need to increase by around 30 beats per minute to delay sleep onset by about 30 minutes. The largest differences we observed between screen activities were closer to 10 beats per minute, making it unlikely that typical screen use would meaningfully delay sleep through physiological arousal alone.”

“The key takeaway is that most screen use in the evening did not increase heart rate. In fact, many types of screen activity were associated with lower heart rates compared to non-screen time. Although higher heart rate before bed was linked with taking longer to fall asleep, the changes in heart rate we observed during screen use were generally small. Overall, most evening screen activities appeared more relaxing than arousing.”

One limitation of this study is that the researchers did not have a baseline heart rate for each participant while they were completely at rest. Without this information, it is difficult to say for certain if screens were actively lowering the heart rate or if the teens were just naturally calm. Individual differences in biology could account for some of the variations seen in the data.

“One strength of this study was our use of wearable cameras to objectively classify screen behaviours such as gaming, social media, and communication,” Meredith-Jones noted. “This approach provides much richer and more accurate data than self-report questionnaires or simple screen-time analytics. However, a limitation is that we did not measure each participant’s true resting heart rate, so we can’t definitively say whether higher heart rates reflected arousal above baseline or just individual differences. That’s an important area for refinement in future research.”

It is also important to note that the findings don’t imply that screens are always helpful for sleep. Even if they are not physically arousing, using a device late at night can still lead to sleep displacement. This happens when the time spent on a screen replaces time that would otherwise be spent sleeping, leading to tiredness the next day. On the other hand, one shouldn’t assume that screens always impede sleep, either.

“A common assumption is that all screen use is inherently harmful for sleep,” Meredith-Jones explained. “Our findings don’t support that blanket statement. In earlier work, we found that screen use in bed was associated with shorter sleep duration, but in this study, most screen use was not physiologically stimulating. That suggests timing and context matter, and that some forms of screen use may even serve as a wind-down activity before bed.”

Looking ahead, “we want to better distinguish between different types of screen use, for example, interactive versus passive engagement, or emotionally charged versus neutral communication,” Meredith-Jones said. “We’re also developing improved real-world measurement tools that can capture not just how long teens use screens, but what they’re doing, how they’re engaging, and in what context. That level of detail is likely to give us much clearer answers than simple ‘screen time’ totals.”

The study, “Screens, Teens, and Sleep: Is the Impact of Nighttime Screen Use on Sleep Driven by Physiological Arousal?” was authored by Kim A. Meredith-Jones, Jillian J. Haszard, Barbara C. Galland, Shay-Ruby Wickham, Bradley J. Brosnan, Takiwai Russell-Camp, and Rachael W. Taylor.

Can brain stimulation treat psychopathy?

13 February 2026 at 01:00

Scientists exploring new ways to address psychopathic traits have found that gentle electrical or magnetic stimulation of the brain may slightly improve empathy and prosocial behavior. A new study published in Progress in Neuro-Psychopharmacology and Biological Psychiatry suggests the technology shows promise—but there is currently no direct evidence it works in people with psychopathy.

Psychopathy is often associated with persistent antisocial behavior and emotional differences, such as reduced empathy, guilt, and concern for others. Traditional treatments, including therapy programs and anger-management courses, have had limited success in changing these core emotional traits.

This has led researchers to explore whether differences in brain activity might help explain psychopathy, and whether targeting the brain directly could offer new treatment possibilities.

Brain imaging studies have shown that people with psychopathic traits often have unusual activity in regions linked to emotion and decision-making. These include areas involved in recognizing fear, responding to others’ pain, and regulating behavior.

Scientists have therefore begun testing non-invasive brain stimulation, which utilizes magnets or weak electrical currents applied to the scalp, to see whether altering brain activity can influence emotional responses.

Led by Célia F. Camara from the University of Essex in the U.K., the research team behind the new study wanted to know whether these brain-stimulation techniques could change traits related to psychopathy.

Camara and colleagues conducted a large review and statistical analysis of 64 experiments involving 122 measured effects. The studies examined several forms of stimulation, including transcranial magnetic stimulation and transcranial direct current stimulation, and compared them with sham (placebo-like) conditions.

Most experiments were conducted with healthy adult volunteers rather than people diagnosed with psychopathy. Participants completed tasks or questionnaires measuring empathy, emotional reactions, or prosocial behavior before and after brain stimulation. The researchers then combined results across studies to see whether any consistent patterns emerged.

The findings demonstrated that certain types of “excitatory” brain stimulation—designed to increase activity in targeted brain regions—produced small to moderate improvements in social and emotional responses. In some cases, participants reported greater empathy, increased willingness to help others, or increased feelings of guilt. Other types of stimulation that dampen brain activity sometimes reduced these responses.

Overall, the analysis suggests that non-invasive brain stimulation can influence emotional and social processing in ways that are relevant to psychopathic traits. However, the results were mixed and varied widely depending on the type of stimulation, the brain area targeted, and how many sessions participants received.

The researchers noted that while the findings provide early proof that emotional traits can be influenced by brain stimulation, the technology is far from being a practical treatment. Notably, the review found that the only available study conducted specifically on psychopathic individuals reported null effects.

“The generalizability of our findings is limited by insufficient research on psychopathy-relevant samples. Responses to non-invasive brain stimulation in individuals with psychopathy may differ from those of non-psychopathic populations, as evidence indicates that individuals with psychopathy exhibit distinct neurobiological profiles compared with non-psychopathic cohorts,” Camara and colleagues cautioned.

Nevertheless, the results open the door to new ways of understanding and potentially addressing the emotional aspects of psychopathy.

The study, “On the possibility to modulate psychopathic traits via non-invasive brain stimulation: A systematic review and meta-analysis,” was authored by Célia F. Camara, Carmen S. Sergiou, Andrés Molero Chamizo, Alejandra Sel, Nathzidy G. Rivera Urbina, Michael A. Nitsche, and Paul H.P. Hanel.

Childhood trauma and genetics drive alcoholism at different life stages

12 February 2026 at 23:00

New research suggests that the path to alcohol dependence may differ depending on when the condition begins. A study published in Drug and Alcohol Dependence identifies distinct roles for genetic variations and childhood experiences in the development of Alcohol Use Disorder (AUD). The findings indicate that severe early-life trauma accelerates the onset of the disease, whereas specific genetic factors are more closely linked to alcoholism that develops later in adulthood. This separation of causes provides a more nuanced view of a condition that affects millions of people globally.

Alcohol Use Disorder is a chronic medical condition characterized by an inability to stop or control alcohol use despite adverse consequences. Researchers understand that the risk of developing this condition stems from a combination of biological and environmental factors. Genetic predisposition accounts for approximately half of the risk. The remaining risk comes from life experiences, particularly those occurring during formative years. However, the specific ways these factors interact have remained a subject of debate.

One specific gene of interest produces a protein called Brain-Derived Neurotrophic Factor, or BDNF. This protein acts much like a fertilizer for the brain. It supports the survival of existing neurons and encourages the growth of new connections and synapses. This process is essential for neuroplasticity, which is the brain’s ability to reorganize itself by forming new neural connections.

Variations in the BDNF gene can alter how the brain adapts to stress and foreign substances. Because alcohol consumption changes the brain’s structure, the gene that regulates brain plasticity is a prime suspect in the search for biological causes of addiction.

Yi-Wei Yeh and San-Yuan Huang, researchers from the Tri-Service General Hospital and National Defense Medical University in Taiwan, led the investigation. They aimed to untangle how BDNF gene variants, childhood trauma, and family dysfunction contribute to alcoholism. They specifically wanted to determine if these factors worked alone or if they amplified each other. For example, they sought to answer whether a person with a specific genetic variant would be more susceptible to the damaging effects of a difficult childhood.

The team recruited 1,085 participants from the Han Chinese population in Taiwan. After excluding individuals with incomplete data or DNA issues, the final analysis compared 518 patients diagnosed with Alcohol Use Disorder against 548 healthy control subjects.

The researchers categorized the patients based on when their drinking became a disorder. They defined early-onset as occurring at or before age 25 and late-onset as occurring after age 25. This distinction allowed them to see if different drivers were behind the addiction at different life stages.

To analyze the biological factors, the researchers collected blood samples from all participants. They extracted DNA to examine four distinct locations on the BDNF gene. These specific locations are known as single-nucleotide polymorphisms. They represent single-letter changes in the genetic code that can alter how the gene functions. The team looked for patterns in these variations to see if any were more common in the group with alcoholism.

Participants also completed detailed psychological assessments. The Childhood Trauma Questionnaire asked about physical, emotional, and sexual abuse, as well as physical and emotional neglect. A second survey measured Adverse Childhood Experiences (ACEs), which covers a broader range of household challenges such as divorce or incarcerated family members. A third tool, the Family APGAR, assessed how well the participants’ families functioned in terms of emotional support, communication, and adaptability.

The genetic analysis revealed a specific pattern of DNA variations associated with the disorder. This pattern, known as a haplotype, appeared more frequently in patients with Alcohol Use Disorder. A deeper look at the data showed that this genetic link was specific to late-onset alcoholism. This category includes individuals who developed the condition after the age of 25. This was a somewhat unexpected finding, as earlier research has often linked strong genetic factors to early-onset disease. The authors suggest that genetic influences on brain plasticity might become more pronounced as the brain ages.

The results regarding childhood experiences painted a different picture. Patients with Alcohol Use Disorder reported much higher rates of childhood trauma compared to the healthy control group. This included higher scores for physical abuse, emotional abuse, and neglect. The study found a clear mathematical relationship between trauma and age. The more severe the childhood trauma, the younger the patient was when they developed a dependency on alcohol. This supports the theory that some individuals use alcohol to self-medicate the emotional pain of early abuse.

The impact of Adverse Childhood Experiences (ACEs) was particularly stark. The data showed a compounding risk. Individuals with one or more adverse experiences were roughly 3.5 times more likely to develop the disorder than those with none. For individuals with two or more adverse experiences, the likelihood skyrocketed. They were 48 times more likely to develop Alcohol Use Disorder. This suggests that there may be a tipping point where the cumulative burden of stress overwhelms a young person’s coping mechanisms.
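
For context, a figure like "3.5 times more likely" typically comes from an odds ratio computed on a simple 2×2 comparison. The sketch below shows that arithmetic with invented counts chosen to produce the same value; these are not the study's data.

```python
def odds_ratio(exposed_cases, exposed_controls, unexposed_cases, unexposed_controls):
    """Odds of the outcome among the exposed divided by odds among the unexposed."""
    return (exposed_cases / exposed_controls) / (unexposed_cases / unexposed_controls)

# Invented counts: people with 1+ adverse experiences vs. people with none.
print(odds_ratio(300, 200, 150, 350))  # -> 3.5
```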

The researchers uncovered distinct differences between men and women regarding trauma. Men with the disorder reported higher rates of physical abuse in childhood compared to female patients. Women with the disorder reported higher rates of sexual abuse compared to males. The data suggested that women with a history of sexual abuse developed alcoholism seven to ten years earlier than those without such a history. This highlights a critical need for gender-specific approaches when addressing trauma in addiction treatment.

Family environment played a major role across the board. Patients with the disorder consistently reported lower family functioning compared to healthy individuals. This dysfunction was present regardless of whether the alcoholism started early or late in life. It appears that a lack of family support is a general risk factor rather than a specific trigger for a certain type of the disease. A supportive family acts as a buffer against stress. When that buffer is missing, the risk of maladaptive coping strategies increases.

The team tested the hypothesis that trauma might change how the BDNF gene affects a person. The analysis did not support this idea. The genetic risks and the environmental risks appeared to operate independently of one another. The gene variants did not make the trauma worse, and the trauma did not activate the gene in a specific way. This suggests that while both factors lead to the same outcome, they may travel along parallel biological pathways to get there.

There are limitations to this study that affect how the results should be interpreted. The participants were all Han Chinese, so the genetic findings might not apply to other ethnic populations. Genetic variations often differ by ancestry, and what is true for one group may not hold for another.

The study also relied on adults remembering their childhoods. This retrospective approach can introduce errors, as memory is not always a perfect record of the past. Additionally, the number of female participants was relatively small compared to males, which mirrors the prevalence of the disorder but limits statistical power for that subgroup.

The study also noted high rates of nicotine use among the alcohol-dependent group. Approximately 85 percent of the patients used nicotine. Since smoking can also affect brain biology, it adds another layer of complexity to the genetic analysis. The researchers attempted to control for this, but it remains a variable to consider.

Despite these caveats, the research offers a valuable perspective for clinicians. It suggests that patients who develop alcoholism early in life are likely driven by environmental trauma. Treatment for these individuals might prioritize trauma-informed therapy and psychological processing of past events. In contrast, patients who develop the disorder later in life might be grappling with a genetic vulnerability that becomes relevant as the brain ages. This could point toward different biological targets for medication or different behavioral strategies.

The authors recommend that future research should focus on replicating these findings in larger and more diverse groups. They also suggest using brain imaging technologies. Seeing how these gene variants affect the physical structure of the brain could explain why they predispose older adults to addiction.

Understanding the distinct mechanisms of early versus late-onset alcoholism is a step toward personalized medicine in psychiatry. By identifying whether a patient is fighting a genetic predisposition or the ghosts of a traumatic past, doctors may eventually be able to tailor treatments that address the root cause of the addiction.

The study, “Childhood trauma, family functioning, and the BDNF gene may affect the development of alcohol use disorder,” was authored by Yi-Wei Yeh, Catherine Shin Huey Chen, Shin-Chang Kuo, Chun-Yen Chen, Yu-Chieh Huang, Jyun-Teng Huang, You-Ping Yang, Jhih-Syuan Huang, Kuo-Hsing Ma, and San-Yuan Huang.

A key personality trait is linked to the urge to cheat in unhappy men

12 February 2026 at 21:00

A study in Sexual and Relationship Therapy found that men are more open to casual sex and infidelity than women. The research also highlights a strong link between relationship dissatisfaction, the desire for uncommitted sex, and the intention to cheat.

Infidelity has long been defined as a violation of promises and commitments within a romantic relationship, reflecting a failure to uphold expectations of love, loyalty, and support. However, modern views conceptualize infidelity as physical, sexual, or emotional behaviors that violate relationship norms and cause distress and negative relationship outcomes. Exactly which behaviors constitute infidelity varies across couples, as norms regarding emotional and sexual exclusivity differ between relationships.

The most common forms of infidelity are sexual and emotional infidelity. Sexual infidelity usually involves physical sexual behaviors with someone other than one’s partner. Emotional infidelity consists of forming intimate emotional bonds with a person other than the partner that breach relationship rules agreed upon by the couple. Research indicates that sexual and emotional infidelity often co-occur; they are, most often, not independent phenomena.

A key psychological characteristic linked to infidelity is sociosexuality. Sociosexuality is the level of openness to casual sex without commitment. Individuals with higher sociosexuality are more likely to engage in both sexual and emotional infidelity, as their attitudes and desires may conflict with monogamous relationship norms.

Study author Paula Pricope and her colleagues wanted to investigate whether sociosexuality plays a mediating role in the relationship between relationship satisfaction and intentions towards infidelity. They also wanted to know whether these associations are the same in men and women. The authors hypothesized that men would be more inclined to engage in infidelity compared to women and that their sociosexuality would be higher (i.e., they would be more open to casual sex).

Study participants were 246 volunteers from Romania with an average age of 24 years. Sixty-one percent of them were women. Seventy-two percent were in a non-marital romantic relationship, while 28 percent were married. Sixty-eight percent were from urban areas of Romania.

Participants completed assessments of intentions towards infidelity (the Intentions Towards Infidelity Scale), relationship satisfaction (the Relationship Assessment Scale), and sociosexuality (the Sociosexual Orientation Inventory – Revised).

Results showed that individuals reporting stronger intentions towards infidelity tended to have higher sociosexuality and be less satisfied with their relationships. In other words, individuals more willing to cheat on their partners tended to be more open to uncommitted sex and less satisfied with their relationships. Men tended to report higher sociosexuality and higher intentions towards infidelity than women.

The authors tested a statistical model proposing that lower relationship satisfaction leads to higher sociosexuality, which, in turn, increases intentions to cheat. The results indicated that this pathway was significant specifically for men. For male participants, lower relationship satisfaction was linked to higher sociosexuality, which then predicted higher intentions to cheat. However, this mediation pathway was not significant for women.
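
The mediation test described here follows the standard product-of-coefficients logic: estimate the path from satisfaction to sociosexuality (a), then the path from sociosexuality to intentions controlling for satisfaction (b), and multiply them. The sketch below reproduces that logic on simulated data with made-up coefficients and variable names; it is not the authors' analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with invented effect sizes; variable names are hypothetical.
rng = np.random.default_rng(1)
n = 200
sat = rng.normal(0, 1, n)                              # relationship satisfaction
soi = -0.4 * sat + rng.normal(0, 1, n)                 # sociosexuality
intent = 0.5 * soi - 0.2 * sat + rng.normal(0, 1, n)   # infidelity intentions
df = pd.DataFrame({"sat": sat, "soi": soi, "intent": intent})

a = smf.ols("soi ~ sat", data=df).fit().params["sat"]           # path a
b = smf.ols("intent ~ soi + sat", data=df).fit().params["soi"]  # path b
print("indirect effect (a * b):", a * b)  # the mediated pathway
```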

The study contributes to the scientific understanding of infidelity. However, all study data came from self-reports, leaving room for reporting bias to have affected the results. Additionally, the design of the study does not allow for causal inferences.

While it is indeed possible that lower relationship satisfaction leads to increased sociosexuality and infidelity intentions, it is also possible that higher sociosexuality and infidelity intentions reduce relationship satisfaction or make it harder for a person to be satisfied with a committed relationship. Other possibilities also remain open.

The paper, “The roles of sociosexuality and gender in the relationship between relationship satisfaction and intentions toward infidelity: a moderated mediation model,” was authored by Paula Pricope, Tudor-Daniel Huțul, Adina Karner-Huțuleac, and Andreea Huțul.

Methamphetamine increases motivation through brain processes separate from euphoria

12 February 2026 at 19:00

A study published in the journal Psychopharmacology has found that the increase in motivation people experience from methamphetamine is separate from the drug’s ability to produce a euphoric high. The findings suggest that these two common effects of stimulant drugs likely involve different underlying biological processes in the brain. This research indicates that a person might become more willing to work hard without necessarily feeling a greater sense of pleasure or well-being.

The researchers conducted the new study to clarify how stimulants affect human motivation and personal feelings. They intended to understand if the pleasurable high people experience while taking these drugs is the primary reason they become more willing to work for rewards. By separating these effects, the team aimed to gain insight into how drugs could potentially be used to treat motivation-related issues without causing addictive euphoria.

Another reason for the study was to investigate how individual differences in personality or brain chemistry change how a person responds to a stimulant. Scientists wanted to see if people who are naturally less motivated benefit more from these drugs than those who are already highly driven. The team also sought to determine if the drug makes tasks feel easier or if it simply makes the final reward seem more attractive to the user.

“Stimulant drugs like amphetamine are thought to produce ‘rewarding’ effects that contribute to abuse or dependence, by increasing levels of the neurotransmitter dopamine. Findings from animal models suggest that stimulant drugs, perhaps because of their effects on dopamine, increase motivation, or the animals’ willingness to exert effort,” explained study author Harriet de Wit, a professor at the University of Chicago.

“Findings from human studies suggest that stimulant drugs lead to repeated use because they produce subjective feelings of wellbeing. In the present study, we tested the effects of amphetamine in healthy volunteers, on both an effort task and self-reported euphoria.”

For their study, the researchers recruited a group of 96 healthy adults from the Chicago area. This group consisted of 48 men and 48 women between the ages of 18 and 35. Each volunteer underwent a rigorous screening process that included a physical exam, a heart health check, and a psychiatric interview to ensure they were healthy.

The study used a double-blind, placebo-controlled design to ensure the results were accurate and unbiased. This means that neither the participants nor the staff knew if a volunteer received the actual drug or an inactive pill on a given day. The participants attended two separate laboratory sessions where they received either 20 milligrams of methamphetamine or a placebo.

During these sessions, the participants completed a specific exercise called the Effort Expenditure for Rewards Task. This task required them to choose between an easy option for a small amount of money or a more difficult option for a larger reward. The researchers used this to measure how much physical effort a person was willing to put in to get a better payoff.

The easy task involved pressing a specific key on a keyboard 30 times with the index finger of the dominant hand within seven seconds. Successfully completing this task always resulted in a small reward of one dollar. This served as a baseline for the minimum amount of effort a person was willing to expend for a guaranteed but small gain.

The hard task required participants to press a different key 100 times using the pinky finger of their non-dominant hand within 21 seconds. The rewards for this more difficult task varied from about one dollar and 24 cents to over four dollars. This task was designed to be physically taxing and required a higher level of commitment to complete.

Before making their choice on each trial, participants were informed of the probability that they would actually receive the money if they finished the task. These probabilities were set at 12 percent, 50 percent, or 88 percent. This added a layer of risk to the decision, as a person might work hard for a reward but still receive nothing if the odds were not in their favor.
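To make the task’s incentive structure concrete, here is a small sketch of the expected payoff of each option on a single trial, using the reward amounts and probabilities reported above. It treats completion as certain, deliberately ignoring the physical cost that makes the hard option unattractive.

```python
# Expected-value structure of one trial of the effort task, using the
# parameters reported above (easy: $1.00; hard: roughly $1.24 to just
# over $4.00; win probabilities of 12%, 50%, or 88%). Completion is
# treated as certain, so the physical cost of effort is ignored here.
EASY_REWARD = 1.00

def expected_values(hard_reward, p_win):
    return p_win * EASY_REWARD, p_win * hard_reward

for p in (0.12, 0.50, 0.88):
    ev_easy, ev_hard = expected_values(hard_reward=4.00, p_win=p)
    print(f"p={p:.2f}  EV(easy)=${ev_easy:.2f}  EV(hard)=${ev_hard:.2f}")
```

On pure expected value the hard task always dominates; what the study measures is how the physical cost of effort, absent from this toy calculation, pushes people back toward the easy option.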

Throughout the four-hour sessions, the researchers measured the participants’ personal feelings and physical reactions at regular intervals. They used standardized questionnaires to track how much the participants liked the effects of the drug and how much euphoria they felt. They also monitored physical signs such as heart rate and blood pressure to ensure the safety of the volunteers.

Before the main sessions, the participants completed the task during an orientation to establish their natural effort levels. The researchers then divided the group in half based on these baseline scores. This allowed the team to compare people who were naturally inclined to work hard against those who were naturally less likely to choose the difficult task.

The results showed that methamphetamine increased the frequency with which people chose the hard task over the easy one across the whole group. This effect was most visible when the chances of winning the reward were in the low to medium range. The drug seemed to give participants a boost in motivation when the outcome was somewhat uncertain.

The data provides evidence that the drug had a much stronger impact on people who were naturally less motivated. Participants in the low baseline group showed a significantly larger increase in their willingness to choose the hard task compared to those in the high baseline group. For people who were already high achievers, the drug did not seem to provide much of an additional motivational boost.

To understand why the drug changed behavior, the researchers used a mathematical model to analyze the decision-making process. This model helped the team separate how much a person cares about the difficulty of a task from how much they value the reward itself. It provided a more detailed look at the internal trade-offs people make when deciding to work.

The model showed that methamphetamine specifically reduced a person’s sensitivity to the physical cost of effort. This suggests that the drug makes hard work feel less unpleasant or demanding than it normally would. Instead of making the reward seem more exciting, the drug appears to make the work itself feel less like a burden.

This change in effort sensitivity was primarily found in the participants who started with low motivation levels. For these individuals, the drug appeared to lower the mental or physical barriers that usually made them avoid the difficult option. In contrast, the drug did not significantly change the effort sensitivity of those who were already highly motivated.

Methamphetamine did not change how sensitive people were to the probability of winning the reward. This indicates that the drug affects the drive to work rather than changing how people calculate risks or perceive the odds of success. The volunteers still registered the odds of winning accurately; they were simply more willing to take on the harder task.
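A generic version of the kind of decision model described in the preceding paragraphs can be sketched as follows. This is an assumed, illustrative specification, not the authors’ exact model: subjective value weighs the probabilistic reward against an effort cost, and the drug’s reported effect corresponds to a lower effort-sensitivity parameter.

```python
# Illustrative effort-discounting choice model (assumed form, not the
# authors' exact specification). beta_effort is the effort-sensitivity
# parameter the drug appeared to reduce; p_win enters the value of both
# options, matching the finding that probability sensitivity was unchanged.
import numpy as np

def p_choose_hard(reward_hard, p_win, beta_effort,
                  effort_cost=1.0, reward_easy=1.0, temperature=1.0):
    sv_hard = p_win * reward_hard - beta_effort * effort_cost
    sv_easy = p_win * reward_easy  # easy option carries negligible effort
    # Logistic (softmax) choice rule over the two subjective values
    return 1.0 / (1.0 + np.exp(-(sv_hard - sv_easy) / temperature))

# Lowering beta_effort raises hard-task choices without touching p_win:
for beta in (1.5, 0.5):
    print(beta, round(p_choose_hard(reward_hard=3.0, p_win=0.5, beta_effort=beta), 3))
```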

As the researchers expected, the drug increased feelings of happiness and euphoria in the participants. It also caused the usual physical changes associated with stimulants, such as an increase in heart rate and blood pressure. Most participants reported that they liked the effects of the drug while they were performing the tasks.

A major finding of the study is that the boost in mood was not related to the boost in productivity. The participants who felt the highest levels of euphoria were not the same people who showed the greatest increase in hard task choices. “This suggests that different receptor actions of amphetamine mediate willingness to exert effort and feelings of wellbeing,” de Wit explained.

There was no statistical correlation between how much a person liked the drug and how much more effort they were willing to exert. This provides evidence that the brain processes that create pleasure from stimulants are distinct from those that drive motivated behavior. A person can experience the motivational benefits of a stimulant without necessarily feeling the intense pleasure that often leads to drug misuse.

The findings highlight that “drugs have numerous behavioral and cognitive actions, which may be mediated by different neurotransmitter actions,” de Wit told PsyPost. “The purpose of research in this area is to disentangle which effects are relevant to misuse or dependence liability, and which might have clinical benefits, and what brain processes underlie the effects.”

The results also highlight the importance of considering a person’s starting point when predicting how they will respond to a medication. Because the drug helped the least motivated people the most, it suggests that these treatments might be most effective for those with a clear deficit in drive.

The study, like all research, has some limitations. The participants were all healthy young adults, so it is not clear if the results would be the same for older people or those with existing health conditions. A more diverse group of volunteers would be needed to see if these patterns apply to the general population.

The study only tested a single 20-milligram dose of methamphetamine given by mouth. It is possible that different doses or different ways of taking the drug might change the relationship between mood and behavior. Using a range of doses in future studies would help researchers see if there is a point where the mood and effort effects begin to overlap.

Another limitation is that the researchers did not directly look at the chemical changes inside the participants’ brains. While they believe dopamine is involved, they did not use brain imaging technology to confirm this directly. Future research could use specialized scans to see exactly which brain regions are active when these changes in motivation occur.

“The results open the door to further studies to determine what brain mechanisms underlie the two behavioral effects,” de Wit said.

The study, “Effects of methamphetamine on human effort task performance are unrelated to its subjective effects,” was authored by Evan C. Hahn, Hanna Molla, Jessica A. Cooper, Joseph DeBrosse, and Harriet de Wit.

Most Americans experience passionate love only twice in a lifetime, study finds

12 February 2026 at 17:00

Most adults in the United States experience the intense rush of passionate love only about twice throughout their lives, according to a recent large-scale survey. The study, published in the journal Interpersona, suggests that while this emotional state is a staple of human romance, it remains a relatively rare occurrence for many individuals. The findings provide a new lens through which to view the frequency of deep romantic attachment across the entire adult lifespan.

The framework for this research relies on a classic model where love consists of three parts: passion, intimacy, and commitment. Passion is described as the physical attraction and intense longing that often defines the start of a romantic connection. Amanda N. Gesselman, a researcher at the Kinsey Institute at Indiana University, led the team of scientists who conducted this work.

The research team set out to quantify how often this specific type of love occurs, since earlier theories hold that passion runs high at the start of a relationship but fades as couples grow more comfortable. As a relationship matures, it often shifts toward companionate love, which is defined by deep affection and entwined lives rather than obsessive longing. Because this intense feeling is often fleeting, it might happen several times as people move through different stages of life.

The researchers wanted to see if social factors like age, gender, or sexual orientation influenced how often someone falls in love. Some earlier studies on university students suggested that most young people fall in love at least once by the end of high school. However, very little data existed regarding how these experiences accumulate for adults as they reach middle age or later life.

To find these answers, the team analyzed data from more than 10,000 single adults in the U.S. between the ages of 18 and 99. Participants were recruited to match the general demographic makeup of the country based on census data. This large group allowed the researchers to look at a wide variety of life histories and romantic backgrounds.

Participants were asked to provide a specific number representing how many times they had ever been passionately in love during their lives. On average, the respondents reported experiencing this intense feeling 2.05 times. This number suggests that for the average person, passionate love is a rare event that arises only a couple of times across an entire lifetime.

A specific portion of the group, about 14 percent, stated they had never felt passionate love at all. About 28 percent had felt it once, while 30 percent reported two experiences. Another 17 percent had three experiences, and about 11 percent reported four or more. These figures show that while the experience is widespread, it recurs only rarely over the course of most people’s lives.
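Taken together, the reported percentages and the 2.05 average pin down roughly how many experiences the “four or more” group must have had. Treating the rounded percentages as exact (an assumption), a quick calculation implies that group averaged about six:

```python
# Back-of-the-envelope check on the reported distribution, treating the
# rounded percentages as exact (an assumption). Solve for the average
# count in the "four or more" group that reproduces the 2.05 mean.
shares = {0: 0.14, 1: 0.28, 2: 0.30, 3: 0.17}  # reported proportions
share_4plus = 0.11
reported_mean = 2.05

known = sum(count * share for count, share in shares.items())  # = 1.39
implied_4plus_mean = (reported_mean - known) / share_4plus
print(round(implied_4plus_mean, 1))  # about 6.0 experiences in the 4+ group
```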

The study also looked at how these numbers varied based on the specific characteristics of the participants. Age showed a small link to the number of experiences, meaning older adults reported slightly more instances than younger ones. This result is likely because older people have had more years and more opportunities to encounter potential partners.

The increase with age was quite small, which suggests that people do not necessarily keep falling in love at a high rate as they get older. One reason for this might be biological, as the brain systems involved in reward and excitement are often most active during late adolescence and early adulthood. As people transition into mature adulthood, their responsibilities and self-reflection might change how they perceive or pursue new romantic passion.

Gender differences were present in the data, with men reporting slightly more experiences than women. This difference was specifically found among heterosexual participants, where heterosexual men reported more instances of passionate love than heterosexual women. This finding aligns with some previous research suggesting that men may be socialized to fall in love or express those feelings earlier in a relationship.

Among gay, lesbian, and bisexual participants, the number of experiences did not differ by gender. The researchers did not find that sexual orientation on its own created any differences in how many times a person fell in love. For example, the difference between heterosexual and bisexual participants was not statistically significant.

The researchers believe these results have important applications for how people view their own romantic lives. Many people feel pressure from movies, songs, and social media to constantly chase a state of high passion. Knowing that the average person only feels this a couple of times may help people feel more normal if they are not currently in a state of intense romance.

In a clinical or counseling setting, these findings could help people who feel they are behind in their romantic development. If someone has never been passionately in love, they are part of a group that includes more than one in ten adults. Seeing this as a common variation in human experience rather than a problem can reduce feelings of shame.

The researchers also noted that people might use a process called retrospective cognitive discounting. This happens when a person looks back at their past and views old relationships through a different lens based on their current feelings. An older person might look back at a past “crush” and decide it was not true passionate love, which would lower their total count.

This type of self-reflection might help people stay resilient after a breakup. By reinterpreting a past relationship as something other than passionate love, they might remain more open to finding a new connection in the future. This mental flexibility is part of how humans navigate the ups and downs of their romantic histories.

There are some limitations to the study that should be considered. Because the researchers only surveyed single people, the results might be different if they had included people who are currently married or in long-term partnerships. People who are in stable relationships might have different ways of remembering their past experiences compared to those who are currently unattached.

The study also relied on people remembering their entire lives accurately, which can be a challenge for older participants. Future research could follow the same group of people over many years to see how their feelings change as they happen. This would remove the need for participants to rely solely on their memories of the distant past.

The participants were all located in the United States, so these findings might not apply to people in other cultures. Different societies have different rules about how people meet, how they express emotion, and what they consider to be love. A global study would be needed to see if the “twice in a lifetime” average holds true in other parts of the world.

Additionally, the survey did not provide a specific definition of passionate love for the participants. Each person might have used their own personal standard for what counts as being passionately in love. Using a more standardized definition in future studies could help ensure that everyone is answering the question in the same way.

The researchers also mentioned that they did not account for individual personality traits or attachment styles. Some people are naturally more prone to falling in love quickly, while others are more cautious or reserved. These internal traits likely play a role in how many times someone experiences passion throughout their life.

Finally, the study did not include a large enough number of people with diverse gender identities beyond the categories of men and women. Expanding the research to include more gender-diverse individuals would provide a more complete picture of the human experience. Despite these gaps, the current study provides a foundation for understanding the frequency of one of life’s most intense emotions.

The study, “Twice in a lifetime: quantifying passionate love in U.S. single adults,” was authored by Amanda N. Gesselman, Margaret Bennett-Brown, Jessica T. Campbell, Malia Piazza, Zoe Moscovici, Ellen M. Kaufman, Melissa Blundell Osorio, Olivia R. Adams, Simon Dubé, Jessica J. Hille, Lee Y. S. Weeks, and Justin R. Garcia.

AI boosts worker creativity only if they use specific thinking strategies

12 February 2026 at 15:00

A new study published in the Journal of Applied Psychology suggests that generative artificial intelligence can boost creativity among employees in professional settings. But the research indicates that these tools increase innovative output only when workers use specific mental strategies to manage their own thought processes.

Generative artificial intelligence is a type of technology that can produce new content such as text, images, or computer code. Large language models like ChatGPT or Google’s Gemini use massive datasets to predict and generate human-like responses to various prompts. Organizations often implement these tools with the expectation that they will help employees come up with novel and useful ideas. Many leaders believe that providing access to advanced technology will automatically lead to a more innovative workforce.

However, recent surveys indicate that only a small portion of workers feel that these tools actually improve their creative work. The researchers conducted the new study to see if the technology truly helps and to identify which specific factors make it effective. They also wanted to see how these tools function in a real office environment where people manage multiple projects at once. Most previous studies on this topic took place in artificial settings using only one isolated task.

“When ChatGPT was released in November 2022, generative AI quickly became part of daily conversation. Many companies rushed to integrate generative AI tools into their workflows, often expecting that this would make employees more creative and, ultimately, give organizations a competitive advantage,” said study author Shuhua Sun, who holds the Peter W. and Paul A. Callais Professorship in Entrepreneurship at Tulane University’s A. B. Freeman School of Business.

“What struck us, though, was how little direct evidence existed to support those expectations, especially in real workplaces. Early proof-of-concept studies in labs and online settings began to appear, but their results were mixed. Even more surprisingly, there were almost no randomized field experiments examining how generative AI actually affects employee creativity on the job.”

“At the same time, consulting firms started releasing large-scale surveys on generative AI adoption. These reports showed that only a small percentage of employees felt that using generative AI made them more creative. Taken together with the mixed lab/online findings, this raised a simple but important question for us: If generative AI is supposed to enhance creativity, why does it seem to help only some employees and not others? What are those employees doing differently?”

“That question shaped the core of our project. So, instead of asking simply whether generative AI boosts creativity, we wanted to understand how it does so and for whom. Driven by these questions, we developed a theory and tested it using a randomized field experiment in a real organizational setting.”

The researchers worked with a technology consulting firm in China to conduct their field experiment. This company was an ideal setting because consulting work requires employees to find unique solutions for many different clients. The study included a total of 250 nonmanagerial employees from departments such as technology, sales, and administration. These participants had an average age of about 30 years and most held university degrees.

The researchers randomly split the workers into two groups. The first group received access to ChatGPT accounts and was shown how to use the tool for their daily tasks. The second group served as a control and did not receive access to the artificial intelligence software during the study. To keep worries about job displacement from distorting the results, the company told the first group that the technology was meant to assist them rather than replace them.

The experiment lasted for about one week. During this time, the researchers tracked how often the treated group used their new accounts. At the end of the week, the researchers collected data from several sources to measure the impact of the tool. They used surveys to ask employees about their work experiences and their thinking habits.

They also asked the employees’ direct supervisors to rate their creative performance. These supervisors did not know which employees were using the artificial intelligence tool. Additionally, the researchers used two external evaluators to judge specific ideas produced by the employees. These evaluators looked at how novel and useful the ideas were without knowing who wrote them.

The researchers looked at cognitive job resources, which are the tools and mental space people need to handle complex work. This includes having enough information and the ability to switch between hard and easy tasks. They also measured metacognitive strategies. This term describes how people actively monitor and adjust their own thinking to reach a goal.

A person with high metacognitive strategies might plan out their steps before starting a task. They also tend to check their own progress and change their approach if they are not making enough headway. The study suggests that the artificial intelligence tool increased the cognitive resources available to employees. The tool helped them find information quickly and allowed them to manage their mental energy more effectively.

The results show that the employees who had access to the technology generally received higher creativity ratings from their supervisors. The external evaluators also gave higher scores for novelty to the ideas produced by this group. The evidence suggests that the tool was most effective when workers already used strong metacognitive strategies. These workers were able to use the technology to fill specific gaps in their knowledge.

For employees who did not use these thinking strategies, the tool did not significantly improve their creative output. These individuals appeared to be less effective at using the technology to gain new resources. The study indicates that the tool provides the raw material for creativity, but the worker must know how to direct the process. Specifically, workers who monitored their own mental state knew when to use the tool to take a break or switch tasks.

This ability to switch tasks is important because it prevents a person from getting stuck on a single way of thinking. When the technology handled routine parts of a job, it gave workers more mental space to focus on complex problem solving. The researchers found that the positive effect of the technology became significant once a worker’s use of thinking strategies reached a certain level. Below that threshold, the tool did not provide a clear benefit for creativity.

The cognitive approach to creativity suggests that coming up with new ideas is a mental process of searching through different areas of knowledge. People must find pieces of information and then combine them in ways that have not been tried before. This process can be very demanding because people have a limited amount of time and mental energy. Researchers call this the knowledge burden.

It takes a lot of effort to find, process, and understand new information from different fields. If a person spends all their energy just gathering facts, they might not have enough mental energy left to actually be creative. Artificial intelligence can help by taking over the task of searching for and summarizing information. This allows the human worker to focus on the high-level task of combining those facts into something new.

Metacognition is essentially thinking about one’s own thinking. It involves a person being aware of what they know and what they do not know. When a worker uses metacognitive strategies, they act like a coach for their own brain. They ask themselves if their current plan is working or if they need to try a different path.

The study shows that this self-awareness is what allows a person to use artificial intelligence effectively. Instead of just accepting whatever the computer says, a strategic thinker uses the tool to test specific ideas. The statistical analysis revealed that the artificial intelligence tool provided workers with more room to think. This extra mental space came from having better access to knowledge and more chances to take mental breaks.

The researchers used a specific method called multilevel analysis to account for the way employees were organized within departments and teams. This helps ensure that the findings are not skewed by the influence of a single department or manager. The researchers also checked to see if other factors like past job performance or self-confidence played a role. Even when they accounted for these variables, the link between thinking strategies and the effective use of artificial intelligence remained strong.
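As an illustration of that analytic approach, the sketch below fits a mixed model with random intercepts for teams and a treatment-by-metacognition interaction on synthetic data. The variable names and the simple specification are assumptions for illustration, not the authors’ actual model.

```python
# Sketch of a multilevel moderation analysis of the kind described above
# (synthetic data; variable names and specification are assumed, not the
# authors'). Random intercepts absorb team-level differences; the
# ai_access:metacog interaction tests whether AI access helps mainly
# employees who use strong metacognitive strategies.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "ai_access": rng.integers(0, 2, n),  # randomized treatment
    "metacog": rng.normal(0, 1, n),      # metacognitive strategy use
    "team": rng.integers(0, 20, n),      # grouping structure
})
# Outcome simulated with the hypothesized interaction built in
df["creativity"] = (0.1 * df["ai_access"] + 0.2 * df["metacog"]
                    + 0.4 * df["ai_access"] * df["metacog"]
                    + rng.normal(0, 1, n))

model = smf.mixedlm("creativity ~ ai_access * metacog",
                    df, groups=df["team"]).fit()
print(model.summary())  # a positive ai_access:metacog term matches the pattern
```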

The data showed that the positive impact of the tool on creativity was quite large for those who managed their thinking well. For those with low scores in that area, the tool had almost no impact on their creative performance. To test creativity specifically, the researchers asked participants to solve a real problem. They had to provide suggestions for protecting employee privacy in a digital office.

The task required a response of at least 70 Chinese characters. It was designed to see if the participants could think of novel ways to prevent information leaks or excessive monitoring by leadership. The external raters then scored these responses based on how original and useful they were. This provided a more objective look at creativity than just asking a supervisor for their opinion.

“The main takeaway is that generative AI does not automatically make people more creative,” Sun told PsyPost. “Simply providing access to AI tools is not enough, and in many cases it yields little creative benefit. Our findings show that the creative value of AI depends on how people engage with it during the creative process. Individuals who actively monitor their own understanding, recognize what kind of help they need, and deliberately decide when and how to use AI are much more likely to benefit creatively.”

“In contrast, relying on AI in a more automatic or unreflective way tends to produce weaker creative outcomes. For the average person, the message is simple: AI helps creativity when it is used thoughtfully: Pausing to reflect on what you need, deciding when AI can be useful, and actively shaping its output iteratively are what distinguish creative gains from generic results.”

As with all research, there are some limitations to consider. The researchers relied on workers to report their own thinking strategies, which can sometimes be inaccurate. The study also took place in a single company within one specific country. People in different cultures might interact with artificial intelligence in different ways.

Future research could look at how long-term use of these tools affects human skills. There is a possibility that relying too much on technology could make people less independent over time. Researchers might also explore how team dynamics influence the way people use these tools. Some office environments might encourage better thinking habits than others.

It would also be helpful to see if the benefits of these tools continue to grow over several months or if they eventually level off. These questions will be important as technology continues to change the way we work. The findings suggest that simply buying new software is not enough to make a company more innovative. Organizations should also consider training their staff to be more aware of their own thinking processes.

Since the benefits of artificial intelligence depend on a worker’s thinking habits, generic software training might not be enough. Instead, programs might need to focus on how to analyze a task and how to monitor one’s own progress. These metacognitive skills are often overlooked in traditional professional development. The researchers note that these skills can be taught through short exercises. Some of these involve reflecting on past successes or practicing new ways to plan out a workday.

The study, “How and for Whom Using Generative AI Affects Creativity: A Field Experiment,” was authored by Shuhua Sun, Zhuyi Angelina Li, Maw-Der Foo, Jing Zhou, and Jackson G. Lu.

Scientists asked men to smell hundreds of different vulvar odors to test the “leaky-cue hypothesis”

12 February 2026 at 06:00

A new study published in Evolution and Human Behavior suggests that modern women may not chemically signal fertility through vulvar body odor, a trait commonly observed in other primates. The findings indicate that men are unable to detect when a woman is in the fertile phase of her menstrual cycle based solely on the scent of the vulvar region. This research challenges the idea that humans have retained these specific evolutionary mating signals.

In the animal kingdom, particularly among non-human primates like lemurs, baboons, and chimpanzees, females often broadcast their reproductive status to males. This is frequently done through olfactory signals, specifically odors from the genital region, which change chemically during the fertile window. These scents serve as information for males, helping them identify when a female is capable of conceiving. Because humans share a deep evolutionary history with these primates, scientists have debated whether modern women retain these chemical signals.

A concept known as the “leaky-cue hypothesis” proposes that women might unintentionally emit subtle physiological signs of fertility. While previous research has investigated potential signals in armpit odor, voice pitch, or facial attractiveness, results have been inconsistent.

The specific scent of the vulvar region has remained largely unexplored using modern, rigorous methods, despite its biological potential as a source of chemical communication. To address this gap, a team led by Madita Zetzsche from the Behavioural Ecology Research Group at Leipzig University and the Max Planck Institute for Evolutionary Anthropology conducted a detailed investigation.

The researchers recruited 28 women to serve as odor donors. These participants were between the ages of 20 and 30, did not use hormonal contraception, and had regular menstrual cycles. To ensure the accuracy of the fertility data, the team did not rely on simple calendar counting. Instead, they used high-sensitivity urinary tests to detect luteinizing hormone and analyzed saliva samples to measure levels of estradiol and progesterone. This allowed the scientists to pinpoint the exact day of ovulation for each participant.

To prevent external factors from altering body odor, the donors adhered to a strict lifestyle protocol. They followed a vegetarian or vegan diet and avoided foods with strong scents, such as garlic, onion, and asparagus, as well as alcohol and tobacco. The women provided samples at ten specific points during their menstrual cycle. These points were clustered around the fertile window to capture any rapid changes in odor that might occur just before or during ovulation.

The study consisted of two distinct parts: a chemical analysis and a perceptual test. For the chemical analysis, the researchers collected 146 vulvar odor samples from a subset of 16 women. They used a specialized portable pump to draw air from the vulvar region into stainless steel tubes containing polymers designed to trap volatile compounds. These are the lightweight chemical molecules that evaporate into the air and create scent.

The team analyzed these samples using gas chromatography–mass spectrometry. This is a laboratory technique that separates a mixture into its individual chemical components and identifies them. The researchers looked for changes in the chemical profile that corresponded to the women’s conception risk and hormone levels. They specifically sought to determine if the abundance of certain chemical compounds rose or fell in a pattern that tracked the menstrual cycle.

The chemical analysis revealed no consistent evidence that the overall scent profile changed in a way that would allow fertility to be tracked across the menstrual cycle. While some specific statistical models suggested a potential link between the risk of conception and levels of certain substances—such as an increase in acetic acid and a decrease in a urea-related compound—these findings were not stable. When the researchers ran robustness checks, such as excluding samples from donors who had slightly violated dietary rules, the associations disappeared. The researchers concluded that there is likely a low retention of chemical fertility cues in the vulvar odor of modern women.
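To picture what such a robustness check involves, the sketch below models one compound’s abundance against conception risk with repeated samples per donor, then refits after excluding donors who violated the dietary protocol. Everything here, from column names to the simulated data, is an assumption for illustration; it is not the authors’ pipeline.

```python
# Illustrative per-compound robustness check (synthetic data; assumed
# names, not the authors' pipeline): model abundance against conception
# risk with donor as a grouping factor, then refit after excluding
# donors who violated the dietary protocol.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_donors, n_samples = 16, 10
df = pd.DataFrame({
    "donor": np.repeat(np.arange(n_donors), n_samples),
    "conception_risk": rng.uniform(0, 1, n_donors * n_samples),
    "diet_violation": np.repeat(rng.integers(0, 2, n_donors), n_samples),
})
df["log_abundance"] = rng.normal(0, 1, len(df))  # e.g., an acetic acid signal

def risk_effect(d):
    m = smf.mixedlm("log_abundance ~ conception_risk", d, groups=d["donor"]).fit()
    return m.params["conception_risk"], m.pvalues["conception_risk"]

print(risk_effect(df))                             # full sample
print(risk_effect(df[df["diet_violation"] == 0]))  # robustness refit
# An association that disappears in the second fit, as happened in the
# study, would not count as a stable fertility cue.
```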

In the second part of the study, 139 men participated as odor raters. To collect the scent for this experiment, the female participants wore cotton pads in their underwear overnight for approximately 12 hours. These pads were then frozen to preserve the scent and later presented to the male participants in glass vials. The men, who were unaware of the women’s fertility status, sniffed the samples and rated them on three dimensions: attractiveness, pleasantness, and intensity.

The perceptual results aligned with the chemical findings. The statistical analysis showed that the men’s ratings were not influenced by the women’s fertility status. The men did not find the odor of women in their fertile window to be more attractive or pleasant than the odor collected during non-fertile days. Neither the risk of conception nor the levels of reproductive hormones predicted how the men perceived the scents.

These null results were consistent even when the researchers looked at the data in different ways, such as examining specific hormone levels or the temporal distance to ovulation. The study implies that if humans ever possessed the ability to signal fertility through vulvar scent, this trait has likely diminished significantly over evolutionary time.

The researchers suggest several reasons for why these cues might have been lost or suppressed in humans. Unlike most primates that walk on four legs, humans walk upright. This bipedalism moves the genital region away from the nose of other individuals, potentially reducing the role of genital odor in social communication. Additionally, human cultural practices, such as wearing clothing and maintaining high levels of hygiene, may have further obscured any remaining chemical signals.

It is also possible that social odors in humans have shifted to other parts of the body, such as the armpits, although evidence for axillary fertility cues remains mixed. The researchers noted that while they found no evidence of fertility signaling in this context, it remains possible that such cues require more intimate contact or sexual arousal to be detected, conditions that were not replicated in the laboratory.

Additionally, the strict dietary and behavioral controls, while necessary for scientific rigor, might not reflect real-world conditions where diet varies. The sample size for the chemical analysis was also relatively small, which can make it difficult to detect very subtle effects.

Future research could investigate whether these cues exist in more naturalistic settings or investigate the role of the vaginal microbiome, which differs significantly between humans and non-human primates. The high levels of Lactobacillus bacteria in humans create a more acidic environment, which might alter the chemical volatility of potential fertility signals.

The study, “Understanding olfactory fertility cues in humans: chemical analysis of women’s vulvar odour and perceptual detection of these cues by men,” was authored by Madita Zetzsche, Marlen Kücklich, Brigitte M. Weiß, Julia Stern, Andrea C. Marcillo Lara, Claudia Birkemeyer, Lars Penke, and Anja Widdig.

Blue light exposure may counteract anxiety caused by chronic vibration

12 February 2026 at 05:00

Living in a modern environment often means enduring a constant hum of background noise and physical vibration. From the rumble of heavy traffic to the oscillation of industrial machinery, these invisible stressors can gradually erode mental well-being.

A new study suggests that a specific color of light might offer a simple way to counter the anxiety caused by this chronic environmental agitation. The research indicates that blue light exposure can calm the nervous system even when the physical stress of vibration continues. These findings were published in the journal Physiology & Behavior.

Anxiety disorders are among the most common mental health challenges globally. They typically arise from a complicated mix of biological traits and social pressures. Environmental factors are playing an increasingly large role in this equation. Chronic exposure to low-frequency noise and vibration is known to disrupt the body’s hormonal balance. This disruption frequently leads to psychological symptoms such as irritability, fatigue, and persistent anxiety.

Doctors often prescribe medication to manage these conditions once a diagnosis is clear. These drugs usually work by altering the chemical signals in the brain to inhibit anxious feelings. However, pharmaceutical interventions are not always the best first step for early-stage anxiety. There is a growing demand for therapies that are accessible and carry fewer side effects. This has led scientists to investigate light therapy as a promising alternative.

Light does more than allow us to see. It also regulates our internal biological clocks and influences our mood. Specialized cells in the eyes detect light and send signals directly to the brain regions that control hormones. This pathway allows light to modulate the release of neurotransmitters associated with emotional well-being.

Despite this general knowledge, there has been little research on how specific light wavelengths might combat anxiety caused specifically by vibration. A team of researchers decided to fill this gap using zebrafish as a model organism. Zebrafish are small, tropical freshwater fish that are widely used in neuroscience. Their brain chemistry and genetic structure share many similarities with humans.

The study was led by Longfei Huo and senior author Muqing Liu from the School of Information Science and Technology at Fudan University in China. They aimed to identify if light could serve as a preventative measure against vibration-induced stress. The team designed a controlled experiment to first establish which vibrations caused the most stress. They subsequently tested whether light could reverse that stress.

The researchers began by separating the zebrafish into different groups. Each group was exposed to a specific frequency of vibration for one hour daily. The frequencies tested were 30, 50, and 100 Hertz. To ensure consistency, the acceleration of the vibration was kept constant across all groups. This phase of the experiment lasted for one week.

To measure anxiety in fish, the scientists relied on established behavioral patterns. When zebrafish are comfortable, they swim freely throughout their tank. When they are anxious, they tend to sink to the bottom. They also exhibit “thigmotaxis,” which is a tendency to hug the walls of the tank rather than exploring open water.

The team utilized a “novel tank test” to observe these behaviors. They placed the fish in a new environment and recorded how much time they spent in the lower half. The results showed that daily exposure to vibration made the fish act more anxious. The effect was strongest in the group exposed to 100 Hertz. These fish spent significantly more time at the bottom of the tank than unexposed controls.

The researchers also used a “light-dark box test.” In this setup, half the tank is illuminated and the other half is dark. Anxious fish prefer to hide in the dark. The fish exposed to 100 Hertz vibration spent much more time in the dark zones compared to the control group. This confirmed that the vibration was inducing a strong anxiety-like state.
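For a sense of how such behavioral scores are computed, the sketch below derives both anxiety metrics from tracked fish positions. The tank dimensions and data are invented for illustration; this is not the authors’ tracking software.

```python
# Illustrative scoring of the two anxiety metrics from tracked positions
# (invented data and tank dimensions; not the authors' software).
# Positions are per-frame (x, y) coordinates, with y = 0 at the tank floor.
import numpy as np

def fraction_in_bottom(y_positions, tank_height):
    """Novel tank test: share of frames spent in the lower half."""
    return float(np.mean(np.asarray(y_positions) < tank_height / 2.0))

def fraction_in_dark(x_positions, divider_x):
    """Light-dark box test: share of frames on the dark side of the divider."""
    return float(np.mean(np.asarray(x_positions) < divider_x))

rng = np.random.default_rng(2)
y = rng.uniform(0, 20, 3000)  # 3000 frames in a 20 cm tall tank
x = rng.uniform(0, 30, 3000)  # 30 cm long tank, divider at 15 cm
print(fraction_in_bottom(y, 20), fraction_in_dark(x, 15))
```

Higher values on either metric would indicate a more anxiety-like state.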

After establishing that 100 Hertz vibration caused the most stress, the researchers moved to the second phase of the study. They wanted to see if light color could mitigate this effect. They repeated the vibration exposure but added a light therapy component. While the fish underwent vibration, they were bathed in either red, green, blue, or white light.

The blue light used in the experiment had a wavelength of 455 nanometers. The red light was 654 nanometers, and the green was 512 nanometers. The light exposure lasted for two hours each day. The researchers then ran a comprehensive battery of behavioral tests to see if the light made a difference.

The team found that the color of the light had a profound impact on the mental state of the fish. Zebrafish exposed to the blue light showed much less anxiety than those in the other groups. In the novel tank test, the blue-light group spent less time at the bottom. They explored the upper regions of the water almost as much as fish that had never been vibrated at all.

In contrast, the red light appeared to offer no benefit. In some metrics, the red light seemed to make the anxiety slightly worse. Fish under red light spent the longest time hiding in the dark during the light-dark box test. This suggests that the calming effect is specific to the wavelength of the light and not just the brightness.

The researchers also introduced two innovative testing methods to validate their results. One was a “social interaction test.” Zebrafish are social animals and usually prefer to be near others. Stress often causes them to withdraw. The researchers placed a group of fish inside a transparent cylinder within the tank. They then measured how much time the test fish spent near this cylinder.

Fish exposed to vibration and white light avoided the group. However, the fish treated with blue light spent a large amount of time near their peers. This indicated that their social anxiety had been alleviated. The blue light restored their natural desire to interact with others.

The second new method was a “pipeline swimming test.” This involved placing the fish in a tube with a gentle current. The setup allowed the scientists to easily measure swimming distance and smoothness of movement. Stressed fish tended to swim erratically or struggle against the flow. The blue-light group swam longer distances with smoother trajectories.

To understand the biological mechanism behind these behavioral changes, the scientists analyzed the fish’s brain chemistry. They measured the levels of three key chemicals: cortisol, norepinephrine, and serotonin. Cortisol is the primary stress hormone in both fish and humans. High levels of cortisol are a hallmark of physiological stress.

The analysis revealed that vibration exposure caused a spike in cortisol and norepinephrine. This hormonal surge matched the anxious behavior observed in the tanks. However, the application of blue light blocked this increase. The fish treated with blue light had cortisol levels comparable to the unstressed control group.

Even more striking was the effect on serotonin. Serotonin is a neurotransmitter that helps regulate mood and promotes feelings of well-being. The study found that 455 nm blue light specifically boosted serotonin levels in the fish. This suggests that blue light works by simultaneously lowering stress hormones and enhancing mood-regulating chemicals.

The authors propose that the blue light activates specific cells in the retina. These cells, known as intrinsically photosensitive retinal ganglion cells, contain a pigment called melanopsin. Melanopsin is highly sensitive to blue wavelengths. When activated, these cells send calming signals to the brain’s emotional centers.

There are some limitations to this study that must be considered. The research focused heavily on specific frequencies and wavelengths. It is possible that other combinations of light and vibration could yield different results. The study also did not investigate potential interaction effects between the light and vibration in a full factorial design.

Additionally, while zebrafish are a good model, they are not humans. The neural pathways are similar, but the complexity of human anxiety involves higher-level cognitive processes. Future research will need to replicate these findings in mammals. Scientists will also need to determine the optimal intensity and duration of light exposure for therapeutic use.

The study opens up new possibilities for managing environmental stress. It suggests that modifying our lighting environments could protect against the invisible toll of noise and vibration. For those living or working in industrial areas, blue light therapy could become a simple, non-invasive tool for mental health.

The study, “Blue light exposure mitigates vibration noise-induced anxiety by enhancing serotonin levels,” was authored by Longfei Huo, Xiaojing Miao, Yi Ren, Xuran Zhang, Qiqi Fu, Jiali Yang, and Muqing Liu.
