
Today — 18 February 2026

Trump support in 2024 linked to White Americans’ perception of falling to the bottom of the racial hierarchy

17 February 2026 at 23:00

A new study published in the journal Advances in Psychology sheds light on the psychological factors that influenced voting behavior in the 2024 U.S. presidential election. The findings suggest that White Americans who perceive themselves as ranking at the bottom of the racial economic hierarchy—specifically those who feel tied with Black Americans—were the most likely to support Donald Trump. These individuals also expressed the strongest opposition to Diversity, Equity, and Inclusion (DEI) initiatives.

The United States currently exhibits a significant racial wealth gap. Economic statistics consistently show that the average White family holds considerably more wealth than the average Black or Hispanic family. Despite this objective reality, previous polling indicates that many White Americans feel as though they are personally falling behind in terms of their status. Psychological theories regarding “relative deprivation” suggest that people evaluate their well-being by comparing themselves to others rather than by looking at their resources in isolation.

The authors of the new research aimed to understand how these subjective comparisons influence political attitudes. Specifically, they investigated where non-Hispanic White individuals think they stand relative to their own racial group and to other racial groups. Previous research has identified a phenomenon known as “last place aversion,” in which people fear being at the very bottom of a social hierarchy.

“This line of research was motivated by recent political trends among some White Americans, including support for DEI bans, alignment with alt-right ideology, and endorsement of political violence in pursuit of political goals (e.g., January 6th),” explained study authors Erin Cooley and Jazmin Brown-Iannuzzi, associate professors of psychology at Colgate University and the University of Virginia, respectively. “Many of these attitudes are not only extreme but also anti-democratic, raising questions about how such views can coexist with identities centered on being ‘most American’ (e.g., White nationalist belief systems).”

For their study, the researchers recruited a representative sample of 506 non-Hispanic White Americans. They utilized a quota system to ensure the group accurately reflected the U.S. population in terms of age, gender, education, and geographic region. The study employed a longitudinal design, collecting data in five distinct waves from early September 2024 through the days immediately following the November presidential election.

The primary tool used to assess status was a measure called the “Perceived Self-Group Hierarchy,” developed by the study authors. Participants viewed a diagram representing a status ladder based on money, education, and job prestige. They were asked to place markers representing themselves, White people, Black people, Asian people, and Hispanic people onto this ladder. If participants wanted to indicate no difference among racial groups, they could place all icons in the same spot.

Using a statistical technique called Latent Profile Analysis, the researchers identified distinct subgroups based on how participants viewed the social hierarchy. One group, comprising about 15% of the sample, fit a “last place (tied)” profile. These individuals perceived themselves as ranking below White, Asian, and Hispanic Americans. Notably, they viewed themselves as tied for the bottom position with Black Americans. In this profile, the participants also perceived the entire hierarchy as a “tight race,” meaning they felt the gaps between racial groups were relatively small.
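
For readers curious how this kind of profile analysis is typically set up, the sketch below uses a Gaussian mixture model in Python as a rough stand-in for Latent Profile Analysis (dedicated tools such as Mplus or R's tidyLPA are more common in practice). The column names and simulated data are hypothetical, not the study's materials.

```python
# Minimal latent-profile-style sketch using a Gaussian mixture model as a
# stand-in for LPA. Columns and data are hypothetical illustrations.
import numpy as np
import pandas as pd
from sklearn.mixture import GaussianMixture
from sklearn.preprocessing import StandardScaler

# Hypothetical ladder placements (0 = bottom, 100 = top) for self and groups
df = pd.DataFrame({
    "self":     np.random.uniform(0, 100, 500),
    "white":    np.random.uniform(0, 100, 500),
    "black":    np.random.uniform(0, 100, 500),
    "asian":    np.random.uniform(0, 100, 500),
    "hispanic": np.random.uniform(0, 100, 500),
})

X = StandardScaler().fit_transform(df)

# Compare solutions with different numbers of profiles using BIC,
# which is how LPA typically selects the number of latent classes.
bics = {}
for k in range(1, 7):
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
    bics[k] = gm.bic(X)

best_k = min(bics, key=bics.get)
labels = GaussianMixture(n_components=best_k, n_init=10, random_state=0).fit_predict(X)
df["profile"] = labels
print(df.groupby("profile").mean())  # inspect each profile's average placements
```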

The researchers found a consistent link between this “last place” profile and specific political views. White Americans who fit this profile reported the highest levels of support for Donald Trump throughout the campaign season. They also expressed the strongest intention to vote for him. When surveyed the day after the election, this group was the most likely to report having cast their ballot for Trump.

Beyond voting choices, this group showed the strongest opposition to DEI programs, favoring policies that would ban such initiatives in universities. Additionally, they showed higher alignment with alt-right ideologies, agreeing more frequently with statements such as “White people are generally under attack in the U.S.” and “The government threatens my personal rights.”

Importantly, these attitudes were not explained by actual poverty. The researchers controlled for objective indicators of socioeconomic status, such as income and education levels, and found that belonging to the “last place” profile predicted Trump support and anti-DEI attitudes regardless of how much money or education a participant actually had.

“We originally hypothesized that we would observe a subset of non-Hispanic, White Americans who feel ‘last place.’ That said, we expected this profile to be more likely among working class individuals,” Cooley told PsyPost. “However, perceiving oneself to be ‘last place’ was not associated with the lowest objective income nor the lowest objective education among the White Americans in our samples.”

According to Cooley, because these individuals are not objectively the lowest in status, the findings suggest that “racialized perceptions—rather than objective socioeconomic position—are reliably associated with the political outcomes examined here.”

The researchers also examined whether these feelings intensified as the election drew closer. They hypothesized that political campaigning might heighten status anxieties. However, the data showed that the relationship between profile membership and political support was stable over the three months. The link between feeling “last place” and supporting Trump was just as strong in September as it was in November.

“Although the effects are modest at the individual level—as is typical in political psychology—the consistency of the pattern across large samples with census-based quotas suggests meaningful practical significance,” Cooley noted. “When a psychologically distinct subgroup consistently emerges and is reliably associated with support for certain policies and votes cast in a presidential election, even small effects can matter at the population level.”

As with all research, there are limitations to consider. The design was correlational, which means it cannot prove that feeling “last place” causes someone to vote a certain way. It is possible that the relationship works in the opposite direction. Engaging with certain political media or movements could cultivate or intensify feelings of being left behind.

“One potential misinterpretation is that political outcomes are driven simply by feelings of falling behind other White Americans,” Cooley noted. “Indeed, across these studies, and others, we find that many White Americans perceive themselves as falling behind the perceived high status of ‘White Americans.’”

“However, when used as a predictor on its own, this perception of falling behind White people in particular does not predict political outcomes. Instead, it is the full pattern of how individuals perceive their own status relative to both other White Americans and Asian, Hispanic, and Black Americans that is predictive of alt-right tendencies, support for President Trump, and support for DEI bans.”

For future inquiries, the scientists plan to use mixed-methods research. This would involve interviewing participants to understand the personal life experiences that lead a White American to feel they are tied for last place in the economic hierarchy. Qualitative interviews could reveal the narratives and specific life events that shape these statistical profiles.

“At present, we have a limited understanding of the factors and life experiences that shape perceptions of personal status within the perceived racial economic hierarchy, particularly ‘last place’ perceptions,” Cooley said. “As a next step, we are moving toward mixed-methods approaches that combine quantitative analyses of racialized status perception profiles with structured follow-up interviews of participants—such as those classified into the ‘last place’ profile based on their responses.”

Another limitation is the focus solely on non-Hispanic White Americans. The researchers chose this focus because of the group’s historically advantaged position in the U.S. racial hierarchy. However, this limits the ability to generalize the findings to other racial or ethnic groups. The dynamics of status perception likely operate differently for Black, Hispanic, or Asian Americans. Some initial data suggests that Hispanic Americans may be more likely to see themselves as “first place” than “last place,” presenting an interesting contrast to non-Hispanic White Americans’ status perceptions captured in the work reviewed here.

“Among Hispanic Americans, rather than a subset who feel ‘last place,’ we consistently observe a subset of Hispanic Americans who perceive themselves as close to, or tied for, ‘first place,’ and it is this subset of Hispanic Americans who are most supportive of alt-right ideology, President Trump, and DEI bans,” Cooley told PsyPost.

“Interestingly, Hispanic Americans who also identify as White are most likely to fall into these ‘first place’ profiles. We are currently testing competing theoretical explanations for these divergent patterns between non-Hispanic and Hispanic White Americans using additional mixed-methods research.”

The study, “White Americans’ feelings of being ‘last place’ are associated with anti-DEI attitudes, Trump support, and Trump vote during the 2024 U.S. presidential election,” was authored by Alisa Kukharkin, Fiona Barber, Erin Cooley, Nava Caluori, Xanni Brown, Anshita Singh, William Cipolli, and Jazmin L. Brown-Iannuzzi.

Yesterday — 17 February 2026

Alcohol drinking habits predict long-term anxiety differently across age groups

17 February 2026 at 21:00

A recent study published in the journal Addictive Behaviors provides evidence that the relationship between alcohol use and future anxiety depends significantly on a person’s age and how they consume alcohol. The findings indicate that while consuming larger amounts of alcohol per occasion predicts slightly higher anxiety levels in most adults, drinking more frequently but in smaller amounts is linked to slightly lower anxiety in older populations.

Scientific literature has established a robust link between alcohol consumption and physical health issues, such as liver disease and cardiovascular problems. However, the connection between drinking and psychological conditions, particularly anxiety, is much less understood. Existing evidence often appears contradictory. Some past inquiries found that alcohol use leads to increased anxiety, while others found no link or even a decrease in symptoms.

A potential reason for these mixed results is that previous work often combined different drinking habits into broad categories, such as “heavy” versus “low volume” consumption. This approach misses the nuance between drinking a small amount often versus drinking a large amount at once. The researchers aimed to separate these behaviors to see if the frequency of drinking and the quantity consumed predict anxiety differently. They also sought to determine if these patterns vary based on demographic factors like sex, age, and income level.

“It’s really strange how little is done on the long-term impacts of alcohol on anxiety given all the research on alcohol which is out there. It helps us understand if alcohol is a good means of self-medicating anxiety or whether it actually induces anxiety over time,” said study author Simon D’Aquino, a clinical psychologist.

To investigate this, the researchers analyzed data from the Household, Income, and Labour Dynamics in Australia (HILDA) survey. This is a large, nationally representative study that tracks the same individuals over many years. The final sample included 21,405 Australian adults who provided data between 2006 and 2021. This longitudinal design allowed the scientists to look at how behaviors in one year influenced mental health in the following year.

The survey measured anxiety using the Kessler-10 anxiety subscale. This is a widely used screening tool that asks participants how often they felt nervous, restless, or hopeless in the past four weeks. Alcohol patterns were assessed by asking participants two specific questions. First, they reported how many days per week they drank alcohol, which established the frequency. Second, they reported how many standard drinks they usually consumed on those days, which established the quantity. A standard drink was defined as containing 10 grams of alcohol.

The researchers used complex statistical models to examine the data. They looked at whether a person’s drinking habits in a given year could predict their anxiety levels one year later. These models accounted for individual differences and adjusted for other variables. The analysis included up to eight pairs of year-to-year data per participant, providing a comprehensive view of changes over time.
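The paper's exact model specification is not spelled out in the article. As a rough illustration of the general approach described above—lagged, within-person prediction with controls—here is a sketch using a mixed-effects model in Python; the variable names, file name, and control set are assumptions, not the authors' specification.

```python
# Illustrative sketch: does this year's drinking frequency and quantity
# predict next year's anxiety score, accounting for stable individual
# differences? Variable names and model form are assumed for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hilda_subset.csv")  # hypothetical long-format file: one row per person-year
df = df.sort_values(["person_id", "year"])

# Pair each year's drinking with the following year's anxiety score
df["anxiety_next"] = df.groupby("person_id")["k10_anxiety"].shift(-1)
df = df.dropna(subset=["anxiety_next"])

# Random intercept per person captures stable between-person differences
model = smf.mixedlm(
    "anxiety_next ~ drink_days_per_week + drinks_per_occasion"
    " + age_group + sex + income + k10_anxiety",
    data=df,
    groups=df["person_id"],
)
result = model.fit()
print(result.summary())
```
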

The analysis revealed that the relationship between alcohol and anxiety is generally small but statistically significant. Age emerged as a key factor influencing this dynamic, while sex and income did not significantly change the outcome. This means that men and women, as well as rich and poor, showed similar patterns, but young adults and older adults did not.

“I was surprised the relationship varies with age and not gender,” D’Aquino said. “I thought women would be susceptible to stronger effects due to lower blood volume, but it might suggest the mechanisms here are not biological.”

For adults aged 51 and older, drinking more frequently was associated with a slight decrease in anxiety scores one year later. This finding aligns with some previous studies on older populations. However, for this same age group, consuming larger quantities of alcohol on a single occasion was linked to increased anxiety. This suggests a divergence in outcomes based on drinking style for older adults.

A different pattern appeared for adults between the ages of 26 and 50. In this group, drinking larger amounts per sitting predicted higher anxiety. This provides evidence that heavier drinking sessions may have negative long-term impacts on mental well-being for mid-life adults. Unlike the older group, the frequency of drinking showed no significant association with anxiety for those aged 26 to 50.

For the youngest group, those aged 18 to 25, the researchers found no significant link between drinking habits and future anxiety. This lack of association might be due to the social context of drinking in early adulthood. Heavy episodic drinking is often more normative and socially accepted in this age bracket. It is possible that the negative psychological effects of alcohol take longer to manifest or are masked by the social nature of drinking during these formative years.

“There aren’t large effects of alcohol on your long term anxiety, but drinking heavily to manage anxiety or other moods will likely make the mood worse,” D’Aquino told PsyPost. “I think it also highlights that alcohol can have a constructive psychosocial role in our lives too if consumed in small volumes (i.e. a single standard drink each day).”

The researchers propose several explanations for why frequent, low-quantity drinking might be linked to lower anxiety in older adults. It is possible that for this demographic, having a drink is often tied to social activities. As people age, they are at higher risk for loneliness and social isolation. If frequent drinking occurs in the context of socializing with friends or family, the benefits of social connection could be what buffers against anxiety, rather than the alcohol itself.

On the other hand, the finding that larger quantities per occasion predict higher anxiety across most of adulthood supports the idea of a reciprocal relationship. Heavier drinking can disrupt brain chemistry and sleep patterns, which may worsen anxiety symptoms over time. This creates a cycle where anxiety might increase, potentially leading to more drinking, though this specific study only looked at alcohol predicting future anxiety.

But it is important to note that this study is observational. This means it cannot prove that alcohol causes changes in anxiety levels. There may be other unmeasured factors at play. For instance, nicotine use often overlaps with alcohol consumption and is known to affect anxiety, but it was not included in this specific analysis.

“It’s important to note this doesn’t demonstrate that alcohol causes changes in anxiety,” D’Aquino explained. “It’s very possible there are indirect routes through which alcohol consumption affects anxiety such as changes in social environment.”

Future research should aim to replicate these findings in other countries to see if the results hold true outside of the Australian context. The researchers also suggest investigating the mechanisms behind why frequent, low-dose drinking seems protective for older adults. Clarifying whether this is due to biological factors or social benefits would help refine public health guidelines.

“I have more research coming out soon to help explain why older people tend to experience anxiety reductions with more frequent drinking,” D’Aquino said. “I have a suspicion that it helps older people socially bond at a time of life when loneliness typically increases.”

The study, “Alcohol consumption patterns and Long-Term Anxiety: The influence of Sex, Age, and income,” was authored by Simon D’Aquino, Benjamin Riordan, Megan Cook, and Sarah Callinan.

Surprising new research links LSD-induced brain entropy to seizure protection

17 February 2026 at 17:00

Two recent studies conducted by scientists at the University Health Network and the University of Toronto provide new evidence regarding the effects of lysergic acid diethylamide (LSD) on the brain. The findings suggest that this psychedelic compound may have unexpected neuroprotective properties against severe seizures in mice.

Additionally, the research indicates that LSD significantly alters the electrical stability of brain networks. These papers, published in Next Research and Brain Research, challenge conventional assumptions about psychedelics and safety in the context of epilepsy.

Lysergic acid diethylamide is a potent psychoactive substance known for its ability to alter perception, mood, and cognitive processes. It functions primarily by binding to serotonin receptors in the brain. These receptors are proteins that receive chemical signals to regulate various biological functions. While LSD is famous for its recreational use and its ability to induce hallucinations, medical researchers are increasingly examining its potential therapeutic benefits. Past studies suggest it may help treat conditions such as depression and anxiety.

The rationale for investigating LSD in the context of seizures stemmed from a need to improve treatments for epilepsy. Epilepsy is a neurological disorder characterized by recurrent seizures. It affects roughly 50 to 60 million people globally. Current medications fail to control seizures in about one-third of patients. This drug resistance creates an urgent need for alternative therapeutic approaches.

“This work started in a completely different direction than it ended up going in,” said study author Brenden Rabinovitch, a PhD student affiliated with the University of Toronto and the Krembil Brain Institute.

The researchers initially designed the study to test the safety of LSD rather than its efficacy as a treatment. They were interested in using psychedelics to treat functional seizures. Functional seizures are behavioral events that resemble epileptic seizures but are psychological in origin. They do not involve the abnormal electrical discharges seen in epilepsy. Because some patients suffer from both epilepsy and functional seizures, the scientists needed to verify that LSD would not worsen epileptic seizures before proposing it as a treatment.

“We thought this was a fascinating phenomenon that psychedelics could potentially treat due to their therapeutic promise in functional neurological disorder and other psychiatric disorders,” Rabinovitch said. “However, we knew this was very non-traditional, since psychedelics are often treated as potential seizure-inducing drugs, although there is no evidence to suggest this is true in the context stated (we also wrote a review on this recently).”

“Now, it is also important to consider that between 9–11% of epilepsy patients have some functional seizures. If a drug we think may treat functional seizures also has a risk of inducing seizures, that is a clear conflict. Thus, we initially set out to do a ‘safety’ experiment, and we viewed a ‘positive’ result as one in which LSD had no effect on epileptic seizures at all. When we saw that certain behavioural characteristics of seizures actually improved in these mice, we were shocked to say the least.”

The first study, published in Next Research, utilized a mouse model to observe the effects of LSD on acute seizures. The researchers worked with adolescent male and female C57BL/6J mice. They divided the animals into groups and administered either a saline solution or LSD. The LSD was given at doses of 17 or 30 micrograms per kilogram. Forty minutes after this pre-treatment, the researchers injected the mice with kainic acid. Kainic acid is a chemical that mimics the neurotransmitter glutamate. It overstimulates neurons and reliably induces seizures in rodents.

The researchers recorded the behavior of the mice for eighty minutes following the injection. They used a modified version of the Racine scale to measure seizure severity. This scale rates behaviors from mild stages, such as facial movements and freezing, to severe stages, such as full-body convulsions and continuous seizing. The team analyzed the video footage to determine how the pre-treatment influenced the onset, severity, and outcome of the seizures.

The results revealed a distinct difference between the treated mice and the control group, particularly among the males. In the control group, nearly 40 percent of the male mice progressed to status epilepticus, a life-threatening medical emergency in which a seizure lasts longer than five minutes or seizures occur close together without recovery. Roughly 23 percent of the male control mice died as a result.

The mice treated with the higher dose of LSD showed a complete absence of status epilepticus. None of the male mice in the 30-microgram group entered this dangerous state, and none of them died. The lower dose of LSD also provided protection, reducing the incidence of severe seizures and death compared to the controls. The drug appeared to alter the early stages of the seizure as well. The treated mice spent more time in a “freezing” state and less time performing repetitive involuntary movements known as automatisms.

“We thought that the best-case scenario was LSD would have no effect on the behavioural seizure characteristics,” Rabinovitch told PsyPost. “At worst, we thought the mice might go into status epilepticus—a prolonged, life-threatening seizure state—and we would urgently need to treat them with an anti-epileptic drug and provide fluids and other supportive treatments until the LSD was fully out of their system.”

“I almost did not believe it when I did the seizure-induction injections and some of the mice did not progress past stage 1 or 2 on the Racine scale, which is a standard measure of seizure severity ranging from mild facial movements at stage 1 up to full convulsive seizures at stage 5.”

The results for the female mice were less dramatic. The female control group was naturally more resistant to the kainic acid and did not experience status epilepticus or death even without the drug. However, the researchers noted that LSD increased the variability of the behavioral responses in both sexes. This suggests that the drug affects individuals differently, leading to a wider range of reactions.

To understand the mechanism behind these behavioral changes, the scientists focused on how LSD modulates the electrical activity of the brain in freely moving mice. The researchers surgically implanted electrodes into the brains of male mice. They targeted two specific regions: the hippocampus and the cortex. The hippocampus is essential for memory and navigation, while the cortex is involved in sensory processing and decision-making.

After the mice recovered from surgery, the researchers recorded their baseline brain activity using intracranial electroencephalography (iEEG). They then administered the same 30-microgram dose of LSD used in the seizure study. They continued to record the electrical signals to observe changes in neural oscillations. Neural oscillations, or brain waves, are rhythmic patterns of electrical activity produced by the synchronized firing of neurons.

The analysis, published in Brain Research, showed that LSD caused a broad reduction in the power of these brain waves. The term “power” in this context refers to the strength or amplitude of the electrical signal. The researchers observed this decrease across all measured frequency bands, including delta, theta, alpha, and beta waves. This effect was most pronounced in the ventral hippocampus, a region associated with emotional memory.

In addition to reducing signal power, the drug increased the variance of the brain activity. Variance refers to the fluctuation or instability of the signal over time. The brain waves became less predictable and more heterogeneous after LSD administration. This finding aligns with the “entropic brain” theory. This hypothesis suggests that psychedelics work by increasing the entropy, or disorder, within the brain. They disintegrate rigid, organized networks and allow for a more flexible state of connectivity.
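
To make these two measures concrete, here is a minimal sketch of how band power and signal variance can be quantified from a single electrophysiological trace using Welch's method. The sampling rate, band boundaries, and placeholder data are conventional assumptions for illustration, not the papers' actual parameters.

```python
# Quantifying band power (signal strength per frequency band) and variance
# (fluctuation over time) for one iEEG channel. Parameters are assumed.
import numpy as np
from scipy.signal import welch

fs = 1000                          # assumed sampling rate in Hz
signal = np.random.randn(60 * fs)  # placeholder for one minute of iEEG data

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)

for name, (lo, hi) in bands.items():
    mask = (freqs >= lo) & (freqs < hi)
    band_power = np.trapz(psd[mask], freqs[mask])  # integrate the PSD over the band
    print(f"{name}: {band_power:.3f}")

print("signal variance:", np.var(signal))  # higher variance = less stable signal
```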

The scientists propose a theoretical link between the two studies. Seizures are characterized by hypersynchronization. This means that large groups of neurons fire together in an excessive and rigid pattern. By inducing a state of desynchronization and lowering the power of neural rhythms, LSD may make it difficult for this hypersynchronized seizure activity to organize and spread. The drug essentially introduces enough noise or “chaos” into the system to prevent the seizure from generalizing across the brain.

The researchers emphasize that these findings are preliminary. There are several limitations to consider. Both studies were conducted in mice, and human brain physiology is significantly more complex. The protective effects against status epilepticus were clear in male mice, but the differences were harder to assess in females due to their natural resistance to the seizure model used. Additionally, the variability in the responses suggests that the effects of LSD are highly individual-specific.

The scientists also caution against interpreting this as a recommendation for using LSD to treat epilepsy directly. The study used a specific timing for the dosage relative to the seizure induction. In a clinical setting, predicting when a seizure will occur is often impossible.

“The average person should not look at this and immediately think that we should start dosing people with LSD to cure their seizures,” Rabinovitch said. “I think people can take away the idea that epilepsy may be a condition which benefits from multi-target drugs rather than classical single-target anti-seizure medications (ASMs), which have focused on blocking excitatory and enhancing inhibitory transmission in the brain.”

“Epilepsy is complex, so maybe complex drugs could be of benefit to individuals who are ‘drug-resistant’—meaning their seizures persist after having tried two or more classical ASMs. More broadly, novel treatments in the future of epilepsy drug development may want to focus on drugs with broad, multi-mechanism pharmacology instead of focusing on single mechanisms that may differ from person to person.”

The primary long-term goal for the researchers is to explore the use of LSD for functional seizures. Since functional seizures are psychogenic, the psychological effects of psychedelics could address the root cause of the disorder. The fact that the drug appears safe—and potentially protective—regarding epileptic seizures removes a significant barrier to testing it in patient populations who may have both conditions.

“In the long term, we would love the opportunity to run a clinical trial with patients who have functional seizures but do NOT have epilepsy,” Rabinovitch explained. “This would be the ideal population since we think they would stand to benefit the most from treatment due to the psychological nature of their seizures, absent other neurological pathologies.”

“Looking ahead, we would love to follow up by examining how the electrical (EEG) signal of mice undergoing seizures may be altered with LSD. We are also interested in more translational questions around the framing of treatment. For example, does it make sense to ‘treat’ seizures with LSD directly, or is LSD something that may augment concomitant anti-seizure medications? It is also not clear that LSD would work like a traditional rescue medication such as diazepam, because when and how often it is given likely has a substantial effect on outcomes. This was a preliminary investigation, so we naturally have many ideas for how this could play out.”

The scientists urge the medical research community to remain open to ideas that might initially seem counterintuitive. They note that epilepsy is still a surprisingly misunderstood condition given how many people it affects globally.

“This has resulted in nearly all anti-seizure medications being different flavors of the same idea for over 50 years,” Rabinovitch said. “This is likely a contributing factor to 1/3 of epilepsy patients being treatment-resistant with uncontrollable seizures. When we look at the blossoming of psychedelic research in recent years, it is quite clear that these drugs—when used appropriately in controlled clinical settings with physician supervision—have potential ameliorative effects for many more conditions than we had previously thought.”

The study, “Lysergic acid diethylamide inhibits status epilepticus and mortality in a mouse model of acute kainic acid-induced motor seizures,” was authored by B.S. Rabinovitch, W. Hu, C. Tang, N. Silverman, E.C. Lewis, and P.L. Carlen.

The study, “Lysergic acid diethylamide modulates hippocampal and cortical local field potential oscillatory rhythms in male mice,” was authored by B.S. Rabinovitch, N. Silverman, D. Ji, D. Shizgal, E.C. Lewis, and P.L. Carlen.

Scientists have found a fascinating link between breathing and memory

17 February 2026 at 15:00

New research suggests that the natural rhythm of breathing plays an important role in organizing the brain activity required for human memory. The study indicates that successful memory retrieval is linked to the timing of inhalation and exhalation, with specific brain patterns synchronizing to the respiratory cycle. These findings were published in The Journal of Neuroscience.

Scientists have known for some time that respiration serves functions beyond simply supplying oxygen to the body. Previous studies in both animals and humans have demonstrated that breathing can influence brain activity during sleep and wakefulness. For instance, prior research has shown that people tend to identify facial expressions or perceive tactile stimuli more accurately when they are inhaling.

Despite this knowledge, the specific neural mechanisms connecting breathing phases to the conscious recovery of memories have remained less clear. The research team sought to determine if respiration acts as a pacemaker that synchronizes the brain activity required to recall specific associations. They aimed to understand whether the timing of breathing aligns with the replay of neural patterns that represent stored memories.

“Much of memory research has traditionally focused on neural mechanisms within the brain itself. However, growing evidence suggests that bodily rhythms, particularly breathing, can systematically influence brain activity,” explained study author Thomas Schreiner, Emmy Noether Group Leader at Ludwig Maximilian University of Munich.

“While this link had been demonstrated for general brain states, it remained unclear whether respiration also shapes the specific neural processes that support remembering. Our motivation was to address this gap by testing whether different phases of breathing are directly linked to the neural signatures of successful memory retrieval in humans.”

For their study, the researchers analyzed data from 18 healthy participants. The group consisted of 15 females and 3 males, with a mean age of approximately 21 years. The experiment involved two separate sessions spaced about one week apart.

During the initial phase of each session, participants completed a learning task. They were shown verbs, such as “jump,” paired with images of either objects or scenes. The participants were instructed to create a mental image or story linking the verb to the picture. This process created an associative memory, which is a type of memory that links two unrelated items.

Later, the participants underwent a memory test to see how well they had retained the information. During this test, they were presented with the verbs they had seen earlier. They were then asked to recall the associated image and describe it.

While the participants performed these tasks, the scientists recorded their physiological activity. They used electroencephalography, or EEG, to monitor electrical activity in the brain. Simultaneously, they used a thermistor airflow sensor to track the participants’ breathing patterns. This setup allowed the team to precisely match moments of brain activity with specific phases of the respiratory cycle.

The researchers analyzed the data to see if memory performance varied depending on where the participant was in their breathing cycle when the memory cue appeared. They examined the EEG data for specific oscillatory patterns. Oscillations are rhythmic fluctuations in electrical activity, often called brain waves.

The team focused specifically on the alpha and beta frequency bands, which range from roughly 8 to 20 Hertz. In memory research, a decrease in power within these frequency bands is typically a sign that the brain is successfully processing information. The scientists also used a sophisticated computer model to detect “memory reactivation.” This refers to the moment the brain recreates the specific neural pattern associated with the original image.
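
The article does not detail the analysis pipeline, but a common way to relate task events to the breathing cycle is to extract the instantaneous respiratory phase from the airflow trace with a Hilbert transform and read off the phase at each cue onset. The sketch below illustrates that general idea; the filter settings, sampling rate, and variable names are assumptions rather than the study's actual methods.

```python
# Relating cue onsets to the respiratory cycle: band-pass the airflow trace,
# take its Hilbert transform to get instantaneous phase, then sample the
# phase at each cue time. All parameters here are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 100                                       # assumed airflow-sensor sampling rate (Hz)
airflow = np.random.randn(600 * fs)            # placeholder 10-minute respiration trace
cue_onsets_sec = np.array([12.5, 47.0, 90.3])  # hypothetical cue times in seconds

# Band-pass around typical resting breathing rates (~0.1-0.5 Hz)
b, a = butter(2, [0.1, 0.5], btype="bandpass", fs=fs)
resp_filtered = filtfilt(b, a, airflow)

phase = np.angle(hilbert(resp_filtered))  # -pi..pi; sign splits inhale vs. exhale

cue_samples = (cue_onsets_sec * fs).astype(int)
cue_phases = phase[cue_samples]
print("respiratory phase at each cue (radians):", cue_phases)
```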

The results revealed a connection between breathing and memory performance. The researchers found that participants were more likely to successfully remember an image if the cue word appeared while they were inhaling. Specifically, the optimal sequence for memory retrieval appeared to involve inhaling when the cue was presented, followed by exhaling as the brain processed the memory.

“We were struck by how selectively the effects emerged during successful remembering, rather than during unsuccessful retrieval or control conditions,” Schreiner told PsyPost. “This suggests that respiration is not merely influencing general arousal, but is specifically linked to the neural reinstatement of stored information.”

When the scientists looked at the neural data, they found that the brain waves tracked with the breathing cycle. The characteristic decrease in alpha and beta power, which signals successful memory engagement, was modulated by respiration. These power decreases were most prominent around the time of exhalation.

The study also showed that memory reactivation was synchronized with breathing. The neural patterns indicating that the participant was bringing the image back to mind tended to emerge during the exhalation phase. This suggests that while inhalation may be important for taking in the cue, exhalation is the period when the brain effectively reconstructs the memory.

The scientists observed a correlation between the strength of this synchronization and how well individuals performed on the test. Participants who showed a stronger coupling between their breathing rhythm and their brain’s reactivation patterns achieved better memory scores. This implies that the coordination between breath and brain is not random but is functionally relevant for cognitive performance.

These findings provide evidence that respiration may act as a scaffold for episodic memory retrieval. Episodic memory involves the recollection of specific events, situations, and experiences. The data suggests that the respiratory cycle helps coordinate the neural conditions necessary for this complex cognitive process.

“Our results suggest that breathing is not just a background bodily function, but is closely coordinated with brain activity that supports remembering,” Schreiner explained. “In particular, the timing of inhalation and exhalation appears to structure when memory related neural patterns are most effectively reactivated. This highlights that cognitive processes such as memory emerge from tight interactions between the brain and the body, rather than from the brain alone.”

However, the researchers note that while the effects are consistent, they are relatively modest in size. This is typical for physiological influences on complex mental tasks. The study identifies a correlation but does not definitively prove that breathing causes the changes in brain activity. It is possible that a third factor, such as general arousal or attention, influences both respiration and memory simultaneously.

The researchers also point out that the study focused on spontaneous breathing. The current data reflects natural, unconscious physiological coupling rather than the effects of a breathing exercise.

“A key caveat is that our findings do not imply that consciously changing one’s breathing will immediately improve memory performance,” Schreiner noted. “The study focuses on spontaneous breathing and its natural coupling to brain dynamics. Whether deliberate breathing interventions can reliably enhance memory remains an open question.”

Another potential limitation involves the role of eye movements. Recent scientific debates have questioned whether alpha and beta power decreases are partly driven by oculomotor activity. Future studies will need to track eye movements alongside respiration and EEG to disentangle these factors completely.

The research team plans to expand this line of research. “Our core research focus is sleep and memory, and we have previously shown that respiration plays a key role in structuring memory reactivation during sleep. With the present study, we aimed to extend this framework to wakeful remembering,” Schreiner said.

“Going forward, we want to push this work further by understanding how respiratory stability or instability shapes memory consolidation during sleep, and how disruptions of breathing, such as in sleep disordered breathing, may impair memory related neural coordination in aging and clinical populations.”

“More broadly, we hope this work contributes to a growing view of cognition as an embodied process, in which brain function is continuously shaped by physiological rhythms throughout the body.”

The study, “Respiration shapes the neural dynamics of successful remembering in humans,” was authored by Esteban Bullón Tarrasó, Fabian Schwimmbeck, Marit Petzka, Tobias Staudigl, Bernhard P. Staresina, and Thomas Schreiner.

AI chatbots generate weight-loss coaching messages perceived to be as helpful as human-written advice

17 February 2026 at 01:00

Artificial intelligence systems are increasingly being tested for their ability to support personal health goals. A recent study published in the Journal of Technology in Behavioral Science provides evidence that AI chatbots can generate weight-loss coaching messages that are perceived to be as helpful as those written by human experts. The findings suggest that large language models may soon offer a scalable way to provide personalized support for individuals managing obesity.

Obesity remains a significant global health challenge. It affects a large percentage of the adult population and increases the risk of conditions like diabetes and cardiovascular disease. While losing a moderate amount of weight can reduce these risks, accessing consistent and personalized coaching is often difficult and expensive. Many people rely on mobile health applications that send automated messages to help them stay on track.

Current automated systems typically rely on pre-written templates. These messages often function on simple rules. For example, if a user does not log their food, the system sends a generic reminder. Previous research indicates that users often find these messages repetitive and impersonal. This lack of customization can lead to lower engagement and limited success in weight management programs.

Scientists conducted this study to determine if modern artificial intelligence could solve this problem. They utilized large language models, which are advanced AI systems capable of understanding and generating human-like text. The researchers wanted to see if an AI chatbot could create messages that felt personalized and empathetic rather than robotic.

“Overweight and obesity affect around 40% of adults worldwide and over 70% in the United States, posing serious health risks. At the same time, there is a growing shortage of clinicians available to provide weight-loss coaching,” said study author Zhuoran Huang, a PhD student at Northeastern University.

“Automated coaching messages are one potential way to increase access while saving time and costs. Still, most existing systems rely on pre-written, templated messages that many users find repetitive and impersonal. We wanted to examine whether generative AI, such as ChatGPT, could create more personalized and engaging coaching messages without the high development costs of traditional tailored systems. While interest in AI for health interventions is increasing, there has been limited research testing whether AI-generated weight-loss coaching messages are feasible to produce or how they compare with messages written by experienced human coaches.”

The study included 87 adults who were already enrolled in a year-long behavioral weight-loss trial. These participants had a body mass index, or BMI, that classified them as overweight or obese. BMI is a standard measure used to estimate body fat based on height and weight. The researchers designed the experiment to measure how helpful the participants found specific coaching messages.

The scientific investigation took place in two phases. In both phases, the researchers presented participants with hypothetical scenarios based on typical weight-loss data. These scenarios included situations where a person might have lost weight, gained weight, or maintained their weight over the previous week. For each scenario, the participants reviewed data summaries regarding calorie intake and physical activity.

Participants then read ten coaching messages. A trained human coach with a master’s degree and extensive experience wrote five of the messages. The AI chatbot, specifically ChatGPT, generated the other five messages based on prompts provided by the researchers. Participants rated each message on a scale of one to five. They also attempted to identify whether a human or a computer wrote each message.

In the first phase, the researchers gave the AI basic instructions to act as a coach and summarize the data. The results favored the human coach. Participants rated the human-written messages as significantly more helpful than the AI-generated ones. Only 66 percent of the AI messages received a rating of three or higher. Feedback indicated that the AI sounded impersonal, overly negative, and somewhat bossy.

Based on this feedback, the researchers adjusted the instructions given to the AI for the second phase. They explicitly asked the chatbot to use an empathetic and encouraging tone. They also instructed it to include touches of humor and to avoid being overly repetitive.
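
The study's actual prompts are not reproduced in the article, but the sketch below illustrates what this kind of revised instruction might look like when sent to a chat model through the OpenAI Python SDK. The model name, prompt wording, and data summary are illustrative assumptions, not the study's materials.

```python
# Sketch of an empathetic, humor-tinged coaching prompt of the kind described
# above. Model name, prompt text, and the weekly summary are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

weekly_summary = (
    "Weight change: +0.4 kg. Average intake: 2,150 kcal/day (goal 1,800). "
    "Activity: 2 of 5 planned workouts logged."
)

system_prompt = (
    "You are a supportive weight-loss coach. Write a short check-in message "
    "based on the participant's weekly data. Use an empathetic, encouraging "
    "tone, add a light touch of humor, avoid sounding repetitive or bossy, "
    "and end with one concrete, achievable suggestion for next week."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model for illustration
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": weekly_summary},
    ],
)
print(response.choices[0].message.content)
```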

The results in the second phase showed a marked improvement. The participants rated the revised AI messages as equally helpful to the human messages. In this phase, 82 percent of the AI-generated messages received a helpfulness rating of three or higher. This suggests that with the right instructions, AI can perform at a level comparable to a human professional in this specific context.

The study also revealed that participants had difficulty distinguishing between the two sources. In the second phase, participants misidentified the AI messages as human-written 50 percent of the time. This indicates that the updated prompts allowed the technology to mimic human speech patterns effectively.

Qualitative feedback helped explain these numerical findings. Participants expressed appreciation for the empathy and specific suggestions found in the revised AI messages. They liked that the messages validated their struggles without being overly critical.

However, the analysis also highlighted distinct differences. Some participants noted that the AI messages still felt slightly formulaic. They described the AI as being too focused on the data, whereas the human coach tended to sound more curious about the person behind the numbers. The human messages were often described as encouraging more autonomy, while the AI messages were sometimes perceived as more instructional.

Participants also pointed out the importance of context. Some noted that the AI messages made assumptions based solely on the numbers. For instance, if a person did not log their food, the AI might assume they forgot. A human coach might consider that the person was on vacation or sick. This highlights a lingering gap in the AI’s ability to understand the full complexity of a user’s life.

“Our study provides initial evidence that AI can generate weight-loss coaching messages that people find helpful and that are difficult to distinguish from those written by humans,” Huang told PsyPost. “We found that 82% of AI-generated messages were rated ‘somewhat helpful’ or better, comparable to messages written by an experienced human coach.

“However, participants noted that AI messages sometimes felt more formulaic and data-focused, suggesting that there is still room for improvement in capturing the warm, empathetic tone that human coaches naturally provide. This technology could potentially help address gaps in access to coaching support, though much more research is needed.”

“The findings suggest promising potential for practical application. AI-generated messages received ratings of helpfulness comparable to those from an experienced human coach, and participants could not reliably distinguish between them in this setting. As AI technology continues to advance, this approach could help scale weight-loss support and allow clinicians to devote more time to complex or highly personalized care.”

There are some limitations to this study that should be noted. The participants rated the helpfulness of messages based on hypothetical data rather than their own real-time progress. It is possible that people would react differently if the feedback were directed at their actual behaviors and weight fluctuations. Additionally, the study measured perceived helpfulness rather than actual weight loss. Believing a message is helpful does not guarantee it will lead to behavior change.

“This study should be viewed as a proof of concept showing that AI can generate coaching messages perceived as comparable in quality to those written by experienced human coaches,” Huang said. “We see this technology as a tool to support clinicians by handling routine coaching tasks and helping address workforce shortages, not as a replacement for human expertise.”

Future research will need to test these AI-generated messages in active clinical trials. Scientists intend to investigate whether receiving these messages actually helps people lose weight over time. They also aim to explore how to make the AI more sensitive to situational contexts, such as illness or travel.

Another area for future investigation involves safety and privacy. Using large language models in healthcare requires strict adherence to data protection laws. Researchers must ensure that these systems do not accidentally provide inaccurate medical advice. Establishing protocols for human oversight will be essential before such technology is widely deployed.

The study, “Comparing Large Language Model AI and Human-Generated Coaching Messages for Behavioral Weight Loss,” was authored by Zhuoran Huang, Michael P. Berry, Christina Chwyl, Gary Hsieh, Jing Wei, and Evan M. Forman.

Before yesterday

New sexting study reveals an “alarming” reality for teens who share explicit images

16 February 2026 at 17:00

A new study published in the Journal of Adolescent Health indicates that while most American teenagers do not engage in sexting, those who do face a high probability of negative consequences. The findings suggest that nearly half of adolescents who send sexually explicit images experience nonconsensual sharing of those images or become targets of sextortion. These risks appear to increase dramatically when the content is shared with individuals who are not current romantic partners.

The integration of digital technology into the daily lives of young people has altered how they explore their identities and sexuality. As adolescents navigate their developing romantic lives, some experiment with sending or receiving sexually suggestive images or videos. This behavior is commonly known as sexting. While this can be a form of consensual exploration, it carries potential legal, social, and emotional costs.

Educators and mental health professionals have expressed concern regarding the misuse of these digital images. Once an image is sent, the sender loses control over its distribution. This can lead to the image being shown to others without permission. In more severe cases, it can lead to sextortion. This crime involves threatening to disseminate explicit images to force the victim to provide money, sexual acts, or additional images.

Past estimates regarding the prevalence of sexting have varied. Some earlier reports suggested widespread participation, while others indicated it was less common. The researchers behind this new study sought to provide updated, nationally representative data. They aimed to determine how many teens are currently sexting and, more importantly, how frequently these interactions result in victimization.

“It is important to know the extent of teen sexting, as well as the likelihood of negative experiences when one participates. There is a lot of hyperbole or anecdotes about teen sexting, but not a lot of scientific evidence,” said study author Justin W. Patchin, a professor of criminal justice at the University of Wisconsin-Eau Claire and co-director of the Cyberbullying Research Center.

To investigate, the scientists collected data from a national sample of 3,466 adolescents. The participants were between the ages of 13 and 17 and resided in the United States. The survey was conducted in 2025. The research team used specific quotas to ensure the sample accurately reflected the U.S. population in terms of age, gender, race, and geographic region.

The survey defined sexting as sending or receiving a naked or semi-naked image or video of oneself. Participants answered questions about their own experiences with sending and receiving these images. They also reported whether they had ever asked for such images or been asked to provide them. The survey specifically inquired about whether the other party involved was a current romantic partner or someone else.

The results indicate that sexting is not a universal behavior among American teens. Approximately 24 percent of the respondents reported that they had sent a sext at some point. A slightly larger group, about 32 percent, reported receiving a sext. These figures suggest that while the behavior is present, a distinct majority of adolescents are not participating in it.

“I think the assumption by many is that all or most teens are participating in sexting,” Patchin told PsyPost. “Our research suggests that is not true.”

Despite the fact that most teens abstain from sexting, the outcomes for those who do participate are concerning. Among the youth who reported sending a sext, 46.8 percent stated that their image was shared with others without their permission. This finding highlights a significant breach of trust in these digital interactions. It suggests that the expectation of privacy is frequently violated.

The study also shed light on the prevalence of sextortion. Among those who had sent a sext, nearly 50 percent reported being the target of sextortion. This means someone threatened to share their private images if they did not comply with certain demands. This rate of victimization among senders is alarmingly high.

“The high rate of sextortion and nonconsensual sharing of images definitely surprised us,” Patchin said. “We knew from other research that these behaviors have been increasing lately, but seeing that nearly half the time a teen shares an explicit image something bad will happen really surprised us.”

The researchers analyzed how these behaviors and risks varied across different demographic groups. Male adolescents reported higher rates of involvement than females. Males were more likely to send and receive sexts. They were also more likely to report that their images were shared without permission.

The data further indicated that males were more frequently the targets of sextortion compared to females. Approximately 55 percent of males who sent sexts reported being targeted, compared to roughly 40 percent of females. This contrasts with some public perceptions that frame females as the primary victims of image-based abuse.

Sexual orientation also played a role in the findings. Non-heterosexual youth reported higher rates of sending and receiving sexts compared to their heterosexual peers. However, heterosexual youth reported higher rates of having their images shared without consent. Heterosexual youth were also more likely to engage in the nonconsensual sharing of others’ images.

One of the most significant findings from the study relates to the relationship between the sender and the recipient. The researchers examined the odds of negative outcomes based on who received the image. The analysis showed a strong correlation between sharing images with non-partners and experiencing harm.

Teenagers who sent sexts to someone who was not a current boyfriend or girlfriend faced substantially higher risks. These individuals were more than 13 times as likely to have their image shared without permission compared to those who only shared with a romantic partner. This suggests that the lack of a committed relationship removes a layer of protection and trust.

The risk of sextortion followed a similar pattern. Youth who sent explicit content to non-partners were nearly five times more likely to be targeted by sextortion schemes. This aligns with reports from law enforcement regarding criminals who target minors online to extort money. These perpetrators often feign romantic interest to acquire images before making their threats.

“The amount of sextortion and nonconsensual sharing of images should be alarming,” Patchin told PsyPost.

As with all research, there are some limitations to consider. The data relies on self-reporting from adolescents. It is possible that some participants did not answer truthfully about sensitive topics due to embarrassment or fear. While the researchers assured anonymity to encourage honesty, underreporting is a common challenge in research on risky behaviors.

Additionally, the study is cross-sectional, meaning it captures data at a single point in time. This prevents the researchers from establishing a definitive causal order for all observed associations. For instance, it is difficult to determine if certain psychological factors predispose teens to both sexting and victimization.

“We will continue to track the trends in these behaviors over time,” Patchin said. “We are hopeful that other researchers will also focus on this problem to corroborate our results.”

The findings have practical implications for parents, educators, and policymakers. The results suggest that “everyone is doing it” is a misconception. Correcting this social norm could help reduce peer pressure. If teenagers understand that sexting is not the standard behavior for their age group, they may feel less compelled to participate.

Furthermore, the high rates of nonconsensual sharing indicate a need for education on digital consent and privacy. “The results speak to the importance of talking to teens about their online behaviors,” Patchin said. “They also raise the question about whether we should consider formally teaching teens about ‘safe sexting.’”

The researchers have previously argued that traditional fear-based and punitive approaches to preventing teen sexting are largely ineffective and may actually exacerbate harm by discouraging youth from seeking help. Drawing a parallel to the limitations of abstinence-only sex education, they advocate for a “harm reduction” strategy that accepts digital sexual exploration as a reality for some adolescents and seeks to minimize negative outcomes.

This “safe sexting” curriculum would equip teens with practical knowledge to reduce reputational and legal risks, such as the importance of excluding identifiable features like faces or tattoos from images, rather than simply forbidding the behavior.

The study, “When Sexting Goes Wrong: The Extent of Nonconsensual Sharing and Sextortion Among U.S. Teens,” was authored by Justin W. Patchin and Sameer Hinduja.

Cannabis use associated with better decision-making skills in people with bipolar disorder

16 February 2026 at 15:00

A new study published in Translational Psychiatry suggests that chronic cannabis use may not be associated with cognitive impairment in people with bipolar disorder, contrasting with its effects on healthy individuals. The findings indicate that people with bipolar disorder who use cannabis moderately may possess better decision-making skills than those with the disorder who do not use the drug. This research offers a potential explanation for why many individuals with this condition turn to cannabis for symptom management.

Bipolar disorder is a chronic mental health condition characterized by extreme shifts in mood, energy, and activity levels. These shifts typically range from periods of extremely energized behavior, known as manic episodes, to very sad or hopeless periods, known as depressive episodes. Beyond these emotional symptoms, the disorder is frequently accompanied by cognitive deficits.

Individuals with bipolar disorder often struggle with goal-directed behaviors. This includes difficulties with decision-making and inhibitory control. These cognitive impairments can lead to impulsive actions and engagement in risky behaviors. These deficits can severely impact social relationships, occupational stability, and overall quality of life.

A significant number of people with bipolar disorder report using cannabis. Statistics suggest that over 70 percent of individuals with this diagnosis have a lifetime history of regular use. Patients frequently report using the drug to self-medicate. They claim it helps alleviate specific symptoms such as racing thoughts or hyperactivity.

Medical professionals have historically viewed this high rate of use with concern. In the general population, chronic cannabis use is typically linked to cognitive decline. Regular use is often associated with worse memory, reduced attention, and poorer decision-making. The researchers wanted to investigate whether these negative effects hold true for the unique neurobiology of bipolar disorder.

“People with bipolar disorder face a difficult life-long illness that sees them shift from mania to depressive episodes with regularity, massively disrupting lives and likely contributing to 1/3rd attempting suicide, reducing life expectancy up to 20 years, in addition to the toll on their friends’ and families’ lives,” said study author Jared W. Young of the University of California San Diego and VA San Diego Healthcare System.

“Current treatments are obviously insufficient so novel treatments are needed. We observed that people with bipolar disorder use cannabis at a rate three times higher than the general population. When queried, many with bipolar disorder described using cannabis to alleviate their symptoms, slowing them down when they feel too energetic, and help them manage their thinking. We sought to determine whether cannabis may have unique or even beneficial effects on thinking and behavior in such people, despite evidence for negative effects in healthy people.”

To explore this, the scientists recruited 87 participants between the ages of 18 and 50. They divided the participants into four specific groups to allow for detailed comparisons. The first two groups consisted of healthy individuals: those who did not use cannabis and those who did.

The remaining two groups consisted of participants diagnosed with bipolar disorder. One group was made up of non-users, while the other comprised chronic cannabis users. The researchers defined “chronic” use as using cannabis at least four times per week for the past 90 days. Non-users were those with minimal lifetime exposure and no recent use.

The study employed the Iowa Gambling Task to measure decision-making abilities. This is a computerized psychological test designed to simulate real-life decision-making. Participants are presented with four decks of cards and asked to draw from them to win play money.

Two of the decks are considered “risky.” They offer high immediate rewards but also come with large penalties that result in a long-term loss. The other two decks are “safe.” They offer smaller immediate rewards but also smaller penalties, leading to a long-term gain. The test measures how well a person learns to avoid the risky decks in favor of the safe ones.
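
For readers unfamiliar with the task, the payoff structure below follows the commonly cited version of the Iowa Gambling Task; the exact reward and penalty values used in this study are an assumption, so treat this only as a sketch of why the “risky” decks lose money over time.

```python
# Sketch of the expected-value logic behind the Iowa Gambling Task.
# Values follow the commonly cited version of the task (per block of
# 10 draws); the parameters used in this particular study are assumed.

decks = {
    # deck label: (reward per draw, total penalties per 10 draws)
    "A (risky)": (100, 1250),
    "B (risky)": (100, 1250),
    "C (safe)": (50, 250),
    "D (safe)": (50, 250),
}

for name, (reward, penalties_per_10) in decks.items():
    net_per_10 = reward * 10 - penalties_per_10
    print(f"Deck {name}: net outcome over 10 draws = {net_per_10:+d}")

# Risky decks pay more per draw but lose 250 per 10 draws overall;
# safe decks pay less per draw but gain 250. Learning to favor the
# safe decks is what the task scores as good decision-making.
```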

The researchers also assessed functional capacity using the UCSD Performance-Based Skills Assessment. This test involves role-playing scenarios to evaluate everyday life skills. The study focused specifically on medication management. Participants had to plan a complex medication routine involving multiple prescriptions to demonstrate their ability to adhere to a treatment plan.

The results showed a clear divergence between the healthy participants and those with bipolar disorder. Healthy participants who used cannabis performed worse on the gambling task than healthy non-users. This confirms previous research showing that cannabis tends to impair decision-making in the general population.

However, the pattern was reversed for the participants with bipolar disorder. Those who did not use cannabis exhibited deficits in decision-making. They frequently chose from the risky decks and failed to adjust their strategy after losing money.

In contrast, the participants with bipolar disorder who used cannabis performed better. Their scores were not only higher than the non-using bipolar group, but they were also comparable to the healthy non-users. This suggests that cannabis use was associated with a normalization of decision-making abilities in this specific clinical population.

The researchers also analyzed the frequency of use. They found that these cognitive benefits were primarily associated with moderate use. Moderate use was defined as using cannabis between four and twenty-four times per week. Heavy use, defined as twenty-five times or more per week, was associated with worse performance.

“It is important to note that only moderate cannabis use was associated with improved function, whereas heavy use worsened functioning in people with bipolar disorder,” Young told PsyPost. “This finding supports the need to identify what component of cannabis and what dose is likely driving the beneficial effects.”

The functional assessment yielded similar results. Participants with bipolar disorder who did not use cannabis struggled with the medication management task. Those who used cannabis demonstrated better functional skills. Their ability to manage a complex medication schedule was statistically similar to that of the healthy participants.

The scientists propose a biological mechanism involving dopamine to explain these findings. Bipolar disorder is often linked to an excess of dopamine transmission in certain brain areas, which can drive impulsive behavior. Chronic cannabis use is known to reduce dopamine transmission over time. The researchers suggest that cannabis might be correcting the dopamine imbalance in people with bipolar disorder, thereby improving their decision-making.

“In short, cannabis use may improve cognition in people with bipolar disorder, though there are caveats,” Young said.

The research was cross-sectional, meaning it looked at a single point in time. It shows an association but cannot prove that cannabis caused the improvement. It is possible that individuals with better cognitive functioning are simply more likely to use cannabis.

The sample size was also relatively small. There were roughly twenty participants in each of the four subgroups. This limits the statistical power of the analysis. Larger studies are needed to confirm these results.

“Care must be taken in simplistic interpretations, given that this work is associative – those choosing to use cannabis perform better, they may simply have better performance than those that do not,” Young explained. “Hence, more research is needed to test if potential cannabinoid-based treatments improve cognition in non-cannabis users with bipolar disorder.”

The researchers caution against interpreting these results as a clinical recommendation. While decision-making seemed improved, cannabis can still have detrimental effects on other aspects of bipolar disorder. It has been linked to increased risks of mania and psychosis in some patients.

“Even though we observed potential beneficial effects here, cannabis use can still have harmful effects on other aspects of bipolar disorder and free use of cannabis should not be encouraged as yet,” Young told PsyPost. “Finally, it cannot be emphasized enough that this study was only associative as mentioned – in other words we cannot say that cannabis caused this as we only compared people who used cannabis vs. those that did not, we need studies where we assign people to doses in a randomized blinded manner.”

“Our long-term goal for this line of research is to investigate the biological mechanisms that underlie the potentially beneficial cannabis effects in people with bipolar disorder using parallel human and animal experiments (translational studies). We also hope to investigate the effects of specific cannabis use patterns (e.g., use frequency and cannabinoid types) on other bipolar disorder symptoms to better understand the potential risks and benefits of cannabis use in bipolar disorder.”

“What we also hope to understand and are currently studying is whether similar things happen in other conditions where cannabis is also used to manage symptoms, like in people with HIV,” Young continued. “We believe that this kind of research will help us do better at making more specific recommendations for people who use cannabis, in terms of how much might help them but how much is too much.”

“It is vital that future studies should test the actual cannabis products used by people with bipolar disorder versus administering cannabis in a controlled laboratory setting in both single and multiple dosing treatment studies. We conduct this research because we want to help people with bipolar disorder manage their disease and hopefully better interact with their friends, families, and society at large. More studies are needed to determine whether this approach will be beneficial long-term, and we hope to continue these studies.”

The study, “Chronic cannabis use in people with bipolar disorder is associated with comparable decision-making and functional outcome to healthy participants,” was authored by Alannah Miranda, Benjamin Z. Roberts, Breanna M. Holloway, Elizabeth Peek, Holden Rosberg, Samantha M. Ayoub, Daniele Piomelli, Kwang-Mook Jung, Samuel A. Barnes, Steven Rossi, Mark A. Geyer, William Perry, Arpi Minassian, and Jared W. Young.

Gender-affirming hormone therapy linked to shifts in personality traits

16 February 2026 at 03:00

A new study published in Comprehensive Psychoneuroendocrinology suggests that gender-affirming hormone therapy may influence specific personality traits in transgender individuals. The findings indicate that medical transition can shift certain emotional and behavioral patterns toward those typically associated with the individual’s identified gender. While personality is often viewed as a static set of characteristics, this research provides evidence that sex hormones might play a role in shaping how people think, feel, and behave.

The relationship between hormone levels and personality traits remains a complex area of study. Previous research on cisgender populations (people whose gender identity matches their sex assigned at birth) has documented average differences in personality traits between men and women. For instance, women tend to score higher on traits related to agreeableness and neuroticism compared to men. The researchers wanted to determine if altering hormone levels through medical treatment would cause personality shifts in transgender individuals.

“This investigation was part of a larger study regarding possible effects on the brain from gender-affirming hormonal treatment. For us, who are clinically active in transgender care, it is obvious that sex hormones have effects on the brain to an extent not totally acknowledged. So the deeper aim was to investigate the effects of sex hormones on personality traits, something usually believed to be rather static,” explained study author Mats Holmberg of the Karolinska Institutet.

The research team conducted a prospective study involving adults referred for gender-affirming hormone therapy at the Karolinska University Hospital in Stockholm, Sweden. To ensure the results specifically reflected hormonal changes rather than other factors, the scientists excluded individuals with known psychiatric disorders, autism spectrum disorder, or those taking antidepressant medications. This helped minimize variables that could skew the personality assessments.

The final group of participants consisted of 58 individuals. This included 34 people assigned female at birth who were prescribed testosterone and 24 people assigned male at birth who received anti-androgens and estradiol. Anti-androgens are medications that block the effects of testosterone.

The researchers used the NEO-PI-R inventory to assess personality. This is a comprehensive questionnaire based on the Five-Factor Model, often called the “Big Five.” This model categorizes personality into five main dimensions: Neuroticism, Extraversion, Openness, Agreeableness, and Conscientiousness. Neuroticism refers to the tendency to experience negative emotions like anxiety or sadness.

Extraversion describes sociability and enthusiasm. Openness involves curiosity and a willingness to try new things. Agreeableness relates to how cooperative and compassionate a person is. Conscientiousness reflects organization and dependability. Participants completed this assessment twice: once before starting hormones and again after at least six months of treatment.

Before treatment began, the researchers observed specific differences between the two groups. Participants assigned female at birth scored higher in the dimension of Agreeableness compared to those assigned male at birth. They also scored higher in specific sub-categories, known as facets, such as excitement seeking and straightforwardness. These baseline differences suggested that even prior to hormonal intervention, the groups displayed distinct personality profiles.

After six months of testosterone therapy, the participants assigned female at birth showed distinct changes. Their scores for Neuroticism decreased significantly. Within this dimension, they reported lower levels of depression and vulnerability. Simultaneously, this group showed an increase in the facet of “Actions,” which falls under the Openness dimension. This suggests a greater willingness to try different activities or behaviors. The reduction in Neuroticism aligns with patterns seen in cisgender men, who generally score lower in this trait than cisgender women.

The group assigned male at birth, who received feminizing hormones, experienced different shifts. These participants showed an increase in the “Feelings” facet of the Openness dimension. This indicates a greater receptivity to one’s own inner emotional states. Unlike the testosterone group, they did not show significant changes in the broad dimensions of Neuroticism or Extraversion. The increase in emotional receptivity mirrors findings in cisgender women, who typically score higher in this specific facet.

The scientists also looked for relationships between the amount of hormone change in the blood and the degree of personality change. In the group assigned male at birth, higher increases in estradiol levels correlated with lower scores in several traits, including Openness and Agreeableness. This finding was unexpected and somewhat contradictory to general sex differences, indicating a complex relationship between estrogen and personality that requires further study.

The comparison between the two groups after treatment revealed a divergence in the trait of vulnerability. Following six months of therapy, the group treated with testosterone showed a significant reduction in vulnerability. The group treated with estrogen did not show a corresponding increase or decrease. This resulted in a larger gap between the two groups post-treatment than existed beforehand.

While the study offers new insights, the researchers caution against drawing broad conclusions due to several limitations. The sample size was relatively small, which makes it difficult to generalize the findings to the entire transgender population. Additionally, the study did not include a control group of individuals not receiving hormone therapy. This makes it impossible to distinguish clearly between the biological effects of the hormones and the psychological effects of social transition.

The relief of treating gender dysphoria—the distress caused by a mismatch between gender identity and physical sex—could naturally improve mood and reduce neuroticism, regardless of specific chemical changes. The act of living authentically and being perceived correctly by others likely impacts personality expression as well.

The researchers also noted that the study only covered the first six months of treatment. It remains unknown if these personality changes persist, evolve, or revert over a longer period. Personality is generally considered stable in adulthood, so observing changes within such a short timeframe is notable. However, longer-term data is necessary to see if these shifts are permanent.

Future research requires larger groups of participants followed over several years to confirm these initial observations. The scientists emphasize that these results do not validate conservative gender roles but rather highlight that sex hormones may influence brain function and personality formation more than previously understood. They suggest that understanding these potential changes can help patients better anticipate the effects of medical transition.

The study, “The effect of gender-affirming hormonal treatment on personality traits – a NEO-PI-R study,” was authored by Mats Holmberg, Alex Wallen, and Ivanka Savic.

Scientists confirm non-genitally stimulated orgasms are biologically real

15 February 2026 at 23:00

A new case study provides biological evidence that a post-menopausal woman can induce orgasms solely through the use of pelvic floor muscle exercises, without any direct genital stimulation. The findings indicate that these non-genitally stimulated orgasms trigger a surge in the hormone prolactin, mirroring the physiological response seen in typical sexual climaxes. This research was published in the International Journal of Sexual Health.

Orgasms typically result from direct physical stimulation of the genitals, but evidence indicates they can also occur through mental imagery or specific muscle movements. Previous research demonstrated that a premenopausal woman could induce orgasms using tantric techniques, a practice involving deep breathing and mental focus to control bodily sensations and sexual energy.

This earlier case was confirmed by a rise in plasma prolactin, a hormone released during sexual climax. However, it remained unclear whether this ability relied on the higher levels of ovarian hormones found in younger women or if it could occur after menopause. Consequently, the researchers aimed to determine if a post-menopausal woman could achieve these outcomes using a systematic routine targeting the pelvic floor.

The pelvic floor is a hammock-like group of muscles at the base of the pelvis that supports internal organs like the bladder and uterus, and plays a primary role in sexual response and control. The team sought to validate the experience using objective biological markers rather than relying solely on the participant’s description. Confirming the physiological reality of these experiences provides evidence for potential new therapeutic avenues for women facing difficulties with orgasm.

“I am generally interested in the neurobiology of sexual function, and in particular how the brain is organized for sexual arousal, desire, orgasm, sexual pleasure, and sexual inhibition. Recently, I started studying people who can have orgasms without genital stimulation (Non-Genitally Stimulated Orgasms, or NGSOs),” said study author James G. Pfaus, an assistant professor at Charles University in Prague and the director of research for the Center for Sexual Health and Interventions at the Czech National Institute of Mental Health.

“Women seem to be able to do this better than men, and the ability seems to come from training of the pelvic floor muscles and breathing exercises, either through tantra or pelvic floor therapy. An obvious question is whether these orgasms are ‘real,’ meaning whether they are accompanied by objective markers similar to those found during genitally stimulated orgasms (GSOs). We used the hormone prolactin as our objective measure, since it increases at orgasm (and the only other reasons it would go up would be a pituitary tumor, nursing, or extreme stress).”

“This occurs because at orgasm the neurotransmitter dopamine is instantly inhibited by both opioid and serotonin release. Dopamine in the hypothalamus keeps prolactin inhibited, so when it is inhibited, prolactin is released from inhibition. Prolactin increases reliably in both men and women during orgasm and stays elevated for at least an hour after.”

The new study focused on a 55-year-old woman who had undergone a hysterectomy and was not taking hormone replacement therapy. She had trained in a specific method called the “Wave Technique,” which involves rhythmic flexing and relaxing of the pelvic floor muscles. This training originally involved using a small jade egg to sensitize the muscles, but the participant had advanced to performing the movements without any device.

The experiment took place in a private hospital room where the participant remained fully clothed. The participant engaged in three distinct testing sessions, each separated by 48-hour intervals to ensure her hormone levels returned to baseline. These sessions included a 2.5-minute orgasm induction, a 10-minute orgasm induction, and a 10-minute Pilates workout which served as a control condition.

To measure physiological changes, a registered nurse drew blood samples fifteen minutes before each session, immediately afterward, and fifteen minutes post-session. The scientists analyzed the blood for prolactin to see if the muscle-induced orgasms triggered the expected hormonal release. They also measured levels of luteinizing hormone, follicle-stimulating hormone, and testosterone to track other potential endocrine changes.

In a separate session, the participant used a Bluetooth-enabled biofeedback device called the Lioness 2.0 to record muscle activity. The researchers modified the device to prevent any vibration or direct clitoral stimulation. This ensured the device only recorded pressure changes inside the vagina generated by the participant’s muscle movements.

The blood analysis revealed hormonal shifts following the muscle-induced orgasms. After the 2.5-minute session, prolactin levels rose to 110 percent of the baseline measurement. Following the 10-minute session, prolactin levels increased even further, reaching 141 percent of the baseline.

The findings indicate that “NGSOs are real from a physiological and psychological standpoint, and that probably all women can be trained to induce them, regardless of their hormonal status (pre-versus post-menopausal),” Pfaus told PsyPost.

In contrast, the Pilates workout resulted in a 12 percent decrease in prolactin levels. This difference suggests that the hormonal spike was specific to the sexual release and not merely a result of physical exertion. While exercise can affect hormones, it did not mimic the prolactin surge associated with orgasm in this context.

The researchers also tracked testosterone levels during the sessions. Testosterone increased slightly after the 10-minute orgasm session and the Pilates workout. This aligns with known data suggesting that acute physical exercise can elevate androgen levels in women.

Data from the Lioness sensor provided a visual representation of the physical activity during the orgasms. The device recorded rhythmic contractions occurring at intervals of roughly 7 to 15 seconds throughout the session. These contractions appeared as spikes in muscle tension that matched the participant’s subjective experience of climax.

During the Lioness session, the participant reportedly experienced over thirty distinct peaks within ten minutes. The sensor data showed a pattern of “push and pull” contractions that built up tension leading to each spike. The researchers noted that the participant vocalized during these peaks, signaling the moment of release.

“It is likely that the pelvic floor muscles are tensing around the nerves that carry information from the clitoris, vagina, and cervix into the spinal cord, and that women who learn this are sensitizing the nerve fibers to the abdominal and pelvic floor stimulation,” Pfaus explained. “So it is a very real phenomenon, and one that offers new vistas for women with orgasm difficulties.”

“Likewise, we have recently conducted a similar experiment on hypnotically induced orgasms, which show the same increase in prolactin. These orgasms are more likely to be ‘top down’ than ‘bottom up,’ although all women and men who show them have abdominal and pelvic floor reactions as the orgasm occurs.”

“The practical significance is that probably all women have this ability and it is just a matter of learning how to control the abdominal and pelvic floor musculature. It means that orgasm is not something your partner ‘gives’ you, but something you control in your own body and brain.”

A primary limitation of this research is that it is a case study involving a single participant. While the results provide strong biological evidence for this specific individual, they may not universally apply to all women. The participant was highly trained in a specific technique, which may be difficult for the average person to replicate without instruction.

Despite the small sample size, the study challenges the common misconception that orgasms without genital touch are fake. “It is common to disbelieve women who can have NGSOs induced by fantasy or the kind of pelvic floor movements we observed here. Likewise, it is common to think that orgasms induced by hypnosis are a party trick, and that people having them are simply faking it for the hypnotist. You cannot increase your prolactin at will. It is an objective marker of orgasm, so it is not faked.”

The scientists suggest that future research should involve a larger group of participants to verify these findings across a broader population. The researchers express an interest in studying men and women who can induce orgasms through other non-contact methods, such as hypnosis. Expanding the participant pool would help determine if this ability is a general human trait or specific to certain individuals.

Another goal for future study is to use functional magnetic resonance imaging, or fMRI, to observe brain activity during these non-genitally stimulated orgasms. Comparing brain scans of these experiences with those of standard orgasms could reveal how the brain processes different types of sexual pleasure. Such imaging could map the neural pathways involved in generating orgasm through muscle movement alone.

Ultimately, the researchers hope to investigate whether teaching these pelvic floor techniques could help women who suffer from lifelong difficulties achieving orgasm. If women can learn to sensitize their pelvic nerves through exercise, it might offer a non-pharmaceutical treatment for sexual dysfunction.

“Studies on orgasm are very difficult to get approved by institutional research ethics boards,” Pfaus noted. “There is a general fear that bad things could happen studying something so personal and intimate. And this is true even if, for example, the person’s partner is stimulating their genitals in a totally private space. NGSOs of course do not require direct genital stimulation and occur when the participant is fully clothed. So, in addition to their clinical significance, NGSOs may open the door for more study of orgasm function (e.g., in fertility) and the neurobiology of orgasm in general.”

The study, “Non-Genitally Stimulated Orgasms Increase Plasma Prolactin in a Menopausal Woman,” was authored by James G. Pfaus, Roni Erez, Nitsan Erez, and Jan Novák.

A specific mental strategy appears to boost relationship problem-solving in a big way

15 February 2026 at 15:00

New research published in the Journal of Social and Personal Relationships provides evidence that a specific mental exercise can help couples resolve conflicts more effectively than simple positive thinking. The study indicates that a self-regulation strategy known as “mental contrasting” encourages partners to engage with the internal obstacles preventing them from solving their problems.

Romantic relationships inevitably involve conflict. How couples navigate these disagreements is a strong predictor of whether the relationship will last and how satisfied the partners will feel. Effective problem-solving usually involves constructive communication and emotional responsiveness, while ineffective management is characterized by defensiveness or avoidance.

While counseling is a traditional route for improving these skills, it can be time-consuming and expensive. As a result, psychologists have sought to identify effective, self-administered strategies that couples can use on their own to navigate difficulties.

“Almost every couple faces problems sooner or later. Sadly, most couples, especially those whose satisfaction is not (yet) critically affected, are unlikely to participate in couple intervention programs due to the substantial time and money investment required. That’s why we wanted to test whether a brief, scalable, and self-guided exercise can have a meaningful impact on couples’ problem-solving behavior,” said study author Henrik Jöhnk, a research associate at Zeppelin University.

The researchers focused on a strategy called mental contrasting. This technique is distinct from positive thinking, or “indulging.” When people indulge, they imagine a desired future without considering the reality that stands in the way. In mental contrasting, an individual identifies a wish and the best outcome of fulfilling that wish, but then immediately reflects on the main inner obstacle—such as an emotion, habit, or belief—that prevents them from realizing that future.

Prior studies have shown that mental contrasting helps individuals regulate their behavior by creating a strong mental link between the desired future and the obstacle that must be overcome. The researchers in this study wanted to determine if this internal cognitive process could translate into better interpersonal communication between two partners.

The study involved 105 mixed-gender couples living in Germany. The participants ranged in age from 19 to 60, with an average age of roughly 27 years. Most were in committed relationships, with an average duration of three and a half years. The study was conducted remotely using video conferencing software.

To begin the experiment, both partners in a couple independently listed topics that caused disagreements in their relationship. They then came together to agree on one specific problem they wanted to solve. Once a problem was selected, the partners separated into different virtual rooms to complete the experimental task.

The couples were randomly assigned to one of two conditions. In the mental contrasting condition, each partner was asked to imagine the most positive aspect of resolving their chosen problem. Following this, they were asked to identify and imagine their main inner obstacle that was holding them back from resolving it. In the indulging condition, participants also imagined the most positive aspect of the resolution, but instead of focusing on an obstacle, they were asked to imagine a second positive aspect. This condition mimicked standard positive thinking or daydreaming.

After these individual mental exercises, the partners rejoined in the same physical room and were recorded having a ten-minute discussion about their problem via Zoom. Researchers later coded these interactions, looking for specific behaviors. They measured “self-disclosure,” which is the act of revealing personal feelings, attitudes, and needs. They also measured “solution suggestions,” counting how often partners proposed specific ways to fix the problem. Two weeks after the experiment, the couples completed a follow-up survey to report on whether they had made progress in resolving the conflict.

The results showed that mental contrasting had a measurable impact on how couples interacted and how successful they were at solving their problems. Regarding the long-term outcome, couples who used mental contrasting reported greater problem resolution two weeks later compared to those who used indulging. This benefit was specifically observed for problems that the partners perceived as highly important. When the issue was of low importance, the type of mental exercise made less of a difference.

“For a brief, self-guided exercise, the effects are surprisingly strong,” Jöhnk told PsyPost. “In particular, couples who are still relatively satisfied may benefit from trying mental contrasting in order to identify new ways forward. At the same time, these effects should not be seen as comparable to those of established couple therapies, which typically involve multiple sessions over months or years. Mental contrasting is best understood as a tool and a complement—not an alternative—to existing interventions.”

The video analysis revealed that the intervention changed the behavior of men and women in distinct ways. Men in the mental contrasting condition engaged in significantly more self-disclosure than men in the indulging condition. Specifically, they were more likely to verbalize their feelings and explain the attitudes driving their behavior. In the indulging condition, men showed typical patterns of disclosing less than women. However, in the mental contrasting condition, men’s level of self-disclosure rose to match that of the women.

This suggests that reflecting on internal obstacles helped men overcome barriers to vulnerability. By recognizing that an emotion like anger or insecurity was the obstacle, they became more likely to express that emotion to their partner. This is significant because self-disclosure is a key component of intimacy and helps partners understand the root causes of a conflict.

Women responded to the intervention differently. Women in the mental contrasting condition suggested fewer solutions than those in the indulging group. This reduction in solution suggestions was particularly evident when the problem was rated as important. While offering fewer solutions might sound negative, the researchers interpret this as a positive shift toward quality over quantity.

“What surprised me most was that mental contrasting didn’t increase the number of solutions people suggested for their problems,” Jöhnk said. “Instead, it appeared to slow the process down: people (especially women in our study) were less likely to offer quick or premature fixes, which may actually support effective problem-solving.”

In many conflicts, rushing to offer solutions can be a way to bypass necessary emotional processing. By suggesting fewer solutions, the women may have been more selective and thoughtful, avoiding premature fixes that would not address the underlying issue. The data showed that in the mental contrasting condition, participants were more likely to suggest a solution immediately after engaging in self-disclosure, implying that the solutions offered were more grounded in the reality of their feelings.

The study provides evidence that focusing on obstacles, rather than ignoring them, fosters a more realistic and grounded approach to relationship maintenance. Indulging in positive fantasies can sometimes drain the energy needed for action or lead to disappointment when reality does not match the fantasy. Mental contrasting appears to mobilize individuals to tackle the hard work required for resolving serious issues.

“To resolve relationship problems, it’s not enough to just hope things will get better,” Jöhnk explained. “Our research shows that people benefit from also facing their own inner obstacles like anger, fear, or insecurity that often get in the way of constructive conversations and actual change.”

But there are some limitations to this study. The sample consisted largely of young, educated couples who were relatively satisfied with their relationships. The dynamics of problem-solving might look very different in couples who are highly distressed or on the brink of separation. In those cases, the problems might be perceived as insurmountable, and mental contrasting might lead to disengagement rather than engagement.

Additionally, the study relied on a specific experimental setup using Zoom. While this allowed the researchers to observe couples in their own homes, the presence of a recording device and the structured nature of the task might have influenced behavior. The researchers also only analyzed verbal communication. Non-verbal cues, such as tone of voice, facial expressions, and body language, play a massive role in conflict and were not part of the behavioral coding.

“We are still in the middle of investigating the role of mental contrasting in romantic relationships, but this line of research is now expanding, supported by funding from the German Research Foundation,” Jöhnk noted. “A next step is to examine whether and how mental contrasting may benefit highly distressed couples, whose problems are often difficult or even impossible to fully resolve. In particular, we aim to study how mental contrasting shapes the way couples think about and engage with their problems when quick solutions are unlikely.”

“Readers who are curious to learn more about mental contrasting can visit https://woopmylife.org, which offers free, evidence-based resources on mental contrasting and WOOP, a practical self-regulation strategy based on this research. For a deeper introduction, I also recommend Rethinking Positive Thinking by Gabriele Oettingen, who supervised this project and holds senior professorships at both New York University and Zeppelin University.”

The study, “Mental contrasting and problem-solving in romantic relationships: A dyadic behavioral observation study,” was authored by Henrik Jöhnk, Gabriele Oettingen, Kay Brauer, and A. Timur Sevincer.

Donald Trump is fueling a surprising shift in gun culture, new research suggests

14 February 2026 at 22:30

A new study published in Injury Epidemiology provides evidence that the 2024 United States presidential election prompted specific groups of Americans to change their behaviors regarding firearms. The findings suggest that individuals who feel threatened by the policies of the current administration, specifically Black adults and those with liberal political views, are reporting stronger urges to carry weapons and keep them easily accessible. This research highlights a potential shift in gun culture where decision-making is increasingly driven by political anxiety and a desire for protection.

Social scientists have previously observed that firearm purchasing patterns often fluctuate in response to major societal events, such as the onset of the COVID-19 pandemic or periods of civil unrest. However, there has been less research into how specific election results influence not just the buying of guns, but also daily habits like carrying a weapon or how it is stored within the home.

To understand these dynamics better, a team led by Michael Anestis from the New Jersey Gun Violence Research Center at Rutgers University sought to track these changes directly. The researchers aimed to determine if the intense rhetoric surrounding the 2024 election altered firearm safety practices among different demographics.

The researchers surveyed a nationally representative group of adults at two different points in time to capture a “before and after” snapshot. The first survey included 1,530 participants and took place between October 22 and November 3, 2024, immediately preceding the election. The team then followed up with 1,359 of the same individuals between January 7 and January 22, 2025. By maintaining the same group of participants, the scientists could directly compare intentions expressed before the election with reported behaviors and urges felt in the weeks following the results.

The data indicated that identifying as Black was associated with an increase in the urge to carry firearms specifically because of the election results. Black participants were also more likely than White participants to express an intention to purchase a firearm in the coming year or to remain undecided, rather than rejecting the idea of ownership. This aligns with broader trends suggesting that the demographics of gun ownership are diversifying.

Similarly, participants who identified with liberal political beliefs reported a stronger urge to carry firearms outside the home as a direct result of the election outcome. The study found that as political views became more liberal, individuals were over two times more likely to change their storage practices to make guns more quickly accessible. This suggests that for some, the perceived need for immediate defense has overridden standard safety recommendations regarding secure storage.

The researchers also examined how participants viewed the stability of the country. Those who perceived a serious threat to American democracy were more likely to store their guns in a way that allowed for quicker access. Individuals who expressed support for political violence showed a complex pattern. They were more likely to intend to buy guns but reported a decreased urge to carry them. This might imply that those who support such violence feel more secure in the current political environment, reducing their perceived need for constant protection outside the home.

Anestis, the executive director of the New Jersey Gun Violence Research Center and lead researcher, noted that the motivation for these changes is clear but potentially perilous.

“These findings highlight that communities that feel directly threatened by the policies and actions of the second Trump administration are reporting a greater drive to purchase firearms, carry them outside their home, and store them in a way that allows quick access and that these urges are a direct result of the presidential election,” Anestis said. “It may be that individuals feel that the government will not protect them or – worse yet – represents a direct threat to their safety, so they are trying to prepare themselves for self-defense.”

These findings appear to align with recent press reports describing a surge in firearm interest among groups not historically associated with gun culture. An NPR report from late 2025 featured accounts from individuals like “Charles,” a doctor who began training with a handgun due to fears for his family’s safety under the Trump administration.

A story from NBC News published earlier this week highlighted a sharp rise in requests for firearm training from women and people of color. Trainers across the country, including organizations like the Liberal Gun Club and Grassroots Defense, have reported that their classes are fully booked. This heightened interest often correlates with specific fears regarding federal law enforcement.

For example, recent news coverage mentions the high-profile shooting of Alex Pretti, a concealed carry permit holder in Minneapolis, by federal agents. Reports indicate that such incidents have stoked fears about constitutional rights violations. Both the academic study and these journalistic accounts paint a picture of defensive gun ownership rising among those who feel politically marginalized.

While the study provides evidence of shifting behaviors, there are limitations to consider. The number of people who actually purchased a gun during the short window between the two surveys was low, which limits the ability of the researchers to draw broad statistical conclusions about immediate purchasing habits.

Additionally, the study relied on self-reported data. This means the results depend on participants answering honestly about sensitive topics like weapon storage and their willingness to use force. Future research will need to examine whether these shifts in behavior result in long-term changes in injury rates or accidental shootings.

“Ultimately, it seems that groups less typically associated with firearm ownership – Black adults and those with liberal political beliefs, for instance – are feeling unsafe in the current environment and trying to find ways to protect themselves and their loved ones,” Anestis said.

However, he cautioned that the method of protection chosen could lead to unintended consequences.

“Although those beliefs are rooted in a drive for safety, firearm acquisition, carrying, and unsecure storage are all associated with the risk for suicide and unintentional injury, so I fear that the current environment is actually increasing the risk of harm,” he said. “Indeed, recent events in Minneapolis make me nervous that the environment fostered by the federal government is putting the safety of Americans in peril.”

The study, “Changes in firearm intentions and behaviors after the 2024 United States presidential election,” was authored by Michael D. Anestis, Allison E. Bond, Kimberly C. Burke, Sultan Altikriti, and Daniel C. Semenza.

This mental trait predicts individual differences in kissing preferences

14 February 2026 at 21:30

A new study published in Sexual and Relationship Therapy provides evidence that a person’s tendency to engage in sexual fantasy influences what they prioritize in a romantic kiss. The findings suggest that the mental act of imagining intimate scenarios is strongly linked to placing a higher value on physical arousal and contact during kissing. This research helps explain the psychological connection between cognitive states and physical intimacy.

From an evolutionary perspective, researchers have proposed three main reasons for romantic kissing. The first is “mate assessment,” which means kissing helps individuals subconsciously judge a potential partner’s health and genetic compatibility. The second is “pair bonding,” where kissing serves to maintain an emotional connection and commitment between partners in a long-term relationship.

The third proposed function is the “arousal hypothesis.” This theory suggests that the primary biological purpose of kissing is to initiate sexual arousal and prepare the body for intercourse. While this seems intuitive, previous scientific attempts to prove this hypothesis have failed to find a strong link. Past data did not show that kissing consistently acts as a catalyst for sexual arousal.

The researchers behind the current study argued that these previous attempts were looking at the problem too narrowly. Earlier work focused almost exclusively on the physical sensation of kissing, such as the sensitivity of the lips or the exchange of saliva. This approach largely ignored the mental and emotional state of the person doing the kissing. The researchers hypothesized that the physical act of kissing might not be arousing on its own without a specific cognitive component. They proposed that sexual fantasy serves as this missing link.

“People have tested three separate hypotheses to explain why we engage in romantic kissing as a species,” said study author Christopher D. Watkins, a senior lecturer in psychology at Abertay University. “At the time there had been no evidence supporting the arousal hypothesis for kissing – that kissing may act as an important catalyst for sex. This may be because these studies focussed on the sensation of kissing as the catalyst, when psychological explanations are also important (e.g., the mental motives for kissing which in turn makes intimacy feel pleasurable/desirable).”

To test this idea, the researchers designed an online study to measure the relationship between fantasy proneness and kissing preferences. They recruited a sample of 412 adults, primarily from the United Kingdom and Italy. After removing participants who did not complete all sections or meet the age requirements, the final analysis focused on 212 individuals. This group was diverse in terms of relationship status, with about half of the participants reporting that they were in a long-term relationship.

Participants completed a series of standardized questionnaires. The first was the “Good Kiss Questionnaire,” which asks individuals to rate the importance of various factors when deciding if someone is a good kisser. These factors included sensory details like the taste of the partner’s lips, the pleasantness of their breath, and the “wetness” of the kiss. The questionnaire also included items related to “contact and arousal,” asking how important physical touching and the feeling of sexual excitement were to the experience.

The scientists also administered the “Sexual Fantasy Questionnaire.” They specifically focused on the “intimacy” subscale, which measures how often a person engages in daytime fantasies about romantic interactions with a partner. This measure was distinct from fantasies that occur during sexual acts or while dreaming. It focused on the mental habit of imagining intimacy during everyday life.

To ensure their results were precise, the researchers included control measures. They measured “general creative experiences” to assess whether a person was simply imaginative in general. This allowed the scientists to determine if the results were driven specifically by sexual fantasy rather than just a vivid imagination. They also measured general sexual desire to see if the effects were independent of a person’s overall sex drive.

The results supported the researchers’ primary prediction. The analysis showed a positive correlation between daytime intimate fantasy and the importance placed on arousal and contact in a good kiss. Individuals who reported a higher tendency to fantasize about intimacy were much more likely to define a “good kiss” as one that includes high levels of physical contact and sexual arousal.

“Your tendency to think and fantasise about intimacy during the day is related to the qualities you associate with a good-quality kiss,” Watkins told PsyPost. “Specifically, the importance we attach to contact and arousal while kissing. As such, our mental preoccupations could facilitate arousal when in close contact with an intimate partner – explaining personal differences in how we approach partners during intimate encounters.”

This relationship held true even after the researchers statistically controlled for other variables. The link between fantasy and kissing preferences remained significant regardless of the participant’s general creativity levels. This suggests that the connection is specific to sexual and romantic cognition, not just a byproduct of having a creative mind.

Additionally, the finding was independent of general sexual desire. While people with higher sex drives did generally value arousal more, the specific habit of fantasizing contributed to this preference over and above general desire. This implies that the mental act of simulating intimacy creates a specific psychological context. This context appears to shape what a person expects and desires from the physical act of kissing.
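
To illustrate what “statistically controlled for” means here, one standard approach is a partial correlation: regress both the fantasy measure and the kissing preference on the covariate (such as creativity or general desire) and correlate the residuals. The sketch below uses simulated data rather than the study’s, and is not the authors’ analysis code.

```python
# Minimal sketch of controlling for a covariate via residualization
# (equivalent to a partial correlation). Data are simulated for
# illustration only; this is not the authors' analysis.
import numpy as np

rng = np.random.default_rng(0)
n = 212  # matches the analyzed sample size; the values are fake

desire = rng.normal(size=n)                  # general sexual desire
fantasy = 0.5 * desire + rng.normal(size=n)  # intimate daytime fantasy
kiss_arousal = 0.4 * fantasy + 0.3 * desire + rng.normal(size=n)

def residualize(y, covariate):
    """Remove the linear effect of the covariate from y."""
    X = np.column_stack([np.ones_like(covariate), covariate])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ beta

r_zero_order = np.corrcoef(fantasy, kiss_arousal)[0, 1]
r_partial = np.corrcoef(residualize(fantasy, desire),
                        residualize(kiss_arousal, desire))[0, 1]
print(f"zero-order r = {r_zero_order:.2f}, controlling for desire r = {r_partial:.2f}")
```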

The study also yielded secondary findings regarding kissing styles. The researchers looked at “reproductive potential,” which they measured by asking participants about their history of sexual partners relative to their peers. This is often used in evolutionary psychology as a proxy for mating strategy. The data showed that individuals with a history of more sexual partners placed greater importance on “technique” in a good kiss. Specifically, they valued synchronization, or whether the partner’s kissing style matched their own.

“One unplanned relationship found in the data was between the importance people placed on technique (e.g., synchronicity) in a good kiss and the extent to which people reported tending to have sex with different people across their relationship history (compared to average peer behavior),” Watkins said. “This may suggest that people who seek sexual variety also seek some form of similarity in partners while intimate (kissing style). This was a small effect though that we would like others to examine/replicate independently in their own studies.”

As with all research, there are some limitations. The study used a cross-sectional design, meaning it captured data from participants at a single point in time. As a result, the researchers cannot prove that fantasizing causes a change in kissing preferences. It is possible that the relationship works in the reverse direction, or that a third factor influences both.

The sample was also heavily skewed toward Western cultures, specifically the UK and Italy. Romantic kissing is not a universal human behavior and is observed in less than half of known cultures. Consequently, these findings may not apply to cultures where kissing is not a standard part of romantic or sexual rituals.

Future research could address these issues by using longitudinal designs. Scientists could follow couples over time to see how the relationship between fantasy and physical intimacy evolves. This would help clarify whether increasing intimate fantasy can lead to a more revitalized physical connection.

“We are looking to develop our testing instruments to explore other experiences related to kissing, and expand our studies on this topic – for example, by establishing clear cause and effect between our thoughts/fantasies and later kissing behaviors or other behaviors reported during close contact with romantic partners,” Watkins said.

The study, “Proclivity for sexual fantasy accounts for differences in the perceived components of a ‘good kiss’,” was authored by Milena V. Rota and Christopher D. Watkins.

Who lives a good single life? New data highlights the role of autonomy and attachment

14 February 2026 at 19:15

A new study published in the journal Personal Relationships suggests that single people who feel their basic psychological needs are met tend to experience higher life satisfaction and fewer depressive symptoms. The findings indicate that beyond these universal needs, having a secure attachment style and viewing singlehood as a personal choice rather than a result of external barriers are significant predictors of a satisfying single life.

The number of single adults has increased significantly in recent years, prompting psychologists to investigate what factors contribute to a high quality of life for this demographic. Historically, relationship research has focused heavily on the dynamics of couples, often treating singlehood merely as a transitional stage or a deficit. When researchers did study singles, they typically categorized them simply as those who chose to be single versus those who did not. This binary perspective fails to capture the complexity of the single experience.

The researchers behind the new study sought to understand the specific psychological characteristics that explain why some individuals thrive in singlehood while others struggle. By examining factors ranging from broad human needs to specific attitudes about relationships, the team aimed to clarify the internal and external forces that shape single well-being.

“Much of the research on single people has focused on deficits—that singles are less happy or lonely compared to partnered people,” said study author Jeewon Oh, an assistant professor at Syracuse University.

“We wanted to ask instead: When do single people thrive? We wanted to identify what actually predicts a good single life by understanding their individual differences. We know that people need to feel autonomous, competent, and related to others to flourish, but it wasn’t clear whether relationship-specific factors like attachment style or reasons for being single play an important role beyond satisfying these more basic needs.”

To investigate these questions, the scientists conducted two separate analyses. The first sample consisted of 445 adults recruited through Qualtrics Panels. These participants were older, with an average age of approximately 53 years, and were long-term singles who had been without a partner for an average of 20 years. This demographic provided a window into the experiences of those who have navigated singlehood for a significant portion of their adulthood.

The second sample was gathered to see if the findings would hold true for a different age group. This group included 545 undergraduate students from a university in the northeastern United States. These participants were much younger, with an average age of roughly 19 years. By using two distinct samples, the researchers hoped to distinguish between findings that might be unique to a specific life stage and those that apply to singles more generally.

The researchers used a series of surveys to assess several psychological constructs. First, they measured the satisfaction of basic psychological needs based on Self-Determination Theory. This theory posits that three core needs are essential for human well-being: autonomy, competence, and relatedness. Autonomy refers to a sense of volition and control over one’s own life choices. Competence involves feeling capable and effective in one’s activities. Relatedness is the feeling of being connected to and cared for by others.

In addition to basic needs, the study assessed attachment orientation. Attachment theory describes how people relate to close others, often based on early life experiences. The researchers looked at two dimensions: attachment anxiety and attachment avoidance. Attachment anxiety is characterized by a fear of rejection and a strong need for reassurance. Attachment avoidance involves a discomfort with intimacy and a preference for emotional distance.

The team also measured sociosexuality and reasons for being single. Sociosexuality refers to an individual’s openness to uncommitted sexual experiences, including their desires, attitudes, and behaviors regarding casual sex. For the reasons for being single, participants rated their agreement with statements categorized into domains such as valuing freedom, perceiving personal constraints, or feeling a lack of courtship ability.

The most consistent finding across both samples was the importance of basic psychological need satisfaction. Single individuals who felt their needs for autonomy, competence, and relatedness were being met reported significantly higher life satisfaction and satisfaction with their relationship status. They also reported fewer symptoms of depression.

This suggests that the foundation of a good life for singles is largely the same as it is for everyone else. It relies on feeling in control of one’s life, feeling capable, and having meaningful social connections, which for singles are often found in friendships and family rather than romantic partnerships.

Attachment style also emerged as a significant predictor of well-being. The data showed that higher levels of attachment anxiety were associated with more depressive symptoms. In the combined analysis of both samples, attachment anxiety also predicted lower satisfaction with singlehood. People with high attachment anxiety often crave intimacy and fear abandonment. This orientation may make singlehood particularly challenging, as the lack of a romantic partner might act as a constant source of distress.

The study found that the specific reasons a person attributes to their singlehood matter for their mental health. Participants who viewed their singlehood as a means to maintain their freedom and independence reported higher levels of satisfaction. These individuals appeared to be single because they valued the autonomy it provided.

In contrast, those who felt they were single due to constraints experienced worse outcomes. Constraints included factors such as lingering feelings for a past partner, a fear of being hurt, or perceived personal deficits. Viewing singlehood as a forced circumstance rather than a choice was linked to higher levels of depressive symptoms.

The researchers examined whether sociosexuality would predict well-being, hypothesizing that singles who are open to casual sex might enjoy singlehood more. However, the results indicated that sociosexuality did not provide additional explanatory power once basic needs and attachment were taken into account. While the desire for uncommitted sex was correlated with some outcomes in isolation, it was not a primary driver of well-being in the comprehensive models.

These findings suggest that a “sense of choice” is a multi-layered concept. It is not just about a simple decision to be single or not. Instead, it is reflected in how much autonomy a person feels generally, whether their attachment style allows them to feel secure without a partner, and whether they interpret their single status as an alignment with their values.

“The most important takeaway is that single people’s well-being consistently depends on having their basic psychological needs met—feeling autonomous, competent, and connected to others,” Oh told PsyPost. “However, beyond that, it also matters whether someone has an anxious attachment style, and whether they feel like they are single because it fits their values (vs. due to constraints). These individual differences are aligned with having a sense of choice over being single, which may be one key to a satisfying singlehood.”

The study has some limitations. The research relied on self-reported data collected at a single point in time. This cross-sectional design means that scientists cannot determine the direction of cause and effect. For example, it is possible that people who are already depressed are more likely to perceive their singlehood as a result of constraints, rather than the constraints causing the depression.

The demographic composition of the samples also limits generalizability. The participants were predominantly White and, in the older sample, mostly women. The experience of singlehood can vary greatly depending on gender, race, cultural background, and sexual orientation. The researchers noted that future studies should aim to include more diverse groups to see if these psychological patterns hold true across different populations.

Another limitation involved the measurement of reasons for being single. The scale used to assess these reasons had some statistical weaknesses, which suggests that the specific categories of “freedom” and “constraints” might need further refinement in future research. Despite this, the general pattern—that voluntary reasons link to happiness and involuntary reasons link to distress—aligns with previous scientific literature.

Future research could benefit from following single people over time. A longitudinal approach would allow scientists to observe how changes in need satisfaction or attachment security influence feelings about singlehood as people age. It would also be valuable to explore how other personality traits, such as extraversion or neuroticism, interact with these factors to shape the single experience.

The study, “Who Lives a Good Single Life? From Basic Need Satisfaction to Attachment, Sociosexuality, and Reasons for Being Single,” was authored by Jeewon Oh, Arina Stoianova, Tara Marie Bello, and Ashley De La Cruz.

Your attachment style predicts which activities boost romantic satisfaction

13 February 2026 at 19:00

New research provides evidence that the best way to spend time with a romantic partner depends on their specific emotional needs. A study published in Social Psychological and Personality Science suggests that people with avoidant attachment styles feel more satisfied when engaging in novel and exciting activities, while those with anxious attachment styles benefit more from familiar and comfortable shared experiences.

Psychological science identifies attachment insecurity as a significant barrier to relationship satisfaction. Individuals high in attachment avoidance often fear intimacy and prioritize independence, while those high in attachment anxiety fear abandonment and frequently seek reassurance.

Previous studies have shown that partners can mitigate these insecurities by adjusting their behavior, such as offering autonomy to avoidant partners or reassurance to anxious ones. However, less is known about how specific types of shared leisure activities function in this dynamic.

“This study was motivated by two main gaps. One was a gap in the attachment literature. Although attachment insecurity reliably predicts lower relationship satisfaction, these effects can be buffered, and most prior work has focused on partner behaviors. We wanted to know whether shared, everyday experiences could play a similar role,” said study author Amy Muise, a professor and York Research Chair in the Department of Psychology and director of the Sexual Health and Relationships (SHaRe) Lab at York University.

“We were also interested in testing the idea that novelty and excitement are universally good for relationships. Instead, we asked whether different types of shared experiences are more or less beneficial depending on people’s attachment-related needs.”

To explore these dynamics, the scientists conducted a meta-analysis across three separate daily diary studies. The total sample consisted of 390 couples from Canada and the United States. Participants were required to be in a committed relationship and living together or seeing each other frequently. The average relationship length varied slightly by study but ranged generally from seven to eight years.

For a period of 21 days, each partner independently completed nightly surveys. They reported their daily relationship satisfaction and the types of activities they shared with their partner that day. The researchers measured two distinct types of shared experiences. “Novel and exciting” experiences were defined as activities that felt new, challenging, or expanding, such as learning a skill or trying a new restaurant.

“Familiar and comfortable” experiences involved routine, calming, and predictable activities. Examples included watching a favorite TV show, cooking a standard meal together, or simply relaxing at home. The participants also rated their levels of attachment avoidance and anxiety at the beginning of the study. This design allowed the researchers to track how fluctuations in daily activities related to fluctuations in relationship satisfaction.
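
Analyses of diary data like these typically separate a person’s day-to-day fluctuations from their overall average. The short Python sketch below illustrates that person-mean centering step with invented data; the column names and values are hypothetical, and this is not the authors’ analysis code.

```python
# Minimal sketch of person-mean centering for daily diary data.
# Illustrative only; column names and values are hypothetical.
import pandas as pd

diary = pd.DataFrame({
    "person_id":    [1, 1, 1, 2, 2, 2],
    "novelty":      [2.0, 4.0, 3.0, 1.0, 5.0, 3.0],   # daily novelty rating
    "satisfaction": [5.0, 6.5, 6.0, 4.0, 6.0, 5.5],   # daily satisfaction rating
})

# Each person's average novelty across the diary period (between-person part).
diary["novelty_mean"] = diary.groupby("person_id")["novelty"].transform("mean")

# Daily deviation from one's own average (within-person part) -- the quantity
# that "more novelty than usual" refers to in this kind of design.
diary["novelty_within"] = diary["novelty"] - diary["novelty_mean"]

print(diary)
```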

The data revealed that, in general, both types of shared experiences were linked to higher daily relationship satisfaction. “The effects are modest in size, which is typical for daily experience research because they reflect within-person changes in everyday life,” Muise told PsyPost. “These are not dramatic shifts in relationship quality, but small day-to-day effects that may accumulate over time.”

“Overall, both novel and familiar shared experiences were linked to greater relationship satisfaction, but the effect of familiar, comfortable experiences was larger (roughly two to three times larger) than that of novel experiences overall.”

Importantly, the benefits differed depending on a person’s attachment style. For individuals high in attachment avoidance, engaging in novel and exciting activities provided a specific benefit.

On days when avoidant individuals reported more novelty and excitement than usual, the typical link between their avoidant style and lower relationship satisfaction was weakened. The researchers found that these exciting activities increased perceptions of “relational reward.” This means the avoidant partners felt a sense of intimacy and connection that did not feel threatening or smothering. Familiar and comfortable activities did not provide this same buffering effect for avoidant individuals.

In contrast, individuals high in attachment anxiety derived the most benefit from familiar and comfortable experiences. On days marked by high levels of familiarity and comfort, the usual association between attachment anxiety and lower relationship satisfaction disappeared entirely. The study suggests that these low-stakes, comforting interactions help reduce negative emotions for anxiously attached people.

Novel and exciting activities did not consistently buffer the relationship satisfaction of anxiously attached individuals. The researchers noted that while novelty is generally positive, it does not address the specific need for security that defines attachment anxiety. The calming nature of routine appears to be the key ingredient for soothing these specific fears.

“One thing that surprised us was how familiar and comfortable activities seemed to help people who are more anxiously attached,” Muise said. “We expected these experiences to work by lowering worries about rejection or judgment, but that wasn’t what we found. Instead, they seemed to help by lowering people’s overall negative mood.”

“This made us think more carefully about what comfort and routine might actually be doing emotionally. It’s possible that for people higher in attachment anxiety, familiar and comfortable time together helps them feel more secure, and that sense of security is what supports relationship satisfaction. We weren’t able to test that directly in this study, but it’s an important direction for future work.”

The researchers also examined how one person’s attachment style affected their partner’s satisfaction. The results showed that when a person had a highly avoidant partner, they reported higher satisfaction on days they shared novel and exciting experiences. Conversely, when a person had a highly anxious partner, they reported higher satisfaction on days filled with familiar and comfortable activities. This indicates that tailoring activities benefits both the insecure individual and their romantic partner.

“The main takeaway is that there is no single ‘right’ way to spend time together that works for all couples,” Muise explained. “What matters is whether shared experiences align with people’s emotional needs. For people who are more avoidantly attached, doing something novel or exciting together (something that feels new and fun rather than overtly intimate) can make the relationship feel more rewarding and satisfying.”

“For people who are more anxiously attached, familiar and comfortable time together seems especially important for maintaining satisfaction. These findings suggest that tailoring shared time, rather than maximizing novelty or excitement per se, may be a more effective way to support relationship well-being.”

While the findings offer practical insights, the study has certain limitations. The research relied on daily diary entries, which are correlational. This means that while the researchers can observe a link between specific activities and higher satisfaction, they cannot definitively prove that the activities caused the satisfaction. It is possible that feeling satisfied makes a couple more likely to engage in fun or comfortable activities.

“Another potential misinterpretation is that novelty is ‘bad’ for anxiously attached people or that comfort is ‘bad’ for avoidantly attached people,” Muise clarified. “That is not what we found. Both types of experiences were generally associated with higher satisfaction; the difference lies in when they are most helpful for buffering insecurity, not whether they are beneficial at all.”

Future research is needed to determine if these daily buffering effects lead to long-term improvements in attachment security. The scientists also hope to investigate who initiates these activities and whether the motivation behind them impacts their effectiveness. For now, the data suggests that checking in on a partner’s emotional needs might be the best guide for planning the next date night.

“One long-term goal is to understand whether these day-to-day buffering effects can lead to longer-term changes in attachment security,” Muise said. “If couples repeatedly engage in the ‘right’ kinds of shared experiences, could that have implications for how attachment insecurity evolves over time?”

“Another direction is to examine how these experiences are initiated. Who suggests the activity, and whether it feels voluntary or pressured, might matter for whether certain experiences are associated with satisfaction.”

“One thing I really appreciate about this study is that it allowed us to look at both partners’ experiences,” Muise added. “The partner effects suggest that tailoring shared experiences doesn’t only benefit the person who is more insecure; it is also associated with how their partner feels about the relationship. Overall, engaging in shared experiences that were aligned with one partner’s attachment needs has benefits for both partners.”

The study, “Novel and Exciting or Tried and True? Tailoring Shared Relationship Experiences to Insecurely Attached Partners,” was authored by Kristina M. Schrage, Emily A. Impett, Mustafa Anil Topal, Cheryl Harasymchuk, and Amy Muise.

Bias against AI art is so deep it changes how viewers perceive color and brightness

13 February 2026 at 15:00

New research suggests that simply labeling an artwork as created by artificial intelligence can reduce how much people enjoy and value it. This bias appears to affect not just how viewers interpret the meaning of the art, but even how they process basic visual features like color and brightness. The findings were published in the Psychology of Aesthetics, Creativity, and the Arts.

Artificial intelligence has rapidly become a common tool for visual artists. Artists use technologies ranging from text-to-image generators to robotic arms to produce new forms of imagery. Despite this widespread adoption, audiences often react negatively when they learn technology was involved in the creative process.

Alwin de Rooij, an assistant professor at Tilburg University and associate professor at Avans University of Applied Sciences, sought to understand the consistency of this negative reaction. De Rooij aimed to determine if this bias occurs across different psychological systems involved in viewing art. The researcher also wanted to see if this negative reaction is a permanent structural phenomenon or if it varies by context.

“AI-generated images can now be nearly indistinguishable from art made without AI, yet both public debate and scientific studies suggest that people may respond differently once they are told AI was involved,” de Rooij told PsyPost. “These reactions resemble earlier anxieties around new technologies in art, such as the introduction of photography in the nineteenth century, which is now a fully established art form. This raised the question of how consistent bias against AI in visual art is, and whether it might already be changing.”

To examine this, De Rooij conducted a meta-analysis. This statistical technique combines data from multiple independent studies to find overall trends that a single experiment might miss. The researcher performed a systematic search for experiments published between January 2017 and September 2024.

The analysis included studies where participants viewed visual art and were told it was made by AI. These responses were compared to responses for art labeled as human-made or art presented with no label. The researcher extracted 191 distinct effect sizes from the selected studies.
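
For readers curious how individual effect sizes get combined, the sketch below shows one standard random-effects pooling procedure (DerSimonian-Laird) applied to invented numbers. It is purely illustrative and does not reproduce the models or data used in the paper.

```python
# Hedged sketch of pooling effect sizes with a DerSimonian-Laird
# random-effects model. The effect sizes and variances are made up.
import numpy as np

effects = np.array([-0.30, -0.15, -0.45, -0.10])   # hypothetical effect sizes
variances = np.array([0.02, 0.03, 0.04, 0.05])     # their sampling variances

# Fixed-effect weights, pooled mean, and Q statistic (heterogeneity).
w = 1.0 / variances
mean_fe = np.sum(w * effects) / np.sum(w)
q = np.sum(w * (effects - mean_fe) ** 2)
df = len(effects) - 1

# Between-study variance (tau^2), truncated at zero.
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights and pooled estimate.
w_re = 1.0 / (variances + tau2)
pooled = np.sum(w_re * effects) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled effect = {pooled:.3f} (SE = {se:.3f}, tau^2 = {tau2:.3f})")
```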

De Rooij categorized these measurements using a framework known as the Aesthetic Triad model. This model organizes the art experience into three specific systems. The first is the sensory-motor system, which deals with basic visual processing. The second is the knowledge-meaning system, which involves interpretation and context. The third is the emotion-valuation system, which covers subjective feelings and personal preferences.

The investigation revealed that knowing AI was used generally diminishes the aesthetic experience. A small but significant negative effect appeared within the sensory-motor system. This system involves the initial processing of visual features such as color, shape, and spatial relationships. When viewers believed an image was AI-generated, they tended to perceive these basic qualities less favorably.

A moderate negative effect appeared in the knowledge-meaning system. This aspect of the aesthetic experience relates to how people interpret an artwork’s intent. It also includes judgments about the skill required to make the piece. Participants consistently attributed less profundity and creativity to works labeled as artificial intelligence.

The researcher also found a small negative effect in the emotion-valuation system. This system governs subjective feelings of beauty, awe, and liking. Viewers tended to report lower emotional connection when they thought AI was responsible for the work. They also rated these works as less beautiful compared to identical works labeled as human-made.

“The main takeaway is that knowing AI was involved in making an artwork can change how we experience it, even when the artwork itself is identical,” de Rooij explained. “People tend to attribute less meaning and value to art once it is labeled as AI-made, not because it looks worse, but because it is interpreted differently. In some cases, this bias even feeds into basic visual judgments, such as how colorful or vivid an image appears. This shows that bias against AI is not just an abstract opinion about technology. It can deeply shape the aesthetic experience itself.”

But these negative responses were not uniform across all people. The researcher identified age as a significant factor in the severity of the bias. Older participants demonstrated a stronger negative reaction to AI art. Younger audiences showed much weaker negative effects.

This difference suggests a possible generational shift in how people perceive technology in art. Younger viewers may be less troubled by the integration of algorithms in the creative process. The style of the artwork also influenced viewer reactions.

Representational art, which depicts recognizable objects, reduced the negative bias regarding meaning compared to abstract art. However, representational art worsened the bias regarding emotional connection. The setting of the study mattered as well. Experiments conducted online produced stronger evidence of bias than those conducted in laboratories or real-world galleries.

“Another surprising finding was how unstable the bias is,” de Rooij said. “Rather than being a fixed reaction, it varies across audiences and contexts. As mentioned earlier, the bias tends to be stronger among older populations, but the results show it is also influenced by the style of the artworks and by how and where they are presented. In some settings, the bias becomes very weak or nearly disappears. This further supports the observation that, much like earlier reactions to new technologies in art, resistance to AI may be transitional rather than permanent.”

A key limitation involves how previous experiments presented artificial intelligence. Many studies framed the technology as an autonomous agent that created art independently. This description often conflicts with real-world artistic practice.

“The practical significance of these findings needs to be critically examined,” de Rooij noted. “Many of the studies included in the meta-analysis frame AI as if it were an autonomous artist, which does not reflect artistic practice, where AI is typically used as a responsive material. The AI-as-artist framing evokes dystopian imaginaries about AI replacing human artists or threatening the humanity in art. As a result, some studies may elicit stronger negative responses to AI, but in a way that has no clear real-world counterpart.”

Future research should investigate the role of invisible human involvement in AI art. De Rooij plans to conduct follow-up studies.

“The next step is to study bias against AI in art in more realistic settings, such as galleries or museums, and in ways that better reflect how artists actually use AI in their creative practice,” de Rooij said. “This is a reaction to the finding that bias against AI seemed particularly strong in online studies, which merits verification of the bias in real-world settings. This proposed follow-up research has recently received funding from the Dutch Research Council, and the first results are expected in late 2026. We are excited about moving this work forward!”

The study, “Bias against artificial intelligence in visual art: A meta-analysis,” was authored by Alwin de Rooij.

Younger women find men with beards less attractive than older women do

13 February 2026 at 05:00

A new study published in Adaptive Human Behavior and Physiology suggests that a woman’s age and reproductive status may influence her preferences for male physical traits. The research indicates that postmenopausal women perceive certain masculine characteristics, such as body shape and facial features, differently than women who are still in their reproductive years. These findings offer evidence that biological shifts associated with menopause might alter the criteria women use to evaluate potential partners.

Scientists have recognized that physical features act as powerful biological signals in human communication. Secondary sexual characteristics are traits that appear during puberty and visually distinguish men from women. These include features such as broad shoulders, facial hair, jawline definition, and muscle mass.

Evolutionary psychology suggests that these traits serve as indicators of health and genetic quality. For instance, a muscular physique or a strong jawline often signals high testosterone levels and physical strength. Women of reproductive age typically prioritize these markers because they imply that a potential partner possesses “good genes” that could be passed to offspring.

However, researchers have historically focused most of their attention on the preferences of young women. Less is known about how these preferences might change as women age and lose their reproductive capability. The biological transition of menopause involves significant hormonal changes, including a decrease in estrogen levels.

This hormonal shift may correspond to a change in mating strategies. The “Grandmother Hypothesis” proposes that older women shift their focus from reproduction to investing in their existing family line. Consequently, they may no longer prioritize high-testosterone traits, which can be associated with aggression or short-term mating.

Instead, older women might prioritize traits that signal cooperation, reliability, and long-term companionship. To test this theory, a team of researchers from Poland designed a study to compare the preferences of women at different stages of life. The research team included Aurelia Starzyńska and Łukasz Pawelec from the Wroclaw University of Environmental and Life Sciences and the University of Warsaw, alongside Maja Pietras from Wroclaw Medical University and the University of Wroclaw.

The researchers recruited 122 Polish women to participate in an online survey. The participants ranged in age from 19 to 70 years old. Based on their survey responses regarding menstrual regularity and history, the researchers categorized the women into three groups.

The first group was premenopausal, consisting of women with regular reproductive functions. The second group was perimenopausal, including women experiencing the onset of menopausal symptoms and irregular cycles. The third group was postmenopausal, defined as women whose menstrual cycles had ceased for at least one year.

To assess preferences, the researchers created a specific set of visual stimuli. They started with photographs of a single 22-year-old male model. Using photo-editing applications, they digitally manipulated the images to create distinct variations in appearance.

The researchers modified the model’s face to appear either more feminized, intermediate, or heavily masculinized. They also altered the model’s facial hair to show a clean-shaven look, light stubble, or a full beard.

Body shape was another variable manipulated in the study. The scientists adjusted the hip-to-shoulder ratio to create three silhouette types: V-shaped, H-shaped, and A-shaped. Finally, they modified the model’s musculature to display non-muscular, moderately muscular, or strongly muscular builds.

Participants viewed these twelve modified images and rated them on a scale from one to ten. They evaluated the man in the photos based on three specific criteria. The first criterion was physical attractiveness.

The second and third criteria involved personality assessments. The women rated how aggressive they perceived the man to be. They also rated the man’s perceived level of social dominance.

The results showed that a woman’s reproductive status does influence her perception of attractiveness. One significant finding related to the shape of the male torso. Postmenopausal women rated the V-shaped body, which is typically characterized by broad shoulders and narrow hips, as less attractive than other shapes.

This contrasts with general evolutionary expectations where the V-shape is a classic indicator of male fitness. The data suggests that as women exit their reproductive years, the appeal of this strong biological signal may diminish.

Age also played a distinct role in how women viewed facial hair. The study found that older women rated men with medium to full beards as more attractive compared to younger women. This preference for beards increased with the age of the participant.

The researchers suggest that beards might signal maturity and social status rather than just raw genetic fitness. Younger women in the study showed a lower preference for beards. This might occur because facial hair can mask other facial features that young women use to assess mate quality.

The study produced complex results regarding facial masculinity. Chronological age showed a slight positive association with finding feminized faces attractive. This aligns with the idea that older women might prefer “softer” features associated with cooperation.

However, when isolating the specific biological factor of menopause, the results shifted. Postmenopausal women rated feminized faces as less attractive than premenopausal women did. This indicates that the relationship between aging and facial preference is not entirely linear.

Perceptions of aggression also varied by group. Postmenopausal women rated men with medium muscularity as more aggressive than men with other body types. This association was not present in the younger groups.

The researchers propose that older women might view visible musculature as a signal of potential threat rather than protection. Younger women, who are more likely to seek a partner for reproduction, may view muscles as a positive sign of health and defense.

Interestingly, the study found no significant connection between the physical traits and perceived social dominance. Neither the age of the women nor their menopausal status affected how they rated a man’s dominance. This suggests that while attractiveness and aggression are linked to physical cues, dominance might be evaluated through other means not captured in static photos.

The study, like all research, has limitations. One issue involved the method used to find participants, known as snowball sampling. In this process, existing participants recruit future subjects from among their own acquaintances. This method may have resulted in a sample that is not fully representative of the general population.

Reliance on online surveys also introduces a technology bias. Older women who are less comfortable with the internet may have been excluded from the study. This could skew the results for the postmenopausal group.

Another limitation involved the stimuli used. The photographs were all based on a single 22-year-old male model. This young age might not be relevant or appealing to women in their 50s, 60s, or 70s. Postmenopausal women might naturally prefer older men, and evaluating a man in his early twenties could introduce an age-appropriateness bias. The researchers acknowledge that future studies should use models of various ages to ensure more accurate ratings.

Despite these limitations, the study provides evidence that biological changes in women influence social perception. The findings support the concept that mating psychology evolves across the lifespan. As the biological need for “good genes” fades, women appear to adjust their criteria for what makes a man attractive.

The study, “The Perception of Women of Different Ages of Men’s Physical attractiveness, Aggression and Social Dominance Based on Male Secondary Sexual Characteristics,” was authored by Aurelia Starzyńska, Maja Pietras, and Łukasz Pawelec.

Genetic risk for depression predicts financial struggles, but the cause isn’t what scientists thought

13 February 2026 at 05:00

A new study published in the Journal of Psychopathology and Clinical Science offers a nuanced look at how genetic risk for depression interacts with social and economic life circumstances to influence mental health over time. The findings indicate that while people with a higher genetic liability for depression often experience financial and educational challenges, these challenges may not be directly caused by the genetic risk itself.

Scientists conducted the study to better understand the developmental pathways that lead to depressive symptoms. A major theory in psychology, known as the bioecological model, proposes that genetic predispositions do not operate in a vacuum. Instead, this model suggests that a person’s genetic makeup might shape the environments they select or experience. For example, a genetic tendency toward low mood or low energy might make it harder for an individual to complete higher education or maintain steady employment.

If this theory holds true, those missed opportunities could lead to financial strain or a lack of social resources. These environmental stressors would then feed back into the person’s life, potentially worsening their mental health. The researchers aimed to test whether this specific chain of events is supported by data. They sought to determine if genetic risk for depression predicts changes in depressive symptoms specifically by influencing socioeconomic factors like wealth, debt, and education.

To investigate these questions, the researchers utilized data from two massive, long-term projects in the United States. The first dataset came from the National Longitudinal Study of Adolescent Health, also known as Add Health. This sample included 5,690 participants who provided DNA samples. The researchers tracked these individuals from adolescence, starting around age 16, into early adulthood, ending around age 29.

The second dataset served as a replication effort to see if the findings would hold up in a different group. This sample came from the Wisconsin Longitudinal Study, or WLS, which included 8,964 participants. Unlike the younger cohort in Add Health, the WLS participants were tracked across a decade in mid-to-late life, roughly from age 53 to 64. Using two different age groups allowed the scientists to see if these patterns persisted across the lifespan.

For both groups, the researchers calculated a “polygenic index” for each participant. This is a personalized score that summarizes thousands of tiny genetic variations across the entire genome that are statistically associated with depressive symptoms. A higher score indicates a higher genetic probability of experiencing depression. The researchers then measured four specific socioeconomic resources: educational attainment, total financial assets, total debt, and access to health insurance.
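
As a rough illustration of how such an index is typically constructed, the sketch below computes a toy polygenic score as a weighted sum of allele counts. The variants, weights, and people are invented for demonstration; real scores aggregate thousands to millions of variants with additional adjustments such as ancestry-matched standardization.

```python
# Minimal sketch of a polygenic index: a weighted sum of allele counts,
# with weights taken from a prior genome-wide association study.
# All numbers below are hypothetical.
import numpy as np

# Each row is a person, each column a variant (0, 1, or 2 copies of the
# effect allele).
allele_counts = np.array([
    [0, 1, 2, 1],
    [2, 0, 1, 0],
    [1, 1, 1, 2],
])

# Per-variant weights (illustrative GWAS effect sizes for depressive symptoms).
weights = np.array([0.02, -0.01, 0.03, 0.015])

raw_score = allele_counts @ weights

# Standardize within the sample so scores are comparable (mean 0, SD 1).
polygenic_index = (raw_score - raw_score.mean()) / raw_score.std()
print(polygenic_index)
```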

In the initial phase of the analysis, the researchers looked at the population as a whole. This is called a “between-family” analysis because it compares unrelated individuals against one another. In the Add Health sample, they found that higher genetic risk for depression was indeed associated with increases in depressive symptoms over the 12-year period.

The data showed that this link was partially explained by the socioeconomic variables. Participants with higher genetic risk tended to have lower educational attainment, fewer assets, more debt, and more difficulty maintaining health insurance. These difficult life circumstances, in turn, were associated with rising levels of depression.

The researchers then repeated this between-family analysis in the older Wisconsin cohort. The results were largely consistent. Higher genetic risk predicted increases in depression symptoms over the decade. Once again, this association appeared to be mediated by the same social factors. Specifically, participants with higher genetic risk reported lower net worth and were more likely to have gone deeply into debt or experienced healthcare difficulties.

These results initially seemed to support the idea that depression genes cause real-world problems that then cause more depression. However, the researchers took a significant additional step to test for causality. They performed a “within-family” analysis using siblings included in the Wisconsin study.

Comparing siblings provides a much stricter test of cause and effect. Siblings share roughly 50 percent of their DNA and grow up in the same household, which controls for many environmental factors like parenting style and childhood socioeconomic status. If the genetic risk for depression truly causes a person to acquire more debt or achieve less education, the sibling with the higher polygenic score should have worse economic outcomes than the sibling with the lower score.
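
The logic of that sibling comparison can be illustrated in a few lines of code: regress the within-pair difference in an outcome on the within-pair difference in genetic risk, so that anything shared by the family drops out. The sketch below uses made-up numbers and is not the authors’ statistical model.

```python
# Hedged sketch of a sibling-difference ("within-family") test.
# Data are hypothetical: one row per sibling pair.
import numpy as np

pgi = np.array([[0.5, -0.2], [1.1, 0.3], [-0.4, -0.9], [0.0, 0.6]])    # polygenic scores
debt = np.array([[12.0, 10.5], [15.0, 14.0], [8.0, 9.5], [11.0, 13.0]])  # an outcome, e.g. debt

d_pgi = pgi[:, 0] - pgi[:, 1]      # within-pair difference in genetic risk
d_debt = debt[:, 0] - debt[:, 1]   # within-pair difference in the outcome

# Least-squares slope through the origin: does the higher-risk sibling
# systematically have a worse outcome than their co-sibling?
slope = np.sum(d_pgi * d_debt) / np.sum(d_pgi ** 2)
print(f"within-family slope: {slope:.3f}")
```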

When the researchers applied this sibling-comparison model, the findings changed. Within families, the sibling with higher genetic risk did report more depressive symptoms. This confirms that the genetic score is picking up on a real biological vulnerability. However, the link between the depression genetic score and the socioeconomic factors largely disappeared.

The sibling with higher genetic risk for depression was not significantly more likely to have lower education, less wealth, or more debt than their co-sibling. This lack of association in the sibling model suggests that the genetic risk for depression does not directly cause these negative socioeconomic outcomes. Instead, the correlation seen in the general population is likely due to other shared factors.

One potential explanation for the discrepancy involves a concept called pleiotropy, where the same genes influence multiple traits. The researchers conducted sensitivity analyses that accounted for genetic scores related to educational attainment. They found that once they controlled for the genetics of education, the apparent link between depression genes and socioeconomic status vanished.

This suggests that the same genetic variations that influence how far someone goes in school might also be correlated with depression risk. It implies that low education or financial struggle is not necessarily a downstream consequence of depression risk, but rather that both depression and socioeconomic struggles may share common genetic roots or be influenced by broader family environments.

The study has some limitations. Both datasets were comprised almost entirely of individuals of European ancestry. This lack of diversity means the results may not apply to people of other racial or ethnic backgrounds. Additionally, the measures of debt and insurance were limited to the questions available in these pre-existing surveys. They may not have captured the full nuance of financial stress.

Furthermore, while sibling models help rule out family-wide environmental factors, they cannot account for every unique experience a person has. Future research is needed to explore how these genetic risks interact with specific life events, such as trauma or job loss, which were not the primary focus of this investigation. The researchers also note that debt and medical insurance difficulties are understudied in this field and deserve more detailed attention in future work.

The study, “Genotypic and Socioeconomic Risks for Depressive Symptoms in Two U.S. Cohorts Spanning Early to Older Adulthood,” was authored by David A. Sbarra, Sam Trejo, K. Paige Harden, Jeffrey C. Oliver, and Yann C. Klimentidis.

Evening screen use may be more relaxing than stimulating for teenagers

13 February 2026 at 03:00

A recent study published in the Journal of Sleep Research suggests that evening screen use might not be as physically stimulating for teenagers as many parents and experts have assumed. The findings provide evidence that most digital activities actually coincide with lower heart rates compared to non-screen activities like moving around the house or playing. This indicates that the common connection between screens and poor sleep is likely driven by the timing of device use rather than a state of high physical arousal.

Adolescence is a time when establishing healthy sleep patterns is essential for mental health and growth, yet many young people fall short of the recommended eight to ten hours of sleep. While screen use has been linked to shorter sleep times, the specific reasons why this happens are not yet fully understood.

Existing research has looked at several possibilities, such as the light from screens affecting hormones or the simple fact that screens take up time that could be spent sleeping. Some experts have also worried that the excitement from social media or gaming could keep the body in an active state that prevents relaxation. The new study was designed to investigate the physical arousal theory by looking at heart rate in real-world settings rather than in a laboratory.

“In our previous research, we found that screen use in bed was linked with shorter sleep, largely because teens were falling asleep later. But that left an open question: were screens simply delaying bedtime, or were they physiologically stimulating adolescents in a way that made it harder to fall asleep?” said study author Kim Meredith-Jones, a research associate professor at the University of Otago.

“In this study, we wanted to test whether evening screen use actually increased heart rate — a marker of physiological arousal — and whether that arousal explained delays in falling asleep. In other words, is it what teens are doing on screens that matters, or just the fact that screens are replacing sleep time?”

By using objective tools to track both what teens do on their screens and how their hearts respond, the team hoped to fill gaps in existing knowledge. They aimed to see if different types of digital content, such as texting versus scrolling, had different effects on the heart. Understanding these connections is important for creating better guidelines for digital health in young people.

The research team recruited a group of 70 adolescents from Dunedin, New Zealand, who were between 11 and nearly 15 years old. This sample was designed to be diverse, featuring 31 girls and 39 boys from various backgrounds. Approximately 33 percent of the participants identified as indigenous Māori, while others came from Pacific, Asian, or European backgrounds.

To capture a detailed look at their evening habits, the researchers used a combination of wearable technology and video recordings over four different nights. Each participant wore a high-resolution camera attached to a chest harness starting three hours before their usual bedtime. This camera recorded exactly what they were doing and what screens they were viewing until they entered their beds.

Once the participants were in bed, a stationary camera continued to record their activities until they fell asleep. This allowed the researchers to see if they used devices while under the covers and exactly when they closed their eyes. The video data was then analyzed by trained coders who categorized screen use into ten specific behaviors, such as watching videos, gaming, or using social media.

The researchers also categorized activities as either passive or interactive. Passive activities included watching, listening, reading, or browsing, while interactive activities included gaming, communication, and multitasking. Social media use was analyzed separately to see its specific impact on heart rate compared to other activities.

At the same time, the participants wore a Fitbit Inspire 2 on their dominant wrist to track their heart rate every few seconds. The researchers used this information to see how the heart reacted to each specific screen activity in real time. This objective measurement provided a more accurate picture than asking the teenagers to remember how they felt or what they did.

To measure sleep quality and duration, each youth also wore a motion-sensing device on their other wrist for seven consecutive days. This tool, known as an accelerometer, provided data on when they actually fell asleep and how many times they woke up. The researchers then used statistical models to see if heart rate patterns during screen time could predict these sleep outcomes.

The data revealed that heart rates were consistently higher during periods when the teenagers were not using screens. The average heart rate during non-screen activities was approximately 93 beats per minute, which likely reflects the physical effort of moving around or doing chores. In contrast, when the participants were using their devices, their average heart rate dropped to about 83 beats per minute.

This suggests that screen use is often a sedentary behavior that allows the body to stay relatively calm. When the participants were in bed, the difference was less extreme, but screen use still tended to accompany lower heart rates than other in-bed activities. These findings indicate that digital engagement may function as a way for teenagers to wind down after a long day.

The researchers also looked at how specific types of digital content affected the heart. Social media use was associated with the lowest heart rates, especially when the teenagers were already in bed. Gaming and multitasking between different apps also showed lower heart rate readings compared to other screen-based tasks.

“We were surprised to find that heart rates were lower during social media use,” Meredith-Jones told PsyPost. “Previous research has suggested that social media can be stressful or emotionally intense for adolescents, so we expected to see higher arousal. Instead, our findings suggest that in this context, teens may have been using social media as a way to unwind or switch off. That said, how we define and measure ‘social media use’ matters, and we’re now working on more refined ways to capture the context and type of engagement.”

On the other hand, activities involving communication, such as texting or messaging, were linked to higher heart rates. This type of interaction seemed to be less conducive to relaxation than scrolling through feeds or watching videos. Even so, the heart rate differences between these various digital activities were relatively small.

When examining sleep patterns, the researchers found that heart rate earlier in the evening had a different relationship with sleep than heart rate closer to bedtime. Higher heart rates occurring more than two hours before bed were linked to falling asleep earlier in the night. This may be because higher activity levels in the early evening help the body build up a need for rest.

However, the heart rate in the two hours before bed and while in bed had the opposite effect on falling asleep. For every increase of 10 beats per minute during this window, the participants took about nine minutes longer to drift off. This provides evidence that physical excitement right before bed can delay the start of sleep.
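
Applying that reported slope makes the size of the effect concrete. The back-of-envelope calculation below simply extrapolates the roughly nine-minutes-per-10-beats figure from the article; it is an illustration of those numbers, not an analysis from the paper.

```python
# Extrapolating the reported slope: about nine extra minutes of sleep-onset
# delay per 10 bpm increase in pre-bed heart rate (figure from the article).
minutes_per_bpm = 9 / 10  # ~0.9 minutes of delay per extra beat per minute

for bpm_increase in (10, 30):
    delay = bpm_increase * minutes_per_bpm
    print(f"+{bpm_increase} bpm -> about {delay:.0f} minutes later sleep onset")
# A 30 bpm rise works out to roughly half an hour -- far larger than the
# ~10 bpm differences observed between screen activities.
```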

Notably, while a higher heart rate made it harder to fall asleep, it did not seem to reduce the total amount of sleep the teenagers got. It also did not affect how often they woke up during the night or the general quality of their rest. The researchers noted that a person would likely need a very large increase in heart rate to see a major impact on their sleep schedule.

“The effects were relatively small,” Meredith-Jones explained. “For example, our data suggest heart rate would need to increase by around 30 beats per minute to delay sleep onset by about 30 minutes. The largest differences we observed between screen activities were closer to 10 beats per minute, making it unlikely that typical screen use would meaningfully delay sleep through physiological arousal alone.”

“The key takeaway is that most screen use in the evening did not increase heart rate. In fact, many types of screen activity were associated with lower heart rates compared to non-screen time. Although higher heart rate before bed was linked with taking longer to fall asleep, the changes in heart rate we observed during screen use were generally small. Overall, most evening screen activities appeared more relaxing than arousing.”

One limitation of this study is that the researchers did not have a baseline heart rate for each participant while they were completely at rest. Without this information, it is difficult to say for certain if screens were actively lowering the heart rate or if the teens were just naturally calm. Individual differences in biology could account for some of the variations seen in the data.

“One strength of this study was our use of wearable cameras to objectively classify screen behaviours such as gaming, social media, and communication,” Meredith-Jones noted. “This approach provides much richer and more accurate data than self-report questionnaires or simple screen-time analytics. However, a limitation is that we did not measure each participant’s true resting heart rate, so we can’t definitively say whether higher heart rates reflected arousal above baseline or just individual differences. That’s an important area for refinement in future research.”

It is also important to note that the findings don’t imply that screens are always helpful for sleep. Even if they are not physically arousing, using a device late at night can still lead to sleep displacement. This happens when the time spent on a screen replaces time that would otherwise be spent sleeping, leading to tiredness the next day. On the other hand, one shouldn’t assume that screens always impede sleep, either.

“A common assumption is that all screen use is inherently harmful for sleep,” Meredith-Jones explained. “Our findings don’t support that blanket statement. In earlier work, we found that screen use in bed was associated with shorter sleep duration, but in this study, most screen use was not physiologically stimulating. That suggests timing and context matter, and that some forms of screen use may even serve as a wind-down activity before bed.”

Looking ahead, “we want to better distinguish between different types of screen use, for example, interactive versus passive engagement, or emotionally charged versus neutral communication,” Meredith-Jones said. “We’re also developing improved real-world measurement tools that can capture not just how long teens use screens, but what they’re doing, how they’re engaging, and in what context. That level of detail is likely to give us much clearer answers than simple ‘screen time’ totals.”

The study, “Screens, Teens, and Sleep: Is the Impact of Nighttime Screen Use on Sleep Driven by Physiological Arousal?” was authored by Kim A. Meredith-Jones, Jillian J. Haszard, Barbara C. Galland, Shay-Ruby Wickham, Bradley J. Brosnan, Takiwai Russell-Camp, and Rachael W. Taylor.

Methamphetamine increases motivation through brain processes separate from euphoria

12 February 2026 at 19:00

A study published in the journal Psychopharmacology has found that the increase in motivation people experience from methamphetamine is separate from the drug’s ability to produce a euphoric high. The findings suggest that these two common effects of stimulant drugs likely involve different underlying biological processes in the brain. This research indicates that a person might become more willing to work hard without necessarily feeling a greater sense of pleasure or well-being.

The researchers conducted the new study to clarify how stimulants affect human motivation and personal feelings. They intended to understand if the pleasurable high people experience while taking these drugs is the primary reason they become more willing to work for rewards. By separating these effects, the team aimed to gain insight into how drugs could potentially be used to treat motivation-related issues without causing addictive euphoria.

Another reason for the study was to investigate how individual differences in personality or brain chemistry change how a person responds to a stimulant. Scientists wanted to see if people who are naturally less motivated benefit more from these drugs than those who are already highly driven. The team also sought to determine if the drug makes tasks feel easier or if it simply makes the final reward seem more attractive to the user.

“Stimulant drugs like amphetamine are thought to produce ‘rewarding’ effects that contribute to abuse or dependence, by increasing levels of the neurotransmitter dopamine. Findings from animal models suggest that stimulant drugs, perhaps because of their effects on dopamine, increase motivation, or the animals’ willingness to exert effort,” explained study author Harriet de Wit, a professor at the University of Chicago.

“Findings from human studies suggest that stimulant drugs lead to repeated use because they produce subjective feelings of wellbeing. In the present study, we tested the effects of amphetamine in healthy volunteers, on both an effort task and self-reported euphoria.”

For their study, the researchers recruited a group of 96 healthy adults from the Chicago area. This group consisted of 48 men and 48 women between the ages of 18 and 35. Each volunteer underwent a rigorous screening process that included a physical exam, a heart health check, and a psychiatric interview to ensure they were healthy.

The study used a double-blind, placebo-controlled design to ensure the results were accurate and unbiased. This means that neither the participants nor the staff knew if a volunteer received the actual drug or an inactive pill on a given day. The participants attended two separate laboratory sessions where they received either 20 milligrams of methamphetamine or a placebo.

During these sessions, the participants completed a specific exercise called the Effort Expenditure for Rewards Task. This task required them to choose between an easy option for a small amount of money or a more difficult option for a larger reward. The researchers used this to measure how much physical effort a person was willing to put in to get a better payoff.

The easy task involved pressing a specific key on a keyboard 30 times with the index finger of the dominant hand within seven seconds. Successfully completing this task always resulted in a small reward of one dollar. This served as a baseline for the minimum amount of effort a person was willing to expend for a guaranteed but small gain.

The hard task required participants to press a different key 100 times using the pinky finger of their non-dominant hand within 21 seconds. The rewards for this more difficult task varied from about one dollar and 24 cents to over four dollars. This task was designed to be physically taxing and required a higher level of commitment to complete.

Before making their choice on each trial, participants were informed of the probability that they would actually receive the money if they finished the task. These probabilities were set at 12 percent, 50 percent, or 88 percent. This added a layer of risk to the decision, as a person might work hard for a reward but still receive nothing if the odds were not in their favor.
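
To see how these payoffs and probabilities interact, the short sketch below computes the expected payoff of the hard option at each probability level and compares it with the easy option's fixed dollar. It is purely illustrative: the specific hard-task reward amounts are placeholder values consistent with the reported range, not the exact amounts used in the study.

```python
# Minimal illustration of the trial payoff structure described above.
# The hard-task reward amounts below are placeholders within the reported
# range ($1.24 to just over $4); they are not the study's exact values.

EASY_REWARD = 1.00                      # easy task: small, essentially guaranteed payoff
HARD_REWARDS = [1.24, 2.50, 4.00]       # assumed example amounts for the hard task
WIN_PROBABILITIES = [0.12, 0.50, 0.88]  # chance the hard-task reward is actually paid out

for p in WIN_PROBABILITIES:
    for reward in HARD_REWARDS:
        expected_hard = p * reward      # expected payoff of completing the hard task
        print(f"p={p:.2f}  hard reward=${reward:.2f}  "
              f"expected hard payoff=${expected_hard:.2f}  vs easy ${EASY_REWARD:.2f}")
```

At the lowest probability level, even a large hard-task reward carries only a modest expected payoff, which is precisely where a person's willingness to exert effort matters most for the decision.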

Throughout the four-hour sessions, the researchers measured the participants’ personal feelings and physical reactions at regular intervals. They used standardized questionnaires to track how much the participants liked the effects of the drug and how much euphoria they felt. They also monitored physical signs such as heart rate and blood pressure to ensure the safety of the volunteers.

Before the main sessions, the participants completed the task during an orientation to establish their natural effort levels. The researchers then divided the group in half based on these baseline scores. This allowed the team to compare people who were naturally inclined to work hard against those who were naturally less likely to choose the difficult task.

The results showed that methamphetamine increased the frequency with which people chose the hard task over the easy one across the whole group. This effect was most visible when the chances of winning the reward were in the low to medium range. The drug seemed to give participants a boost in motivation when the outcome was somewhat uncertain.

The data provides evidence that the drug had a much stronger impact on people who were naturally less motivated. Participants in the low baseline group showed a significantly larger increase in their willingness to choose the hard task compared to those in the high baseline group. For people who were already high achievers, the drug did not seem to provide much of an additional motivational boost.

To understand why the drug changed behavior, the researchers used a mathematical model to analyze the decision-making process. This model helped the team separate how much a person cares about the difficulty of a task from how much they value the reward itself. It provided a more detailed look at the internal trade-offs people make when deciding to work.

The model showed that methamphetamine specifically reduced a person’s sensitivity to the physical cost of effort. This suggests that the drug makes hard work feel less unpleasant or demanding than it normally would. Instead of making the reward seem more exciting, the drug appears to make the work itself feel less like a burden.
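
The article does not give the model's equations, but decision models of this kind typically assign each option a subjective value that weights the reward by its probability and subtracts a cost for effort, then convert the value difference into a choice probability. The sketch below is a generic illustration of that logic, not the authors' actual model; the parameter names, the linear cost term, and the softmax choice rule are assumptions.

```python
import math

def p_choose_hard(hard_reward: float, win_prob: float,
                  effort_sensitivity: float, probability_sensitivity: float,
                  easy_reward: float = 1.00, hard_effort_cost: float = 1.0,
                  inverse_temperature: float = 1.0) -> float:
    """Generic effort-discounting choice rule (illustrative, not the paper's model):
    each option gets a subjective value, and the value gap is passed through a
    softmax to give the probability of picking the hard task."""
    sv_hard = (probability_sensitivity * win_prob * hard_reward
               - effort_sensitivity * hard_effort_cost)
    sv_easy = easy_reward  # easy option treated here as a near-certain small payoff
    return 1.0 / (1.0 + math.exp(-inverse_temperature * (sv_hard - sv_easy)))

# Same reward and odds; only the effort-sensitivity parameter changes.
print(p_choose_hard(4.00, 0.50, effort_sensitivity=2.0, probability_sensitivity=1.0))  # ~0.27
print(p_choose_hard(4.00, 0.50, effort_sensitivity=0.5, probability_sensitivity=1.0))  # ~0.62
```

In this toy example, lowering the effort-sensitivity parameter raises the probability of choosing the hard task even though the reward and probability terms are unchanged, which is the pattern the study attributes to the drug.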

This change in effort sensitivity was primarily found in the participants who started with low motivation levels. For these individuals, the drug appeared to lower the mental or physical barriers that usually made them avoid the difficult option. In contrast, the drug did not significantly change the effort sensitivity of those who were already highly motivated.

Methamphetamine did not change how sensitive people were to the probability of winning the reward. This indicates that the drug affects the drive to work rather than changing how people calculate risks or perceive the odds of success. The volunteers still understood the chances of winning, but they were simply more willing to take on the harder work.

As the researchers expected, the drug increased feelings of happiness and euphoria in the participants. It also caused the usual physical changes associated with stimulants, such as an increase in heart rate and blood pressure. Most participants reported that they liked the effects of the drug while they were performing the tasks.

A major finding of the study is that the boost in mood was not related to the boost in productivity. The participants who felt the highest levels of euphoria were not the same people who showed the greatest increase in hard task choices. “This suggests that different receptor actions of amphetamine mediate willingness to exert effort and feelings of wellbeing,” de Wit explained.

There was no statistical correlation between how much a person liked the drug and how much more effort they were willing to exert. This provides evidence that the brain processes that create pleasure from stimulants are distinct from those that drive motivated behavior. A person can experience the motivational benefits of a stimulant without necessarily feeling the intense pleasure that often leads to drug misuse.

The findings highlight that “drugs have numerous behavioral and cognitive actions, which may be mediated by different neurotransmitter actions,” de Wit told PsyPost. “The purpose of research in this area is to disentangle which effects are relevant to misuse or dependence liability, and which might have clinical benefits, and what brain processes underlie the effects.”

The results also highlight the importance of considering a person’s starting point when predicting how they will respond to a medication. Because the drug helped the least motivated people the most, it suggests that these treatments might be most effective for those with a clear deficit in drive.

The study, like all research, has some limitations. The participants were all healthy young adults, so it is not clear if the results would be the same for older people or those with existing health conditions. A more diverse group of volunteers would be needed to see if these patterns apply to the general population.

The study only tested a single 20-milligram dose of methamphetamine given by mouth. It is possible that different doses or different ways of taking the drug might change the relationship between mood and behavior. Using a range of doses in future studies would help researchers see if there is a point where the mood and effort effects begin to overlap.

Another limitation is that the researchers did not directly look at the chemical changes inside the participants’ brains. While they believe dopamine is involved, they did not use brain imaging technology to confirm this directly. Future research could use specialized scans to see exactly which brain regions are active when these changes in motivation occur.

“The results open the door to further studies to determine what brain mechanisms underlie the two behavioral effects,” de Wit said.

The study, “Effects of methamphetamine on human effort task performance are unrelated to its subjective effects,” was authored by Evan C. Hahn, Hanna Molla, Jessica A. Cooper, Joseph DeBrosse, and Harriet de Wit.

AI boosts worker creativity only if they use specific thinking strategies

12 February 2026 at 15:00

A new study published in the Journal of Applied Psychology suggests that generative artificial intelligence can boost creativity among employees in professional settings. But the research indicates that these tools increase innovative output only when workers use specific mental strategies to manage their own thought processes.

Generative artificial intelligence is a type of technology that can produce new content such as text, images, or computer code. Large language models like ChatGPT or Google’s Gemini use massive datasets to predict and generate human-like responses to various prompts. Organizations often implement these tools with the expectation that they will help employees come up with novel and useful ideas. Many leaders believe that providing access to advanced technology will automatically lead to a more innovative workforce.

However, recent surveys indicate that only a small portion of workers feel that these tools actually improve their creative work. The researchers conducted the new study to see if the technology truly helps and to identify which specific factors make it effective. They also wanted to see how these tools function in a real office environment where people manage multiple projects at once. Most previous studies on this topic took place in artificial settings using only one isolated task.

“When ChatGPT was released in November 2022, generative AI quickly became part of daily conversation. Many companies rushed to integrate generative AI tools into their workflows, often expecting that this would make employees more creative and, ultimately, give organizations a competitive advantage,” said study author Shuhua Sun, who holds the Peter W. and Paul A. Callais Professorship in Entrepreneurship at Tulane University’s A. B. Freeman School of Business.

“What struck us, though, was how little direct evidence existed to support those expectations, especially in real workplaces. Early proof-of-concept studies in labs and online settings began to appear, but their results were mixed. Even more surprisingly, there were almost no randomized field experiments examining how generative AI actually affects employee creativity on the job.”

“At the same time, consulting firms started releasing large-scale surveys on generative AI adoption. These reports showed that only a small percentage of employees felt that using generative AI made them more creative. Taken together with the mixed lab/online findings, this raised a simple but important question for us: If generative AI is supposed to enhance creativity, why does it seem to help only some employees and not others? What are those employees doing differently?”

“That question shaped the core of our project. So, instead of asking simply whether generative AI boosts creativity, we wanted to understand how it does so and for whom. Driven by these questions, we developed a theory and tested it using a randomized field experiment in a real organizational setting.”

The researchers worked with a technology consulting firm in China to conduct their field experiment. This company was an ideal setting because consulting work requires employees to find unique solutions for many different clients. The study included a total of 250 nonmanagerial employees from departments such as technology, sales, and administration. These participants had an average age of about 30 years and most held university degrees.

The researchers randomly split the workers into two groups. The first group received access to ChatGPT accounts and was shown how to use the tool for their daily tasks. The second group served as a control and did not receive access to the artificial intelligence software during the study. To reduce concerns that might distort the results, the company told the first group that the technology was meant to assist them rather than replace them.

The experiment lasted for about one week. During this time, the researchers tracked how often the treated group used their new accounts. At the end of the week, the researchers collected data from several sources to measure the impact of the tool. They used surveys to ask employees about their work experiences and their thinking habits.

They also asked the employees’ direct supervisors to rate their creative performance. These supervisors did not know which employees were using the artificial intelligence tool. Additionally, the researchers used two external evaluators to judge specific ideas produced by the employees. These evaluators looked at how novel and useful the ideas were without knowing who wrote them.

The researchers looked at cognitive job resources, which are the tools and mental space people need to handle complex work. This includes having enough information and the ability to switch between hard and easy tasks. They also measured metacognitive strategies. This term describes how people actively monitor and adjust their own thinking to reach a goal.

A person with high metacognitive strategies might plan out their steps before starting a task. They also tend to check their own progress and change their approach if they are not making enough headway. The study suggests that the artificial intelligence tool increased the cognitive resources available to employees. The tool helped them find information quickly and allowed them to manage their mental energy more effectively.

The results show that the employees who had access to the technology generally received higher creativity ratings from their supervisors. The external evaluators also gave higher scores for novelty to the ideas produced by this group. The evidence suggests that the tool was most effective when workers already used strong metacognitive strategies. These workers were able to use the technology to fill specific gaps in their knowledge.

For employees who did not use these thinking strategies, the tool did not significantly improve their creative output. These individuals appeared to be less effective at using the technology to gain new resources. The study indicates that the tool provides the raw material for creativity, but the worker must know how to direct the process. Specifically, workers who monitored their own mental state knew when to use the tool to take a break or switch tasks.

This ability to switch tasks is important because it prevents a person from getting stuck on a single way of thinking. When the technology handled routine parts of a job, it gave workers more mental space to focus on complex problem solving. The researchers found that the positive effect of the technology became significant once a worker’s use of thinking strategies reached a certain level. Below that threshold, the tool did not provide a clear benefit for creativity.

The cognitive approach to creativity suggests that coming up with new ideas is a mental process of searching through different areas of knowledge. People must find pieces of information and then combine them in ways that have not been tried before. This process can be very demanding because people have a limited amount of time and mental energy. Researchers call this the knowledge burden.

It takes considerable effort to find, process, and understand new information from different fields. If a person spends all their energy just gathering facts, they may not have enough mental energy left to actually be creative. Artificial intelligence can help by taking over the task of searching for and summarizing information. This allows the human worker to focus on the higher-level task of combining those facts into something new.

Metacognition is essentially thinking about one’s own thinking. It involves a person being aware of what they know and what they do not know. When a worker uses metacognitive strategies, they act like a coach for their own brain. They ask themselves if their current plan is working or if they need to try a different path.

The study shows that this self-awareness is what allows a person to use artificial intelligence effectively. Instead of just accepting whatever the computer says, a strategic thinker uses the tool to test specific ideas. The statistical analysis revealed that the artificial intelligence tool provided workers with more room to think. This extra mental space came from having better access to knowledge and more chances to take mental breaks.

The researchers used a specific method called multilevel analysis to account for the way employees were organized within departments and teams. This helps ensure that the findings are not skewed by the influence of a single department or manager. The researchers also checked to see if other factors like past job performance or self-confidence played a role. Even when they accounted for these variables, the link between thinking strategies and the effective use of artificial intelligence remained strong.
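
For readers curious what this kind of analysis looks like in practice, the sketch below fits a multilevel moderated regression on synthetic data. The column names, effect sizes, and model specification are placeholders for illustration only; they are not the study's actual variables or estimates.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: one row per employee; names and effects are made up.
rng = np.random.default_rng(0)
n = 250
df = pd.DataFrame({
    "ai_access": rng.integers(0, 2, n),      # 1 = received ChatGPT access
    "metacognition": rng.normal(0, 1, n),    # standardized strategy-use score
    "team_id": rng.integers(0, 20, n),       # grouping factor for departments/teams
})
# Simulate a creativity rating in which AI access helps mainly at high metacognition.
df["creativity"] = (
    0.1 * df["ai_access"]
    + 0.3 * df["metacognition"]
    + 0.4 * df["ai_access"] * df["metacognition"]
    + rng.normal(0, 1, n)
)

# Multilevel moderated regression: random intercept per team, with the
# ai_access:metacognition interaction testing whether the AI effect
# depends on metacognitive strategy use.
model = smf.mixedlm("creativity ~ ai_access * metacognition", df, groups=df["team_id"])
print(model.fit().summary())
```

The interaction term is the piece that carries the study's central claim: a positive coefficient would mean the benefit of AI access grows as strategy use increases.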

The data showed that the positive impact of the tool on creativity was quite large for those who managed their thinking well. For those with low scores in that area, the tool had almost no impact on their creative performance. To test creativity specifically, the researchers asked participants to solve a real problem. They had to provide suggestions for protecting employee privacy in a digital office.

Responses to this task had to be at least 70 Chinese characters long. The task was designed to see if the participants could think of novel ways to prevent information leaks or excessive monitoring by leadership. The external raters then scored these responses based on how original and useful they were. This provided a more objective look at creativity than relying solely on a supervisor's opinion.

“The main takeaway is that generative AI does not automatically make people more creative,” Sun told PsyPost. “Simply providing access to AI tools is not enough, and in many cases it yields little creative benefit. Our findings show that the creative value of AI depends on how people engage with it during the creative process. Individuals who actively monitor their own understanding, recognize what kind of help they need, and deliberately decide when and how to use AI are much more likely to benefit creatively.”

“In contrast, relying on AI in a more automatic or unreflective way tends to produce weaker creative outcomes. For the average person, the message is simple: AI helps creativity when it is used thoughtfully: Pausing to reflect on what you need, deciding when AI can be useful, and actively shaping its output iteratively are what distinguish creative gains from generic results.”

As with all research, there are some limitations to consider. The researchers relied on workers to report their own thinking strategies, which can sometimes be inaccurate. The study also took place in a single company within one specific country. People in different cultures might interact with artificial intelligence in different ways.

Future research could look at how long-term use of these tools affects human skills. There is a possibility that relying too much on technology could make people less independent over time. Researchers might also explore how team dynamics influence the way people use these tools. Some office environments might encourage better thinking habits than others.

It would also be helpful to see if the benefits of these tools continue to grow over several months or if they eventually level off. These questions will be important as technology continues to change the way we work. The findings suggest that simply buying new software is not enough to make a company more innovative. Organizations should also consider training their staff to be more aware of their own thinking processes.

Since the benefits of artificial intelligence depend on a worker’s thinking habits, generic software training might not be enough. Instead, programs might need to focus on how to analyze a task and how to monitor one’s own progress. These metacognitive skills are often overlooked in traditional professional development. The researchers note that these skills can be taught through short exercises. Some of these involve reflecting on past successes or practicing new ways to plan out a workday.

The study, “How and for Whom Using Generative AI Affects Creativity: A Field Experiment,” was authored by Shuhua Sun, Zhuyi Angelina Li, Maw-Der Foo, Jing Zhou, and Jackson G. Lu.

Scientists asked men to smell hundreds of different vulvar odors to test the “leaky-cue hypothesis”

12 February 2026 at 06:00

A new study published in Evolution and Human Behavior suggests that modern women may not chemically signal fertility through vulvar body odor, a trait commonly observed in other primates. The findings indicate that men are unable to detect when a woman is in the fertile phase of her menstrual cycle based solely on the scent of the vulvar region. This research challenges the idea that humans have retained these specific evolutionary mating signals.

In the animal kingdom, particularly among non-human primates like lemurs, baboons, and chimpanzees, females often broadcast their reproductive status to males. This is frequently done through olfactory signals, specifically odors from the genital region, which change chemically during the fertile window. These scents serve as information for males, helping them identify when a female is capable of conceiving. Because humans share a deep evolutionary history with these primates, scientists have debated whether modern women retain these chemical signals.

A concept known as the “leaky-cue hypothesis” proposes that women might unintentionally emit subtle physiological signs of fertility. While previous research has investigated potential signals in armpit odor, voice pitch, or facial attractiveness, results have been inconsistent.

The specific scent of the vulvar region has remained largely unexplored using modern, rigorous methods, despite its biological potential as a source of chemical communication. To address this gap, a team led by Madita Zetzsche from the Behavioural Ecology Research Group at Leipzig University and the Max Planck Institute for Evolutionary Anthropology conducted a detailed investigation.

The researchers recruited 28 women to serve as odor donors. These participants were between the ages of 20 and 30, did not use hormonal contraception, and had regular menstrual cycles. To ensure the accuracy of the fertility data, the team did not rely on simple calendar counting. Instead, they used high-sensitivity urinary tests to detect luteinizing hormone and analyzed saliva samples to measure levels of estradiol and progesterone. This allowed the scientists to pinpoint the exact day of ovulation for each participant.

To prevent external factors from altering body odor, the donors adhered to a strict lifestyle protocol. They followed a vegetarian or vegan diet and avoided foods with strong scents, such as garlic, onion, and asparagus, as well as alcohol and tobacco. The women provided samples at ten specific points during their menstrual cycle. These points were clustered around the fertile window to capture any rapid changes in odor that might occur just before or during ovulation.

The study consisted of two distinct parts: a chemical analysis and a perceptual test. For the chemical analysis, the researchers collected 146 vulvar odor samples from a subset of 16 women. They used a specialized portable pump to draw air from the vulvar region into stainless steel tubes containing polymers designed to trap volatile compounds. These are the lightweight chemical molecules that evaporate into the air and create scent.

The team analyzed these samples using gas chromatography–mass spectrometry. This is a laboratory technique that separates a mixture into its individual chemical components and identifies them. The researchers looked for changes in the chemical profile that corresponded to the women’s conception risk and hormone levels. They specifically sought to determine if the abundance of certain chemical compounds rose or fell in a pattern that tracked the menstrual cycle.
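
As a rough illustration of this kind of screening (not the authors' actual pipeline), the sketch below tests each compound's abundance against conception risk with a per-donor random intercept and then corrects the resulting p-values for testing many compounds at once. All variable names and data are synthetic placeholders.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.multitest import multipletests

# Synthetic stand-in data: 146 samples from 16 donors, a handful of compounds.
rng = np.random.default_rng(1)
n_samples, n_compounds = 146, 5
base = pd.DataFrame({
    "donor_id": rng.integers(0, 16, n_samples),
    "conception_risk": rng.uniform(0, 1, n_samples),  # estimated fertility on sampling day
})

p_values = []
for c in range(n_compounds):
    df = base.copy()
    df["log_abundance"] = rng.normal(0, 1, n_samples)  # log-scaled peak area of one compound
    # Mixed model: does this compound's abundance track conception risk,
    # allowing each donor her own baseline odor profile?
    fit = smf.mixedlm("log_abundance ~ conception_risk", df, groups=df["donor_id"]).fit()
    p_values.append(fit.pvalues["conception_risk"])

# Adjust for the number of compounds screened (false discovery rate correction).
rejected, adjusted, _, _ = multipletests(p_values, method="fdr_bh")
print(list(zip(rejected, adjusted.round(3))))
```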

The chemical analysis revealed no consistent evidence that the overall scent profile changed in a way that would allow fertility to be tracked across the menstrual cycle. While some specific statistical models suggested a potential link between the risk of conception and levels of certain substances—such as an increase in acetic acid and a decrease in a urea-related compound—these findings were not stable. When the researchers ran robustness checks, such as excluding samples from donors who had slightly violated dietary rules, the associations disappeared. The researchers concluded that there is likely a low retention of chemical fertility cues in the vulvar odor of modern women.

In the second part of the study, 139 men participated as odor raters. To collect the scent for this experiment, the female participants wore cotton pads in their underwear overnight for approximately 12 hours. These pads were then frozen to preserve the scent and later presented to the male participants in glass vials. The men, who were unaware of the women’s fertility status, sniffed the samples and rated them on three dimensions: attractiveness, pleasantness, and intensity.

The perceptual results aligned with the chemical findings. The statistical analysis showed that the men’s ratings were not influenced by the women’s fertility status. The men did not find the odor of women in their fertile window to be more attractive or pleasant than the odor collected during non-fertile days. Neither the risk of conception nor the levels of reproductive hormones predicted how the men perceived the scents.

These null results were consistent even when the researchers looked at the data in different ways, such as examining specific hormone levels or the temporal distance to ovulation. The study implies that if humans ever possessed the ability to signal fertility through vulvar scent, this trait has likely diminished significantly over evolutionary time.

The researchers suggest several reasons for why these cues might have been lost or suppressed in humans. Unlike most primates that walk on four legs, humans walk upright. This bipedalism moves the genital region away from the nose of other individuals, potentially reducing the role of genital odor in social communication. Additionally, human cultural practices, such as wearing clothing and maintaining high levels of hygiene, may have further obscured any remaining chemical signals.

It is also possible that social odors in humans have shifted to other parts of the body, such as the armpits, although evidence for axillary fertility cues remains mixed. The researchers noted that while they found no evidence of fertility signaling in this context, it remains possible that such cues require more intimate contact or sexual arousal to be detected, conditions that were not replicated in the laboratory.

Additionally, the strict dietary and behavioral controls, while necessary for scientific rigor, might not reflect real-world conditions where diet varies. The sample size for the chemical analysis was also relatively small, which can make it difficult to detect very subtle effects.

Future research could investigate whether these cues exist in more naturalistic settings or investigate the role of the vaginal microbiome, which differs significantly between humans and non-human primates. The high levels of Lactobacillus bacteria in humans create a more acidic environment, which might alter the chemical volatility of potential fertility signals.

The study, “Understanding olfactory fertility cues in humans: chemical analysis of women’s vulvar odour and perceptual detection of these cues by men,” was authored by Madita Zetzsche, Marlen Kücklich, Brigitte M. Weiß, Julia Stern, Andrea C. Marcillo Lara, Claudia Birkemeyer, Lars Penke, and Anja Widdig.
