
Today — 13 February 2026

Younger women find men with beards less attractive than older women do

13 February 2026 at 05:00

A new study published in Adaptive Human Behavior and Physiology suggests that a woman’s age and reproductive status may influence her preferences for male physical traits. The research indicates that postmenopausal women perceive certain masculine characteristics, such as body shape and facial features, differently than women who are still in their reproductive years. These findings offer evidence that biological shifts associated with menopause might alter the criteria women use to evaluate potential partners.

Scientists have recognized that physical features act as powerful biological signals in human communication. Secondary sexual characteristics are traits that appear during puberty and visually distinguish men from women. These include features such as broad shoulders, facial hair, jawline definition, and muscle mass.

Evolutionary psychology suggests that these traits serve as indicators of health and genetic quality. For instance, a muscular physique or a strong jawline often signals high testosterone levels and physical strength. Women of reproductive age typically prioritize these markers because they imply that a potential partner possesses “good genes” that could be passed to offspring.

However, researchers have historically focused most of their attention on the preferences of young women. Less is known about how these preferences might change as women age and lose their reproductive capability. The biological transition of menopause involves significant hormonal changes, including a decrease in estrogen levels.

This hormonal shift may correspond to a change in mating strategies. The “Grandmother Hypothesis” proposes that older women shift their focus from reproduction to investing in their existing family line. Consequently, they may no longer prioritize high-testosterone traits, which can be associated with aggression or short-term mating.

Instead, older women might prioritize traits that signal cooperation, reliability, and long-term companionship. To test this theory, a team of researchers from Poland designed a study to compare the preferences of women at different stages of life. The research team included Aurelia Starzyńska and Łukasz Pawelec from the Wroclaw University of Environmental and Life Sciences and the University of Warsaw, alongside Maja Pietras from Wroclaw Medical University and the University of Wroclaw.

The researchers recruited 122 Polish women to participate in an online survey. The participants ranged in age from 19 to 70 years old. Based on their survey responses regarding menstrual regularity and history, the researchers categorized the women into three groups.

The first group was premenopausal, consisting of women with regular reproductive functions. The second group was perimenopausal, including women experiencing the onset of menopausal symptoms and irregular cycles. The third group was postmenopausal, defined as women whose menstrual cycles had ceased for at least one year.

To assess preferences, the researchers created a specific set of visual stimuli. They started with photographs of a single 22-year-old male model. Using photo-editing applications, they digitally manipulated the images to create distinct variations in appearance.

The researchers modified the model’s face to appear either more feminized, intermediate, or heavily masculinized. They also altered the model’s facial hair to show a clean-shaven look, light stubble, or a full beard.

Body shape was another variable manipulated in the study. The scientists adjusted the hip-to-shoulder ratio to create three silhouette types: V-shaped, H-shaped, and A-shaped. Finally, they modified the model’s musculature to display non-muscular, moderately muscular, or strongly muscular builds.

Participants viewed these twelve modified images and rated them on a scale from one to ten. They evaluated the man in the photos based on three specific criteria. The first criterion was physical attractiveness.

The second and third criteria involved personality assessments. The women rated how aggressive they perceived the man to be. They also rated the man’s perceived level of social dominance.

The results showed that a woman’s reproductive status does influence her perception of attractiveness. One significant finding related to the shape of the male torso. Postmenopausal women rated the V-shaped body, which is typically characterized by broad shoulders and narrow hips, as less attractive than other shapes.

This contrasts with general evolutionary expectations where the V-shape is a classic indicator of male fitness. The data suggests that as women exit their reproductive years, the appeal of this strong biological signal may diminish.

Age also played a distinct role in how women viewed facial hair. The study found that older women rated men with medium to full beards as more attractive than younger women did. This preference for beards grew stronger with the age of the participant.

The researchers suggest that beards might signal maturity and social status rather than just raw genetic fitness. Younger women in the study showed a lower preference for beards. This might occur because facial hair can mask other facial features that young women use to assess mate quality.

The study produced complex results regarding facial masculinity. Chronological age showed a slight positive association with finding feminized faces attractive. This aligns with the idea that older women might prefer “softer” features associated with cooperation.

However, when isolating the specific biological factor of menopause, the results shifted. Postmenopausal women rated feminized faces as less attractive than premenopausal women did. This indicates that the relationship between aging and facial preference is not entirely linear.

Perceptions of aggression also varied by group. Postmenopausal women rated men with medium muscularity as more aggressive than men with other body types. This association was not present in the younger groups.

The researchers propose that older women might view visible musculature as a signal of potential threat rather than protection. Younger women, who are more likely to seek a partner for reproduction, may view muscles as a positive sign of health and defense.

Interestingly, the study found no significant connection between the physical traits and perceived social dominance. Neither the age of the women nor their menopausal status affected how they rated a man’s dominance. This suggests that while attractiveness and aggression are linked to physical cues, dominance might be evaluated through other means not captured in static photos.

The study, like all research, has limitations. One issue involved the method used to find participants, known as snowball sampling. In this process, existing participants recruit future subjects from among their own acquaintances. This method may have resulted in a sample that is not fully representative of the general population.

Reliance on online surveys also introduces a technology bias. Older women who are less comfortable with the internet may have been excluded from the study. This could skew the results for the postmenopausal group.

Another limitation involved the stimuli used. The photographs were all based on a single 22-year-old male model. This young age might not be relevant or appealing to women in their 50s, 60s, or 70s. Postmenopausal women might naturally prefer older men, and evaluating a man in his early twenties could introduce an age-appropriateness bias. The researchers acknowledge that future studies should use models of various ages to ensure more accurate ratings.

Despite these limitations, the study provides evidence that biological changes in women influence social perception. The findings support the concept that mating psychology evolves across the lifespan. As the biological need for “good genes” fades, women appear to adjust their criteria for what makes a man attractive.

The study, “The Perception of Women of Different Ages of Men’s Physical attractiveness, Aggression and Social Dominance Based on Male Secondary Sexual Characteristics,” was authored by Aurelia Starzyńska, Maja Pietras, and Łukasz Pawelec.

Genetic risk for depression predicts financial struggles, but the cause isn’t what scientists thought

13 February 2026 at 05:00

A new study published in the Journal of Psychopathology and Clinical Science offers a nuanced look at how genetic risk for depression interacts with social and economic life circumstances to influence mental health over time. The findings indicate that while people with a higher genetic liability for depression often experience financial and educational challenges, these challenges may not be directly caused by the genetic risk itself.

Scientists conducted the study to better understand the developmental pathways that lead to depressive symptoms. A major theory in psychology, known as the bioecological model, proposes that genetic predispositions do not operate in a vacuum. Instead, this model suggests that a person’s genetic makeup might shape the environments they select or experience. For example, a genetic tendency toward low mood or low energy might make it harder for an individual to complete higher education or maintain steady employment.

If this theory holds true, those missed opportunities could lead to financial strain or a lack of social resources. These environmental stressors would then feed back into the person’s life, potentially worsening their mental health. The researchers aimed to test whether this specific chain of events is supported by data. They sought to determine if genetic risk for depression predicts changes in depressive symptoms specifically by influencing socioeconomic factors like wealth, debt, and education.

To investigate these questions, the researchers utilized data from two massive, long-term projects in the United States. The first dataset came from the National Longitudinal Study of Adolescent Health, also known as Add Health. This sample included 5,690 participants who provided DNA samples. The researchers tracked these individuals from adolescence, starting around age 16, into early adulthood, ending around age 29.

The second dataset served as a replication effort to see if the findings would hold up in a different group. This sample came from the Wisconsin Longitudinal Study, or WLS, which included 8,964 participants. Unlike the younger cohort in Add Health, the WLS participants were tracked across a decade in mid-to-late life, roughly from age 53 to 64. Using two different age groups allowed the scientists to see if these patterns persisted across the lifespan.

For both groups, the researchers calculated a “polygenic index” for each participant. This is a personalized score that summarizes thousands of tiny genetic variations across the entire genome that are statistically associated with depressive symptoms. A higher score indicates a higher genetic probability of experiencing depression. The researchers then measured four specific socioeconomic resources: educational attainment, total financial assets, total debt, and access to health insurance.
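The article does not detail how such a score is computed, but a polygenic index is generally a weighted sum of a person's risk-allele counts, with weights drawn from a prior genome-wide association study. The sketch below is a hypothetical Python illustration of that idea; the variant counts and weights are invented, and this is not the authors' actual scoring pipeline.

```python
# Hypothetical illustration of a polygenic index: a weighted sum of
# risk-allele counts across many variants, standardized within the sample.
# All numbers below are made up for demonstration.
import numpy as np

def polygenic_index(allele_counts, gwas_weights):
    """allele_counts: (n_people, n_variants) array of 0/1/2 risk-allele counts.
    gwas_weights: per-variant effect sizes from an external GWAS of depression."""
    raw = allele_counts @ gwas_weights          # weighted sum per person
    return (raw - raw.mean()) / raw.std()       # z-score: higher = higher genetic liability

# Toy example with 3 people and 4 variants
counts = np.array([[0, 1, 2, 1],
                   [2, 2, 1, 0],
                   [1, 0, 0, 1]])
weights = np.array([0.02, -0.01, 0.03, 0.015])
print(polygenic_index(counts, weights))
```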

In the initial phase of the analysis, the researchers looked at the population as a whole. This is called a “between-family” analysis because it compares unrelated individuals against one another. In the Add Health sample, they found that higher genetic risk for depression was indeed associated with increases in depressive symptoms over the 12-year period.

The data showed that this link was partially explained by the socioeconomic variables. Participants with higher genetic risk tended to have lower educational attainment, fewer assets, more debt, and more difficulty maintaining health insurance. These difficult life circumstances, in turn, were associated with rising levels of depression.

The researchers then repeated this between-family analysis in the older Wisconsin cohort. The results were largely consistent. Higher genetic risk predicted increases in depression symptoms over the decade. Once again, this association appeared to be mediated by the same social factors. Specifically, participants with higher genetic risk reported lower net worth and were more likely to have gone deeply into debt or experienced healthcare difficulties.

These results initially seemed to support the idea that depression genes cause real-world problems that then cause more depression. However, the researchers took a significant additional step to test for causality. They performed a “within-family” analysis using siblings included in the Wisconsin study.

Comparing siblings provides a much stricter test of cause and effect. Siblings share roughly 50 percent of their DNA and grow up in the same household, which controls for many environmental factors like parenting style and childhood socioeconomic status. If the genetic risk for depression truly causes a person to acquire more debt or achieve less education, the sibling with the higher polygenic score should have worse economic outcomes than the sibling with the lower score.
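The article does not reproduce the authors' statistical models, but the logic of a sibling comparison can be illustrated by relating sibling differences in the polygenic score to sibling differences in an outcome, so that anything siblings share (parents, household, neighborhood) cancels out. The sketch below is a minimal, hypothetical version of that idea; the column names and numbers are invented.

```python
# Rough sketch of a within-family (sibling-difference) comparison, using a
# hypothetical DataFrame with one row per sibling pair. Family-wide factors
# drop out when working with differences between siblings.
import pandas as pd
import statsmodels.api as sm

df = pd.DataFrame({
    "pgs_diff":  [0.4, -0.2, 0.9],   # sibling difference in depression polygenic score
    "debt_diff": [1.0, 0.0, -0.5],   # sibling difference in debt (toy numbers)
    "dep_diff":  [0.3, -0.1, 0.6],   # sibling difference in depressive symptoms
})

# If the polygenic score causally shaped socioeconomic outcomes, pgs_diff
# should predict debt_diff; the study reports that it largely did not.
model = sm.OLS(df["debt_diff"], sm.add_constant(df["pgs_diff"])).fit()
print(model.params)
```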

When the researchers applied this sibling-comparison model, the findings changed. Within families, the sibling with higher genetic risk did report more depressive symptoms. This confirms that the genetic score is picking up on a real biological vulnerability. However, the link between the depression genetic score and the socioeconomic factors largely disappeared.

The sibling with higher genetic risk for depression was not significantly more likely to have lower education, less wealth, or more debt than their co-sibling. This lack of association in the sibling model suggests that the genetic risk for depression does not directly cause these negative socioeconomic outcomes. Instead, the correlation seen in the general population is likely due to other shared factors.

One potential explanation for the discrepancy involves a concept called pleiotropy, where the same genes influence multiple traits. The researchers conducted sensitivity analyses that accounted for genetic scores related to educational attainment. They found that once they controlled for the genetics of education, the apparent link between depression genes and socioeconomic status vanished.

This suggests that the same genetic variations that influence how far someone goes in school might also be correlated with depression risk. It implies that low education or financial struggle is not necessarily a downstream consequence of depression risk, but rather that both depression and socioeconomic struggles may share common genetic roots or be influenced by broader family environments.

The study has some limitations. Both datasets consisted almost entirely of individuals of European ancestry. This lack of diversity means the results may not apply to people of other racial or ethnic backgrounds. Additionally, the measures of debt and insurance were limited to the questions available in these pre-existing surveys. They may not have captured the full nuance of financial stress.

Furthermore, while sibling models help rule out family-wide environmental factors, they cannot account for every unique experience a person has. Future research is needed to explore how these genetic risks interact with specific life events, such as trauma or job loss, which were not the primary focus of this investigation. The researchers also note that debt and medical insurance difficulties are understudied in this field and deserve more detailed attention in future work.

The study, “Genotypic and Socioeconomic Risks for Depressive Symptoms in Two U.S. Cohorts Spanning Early to Older Adulthood,” was authored by David A. Sbarra, Sam Trejo, K. Paige Harden, Jeffrey C. Oliver, and Yann C. Klimentidis.

Evening screen use may be more relaxing than stimulating for teenagers

13 February 2026 at 03:00

A recent study published in the Journal of Sleep Research suggests that evening screen use might not be as physically stimulating for teenagers as many parents and experts have assumed. The findings provide evidence that most digital activities actually coincide with lower heart rates compared to non-screen activities like moving around the house or playing. This indicates that the common connection between screens and poor sleep is likely driven by the timing of device use rather than a state of high physical arousal.

Adolescence is a time when establishing healthy sleep patterns is essential for mental health and growth, yet many young people fall short of the recommended eight to ten hours of sleep. While screen use has been linked to shorter sleep times, the specific reasons why this happens are not yet fully understood.

Existing research has looked at several possibilities, such as the light from screens affecting hormones or the simple fact that screens take up time that could be spent sleeping. Some experts have also worried that the excitement from social media or gaming could keep the body in an active state that prevents relaxation. The new study was designed to investigate the physical arousal theory by looking at heart rate in real-world settings rather than in a laboratory.

“In our previous research, we found that screen use in bed was linked with shorter sleep, largely because teens were falling asleep later. But that left an open question: were screens simply delaying bedtime, or were they physiologically stimulating adolescents in a way that made it harder to fall asleep?” said study author Kim Meredith-Jones, a research associate professor at the University of Otago.

“In this study, we wanted to test whether evening screen use actually increased heart rate — a marker of physiological arousal — and whether that arousal explained delays in falling asleep. In other words, is it what teens are doing on screens that matters, or just the fact that screens are replacing sleep time?”

By using objective tools to track both what teens do on their screens and how their hearts respond, the team hoped to fill gaps in existing knowledge. They aimed to see if different types of digital content, such as texting versus scrolling, had different effects on the heart. Understanding these connections is important for creating better guidelines for digital health in young people.

The research team recruited a group of 70 adolescents from Dunedin, New Zealand, who were between 11 and nearly 15 years old. This sample was designed to be diverse, featuring 31 girls and 39 boys from various backgrounds. Approximately 33 percent of the participants identified as indigenous Māori, while others came from Pacific, Asian, or European backgrounds.

To capture a detailed look at their evening habits, the researchers used a combination of wearable technology and video recordings over four different nights. Each participant wore a high-resolution camera attached to a chest harness starting three hours before their usual bedtime. This camera recorded exactly what they were doing and what screens they were viewing until they entered their beds.

Once the participants were in bed, a stationary camera continued to record their activities until they fell asleep. This allowed the researchers to see if they used devices while under the covers and exactly when they closed their eyes. The video data was then analyzed by trained coders who categorized screen use into ten specific behaviors, such as watching videos, gaming, or using social media.

The researchers also categorized activities as either passive or interactive. Passive activities included watching, listening, reading, or browsing, while interactive activities included gaming, communication, and multitasking. Social media use was analyzed separately to see its specific impact on heart rate compared to other activities.

At the same time, the participants wore a Fitbit Inspire 2 on their dominant wrist to track their heart rate every few seconds. The researchers used this information to see how the heart reacted to each specific screen activity in real time. This objective measurement provided a more accurate picture than asking the teenagers to remember how they felt or what they did.

To measure sleep quality and duration, each youth also wore a motion-sensing device on their other wrist for seven consecutive days. This tool, known as an accelerometer, provided data on when they actually fell asleep and how many times they woke up. The researchers then used statistical models to see if heart rate patterns during screen time could predict these sleep outcomes.

The data revealed that heart rates were consistently higher during periods when the teenagers were not using screens. The average heart rate during non-screen activities was approximately 93 beats per minute, which likely reflects the physical effort of moving around or doing chores. In contrast, when the participants were using their devices, their average heart rate dropped to about 83 beats per minute.

This suggests that screen use is often a sedentary behavior that allows the body to stay relatively calm. When the participants were in bed, the difference was less extreme, but screen use still tended to accompany lower heart rates than other in-bed activities. These findings indicate that digital engagement may function as a way for teenagers to wind down after a long day.

The researchers also looked at how specific types of digital content affected the heart. Social media use was associated with the lowest heart rates, especially when the teenagers were already in bed. Gaming and multitasking between different apps also showed lower heart rate readings compared to other screen-based tasks.

“We were surprised to find that heart rates were lower during social media use,” Meredith-Jones told PsyPost. “Previous research has suggested that social media can be stressful or emotionally intense for adolescents, so we expected to see higher arousal. Instead, our findings suggest that in this context, teens may have been using social media as a way to unwind or switch off. That said, how we define and measure ‘social media use’ matters, and we’re now working on more refined ways to capture the context and type of engagement.”

On the other hand, activities involving communication, such as texting or messaging, were linked to higher heart rates. This type of interaction seemed to be less conducive to relaxation than scrolling through feeds or watching videos. Even so, the heart rate differences between these various digital activities were relatively small.

When examining sleep patterns, the researchers found that heart rate earlier in the evening had a different relationship with sleep than heart rate closer to bedtime. Higher heart rates occurring more than two hours before bed were linked to falling asleep earlier in the night. This may be because higher activity levels in the early evening help the body build up a need for rest.

However, the heart rate in the two hours before bed and while in bed had the opposite effect on falling asleep. For every increase of 10 beats per minute during this window, the participants took about nine minutes longer to drift off. This provides evidence that physical excitement right before bed can delay the start of sleep.

Notably, while a higher heart rate made it harder to fall asleep, it did not seem to reduce the total amount of sleep the teenagers got. It also did not affect how often they woke up during the night or the general quality of their rest. The researchers noted that a person would likely need a very large increase in heart rate to see a major impact on their sleep schedule.

“The effects were relatively small,” Meredith-Jones explained. “For example, our data suggest heart rate would need to increase by around 30 beats per minute to delay sleep onset by about 30 minutes. The largest differences we observed between screen activities were closer to 10 beats per minute, making it unlikely that typical screen use would meaningfully delay sleep through physiological arousal alone.”

“The key takeaway is that most screen use in the evening did not increase heart rate. In fact, many types of screen activity were associated with lower heart rates compared to non-screen time. Although higher heart rate before bed was linked with taking longer to fall asleep, the changes in heart rate we observed during screen use were generally small. Overall, most evening screen activities appeared more relaxing than arousing.”

One limitation of this study is that the researchers did not have a baseline heart rate for each participant while they were completely at rest. Without this information, it is difficult to say for certain if screens were actively lowering the heart rate or if the teens were just naturally calm. Individual differences in biology could account for some of the variations seen in the data.

“One strength of this study was our use of wearable cameras to objectively classify screen behaviours such as gaming, social media, and communication,” Meredith-Jones noted. “This approach provides much richer and more accurate data than self-report questionnaires or simple screen-time analytics. However, a limitation is that we did not measure each participant’s true resting heart rate, so we can’t definitively say whether higher heart rates reflected arousal above baseline or just individual differences. That’s an important area for refinement in future research.”

It is also important to note that the findings don’t imply that screens are always helpful for sleep. Even if they are not physically arousing, using a device late at night can still lead to sleep displacement. This happens when the time spent on a screen replaces time that would otherwise be spent sleeping, leading to tiredness the next day. On the other hand, one shouldn’t assume that screens always impede sleep, either.

“A common assumption is that all screen use is inherently harmful for sleep,” Meredith-Jones explained. “Our findings don’t support that blanket statement. In earlier work, we found that screen use in bed was associated with shorter sleep duration, but in this study, most screen use was not physiologically stimulating. That suggests timing and context matter, and that some forms of screen use may even serve as a wind-down activity before bed.”

Looking ahead, “we want to better distinguish between different types of screen use, for example, interactive versus passive engagement, or emotionally charged versus neutral communication,” Meredith-Jones said. “We’re also developing improved real-world measurement tools that can capture not just how long teens use screens, but what they’re doing, how they’re engaging, and in what context. That level of detail is likely to give us much clearer answers than simple ‘screen time’ totals.”

The study, “Screens, Teens, and Sleep: Is the Impact of Nighttime Screen Use on Sleep Driven by Physiological Arousal?” was authored by Kim A. Meredith-Jones, Jillian J. Haszard, Barbara C. Galland, Shay-Ruby Wickham, Bradley J. Brosnan, Takiwai Russell-Camp, and Rachael W. Taylor.

Yesterday — 12 February 2026

Methamphetamine increases motivation through brain processes separate from euphoria

12 February 2026 at 19:00

A study published in the journal Psychopharmacology has found that the increase in motivation people experience from methamphetamine is separate from the drug’s ability to produce a euphoric high. The findings suggest that these two common effects of stimulant drugs likely involve different underlying biological processes in the brain. This research indicates that a person might become more willing to work hard without necessarily feeling a greater sense of pleasure or well-being.

The researchers conducted the new study to clarify how stimulants affect human motivation and personal feelings. They intended to understand if the pleasurable high people experience while taking these drugs is the primary reason they become more willing to work for rewards. By separating these effects, the team aimed to gain insight into how drugs could potentially be used to treat motivation-related issues without causing addictive euphoria.

Another reason for the study was to investigate how individual differences in personality or brain chemistry change how a person responds to a stimulant. Scientists wanted to see if people who are naturally less motivated benefit more from these drugs than those who are already highly driven. The team also sought to determine if the drug makes tasks feel easier or if it simply makes the final reward seem more attractive to the user.

“Stimulant drugs like amphetamine are thought to produce ‘rewarding’ effects that contribute to abuse or dependence, by increasing levels of the neurotransmitter dopamine. Findings from animal models suggest that stimulant drugs, perhaps because of their effects on dopamine, increase motivation, or the animals’ willingness to exert effort,” explained study author Harriet de Wit, a professor at the University of Chicago.

“Findings from human studies suggest that stimulant drugs lead to repeated use because they produce subjective feelings of wellbeing. In the present study, we tested the effects of amphetamine in healthy volunteers, on both an effort task and self-reported euphoria.”

For their study, the researchers recruited a group of 96 healthy adults from the Chicago area. This group consisted of 48 men and 48 women between the ages of 18 and 35. Each volunteer underwent a rigorous screening process that included a physical exam, a heart health check, and a psychiatric interview to ensure they were healthy.

The study used a double-blind, placebo-controlled design to ensure the results were accurate and unbiased. This means that neither the participants nor the staff knew if a volunteer received the actual drug or an inactive pill on a given day. The participants attended two separate laboratory sessions where they received either 20 milligrams of methamphetamine or a placebo.

During these sessions, the participants completed a specific exercise called the Effort Expenditure for Rewards Task. This task required them to choose between an easy option for a small amount of money or a more difficult option for a larger reward. The researchers used this to measure how much physical effort a person was willing to put in to get a better payoff.

The easy task involved pressing a specific key on a keyboard 30 times with the index finger of the dominant hand within seven seconds. Successfully completing this task always resulted in a small reward of one dollar. This served as a baseline for the minimum amount of effort a person was willing to expend for a guaranteed but small gain.

The hard task required participants to press a different key 100 times using the pinky finger of their non-dominant hand within 21 seconds. The rewards for this more difficult task varied from about one dollar and 24 cents to over four dollars. This task was designed to be physically taxing and required a higher level of commitment to complete.

Before making their choice on each trial, participants were informed of the probability that they would actually receive the money if they finished the task. These probabilities were set at 12 percent, 50 percent, or 88 percent. This added a layer of risk to the decision, as a person might work hard for a reward but still receive nothing if the odds were not in their favor.
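To make the trade-off concrete, each trial effectively pits a small reward for an easy task against a larger reward for a hard task, with both payoffs scaled by the stated chance of actually being paid. The short sketch below works through the expected payoffs using example reward values within the ranges described above; it is an illustration, not the study's actual trial list.

```python
# Toy illustration of the choice structure in an effort-for-reward trial:
# expected payoff = reward x probability of actually being paid.
# Reward values are examples within the ranges described in the article.
def expected_value(reward, win_probability):
    return reward * win_probability

easy_reward = 1.00          # easy option pays $1.00 if completed
hard_reward = 3.50          # example hard-option reward within the stated range

for p in (0.12, 0.50, 0.88):
    ev_easy = expected_value(easy_reward, p)
    ev_hard = expected_value(hard_reward, p)
    print(f"p={p:.2f}: easy ${ev_easy:.2f} vs hard ${ev_hard:.2f}")
```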

Throughout the four-hour sessions, the researchers measured the participants’ personal feelings and physical reactions at regular intervals. They used standardized questionnaires to track how much the participants liked the effects of the drug and how much euphoria they felt. They also monitored physical signs such as heart rate and blood pressure to ensure the safety of the volunteers.

Before the main sessions, the participants completed the task during an orientation to establish their natural effort levels. The researchers then divided the group in half based on these baseline scores. This allowed the team to compare people who were naturally inclined to work hard against those who were naturally less likely to choose the difficult task.

The results showed that methamphetamine increased the frequency with which people chose the hard task over the easy one across the whole group. This effect was most visible when the chances of winning the reward were in the low to medium range. The drug seemed to give participants a boost in motivation when the outcome was somewhat uncertain.

The data provides evidence that the drug had a much stronger impact on people who were naturally less motivated. Participants in the low baseline group showed a significantly larger increase in their willingness to choose the hard task compared to those in the high baseline group. For people who were already high achievers, the drug did not seem to provide much of an additional motivational boost.

To understand why the drug changed behavior, the researchers used a mathematical model to analyze the decision-making process. This model helped the team separate how much a person cares about the difficulty of a task from how much they value the reward itself. It provided a more detailed look at the internal trade-offs people make when deciding to work.

The model showed that methamphetamine specifically reduced a person’s sensitivity to the physical cost of effort. This suggests that the drug makes hard work feel less unpleasant or demanding than it normally would. Instead of making the reward seem more exciting, the drug appears to make the work itself feel less like a burden.
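The article does not give the model's equations. One common way to formalize this kind of analysis is an effort-discounting model in which each option's subjective value is its probabilistic reward minus an effort cost, with choices generated by a softmax rule; reduced effort sensitivity then translates into more hard-task choices. The sketch below is one such formulation with made-up parameter values, not necessarily the authors' exact model.

```python
# Minimal sketch of an effort-discounting choice model of the kind described
# above: each option's value is its probabilistic reward minus an effort cost,
# and choice probability comes from a softmax. Parameters are illustrative.
import math

def subjective_value(reward, probability, effort, effort_sensitivity):
    return probability * reward - effort_sensitivity * effort

def p_choose_hard(reward_hard, reward_easy, probability, effort_sensitivity,
                  effort_hard=1.0, effort_easy=0.3, temperature=1.0):
    v_hard = subjective_value(reward_hard, probability, effort_hard, effort_sensitivity)
    v_easy = subjective_value(reward_easy, probability, effort_easy, effort_sensitivity)
    return 1.0 / (1.0 + math.exp(-(v_hard - v_easy) / temperature))

# Lowering effort sensitivity (as the drug appeared to do) raises the
# probability of choosing the hard option, all else being equal.
print(p_choose_hard(3.5, 1.0, 0.5, effort_sensitivity=2.0))
print(p_choose_hard(3.5, 1.0, 0.5, effort_sensitivity=0.5))
```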

This change in effort sensitivity was primarily found in the participants who started with low motivation levels. For these individuals, the drug appeared to lower the mental or physical barriers that usually made them avoid the difficult option. In contrast, the drug did not significantly change the effort sensitivity of those who were already highly motivated.

Methamphetamine did not change how sensitive people were to the probability of winning the reward. This indicates that the drug affects the drive to work rather than changing how people calculate risks or perceive the odds of success. The volunteers still understood the chances of winning, but they were more willing to try anyway despite the difficulty.

As the researchers expected, the drug increased feelings of happiness and euphoria in the participants. It also caused the usual physical changes associated with stimulants, such as an increase in heart rate and blood pressure. Most participants reported that they liked the effects of the drug while they were performing the tasks.

A major finding of the study is that the boost in mood was not related to the boost in productivity. The participants who felt the highest levels of euphoria were not the same people who showed the greatest increase in hard task choices. “This suggests that different receptor actions of amphetamine mediate willingness to exert effort and feelings of wellbeing,” de Wit explained.

There was no statistical correlation between how much a person liked the drug and how much more effort they were willing to exert. This provides evidence that the brain processes that create pleasure from stimulants are distinct from those that drive motivated behavior. A person can experience the motivational benefits of a stimulant without necessarily feeling the intense pleasure that often leads to drug misuse.

The findings highlight that “drugs have numerous behavioral and cognitive actions, which may be mediated by different neurotransmitter actions,” de Wit told PsyPost. “The purpose of research in this area is to disentangle which effects are relevant to misuse or dependence liability, and which might have clinical benefits, and what brain processes underlie the effects.”

The results also highlight the importance of considering a person’s starting point when predicting how they will respond to a medication. Because the drug helped the least motivated people the most, it suggests that these treatments might be most effective for those with a clear deficit in drive.

The study, like all research, has some limitations. The participants were all healthy young adults, so it is not clear if the results would be the same for older people or those with existing health conditions. A more diverse group of volunteers would be needed to see if these patterns apply to the general population.

The study only tested a single 20-milligram dose of methamphetamine given by mouth. It is possible that different doses or different ways of taking the drug might change the relationship between mood and behavior. Using a range of doses in future studies would help researchers see if there is a point where the mood and effort effects begin to overlap.

Another limitation is that the researchers did not directly look at the chemical changes inside the participants’ brains. While they believe dopamine is involved, they did not use brain imaging technology to confirm this directly. Future research could use specialized scans to see exactly which brain regions are active when these changes in motivation occur.

“The results open the door to further studies to determine what brain mechanisms underlie the two behavioral effects,” de Wit said.

The study, “Effects of methamphetamine on human effort task performance are unrelated to its subjective effects,” was authored by Evan C. Hahn, Hanna Molla, Jessica A. Cooper, Joseph DeBrosse, and Harriet de Wit.

AI boosts worker creativity only if they use specific thinking strategies

12 February 2026 at 15:00

A new study published in the Journal of Applied Psychology suggests that generative artificial intelligence can boost creativity among employees in professional settings. But the research indicates that these tools increase innovative output only when workers use specific mental strategies to manage their own thought processes.

Generative artificial intelligence is a type of technology that can produce new content such as text, images, or computer code. Large language models like ChatGPT or Google’s Gemini use massive datasets to predict and generate human-like responses to various prompts. Organizations often implement these tools with the expectation that they will help employees come up with novel and useful ideas. Many leaders believe that providing access to advanced technology will automatically lead to a more innovative workforce.

However, recent surveys indicate that only a small portion of workers feel that these tools actually improve their creative work. The researchers conducted the new study to see if the technology truly helps and to identify which specific factors make it effective. They also wanted to see how these tools function in a real office environment where people manage multiple projects at once. Most previous studies on this topic took place in artificial settings using only one isolated task.

“When ChatGPT was released in November 2022, generative AI quickly became part of daily conversation. Many companies rushed to integrate generative AI tools into their workflows, often expecting that this would make employees more creative and, ultimately, give organizations a competitive advantage,” said study author Shuhua Sun, who holds the Peter W. and Paul A. Callais Professorship in Entrepreneurship at Tulane University’s A. B. Freeman School of Business.

“What struck us, though, was how little direct evidence existed to support those expectations, especially in real workplaces. Early proof-of-concept studies in labs and online settings began to appear, but their results were mixed. Even more surprisingly, there were almost no randomized field experiments examining how generative AI actually affects employee creativity on the job.”

“At the same time, consulting firms started releasing large-scale surveys on generative AI adoption. These reports showed that only a small percentage of employees felt that using generative AI made them more creative. Taken together with the mixed lab/online findings, this raised a simple but important question for us: If generative AI is supposed to enhance creativity, why does it seem to help only some employees and not others? What are those employees doing differently?”

“That question shaped the core of our project. So, instead of asking simply whether generative AI boosts creativity, we wanted to understand how it does so and for whom. Driven by these questions, we developed a theory and tested it using a randomized field experiment in a real organizational setting.”

The researchers worked with a technology consulting firm in China to conduct their field experiment. This company was an ideal setting because consulting work requires employees to find unique solutions for many different clients. The study included a total of 250 nonmanagerial employees from departments such as technology, sales, and administration. These participants had an average age of about 30 years and most held university degrees.

The researchers randomly split the workers into two groups. The first group received access to ChatGPT accounts and was shown how to use the tool for their daily tasks. The second group served as a control and did not receive access to the artificial intelligence software during the study. To make sure the experiment was fair, the company told the first group that the technology was meant to assist them rather than replace them.

The experiment lasted for about one week. During this time, the researchers tracked how often the treated group used their new accounts. At the end of the week, the researchers collected data from several sources to measure the impact of the tool. They used surveys to ask employees about their work experiences and their thinking habits.

They also asked the employees’ direct supervisors to rate their creative performance. These supervisors did not know which employees were using the artificial intelligence tool. Additionally, the researchers used two external evaluators to judge specific ideas produced by the employees. These evaluators looked at how novel and useful the ideas were without knowing who wrote them.

The researchers looked at cognitive job resources, which are the tools and mental space people need to handle complex work. This includes having enough information and the ability to switch between hard and easy tasks. They also measured metacognitive strategies. This term describes how people actively monitor and adjust their own thinking to reach a goal.

A person with high metacognitive strategies might plan out their steps before starting a task. They also tend to check their own progress and change their approach if they are not making enough headway. The study suggests that the artificial intelligence tool increased the cognitive resources available to employees. The tool helped them find information quickly and allowed them to manage their mental energy more effectively.

The results show that the employees who had access to the technology generally received higher creativity ratings from their supervisors. The external evaluators also gave higher scores for novelty to the ideas produced by this group. The evidence suggests that the tool was most effective when workers already used strong metacognitive strategies. These workers were able to use the technology to fill specific gaps in their knowledge.

For employees who did not use these thinking strategies, the tool did not significantly improve their creative output. These individuals appeared to be less effective at using the technology to gain new resources. The study indicates that the tool provides the raw material for creativity, but the worker must know how to direct the process. Specifically, workers who monitored their own mental state knew when to use the tool to take a break or switch tasks.

This ability to switch tasks is important because it prevents a person from getting stuck on a single way of thinking. When the technology handled routine parts of a job, it gave workers more mental space to focus on complex problem solving. The researchers found that the positive effect of the technology became significant once a worker’s use of thinking strategies reached a certain level. Below that threshold, the tool did not provide a clear benefit for creativity.

The cognitive approach to creativity suggests that coming up with new ideas is a mental process of searching through different areas of knowledge. People must find pieces of information and then combine them in ways that have not been tried before. This process can be very demanding because people have a limited amount of time and mental energy. Researchers call this the knowledge burden.

It takes a lot of effort to find, process, and understand new information from different fields. If a person spends all their energy just gathering facts, they might not have enough strength left to actually be creative. Artificial intelligence can help by taking over the task of searching for and summarizing information. This allows the human worker to focus on the higher-level task of combining those facts into something new.

Metacognition is essentially thinking about one’s own thinking. It involves a person being aware of what they know and what they do not know. When a worker uses metacognitive strategies, they act like a coach for their own brain. They ask themselves if their current plan is working or if they need to try a different path.

The study shows that this self-awareness is what allows a person to use artificial intelligence effectively. Instead of just accepting whatever the computer says, a strategic thinker uses the tool to test specific ideas. The statistical analysis revealed that the artificial intelligence tool provided workers with more room to think. This extra mental space came from having better access to knowledge and more chances to take mental breaks.

The researchers used a specific method called multilevel analysis to account for the way employees were organized within departments and teams. This helps ensure that the findings are not skewed by the influence of a single department or manager. The researchers also checked to see if other factors like past job performance or self-confidence played a role. Even when they accounted for these variables, the link between thinking strategies and the effective use of artificial intelligence remained strong.

The data showed that the positive impact of the tool on creativity was quite large for those who managed their thinking well. For those with low scores in that area, the tool had almost no impact on their creative performance. To test creativity specifically, the researchers asked participants to solve a real problem. They had to provide suggestions for protecting employee privacy in a digital office.

This task required at least 70 Chinese characters in response. It was designed to see if the participants could think of novel ways to prevent information leaks or excessive monitoring by leadership. The external raters then scored these responses based on how original and useful they were. This provided a more objective look at creativity than just asking a supervisor for their opinion.

“The main takeaway is that generative AI does not automatically make people more creative,” Sun told PsyPost. “Simply providing access to AI tools is not enough, and in many cases it yields little creative benefit. Our findings show that the creative value of AI depends on how people engage with it during the creative process. Individuals who actively monitor their own understanding, recognize what kind of help they need, and deliberately decide when and how to use AI are much more likely to benefit creatively.”

“In contrast, relying on AI in a more automatic or unreflective way tends to produce weaker creative outcomes. For the average person, the message is simple: AI helps creativity when it is used thoughtfully: Pausing to reflect on what you need, deciding when AI can be useful, and actively shaping its output iteratively are what distinguish creative gains from generic results.”

As with all research, there are some limitations to consider. The researchers relied on workers to report their own thinking strategies, which can sometimes be inaccurate. The study also took place in a single company within one specific country. People in different cultures might interact with artificial intelligence in different ways.

Future research could look at how long-term use of these tools affects human skills. There is a possibility that relying too much on technology could make people less independent over time. Researchers might also explore how team dynamics influence the way people use these tools. Some office environments might encourage better thinking habits than others.

It would also be helpful to see if the benefits of these tools continue to grow over several months or if they eventually level off. These questions will be important as technology continues to change the way we work. The findings suggest that simply buying new software is not enough to make a company more innovative. Organizations should also consider training their staff to be more aware of their own thinking processes.

Since the benefits of artificial intelligence depend on a worker’s thinking habits, generic software training might not be enough. Instead, programs might need to focus on how to analyze a task and how to monitor one’s own progress. These metacognitive skills are often overlooked in traditional professional development. The researchers note that these skills can be taught through short exercises. Some of these involve reflecting on past successes or practicing new ways to plan out a workday.

The study, “How and for Whom Using Generative AI Affects Creativity: A Field Experiment,” was authored by Shuhua Sun, Zhuyi Angelina Li, Maw-Der Foo, Jing Zhou, and Jackson G. Lu.

Scientists asked men to smell hundreds of different vulvar odors to test the “leaky-cue hypothesis”

12 February 2026 at 06:00

A new study published in Evolution and Human Behavior suggests that modern women may not chemically signal fertility through vulvar body odor, a trait commonly observed in other primates. The findings indicate that men are unable to detect when a woman is in the fertile phase of her menstrual cycle based solely on the scent of the vulvar region. This research challenges the idea that humans have retained these specific evolutionary mating signals.

In the animal kingdom, particularly among non-human primates like lemurs, baboons, and chimpanzees, females often broadcast their reproductive status to males. This is frequently done through olfactory signals, specifically odors from the genital region, which change chemically during the fertile window. These scents serve as information for males, helping them identify when a female is capable of conceiving. Because humans share a deep evolutionary history with these primates, scientists have debated whether modern women retain these chemical signals.

A concept known as the “leaky-cue hypothesis” proposes that women might unintentionally emit subtle physiological signs of fertility. While previous research has investigated potential signals in armpit odor, voice pitch, or facial attractiveness, results have been inconsistent.

The specific scent of the vulvar region has remained largely unexplored using modern, rigorous methods, despite its biological potential as a source of chemical communication. To address this gap, a team led by Madita Zetzsche from the Behavioural Ecology Research Group at Leipzig University and the Max Planck Institute for Evolutionary Anthropology conducted a detailed investigation.

The researchers recruited 28 women to serve as odor donors. These participants were between the ages of 20 and 30, did not use hormonal contraception, and had regular menstrual cycles. To ensure the accuracy of the fertility data, the team did not rely on simple calendar counting. Instead, they used high-sensitivity urinary tests to detect luteinizing hormone and analyzed saliva samples to measure levels of estradiol and progesterone. This allowed the scientists to pinpoint the exact day of ovulation for each participant.

To prevent external factors from altering body odor, the donors adhered to a strict lifestyle protocol. They followed a vegetarian or vegan diet and avoided foods with strong scents, such as garlic, onion, and asparagus, as well as alcohol and tobacco. The women provided samples at ten specific points during their menstrual cycle. These points were clustered around the fertile window to capture any rapid changes in odor that might occur just before or during ovulation.

The study consisted of two distinct parts: a chemical analysis and a perceptual test. For the chemical analysis, the researchers collected 146 vulvar odor samples from a subset of 16 women. They used a specialized portable pump to draw air from the vulvar region into stainless steel tubes containing polymers designed to trap volatile compounds. These are the lightweight chemical molecules that evaporate into the air and create scent.

The team analyzed these samples using gas chromatography–mass spectrometry. This is a laboratory technique that separates a mixture into its individual chemical components and identifies them. The researchers looked for changes in the chemical profile that corresponded to the women’s conception risk and hormone levels. They specifically sought to determine if the abundance of certain chemical compounds rose or fell in a pattern that tracked the menstrual cycle.

The chemical analysis revealed no consistent evidence that the overall scent profile changed in a way that would allow fertility to be tracked across the menstrual cycle. While some specific statistical models suggested a potential link between the risk of conception and levels of certain substances—such as an increase in acetic acid and a decrease in a urea-related compound—these findings were not stable. When the researchers ran robustness checks, such as excluding samples from donors who had slightly violated dietary rules, the associations disappeared. The researchers concluded that there is likely a low retention of chemical fertility cues in the vulvar odor of modern women.

In the second part of the study, 139 men participated as odor raters. To collect the scent for this experiment, the female participants wore cotton pads in their underwear overnight for approximately 12 hours. These pads were then frozen to preserve the scent and later presented to the male participants in glass vials. The men, who were unaware of the women’s fertility status, sniffed the samples and rated them on three dimensions: attractiveness, pleasantness, and intensity.

The perceptual results aligned with the chemical findings. The statistical analysis showed that the men’s ratings were not influenced by the women’s fertility status. The men did not find the odor of women in their fertile window to be more attractive or pleasant than the odor collected during non-fertile days. Neither the risk of conception nor the levels of reproductive hormones predicted how the men perceived the scents.

These null results were consistent even when the researchers looked at the data in different ways, such as examining specific hormone levels or the temporal distance to ovulation. The study implies that if humans ever possessed the ability to signal fertility through vulvar scent, this trait has likely diminished significantly over evolutionary time.

The researchers suggest several reasons for why these cues might have been lost or suppressed in humans. Unlike most primates that walk on four legs, humans walk upright. This bipedalism moves the genital region away from the nose of other individuals, potentially reducing the role of genital odor in social communication. Additionally, human cultural practices, such as wearing clothing and maintaining high levels of hygiene, may have further obscured any remaining chemical signals.

It is also possible that social odors in humans have shifted to other parts of the body, such as the armpits, although evidence for axillary fertility cues remains mixed. The researchers noted that while they found no evidence of fertility signaling in this context, it remains possible that such cues require more intimate contact or sexual arousal to be detected, conditions that were not replicated in the laboratory.

Additionally, the strict dietary and behavioral controls, while necessary for scientific rigor, might not reflect real-world conditions where diet varies. The sample size for the chemical analysis was also relatively small, which can make it difficult to detect very subtle effects.

Future research could investigate whether these cues exist in more naturalistic settings or examine the role of the vaginal microbiome, which differs significantly between humans and non-human primates. The high levels of Lactobacillus bacteria in humans create a more acidic environment, which might alter the chemical volatility of potential fertility signals.

The study, “Understanding olfactory fertility cues in humans: chemical analysis of women’s vulvar odour and perceptual detection of these cues by men,” was authored by Madita Zetzsche, Marlen Kücklich, Brigitte M. Weiß, Julia Stern, Andrea C. Marcillo Lara, Claudia Birkemeyer, Lars Penke, and Anja Widdig.

Childhood trauma scores fail to predict violent misconduct in juvenile detention

11 February 2026 at 23:00

New research published in Aggression and Violent Behavior indicates that a history of childhood trauma may not effectively predict which incarcerated youth will engage in the most frequent and violent misconduct. The study suggests that while adverse childhood experiences explain why young people enter the justice system, current factors such as mental health status and gang affiliation are stronger predictors of behavior during incarceration.

Psychologists and criminologists identify childhood adversity as a primary driver of delinquency. Exposure to trauma often hinders emotional regulation and impulse control. This can lead adolescents to interpret social interactions as hostile and resort to aggression. Correctional systems frequently use the Adverse Childhood Experiences score, commonly known as the ACE score, to quantify this history. The traditional ACE score is a cumulative measure of ten specific categories of abuse, neglect, and household dysfunction.

There is a growing consensus that the original ten-item measure may be too narrow for justice-involved youth. It fails to account for systemic issues such as poverty, community violence, and discrimination. Consequently, scholars have proposed expanded measures to capture a broader range of adversities.

Despite the widespread use of these scores, little research has isolated their ability to predict the behavior of the most serious offenders. Most studies examine general misconduct across all inmates. This study aimed to determine if trauma scores could identify the small fraction of youth responsible for the vast majority of violent and disruptive incidents within state facilities.

“While research has extensively documented that adverse childhood experiences (ACEs) increase the risk of juvenile delinquency, we knew much less about whether ACEs predict the most serious forms of institutional misconduct among already-incarcerated youth,” said study author Jessica M. Craig, an associate professor of criminal justice and director of graduate programs at the University of North Texas.

“We were particularly interested in whether an expanded ACEs measure—which includes experiences like witnessing community violence, homelessness, and extreme poverty beyond the traditional 10-item scale—would better predict which youth become chronic and violent misconduct offenders during incarceration. This matters because institutional misconduct can lead to longer confinement, additional legal consequences, and reduced access to rehabilitation programs.”

For their study, the researchers analyzed data from a cohort of 4,613 serious and violent juvenile offenders. The sample included all youth adjudicated and incarcerated in state juvenile correctional facilities in Texas between 2009 and 2013 who had completed an initial intake assessment. The participants were predominantly male. Approximately 46 percent were Hispanic and 34 percent were Black. The average age at the time of incarceration was 16 years old.

The researchers utilized the Positive Achievement Change Tool to derive two distinct trauma scores for each individual. The first was the traditional ACE score. This metric summed exposure to ten indicators: physical, emotional, and sexual abuse; physical and emotional neglect; household substance abuse; mental illness in the home; parental separation or divorce; domestic violence against a mother; and the incarceration of a household member.

The second measure was an expanded ACE score. This metric included the original ten items plus four additional variables relevant to high-risk populations. These additions included a history of foster care or shelter placements, witnessing violence in the community, experiencing homelessness, and living in a family with income below the poverty level. The average youth in the sample had a traditional ACE score of roughly 3.3 and an expanded score of nearly 4.9.
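
To make the scoring concrete, here is a minimal sketch of how two cumulative scores of this kind could be computed. The item names are hypothetical placeholders, not the actual fields of the Positive Achievement Change Tool.

```python
# Minimal sketch (assumed data layout): each adversity is coded 1 if ever
# present and 0 otherwise, so the two scores are simple counts.
TRADITIONAL_ITEMS = [
    "physical_abuse", "emotional_abuse", "sexual_abuse",
    "physical_neglect", "emotional_neglect",
    "household_substance_abuse", "household_mental_illness",
    "parental_separation", "domestic_violence_against_mother",
    "incarcerated_household_member",
]
EXPANDED_EXTRA_ITEMS = [
    "foster_or_shelter_placement", "witnessed_community_violence",
    "homelessness", "family_below_poverty_line",
]

def ace_scores(youth: dict) -> tuple[int, int]:
    """Return (traditional 0-10 ACE score, expanded 0-14 ACE score) for one youth."""
    traditional = sum(youth.get(item, 0) for item in TRADITIONAL_ITEMS)
    expanded = traditional + sum(youth.get(item, 0) for item in EXPANDED_EXTRA_ITEMS)
    return traditional, expanded
```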

The study did not treat misconduct as a simple average. The researchers sought to identify chronic perpetrators. They calculated the rate of total misconduct incidents and violent misconduct incidents for each youth. They then separated the offenders into groups representing the top 10 percent and the top 1 percent of misconduct perpetrators. This allowed the analysis to focus specifically on the individuals who pose the greatest challenge to institutional safety.

The researchers used statistical models to test whether higher trauma scores increased the likelihood of being in these high-rate groups. These models controlled for other potential influences, including prior criminal history, offense type, age, race, and substance abuse history.
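
The article does not reproduce the models themselves, but the general shape of such an analysis can be sketched as follows; the file and column names are hypothetical placeholders.

```python
# Sketch of the general analytic approach: flag the top 10 percent of
# misconduct rates, then model group membership with logistic regression
# while controlling for other characteristics.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("youth_misconduct.csv")  # hypothetical input file

# Identify chronic perpetrators by misconduct-rate percentile.
cutoff = df["violent_misconduct_rate"].quantile(0.90)
df["top10_violent"] = (df["violent_misconduct_rate"] >= cutoff).astype(int)

# Does the expanded ACE score predict membership once age, prior record,
# gang ties, and mental health history are controlled?
model = smf.logit(
    "top10_violent ~ expanded_ace + age_at_incarceration + prior_felonies"
    " + gang_affiliated + mental_health_history",
    data=df,
).fit()
print(model.summary())
```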

The analysis yielded results that challenged the assumption that past trauma dictates future institutional violence. Neither the traditional ACE score nor the expanded ACE score served as a significant predictor for membership in the top 10 percent or top 1 percent of misconduct perpetrators. This finding held true for both general rule-breaking and specific acts of violence. The addition of variables like poverty and community violence to the trauma score did not improve its predictive power regarding institutional behavior.

“We were surprised that even the expanded ACEs measure—which included witnessing violence, foster care placement, homelessness, and poverty—failed to predict high-rate misconduct,” Craig told PsyPost. “Given that previous research suggested the traditional 10-item ACEs scale might underestimate adversity among justice-involved youth, we expected the expanded measure to show stronger predictive power.”

While trauma history did not predict chronic misconduct, other personal and situational characteristics proved to be strong indicators. The most consistent predictor of violent behavior was a history of serious mental health problems. Youth with such histories had approximately 150 percent increased odds of falling into the top 1 percent of violent misconduct perpetrators compared to their peers. This effect size suggests that current psychological stability is a primary determinant of safety within the facility.

Age and social connections also played significant roles. The data indicated that older youth were substantially less likely to engage in chronic misconduct. Specifically, those who were older at the time of incarceration were about 50 to 60 percent less likely to be in the high-rate misconduct groups. Gang affiliation was another robust predictor. Youth with gang ties were significantly more likely to be among the most frequent violators of institutional rules. This points to the influence of peer dynamics and the prison social structure on individual behavior.

“These are substantively meaningful effects that have real implications for correctional programming and supervision strategies,” Craig said.

The study provides evidence that the factors driving entry into the justice system may differ from the factors driving behavior once inside. While childhood adversity sets a trajectory toward delinquency, the structured environment of a correctional facility introduces new variables. The researchers suggest that the “survival coping” mechanisms youth develop in response to trauma might manifest differently depending on their immediate environment and mental state.

“Contrary to expectations, we found that neither traditional nor expanded ACEs measures significantly predicted which youth became the most frequent perpetrators of institutional misconduct,” Craig explained. “Instead, factors like age at incarceration, gang affiliation, and mental health history were much stronger predictors.”

“This suggests that while childhood trauma remains critically important for understanding how youth enter the justice system, managing their behavior during incarceration may require greater focus on their current mental health needs, developmental stage, and institutional factors rather than trauma history alone.”

These findings imply that correctional administrators should look beyond a cumulative trauma score when assessing risk. Screening processes that emphasize current mental health conditions and gang involvement may offer more utility for preventing violence than those focusing solely on historical adversity. Effective management of high-risk populations appears to require targeted mental health interventions and strategies to disrupt gang activity.

There are some limitations to consider. The data came from a single state, which may limit the ability to generalize the findings to other jurisdictions with different correctional cultures or demographics.

The study also relied on cumulative scores that count the presence of adverse events but do not measure their severity, frequency, or timing. It is possible that specific types of trauma, such as physical abuse, have different impacts than others, such as parental divorce. A simple sum of these events might obscure specific patterns that do predict violence.

“It’s important to emphasize that our findings don’t diminish the significance of childhood trauma in understanding juvenile justice involvement overall,” Craig said. “ACEs remain crucial for understanding pathways into the system and should absolutely be addressed through trauma-informed programming. However, when it comes to predicting institutional violence specifically among already deeply-entrenched offenders, personal characteristics and current mental health status appear more salient than historical trauma exposure.”

“Future research should examine whether specific patterns or combinations of traumatic experiences—rather than cumulative scores—might better predict institutional violence. We’d also like to investigate whether trauma-informed treatment programs, when youth actually receive them during incarceration, can reduce misconduct even when trauma history alone doesn’t predict it. Additionally, examining the timing and severity of ACEs, rather than just their presence or absence, could clarify the trauma-violence relationship.”

The study, “Looking back: The impact of childhood adversity on institutional misconduct among a cohort of serious and violent institutionalized delinquents,” was authored by Jessica M. Craig, Haley Zettler, and Chad R. Trulson.

High rates of screen time linked to specific differences in toddler vocabulary

11 February 2026 at 20:00

New research published in the journal Developmental Science provides evidence that the amount of time toddlers spend watching videos is associated with the specific types of words they learn, distinct from the total number of words they know. The findings indicate that higher levels of digital media consumption are linked to a vocabulary containing a smaller proportion of body part words and a larger proportion of words related to people and furniture.

The widespread integration of digital media into family life has prompted questions about its influence on early child development. Current estimates suggest that many children under the age of two spend roughly two hours per day interacting with screens, primarily watching videos or television.

Previous research has often focused on the relationship between screen time and the overall size of a child’s vocabulary. These earlier studies generally established that high exposure to low-quality programming correlates with a lower total number of words spoken by the child.

However, language acquisition is a multifaceted process. Children do not learn all words in the same manner. The acquisition of certain types of words relies heavily on specific environmental inputs.

“There is no doubt that use of digital media by young children has been on the rise in the past few years, and growing evidence suggest that this has impacts on their language learning, especially during the first few years of life,” said study author Sarah C. Kucker, an assistant professor of psychology at Southern Methodist University.

“For instance, we know that children who watch high rates of low-quality television/videos tend to have smaller vocabularies and less advanced language skills (this is work by my own lab, but also many others such as Brushe et al., 2025; Madigan et al., 2024). However, we also know that some forms of media do not have negative effects and can, in fact, be useful for language when the media is high-quality, socially-interactive, and educational in nature (work by Sundqvist as well as Jing et al., 2024).”

“On top of this, we know that children’s language development and specifically their vocabulary learning is not an all-or-nothing, but rather that children learn different types of words at different times and in different ways – e.g. learning words for body parts is easier when you can touch the body part when named, and names for people (mama, dada) are learned earlier than most other nouns,” Kucker continued.

“When we put this together it means that we shouldn’t be looking at digital media’s influence on language as just an all-or-nothing, or blanket good-or-bad, but rather take a more nuanced look. So we did just that by looking at the types of words children are learning and the association with the time they spend with digital media.”

For their study, the researchers recruited 388 caregivers of children aged 17 to 30 months. This age range represents a period of rapid language expansion often referred to as the vocabulary spurt. Participants were recruited through online research platforms and in-person visits to a university laboratory. The researchers combined these groups into a single dataset for analysis.

Caregivers completed a comprehensive survey known as the Media Assessment Questionnaire. This instrument asked parents to report the number of minutes their child spent using various forms of technology, such as television, tablets, and video chat.

The researchers collected data for both typical weekdays and weekends. They used these reports to calculate a weighted daily average of screen time for each child. The data revealed that video and television viewing was the most common media activity. On average, the children in the sample watched videos for approximately 110 minutes per day.
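
The article does not spell out the weighting formula, but a standard approach weights the typical weekday report by five days and the weekend report by two. A quick sketch with made-up numbers:

```python
def weighted_daily_minutes(weekday_minutes: float, weekend_minutes: float) -> float:
    """Weight a typical weekday by 5 days and a typical weekend day by 2."""
    return (5 * weekday_minutes + 2 * weekend_minutes) / 7

# Example with invented values: 100 minutes on weekdays and 135 minutes on
# weekend days averages out to about 110 minutes per day.
print(round(weighted_daily_minutes(100, 135)))  # -> 110
```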

To measure language development, caregivers completed the MacArthur-Bates Communicative Development Inventory. This is a standardized checklist containing hundreds of words commonly learned by young children. Parents marked the words their child could say.

This tool allowed the researchers to calculate the total size of each child’s noun vocabulary. It also enabled them to break down the vocabulary into specific semantic categories. These categories included animals, vehicles, toys, food and drink, clothing, body parts, small household items, furniture and rooms, outside things, places to go, and people.

The researchers also analyzed the vocabulary data through a different lens. They classified nouns based on the features that define their categories. Specifically, they looked at shape-based nouns and material-based nouns.

Shape-based nouns usually refer to solid objects defined by their physical form, such as “ball” or “cup.” Material-based nouns often refer to nonsolid substances or items defined by what they are made of, such as “applesauce” or “chalk.” This distinction is significant in developmental psychology because physical handling of objects is thought to help children learn these concepts.
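
As an illustration of how a checklist like this becomes composition measures, here is a small sketch. The word-to-category mapping is invented for the example and is not the actual MacArthur-Bates item list.

```python
# Turn a CDI-style list of words a child says into the proportion of the
# noun vocabulary falling in each semantic category.
from collections import Counter

CATEGORY_OF = {
    "nose": "body_parts", "feet": "body_parts", "ears": "body_parts",
    "mama": "people", "grandma": "people", "teacher": "people",
    "couch": "furniture_rooms", "kitchen": "furniture_rooms",
    "ball": "toys", "dog": "animals", "apple": "food_drink",
}

def category_proportions(words_child_says: list[str]) -> dict[str, float]:
    """Proportion of the child's known nouns that fall in each category."""
    nouns = [w for w in words_child_says if w in CATEGORY_OF]
    counts = Counter(CATEGORY_OF[w] for w in nouns)
    total = len(nouns)
    return {cat: n / total for cat, n in counts.items()} if total else {}

print(category_proportions(["mama", "nose", "ball", "couch", "dog"]))
```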

The researchers found that children with higher rates of video viewing produced a smaller proportion of body part words. In a typical toddler’s vocabulary, words like “nose,” “feet,” or “ears” are often among the first learned. However, as screen time increased, the density of these words in the child’s repertoire decreased relative to other word types.

In contrast, the researchers found a positive association between video time and words related to people. This category includes proper names, titles like “teacher” or “grandma,” and general terms like “baby.” Children who watched more videos tended to have a vocabulary composition that was more heavily weighted toward these social labels.

A similar positive association was found for the category of furniture and rooms. Heavy media users were more likely to produce words such as “couch,” “TV,” or “kitchen” relative to their peers with lower media use.

“While we expected that children with high media use would have fewer body part words in their vocabulary, we were surprised to find that children with high media knew relatively more people words and furniture words,” Kucker told PsyPost. “We suspect this may have to do with the content of the media highlighting those terms, or perhaps the physical context in which children are using media (e.g. while sitting on a couch or when working with mom), but the tools to capture this information are currently limited.”

The researchers found no significant relationship between video watching and the other semantic categories measured, such as animals, toys, or food. Additionally, the researchers found no evidence that video exposure altered the balance between shape-based and material-based nouns. The proportion of words related to solid objects versus nonsolid substances remained stable regardless of screen time habits.

The research highlights that the impact of digital media is not uniformly negative or positive. The findings suggest that screen time changes the landscape of early learning in specific ways.

“Most caregivers have heard the advice to avoid screen time with their young children,” Kucker said. “However, the reality is that that is very difficult to do 100% of the time in today’s tech-based world. What this study shows is that a high amount of low-quality videos/TV is associated with lower overall vocabulary sizes in 2-year-old children, but that videos/TV may not impact all types of words equally.”

“For instance, children with more video/TV time have fewer names for body parts, but seem to learn most other nouns at relatively equal levels, potentially because some videos/TV do a good job teaching children some basics.”

“So do try to limit children’s screen time, but don’t fret about avoiding it completely,” Kucker explained. “Instead, consider the content and context for when the media is being used and why – high-quality, educational use, or those that are social (e.g. FaceTime, Zoom), may not be detrimental as long as children are still getting rich interactive play outside of the screen.”

As with all research, there are some limitations to consider. The data relied on caregiver reports, which can introduce memory errors or bias.

The study was also cross-sectional, meaning it captured a snapshot of the children’s lives rather than following them over time. It is not possible to determine causality from this data alone. For example, it is unknown if watching videos causes the change in vocabulary or if families with different communication styles rely more on media.

“We are currently looking at more longitudinal impacts of digital media on children’s language over time as well as individual differences across children, such as considering personality and temperament,” Kucker noted.

Additionally, the study focused primarily on the duration of screen time. It did not fully capture the specific content of the videos the children watched or the nature of the interactions parents had with their children during viewing. The researchers noted that educational content and co-viewing with a parent can mitigate potential negative effects.

“Not all media is bad!” Kucker said. “Media’s effect on children is nuanced and interacts with the rest of their experiences. I always like to tell parents that if your child watches an educational show for a few minutes so you can have a few minutes of quiet, that may be helping you to then be a better parent later which will more than offset that few minutes of media time.”

“Children who get rich, social experiences are often still developing in very strong ways even if they have a bit of high-quality screen time here and there. Just considering the content and context of the media is key!”

“We have a lot of work left still to do and understand in this area, and much of the support for this work has come from various grants and foundations, such as NIH and NSF,” Kucker added. “Without those funding avenues, this work couldn’t be done.”

The study, “Videos and Vocabulary – How Digital Media Use Impacts the Types of Words Children Know,” was authored by Sarah C. Kucker, Rachel F. Barr, and Lynn K. Perry.

Psychology study sheds light on the phenomenon of waifus and husbandos

11 February 2026 at 17:00

A new study published in Psychology of Popular Media suggests that human romantic attraction to fictional characters may operate through the same psychological mechanisms that drive relationships between real people. The research offers insight into how individuals form deep attachments to non-existent partners in an increasingly digital world.

The concept of falling in love with an artificial being is not a modern invention, the researchers behind the new study noted. The ancient Greek narrative of Pygmalion describes a sculptor who creates a statue so beautiful that he falls in love with it. This theme of attributing human qualities and agency to inanimate creations has persisted throughout history.

In the contemporary landscape, this phenomenon is often observed within the anime fan community. Fans of Japanese animation sometimes utilize specific terminology to describe characters they hold in special regard. The terms “waifu” and “husbando” are derived from the English words for wife and husband. These labels imply a desire for a significant, often romantic, relationship with the character if they were to exist in reality.

The researchers conducted the new study to better understand the nature of relationships with “virtual agents.” A virtual agent is any character that exists solely on a screen but projects a sense of agency or independence to the audience. As technology advances, these characters are becoming more interactive and realistic. The authors sought to determine if the reasons people connect with these characters align with evolutionary theories regarding human mating strategies.

“Given the popularity of AI agents and chatbots, we were interested in people who have attraction to fictional characters,” said study author Connor Leshner, a PhD candidate in the Department of Psychology at Trent University.

“Through years of research, we have access to a large and charitable sample of anime fans, and it is a norm within this community to have relationships (sometimes real, sometimes not) with fictional characters. We mainly wanted to understand whether a large group of people have the capacity for relationships with fictional characters, because, if they do, then a logical future study would be studying relationships with something like AI.”

To investigate this, the research team recruited a large sample of self-identified anime fans. Participants were gathered from various online platforms, including specific communities on the website Reddit. The final sample consisted of 977 individuals who indicated that they currently had a waifu or husbando.

The demographic makeup of the sample was predominantly male. Approximately 78 percent of the respondents identified as men, while the remainder identified as women. The average age of the participants was roughly 26 years old, and more than half were from the United States. This provided a snapshot of a specific, highly engaged subculture.

The researchers employed a quantitative survey to assess the participants’ feelings and motivations. They asked participants to rate their agreement with various statements on a seven-point scale. The survey measured four potential reasons for choosing a specific character. These reasons were physical appearance, personality, the character’s role in the story, and the character’s similarity to the participant.

The researchers also sought to categorize the type of connection the fan felt toward the character. The three categories measured were emotional connection, sexual attraction, and feelings of genuine love.
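
A minimal sketch of the kind of model this design implies, with each type of connection regressed on the four stated reasons plus gender, is shown below; the survey column names are hypothetical placeholders for the seven-point items.

```python
# Illustrative sketch: predict each connection type from the four choice
# reasons and participant gender. Column and file names are invented.
import pandas as pd
import statsmodels.formula.api as smf

fans = pd.read_csv("waifu_survey.csv")

for outcome in ["emotional_connection", "sexual_attraction", "genuine_love"]:
    fit = smf.ols(
        f"{outcome} ~ appearance + personality + story_role + similarity + gender",
        data=fans,
    ).fit()
    print(outcome, fit.params.round(2).to_dict())
```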

The results provided evidence supporting the idea that fictional attraction mirrors real-world attraction. The data showed a positive association between a character’s physical appearance and the participant’s sexual attraction to them. This suggests that visual appeal is a primary driver for sexual interest in virtual agents, much as it is in human interaction.

However, physical appearance was not the only factor at play. The researchers found that a character’s personality was a strong predictor of emotional connection. Additionally, participants who felt that a character was similar to themselves were more likely to report a deep emotional bond. This indicates that shared traits and relatable behaviors foster feelings of closeness even when the partner is not real.

A central focus of the study was the influence of gender on these connections. The analysis revealed distinct differences between how men and women engaged with their chosen characters. Men were significantly more likely to report feelings of sexual attraction toward their waifus or husbandos. This aligns with prior research on male mating strategies that emphasizes visual and sexual stimuli.

Women, in contrast, reported higher levels of emotional connection with their fictional partners. While they also valued personality, their bonds were characterized more by affection and emotional intimacy than by sexual desire. This finding supports the hypothesis that women apply criteria focused on emotional compatibility even when the relationship is entirely imagined.

The researchers also explored the concept of “genuine love” for these characters. They found that feelings of love were predicted by a combination of factors. Physical appearance, personality, and similarity to the self all contributed to the sensation of being in love. This suggests that for a fan to feel love, the character must appeal to them on multiple levels simultaneously.

“People do have the capacity for these relationships,” Leshner told PsyPost. “Sometimes they are based in physical attraction, especially for men, while others are based on platonic, personality-based attraction, especially for women. Overall, people can feel a deep, intimate connection with people who don’t exist on our plane of reality, and I think that’s neat.”

The findings were not particularly surprising. “Everything matches what you’d expect from related theories, like evolutionary mating strategy where men want physical or sexual relationships, while women find more appeal in the platonic, long-term relationship,” Leshner said. “We have ongoing research that helps contextualize these findings more, but until that’s published, we cannot say much more.”

One potential predictor that did not yield significant results was the character’s role in the media. The “mere exposure effect” suggests that people tend to like things simply because they are familiar with them. The researchers tested if characters with larger roles, such as protagonists who appear on screen frequently, were more likely to be chosen. The data did not support this link.

The specific narrative function of the character did not predict sexual attraction, emotional connection, or love. A supporting character with limited screen time appeared just as capable of inspiring deep affection as a main hero. This implies that the specific attributes of the character matter more than their prominence in the story.

These findings carry implications that extend beyond the anime community. As artificial intelligence and robotics continue to develop, human interactions with non-human entities will likely become more common. The study suggests that people are capable of forming complex, multifaceted relationships with entities that do not physically exist.

“Anime characters don’t have agency, nor do they have consciousness, so the extent to which the average person might have a serious relationship with an anime character is probably limited,” Leshner told PsyPost. “With that said, the same is true of AI, and the New York Times published a huge article on human-AI romantic relationships. So maybe these relationships are more appealing than we really capture here.”

There are limitations to the study. The research relied on cross-sectional data, which means it captured a single moment in time. This design prevents researchers from proving that specific character traits caused the attraction. It is possible that attraction causes a participant to perceive traits differently.

Additionally, the sample was heavily skewed toward Western, male participants. Cultural differences in how relationships are viewed could influence these results. The anime fandom in Japan, for instance, might exhibit different patterns of attachment than those observed in the United States. Future research would benefit from a more diverse, global pool of participants.

Despite these limitations, the study provides a foundation for understanding the future of human connection. It challenges the notion that relationships with fictional characters are fundamentally different from real relationships. The psychological needs and drives that lead someone to download a soulmate appear to be remarkably human.

“People might either find these relationships weird, or might say that AI is significantly different from what we show here,” Leshner added. “My first response is that these relationships aren’t weird, and we’ve been discussing similar relationships for centuries. The article opens with a reference to Pygmalion, which is a Greek story about a guy falling in love with a statue. At minimum, it’s a repeated idea in our culture.”

“To my second point about the similarities between AI and anime characters, I think about it like this: AI might seem more human, but it’s just Bayesian statistics with extra steps. If you watch an anime all the way through, you can spend up to hundreds of hours with characters who have their own human struggles, triumphs, loves and losses. To be drawn toward that story and character is, to me, functionally similar to talking to an AI chatbot. The only difference is that an AI chatbot can feel more responsive, and might have more options for customization.”

“I think this research is foundational to the future of relationships, but I don’t think people know enough about anime characters, or really media or parasocial relationships broadly, to see things the same way,” Leshner continued. “I’m going to keep going down this road to understand the parallels with AI and modern technologies, but I fully believe that this is an uphill battle for recognition.”

“I hope this work inspires people to look into why people might be attracted to anime characters more broadly. It feels like the average anime character is made to be conventionally attractive in a way that is not true of most animation. It might still be weird to someone with no knowledge of the field if they engage in this quick exercise, but I have the utmost confidence that the average person might say, ‘Well, although it is not for me, I can understand it better now.'”

The study, “You would not download a soulmate: Attributes of fictional characters that inspire intimate connection,” was authored by Connor Leshner, Stephen Reysen, Courtney N. Plante, Sharon E. Roberts, and Kathleen C. Gerbasi.

Scientists: A common vaccine appears to have a surprising impact on brain health

11 February 2026 at 15:00

A new scientific commentary suggests that annual influenza vaccination could serve as a practical and accessible strategy to help delay or prevent the onset of dementia in older adults. By mitigating the risk of severe cardiovascular events and reducing systemic inflammation, the seasonal flu shot may offer neurological protection that extends well beyond respiratory health. This perspective article was published in the journal Aging Clinical and Experimental Research.

Dementia poses a significant and growing challenge to aging societies worldwide, creating an urgent need for scalable prevention strategies. While controlling midlife risk factors like high blood pressure remains a primary focus, medical experts are looking for additional tools that can be easily integrated into existing healthcare routines.

Lorenzo Blandi from the Vita-Salute San Raffaele University and Marco Del Riccio from the University of Florence authored this analysis to highlight the potential of influenza vaccination as a cognitive preservation tool. They argue that the current medical understanding of the flu shot is often too limited. The researchers propose that by preventing the cascade of physical damage caused by influenza, vaccination can help maintain the brain’s vascular and cellular health.

The rationale for this perspective stems from the observation that influenza is not merely a respiratory illness. It is a systemic infection that can cause severe complications throughout the body. The authors note that influenza infection is associated with a marked increase in the risk of heart attacks and strokes in the days following illness.

These vascular events are known to contribute to cumulative brain injury. Consequently, Blandi and Del Riccio sought to synthesize existing evidence linking vaccination to improved cognitive outcomes. They posit that preventing these viral insults could modify the trajectory of dementia risk in the elderly population.

To support their argument, the authors detail evidence from four major epidemiological studies that demonstrate a link between receiving the flu shot and a lower incidence of dementia. The first piece of evidence cited is a 2023 meta-analysis. This massive review aggregated data from observational cohort studies involving approximately 2.09 million adults.

The participants in these studies were followed for periods ranging from four to thirteen years. The analysis found that individuals who received influenza vaccinations had a 31 percent lower risk of developing incident dementia compared to those who did not.

The second key study referenced was a claims-based cohort study. This research utilized propensity-score matching, a statistical technique designed to create comparable groups by accounting for various baseline characteristics. The researchers analyzed data from 935,887 matched pairs of older adults who were at least 65 years old.

The results showed that those who had received an influenza vaccination had a 40 percent lower relative risk of developing Alzheimer’s disease over a follow-up period of roughly four years. The study calculated an absolute risk reduction of 3.4 percent, suggesting that for every 29 people vaccinated, one case of Alzheimer’s might be prevented during that timeframe.
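
That “one in 29” figure is simply the reciprocal of the absolute risk reduction, sometimes called the number needed to vaccinate, which is easy to verify:

```python
# Number needed to vaccinate = 1 / absolute risk reduction.
# With an ARR of 3.4 percentage points, 1 / 0.034 is about 29.4, i.e. roughly
# one Alzheimer's case prevented per 29 people vaccinated over the ~4-year
# follow-up, under the study's observational assumptions.
arr = 0.034
nnt = 1 / arr
print(round(nnt, 1))  # -> 29.4
```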

The third study highlighted in the perspective used data from the Veterans Health Administration. This study was significant because it used time-to-event models to address potential biases related to when vaccinations occurred.

The researchers found that vaccinated older adults had a hazard ratio for dementia of 0.86. This statistic indicates a risk reduction of roughly 14 percent. The data also revealed a dose-response relationship. This means that the protective signal was strongest among participants who received multiple vaccine doses across different years and seasons, rather than just a single shot.

The fourth and final study cited was a prospective analysis of the UK Biobank. This study modeled vaccination as an exposure that varies over time, allowing for a nuanced view of cumulative effects.

The researchers observed a reduced risk for all-cause dementia, with a hazard ratio of 0.83. The reduction in risk was even more pronounced for vascular dementia, showing a hazard ratio of 0.58. Similar to the veterans’ study, this analysis supported the idea of a dose-response relationship. The accumulation of vaccinations over time appeared to correlate with better cognitive outcomes.
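
For readers tracking how the quoted percentages map onto the hazard ratios, the implied relative risk reduction is simply (1 - HR) x 100. A quick check of the figures cited from these two cohorts:

```python
# Relative risk reduction implied by a hazard ratio: (1 - HR) * 100.
for label, hr in [("VHA dementia", 0.86),
                  ("UK Biobank all-cause dementia", 0.83),
                  ("UK Biobank vascular dementia", 0.58)]:
    print(f"{label}: HR = {hr} -> ~{round((1 - hr) * 100)}% lower risk")
```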

Blandi and Del Riccio explain several biological mechanisms that could account for these protective effects. The primary pathway involves the prevention of vascular damage. Influenza infection is a potent trigger for inflammation and blood clotting.

Research shows that the risk of acute myocardial infarction can be six times greater in the first week after a flu infection. By preventing the flu, the vaccine likely prevents these specific vascular assaults. Since vascular health is closely tied to brain health, avoiding these events helps preserve cognitive reserve. The cumulative burden of small strokes or reduced blood flow to the brain is a major predictor of cognitive decline.

In addition to vascular protection, the authors discuss the role of neuroinflammation. Studies in animal models have shown that influenza viruses can trigger activation of microglia, which are the immune cells of the brain. This activation can lead to the loss of synapses and memory decline, even if the virus itself does not enter the brain.

Systemic inflammation caused by the flu can cross into the nervous system. The authors suggest that vaccination may dampen these inflammatory surges. There is also a hypothesis known as “trained immunity,” where vaccines might program the immune system to respond more efficiently to threats, reducing off-target damage to the brain.

Based on this evidence, the authors propose several policy changes and organizational strategies. They argue that public health messaging needs to be reconceptualized. Instead of framing the flu shot solely as a way to avoid a winter cold, health officials should present it as a measure to reduce heart attacks, strokes, and potential cognitive decline. This approach addresses the priorities of older adults, who often fear dementia and loss of independence more than respiratory illness.

The authors also recommend specific clinical practices. They suggest that health systems should prioritize the use of high-dose or adjuvanted vaccines for adults over the age of 65. These formulations are designed to overcome the weaker immune response often seen in aging bodies.

Additionally, the authors advocate for making vaccination a default part of hospital discharge procedures. When an older adult is leaving the hospital after a cardiac or pulmonary event, vaccination should be a standard component of their care plan. This would help close the gap between the known benefits of the vaccine and the currently low rates of uptake in many regions.

Despite the promising data, Blandi and Del Riccio acknowledge certain limitations in the current body of evidence. The majority of the data comes from observational studies. This type of research can identify associations but cannot definitively prove causality.

There is always a possibility of “healthy user bias,” where people who choose to get vaccinated are already more health-conscious and have better lifestyle habits than those who do not. While the studies cited used advanced statistical methods to control for these factors, residual confounding can still exist.

The authors also note that studies based on medical claims data can suffer from inaccuracies in how dementia is diagnosed and recorded. Furthermore, the precise biological mechanisms remain a hypothesis that requires further validation. The authors call for future research to include pragmatic randomized trials that specifically measure cognitive endpoints. They suggest that future studies should track biological markers of neuroinflammation in vaccinated versus unvaccinated groups to confirm the proposed mechanisms.

The study, “From breath to brain: influenza vaccination as a pragmatic strategy for dementia prevention,” was authored by Lorenzo Blandi and Marco Del Riccio.

Does sexual activity before exercise harm athletic performance?

10 February 2026 at 21:00

New research published in the journal Physiology & Behavior provides evidence that sexual activity shortly before high-intensity exercise does not harm athletic performance. The study suggests that masturbation-induced orgasm 30 minutes prior to exertion may actually enhance exercise duration and reaction time. These findings challenge long-standing beliefs regarding the necessity of sexual abstinence before athletic competition.

The motivation for the new study stems from a persistent debate in the sports world. Coaches and athletes have frequently adhered to the idea that sexual activity drains energy and reduces aggression. This belief has led to common recommendations for abstinence in the days leading up to major events. Diego Fernández-Lázaro from the University of Valladolid led a research team to investigate whether these restrictions are scientifically justified.

Previous scientific literature on this topic has been inconsistent or limited in scope. Many prior studies focused on sexual activity occurring the night before competition, leaving a gap in knowledge regarding immediate effects. Fernández-Lázaro and his colleagues aimed to examine the physiological and performance outcomes of sexual activity that occurs less than an hour before maximal effort.

To conduct the investigation, the researchers recruited 21 healthy, well-trained male athletes. The participants included basketball players, long-distance runners, and boxers. The average age of the volunteers was 22 years. The study utilized a randomized crossover design to ensure robust comparisons. This means that every participant completed both the experimental condition and the control condition.

In the control condition, participants abstained from any sexual activity for at least seven days. On the day of testing, they watched a neutral documentary film for 15 minutes before beginning the exercise assessments. In the experimental condition, the participants engaged in masturbation to orgasm in a private setting 30 minutes before the tests. They viewed a standardized erotic film to facilitate this process. Afterward, they watched the same neutral documentary to standardize the rest period.

The researchers employed two primary physical tests to measure performance. The first was an isometric handgrip strength test using a dynamometer. The second was an incremental cycling test performed on a stationary bike. The cycling test began at a set resistance and increased in difficulty every minute until the participant could no longer continue. This type of test is designed to measure aerobic capacity and time to exhaustion.

In addition to physical performance, the team collected blood samples to analyze various biomarkers. They looked for changes in hormones such as testosterone, cortisol, and luteinizing hormone. They also measured markers of muscle damage, including creatine kinase and lactate dehydrogenase. Inflammatory markers like C-reactive protein were also assessed to see if sexual activity placed additional stress on the body.

The results indicated that sexual activity did not have a negative impact on physical capabilities. The participants demonstrated a small but statistically significant increase in the total duration of the cycling test following sexual activity compared to the abstinence condition. This improvement represented a 3.2 percent increase in performance time.

The researchers also observed changes in handgrip strength. The mean strength values were slightly higher in the sexual activity condition than after abstinence. This suggests that the neuromuscular system remained fully functional and perhaps slightly primed for action.

Physiological monitoring revealed that heart rates were higher during the exercise sessions that followed sexual activity. This elevation in heart rate aligns with the activation of the sympathetic nervous system. This system is responsible for the “fight or flight” response that prepares the body for physical exertion.

Hormonal analysis provided further insight into the body’s response. The study found that concentrations of both testosterone and cortisol were higher after sexual activity. Testosterone is an anabolic hormone associated with strength and aggression. Cortisol is a stress hormone that helps mobilize energy stores. The simultaneous rise in both hormones indicates a state of physiological activation rather than a state of fatigue.

The study also examined markers of muscle damage to see if the combination of sex and exercise caused more tissue stress. The findings showed that levels of lactate dehydrogenase were actually lower in the sexual activity condition. This specific enzyme leaks into the blood when muscle cells are damaged or stressed. The reduction suggests that the pre-exercise sexual activity did not exacerbate muscle stress and may have had a protective or neutral effect.

Other markers of muscle damage, such as creatine kinase and myoglobin, showed no significant differences between the two conditions. Similarly, inflammatory markers like interleukin-6 remained stable. This implies that the short-term physiological stress of sexual activity does not compound the stress caused by the exercise itself.

These findings diverge from some historical perspectives and specific past studies. For example, a study by Kirecci and colleagues reported that sexual intercourse within 24 hours of exercise reduced lower limb strength. The current study contradicts that conclusion by showing maintained or improved strength. The difference may lie in the specific timing or the nature of the sexual activity, as the current study focused on masturbation rather than partnered intercourse.

The results align more closely with a body of research summarized by Zavorsky and others. Those reviews generally concluded that sexual activity the night before competition has little to no impact on performance. The current study builds on that foundation by narrowing the window to just 30 minutes. It provides evidence that even immediate pre-competition sexual activity is not detrimental.

The researchers propose that the observed effects are likely due to a “priming” mechanism. Sexual arousal activates the sympathetic nervous system and triggers the release of catecholamines. This physiological cascade resembles a warm-up. It increases heart rate and alertness, which may translate into better readiness for immediate physical exertion.

The psychological aspect of the findings is also worth noting. The participants did not report any difference in their perceived rate of exertion between the two conditions. This means the exercise did not feel harder after sexual activity, even though their heart rates were higher. This consistency suggests that motivation and psychological fatigue were not negatively affected.

There are limitations to this study that affect how the results should be interpreted. The sample consisted entirely of young, well-trained men. Consequently, the findings may not apply to female athletes, older adults, or those with lower fitness levels. The physiological responses to sexual activity can vary across these different demographics.

The study restricted sexual activity to masturbation to maintain experimental control. Partnered sexual intercourse involves different physical demands and psychological dynamics. Intercourse often requires more energy expenditure and involves oxytocin release related to bonding, which might influence sedation or relaxation differently than masturbation.

The sample size of 21 participants is relatively small, although adequate for a crossover design of this nature. Larger studies would be needed to confirm these results and explore potential nuances. The study also relied on a one-week washout period between trials. While this is standard, residual psychological effects from the first session cannot be entirely ruled out.

Future research should aim to include female participants to determine if similar hormonal and performance patterns exist. It would also be beneficial to investigate different time intervals between sexual activity and exercise. Understanding the effects of partnered sex versus masturbation remains a key area for further exploration.

The study provides evidence that the “abstinence myth” may be unfounded for many athletes. The data indicates that sexual activity 30 minutes before exercise does not induce fatigue or muscle damage. Instead, it appears to trigger a neuroendocrine response that supports physical performance. Athletes and coaches may need to reconsider strict abstinence policies based on these physiological observations.

The study, “Sexual activity before exercise influences physiological response and sports performance in high-level trained men athletes,” was authored by Diego Fernández-Lázaro, Manuel Garrosa, Gema Santamaría, Enrique Roche, José María Izquierdo, Jesús Seco-Calvo, and Juan Mielgo-Ayuso.

Neuroimaging data reveals a “common currency” for effective communication

10 February 2026 at 20:00

A new study published in PNAS Nexus has found that specific patterns of brain activity can predict the success of persuasive messages across a wide variety of contexts. By analyzing neuroimaging data from over 500 individuals, researchers identified that neural responses in regions associated with reward and social processing are consistent indicators of how effective a message will be. These findings suggest that the human brain utilizes a common set of mechanisms to evaluate persuasive content.

Diverse fields such as marketing, political science, and public health rely heavily on the ability to influence attitudes and behaviors through mass media. Practitioners and scientists have long sought to understand exactly what makes a message persuasive enough to change a mind or prompt an action.

Previous research on this topic has typically been isolated within specific disciplines, preventing the development of a unified theory that applies across different topics. This fragmentation makes it difficult to know if the psychological drivers behind a successful anti-smoking ad are the same as those driving a popular movie trailer. The authors of the current study aimed to bridge this gap by applying a standardized analytical framework to a large collection of existing datasets.

“Persuasive messages—like those used in marketing, politics, or public health campaigns—play a key role in shaping attitudes and influencing behavior. But what exactly makes these messages effective, and do the same processes apply across different contexts? We don’t fully know, because research on persuasion tends to stay within individual disciplines, with little cross-talk,” explained the corresponding authors, Christin Scholz, Hang-Yee Chan, and Emily B. Falk.

“If we could identify common processes, different fields could work together more efficiently to understand what really drives persuasion. In this study, we examine neuroimaging data collected in response to a variety of persuasive messages. MRI brain images offer a way to observe and compare patterns of brain activity across different contexts. By conducting a mega-analysis of 16 datasets, we aimed to uncover broader patterns in how the brain responds to persuasive messages—patterns that individual studies might overlook.”

The research team conducted a mega-analysis, which differs from a traditional meta-analysis by aggregating and re-processing the raw data from multiple studies rather than simply summarizing their published results. They pooled functional magnetic resonance imaging (fMRI) data from 16 distinct experiments conducted by the co-authors. This combined dataset included 572 participants who were exposed to a total of 739 different persuasive messages.

The scope of the messages was broad, covering topics such as public health promotion, crowdfunding projects, commercial products, and video, text, or image-based advertisements. The total dataset comprised 21,688 individual experimental trials. In each of the original studies, participants lay inside an MRI scanner while viewing the messages. The scanner recorded changes in blood flow to various parts of the brain, which serves as a proxy for neural activity.

After viewing the content, the participants provided their own evaluations of the messages. They typically answered survey questions about how much they liked the message or whether they intended to change their behavior. The researchers categorized these self-reported measures as “message effectiveness in individuals.”

To assess the real-world impact of the content, the team also gathered data on how independent, larger groups of people responded to the same messages. These measures were termed “message effectiveness at scale.” This category included objective behavioral metrics like click-through rates on web banners, the amount of money donated to a campaign, or total view counts on video platforms.

The researchers then used linear mixed-effects models to test if brain activity in specific regions could predict both the individuals’ ratings and the large-scale behavioral outcomes. They focused their analysis on two primary neural systems: the reward system and the mentalizing system. The reward system is involved in anticipating value and pleasure, while the mentalizing system helps individuals understand the thoughts and feelings of others.
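
The paper’s exact model specifications are not reproduced here, but a linear mixed-effects model of this general form can be sketched with standard tools; the variable and file names below are hypothetical.

```python
# Illustrative sketch: relate trial-level brain activity to message
# effectiveness, with a random intercept for each contributing dataset.
import pandas as pd
import statsmodels.formula.api as smf

trials = pd.read_csv("pooled_trials.csv")  # one row per participant x message

model = smf.mixedlm(
    "effectiveness ~ reward_activity + mentalizing_activity",
    data=trials,
    groups=trials["dataset_id"],  # random intercept per original study
    re_formula="1",
)
result = model.fit()
print(result.summary())
```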

The statistical analysis revealed that activity in brain networks associated with reward processing was positively linked to message effectiveness. When participants showed higher engagement in the ventral tegmental area and nucleus accumbens, they were more likely to rate the messages as effective. These regions are deep structures in the brain that are typically involved in processing personal value and motivation. The study indicates that this neural signal of value is a consistent predictor of how well a message is received by the viewer.

The researchers also identified a strong connection between message success and activity in the brain’s mentalizing system. This network includes the medial prefrontal cortex and the temporal poles. These areas are active when people think about themselves or attempt to interpret the mental states of other people. The analysis showed that messages triggering this social processing network were more likely to be effective both for the person watching and for larger audiences.

A significant finding emerged when the researchers compared brain data to the real-world success of the messages at the population level. They found that neural activity in the mentalizing system predicted population-level outcomes, such as how often a video was shared. This predictive power held true even after accounting for the participants’ stated opinions in surveys. This suggests that the brain registers social relevance in ways that individuals may not consciously articulate.

The study refers to this phenomenon as “neuroforecasting.” This concept posits that neural activity in a small group of people can forecast the behavior of a much larger population. The findings support the idea that specific brain responses are more generalizable to the public than subjective self-reports. While people might say they like a message, their neural activity related to social processing appears to be a better indicator of whether that message will resonate with others.

“On average, the specific brain activity we tracked explained a small but robust portion of why messages were effective, roughly translating to what researchers call a small effect size (Cohen’s d = 0.22) at the population level,” the researchers told PsyPost. “We found this effect when looking at our large set of over 700 diverse messages as a whole. You could understand these neural markers as a ‘common currency’ that helps explain persuasion across many different real-world domains. However, the effect sizes also vary across message domains. Explaining that variance is an important task for the field going forward.”

“In a way, it is surprising that we were able to find any commonality in the neural processes related to message effectiveness across the messages we included. These messages did not only vary in their persuasive goals (from selling products, to recruiting volunteers, to promoting smoking cessation), but also in their format (videos, text, and more), and in the way their effectiveness was evaluated (click-through rates of online campaigns, self-report surveys, etc.).”

“This introduces a lot of noise in the analysis. Yet, we were still able to pick up on some common, underlying processes that support persuasion. This suggests that the ways in which we change our minds and behavior are, at least in part, similar across a variety of domains.”

Beyond the initial hypotheses regarding reward and social processing, an exploratory review of the whole brain uncovered additional patterns. Activity in regions linked to language processing and emotion also correlated with message success at scale. This implies that successful messages tend to engage the brain’s linguistic and emotional centers more deeply than less effective content. These exploratory findings suggest that emotion may play a larger role in mass-market success than previously identified in smaller studies.

“While we hypothesized that reward and social systems would be central, we were surprised to find through exploratory analysis that language processing and emotional brain responses also played significant roles in message success,” Scholz, Chan, and Falk said.

“Interestingly, our results suggested that neural signals related to emotion were particularly strong indicators of message effectiveness at scale—meaning for large groups—rather than just for individuals. We also found it notable that social processing activity in the brain provided ‘hidden’ information about a message’s success that participants didn’t realize they were feeling or mention in their self-reports.”

As with all research, there are some limitations. Most of the data came from participants in Western, Educated, Industrialized, Rich, and Democratic (WEIRD) societies. Cultural norms heavily influence communication and social processing, so these neural markers might differ in other populations. The study is also correlational, meaning it observes associations but cannot prove that brain activity directly causes the messages to be effective.

Technical differences between the original studies also presented challenges for the analysis. The sixteen datasets used varied scanning parameters, equipment, and experimental protocols. While the mega-analysis approach helps smooth out some noise, these inconsistencies make it difficult to identify specific factors that might strengthen or weaken the observed effects.

“These neural markers should be seen as a first step toward experimental work,” the researchers noted. “We need more work, for instance, to interpret the exact psychological and thought processes that are responsible for creating the neural patterns we observed. A brain scanner is not a ‘mind-reading’ tool.”

Future work is needed to move from prediction to explanation. The researchers propose designing experiments that specifically manipulate message content to target the identified brain regions. Such studies could verify whether activating the reward or social processing systems intentionally leads to better outcomes.

“A major goal is to move from observing these brain patterns to conducting experiments that specifically design messages to activate these reward and social mechanisms to see if they become more effective,” Scholz, Chan, and Falk explained. “We also need to diversify our samples to include a broader range of global populations to ensure our findings apply to everyone. Finally, we hope to coordinate as a field to standardize how neuroimaging data is collected across different domains to make future large-scale collaborations even more powerful.”

“This project was a massive collaborative effort involving 16 functional MRI datasets, over 500 participants, and more than 700 unique messages. Because we believe in the importance of open science, we have made our data and analysis code publicly available so other researchers can build on these findings. We hope this study serves as a bridge between neuroscience, communication, and public policy to create more effective and beneficial messaging for society.”

The study, “Brain activity explains message effectiveness: A mega-analysis of 16 neuroimaging studies,” was authored by Christin Scholz, Hang-Yee Chan, Jeesung Ahn, Maarten A. S. Boksem, Nicole Cooper, Jason C. Coronel, Bruce P. Doré, Alexander Genevsky, Richard Huskey, Yoona Kang, Brian Knutson, Matthew D. Lieberman, Matthew Brook O’Donnell, Anthony Resnick, Ale Smidts, Vinod Venkatraman, Khoi Vo, René Weber, Carolyn Yoon, and Emily B. Falk.

Holding racist attitudes predicts increased psychological distress over time

10 February 2026 at 15:00

New research published in the journal Comprehensive Psychiatry challenges the common belief that mental illness is a primary driver of racist attitudes. The findings suggest that the relationship actually works in the opposite direction, with prejudiced beliefs predicting an increase in psychological distress over time. The study also highlights social connectedness as a significant factor, indicating that a lack of social connection may fuel both prejudice and mental health struggles.

Psychologists and social scientists have historically sought to understand the roots of extreme prejudice. A frequent explanation in both academic literature and media coverage is that racism is a symptom of poor mental health. This narrative often surfaces after events of mass violence, where the perpetrator’s actions are attributed to psychological instability rather than ideological conviction. For example, counterterrorism strategies frequently list mental health issues as a key risk factor for radicalization.

Tegan Cruwys, a researcher at the School of Medicine and Psychology at The Australian National University, led a team to investigate the validity of this assumption. The researchers argued that attributing racism to mental illness is problematic for several reasons. It has poor predictive power and risks stigmatizing people with mental health disorders who are not prejudiced.

The research team sought to test the reverse possibility. They wanted to see if holding racist views might actually be toxic to the person holding them. They also hypothesized that a third variable, such as social isolation, might be the true cause of both prejudiced attitudes and psychological decline.

To test these ideas, the researchers analyzed data from three separate longitudinal studies conducted in Australia. Longitudinal studies involve surveying the same group of people at multiple points in time. This design allows scientists to observe which changes occur first and provides better evidence for the direction of cause and effect than one-time surveys. Each of the three studies was large, nationally representative, and spanned a period of approximately six months.

The first study took place during the early stages of the COVID-19 pandemic in 2020. It included 2,361 adults. The researchers measured racism using an adapted scale that assessed social distancing preferences. Participants were asked how much physical distance they would prefer to keep from members of various ethnic outgroups compared to their own family or friends. They also rated their feelings toward these groups on a “warmth” thermometer.

Psychological distress was measured using a standard clinical tool that assesses symptoms of depression and anxiety. Social connectedness was evaluated by asking participants how often they felt lonely or left out.

The second study was conducted in 2023 leading up to the Australian Indigenous Voice referendum. This was a national vote on whether to recognize Aboriginal and Torres Strait Islander peoples in the constitution. The sample included 3,860 participants.

In this study, racism was measured by asking participants to rate how they believed Indigenous peoples were treated in Australia. Scores reflecting a belief that Indigenous people receive “special treatment” were interpreted as indicating prejudice. Psychological distress was measured using a five-item screening questionnaire often used to detect mental ill-health in the general population. Social connectedness was operationalized as the level of trust participants placed in institutions such as the government, police, and scientists.

The third study also occurred during the Voice referendum period and included 2,424 non-Indigenous Australians. The team measured attitudes using a specific scale designed to gauge views on Indigenous Australians. Psychological well-being was assessed using a five-item survey from the World Health Organization. In this dataset, social connectedness was defined by how strongly participants identified with various social groups, including their family, neighborhood, and country.

The results from all three studies showed a consistent pattern. When the researchers looked at the data from a single point in time, the link between racism and psychological distress was weak and inconsistent. This lack of a strong immediate connection suggests that simply having a mental health condition does not automatically make a person more likely to hold racist views.

However, the longitudinal analysis revealed a different story. In all three datasets, an increase in racist beliefs consistently preceded and predicted an increase in psychological distress. In the first study, participants whose racist attitudes intensified over the six months were more likely to experience worsening anxiety and depression. In the third study, psychological distress increased markedly over time only among those participants who held higher levels of racist attitudes.

The second study provided a nuanced view of this trend. During the timeframe of the second study, psychological distress was generally declining across the population. However, this improvement was not evenly shared. Participants who reported the lowest levels of racism showed the steepest decline in distress. In contrast, those with the highest levels of racism experienced a much more modest improvement in their mental health.

The researchers also tested the reverse pathway to see if psychological distress predicted a later increase in racism. The evidence for this was mixed. While two of the studies showed some association, it was not consistent across all contexts. In the third study, psychological distress did not predict any change in racist attitudes over time.
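
The published analyses are more elaborate than this, but the basic logic of testing direction of effect in two-wave data can be sketched in a few lines. The illustration below, assuming a simple dataset with hypothetical columns (racism_t1, distress_t1, racism_t2, distress_t2), regresses each Time 2 variable on both Time 1 variables, so that a significant cross-lagged coefficient indicates that one construct predicts later change in the other.

```python
# Hypothetical sketch of a two-wave lagged regression; column names and the
# input file are placeholders, not the authors' actual variables or code.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("two_wave_survey.csv")  # hypothetical file

# Does Time 1 racism predict Time 2 distress, controlling for Time 1 distress?
forward = smf.ols("distress_t2 ~ racism_t1 + distress_t1", data=df).fit()

# Reverse pathway: does Time 1 distress predict Time 2 racism?
reverse = smf.ols("racism_t2 ~ distress_t1 + racism_t1", data=df).fit()

print(forward.params["racism_t1"], forward.pvalues["racism_t1"])
print(reverse.params["distress_t1"], reverse.pvalues["distress_t1"])
```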

A key component of the study was the investigation of social connectedness. The analysis showed that social connection served as a protective factor against both racism and psychological distress. In the first study, participants who felt less socially connected over time saw increases in both racist attitudes and mental health symptoms.

In fact, when the researchers statistically accounted for the role of social connectedness, the direct link between racism and distress often disappeared or weakened. This suggests that the feeling of being excluded or alienated may be a “common cause” that drives people toward both prejudice and poor mental health.

The researchers propose that prejudiced attitudes may be psychologically harmful because they are inherently threatening. Racism often involves viewing other groups as a danger to one’s own safety, culture, or resources. Living with this constant sense of threat can induce a state of hypervigilance and anxiety that erodes mental well-being over time.

These findings have implications for how society addresses both prejudice and mental health. The results challenge the idea that treating mental illness will automatically reduce racism or extremism. Instead, the study suggests that prejudice itself is a risk factor for mental decline. It implies that interventions designed to foster social connection and community inclusion could have a dual benefit. By helping people feel more connected to society, it may be possible to simultaneously improve mental health outcomes and reduce the prevalence of prejudiced attitudes.

There are limitations to this research that should be noted. The measures of racism varied across the three studies to fit the specific social context of the time. This makes it difficult to compare the absolute levels of prejudice between the different samples. Additionally, the study relied on self-reported data, which can be influenced by a participant’s desire to present themselves in a favorable light. The research was conducted in Australia, so the specific social dynamics may differ in other countries or cultural contexts.

It is also important to avoid interpreting these findings as an explanation for violent extremism. The study surveyed the general population rather than radicalized individuals or members of hate groups. While prejudice is a predictor of radicalization, the psychological dynamics of violent offenders may differ from those of the general public.

Future research is needed to determine if these patterns hold true for other forms of prejudice, such as sexism or homophobia. The researchers also suggest that future studies should test whether practical interventions that boost social connectedness can effectively interrupt the cycle of prejudice and distress. The study indicates that mental health is not a fixed trait but is responsive to our social attitudes and our sense of belonging.

The study, “What goes around comes around? Holding racist attitudes predicts increased psychological distress over time,” was authored by Tegan Cruwys, Olivia Evans, Michael J. Platow, Iain Walker, Katherine J. Reynolds, Christienne Javier, Catherine Haslam, S. Alexander Haslam, and Hema Preya Selvanathan.

Peri-orgasmic phenomena: Women report diverse symptoms ranging from laughter to foot pain

9 February 2026 at 22:00

A recent survey investigation indicates that many women experience unexpected physical and emotional reactions during sexual climax, ranging from uncontrollable laughter to foot pain. These occurrences, known as peri-orgasmic phenomena, appear to be diverse and often happen inconsistently rather than with every orgasmic experience. The findings were published in the Journal of Women’s Health.

Medical understanding of the female orgasm typically focuses on standard physiological release and emotional satisfaction. Physiologically, an orgasm is generally defined as a brief episode of physical release that responds to sexual stimulation. Emotionally, it is usually perceived as a subjective peak of reaction to that stimulation. However, anecdotal reports and isolated case studies have historically hinted at a broader range of experiences that fall outside this expected norm.

Existing medical literature on these unusual symptoms is limited and relies heavily on individual patient reports rather than broader data collection. The authors of this new paper sought to categorize these unique physical and emotional symptoms more systematically. They aimed to determine which specific symptoms women experience and how frequently these sensations occur. Additionally, the team wanted to identify the context in which these phenomena are most likely to manifest, such as during partnered sex or solo masturbation.

“My co-author had written a paper on this topic. Before conducting this survey, occurrences of peri-orgasmic symptoms during orgasm were only acknowledged in the medical literature as rare case reports,” said Lauren F. Streicher, a professor of obstetrics and gynecology at the Feinberg School of Medicine at Northwestern University.

To gather information, the authors created a short educational video explaining peri-orgasmic phenomena. They posted this content on various social media platforms to recruit individuals who identified with having these experiences. The video described the phenomena as unexpected physical or emotional occurrences, such as ear pain or crying, that happen specifically during an orgasm. Viewers who recognized these symptoms in their own lives were invited to participate in an anonymous online survey.

The questionnaire consisted of six items designed to capture demographic data and specific details about orgasmic reactions. A total of 3,800 individuals viewed the recruitment video during the study period. From this audience, 86 women aged 18 and older completed the survey to report their personal experiences. The researchers collected data regarding the types of symptoms, their consistency, and the sexual scenarios in which they appeared.

The analysis revealed that emotional reactions were the most commonly reported type of peri-orgasmic phenomenon. Eighty-eight percent of the respondents indicated they experienced emotional symptoms during climax. Among these emotional responses, crying was the most prevalent, affecting 63 percent of the participants. This finding aligns with existing concepts of postcoital dysphoria, although the prevalence in this specific sample was notable.

Forty-three percent of the women reported feelings of sadness or an urge to cry even during a positive sexual experience. An equal number of women, 43 percent, reported laughing during orgasm. This high rate of laughter contrasts with the scarcity of such reports in previous medical journals. A small minority, comprising 4 percent of the group, reported experiencing hallucinations during the event.

Physical symptoms were also widely represented in the survey results. Sixty-one percent of respondents reported bodily sensations unrelated to standard sexual physiology. The most frequent physical complaint was headache, which was noted by 33 percent of the women. These headaches varied in description, but their association with the moment of climax was clear.

Muscle weakness occurred in 24 percent of the cases reported in the study. This sensation is clinically referred to as cataplexy when it occurs in patients with narcolepsy. However, in this sample, it appeared as an isolated symptom associated with sexual release. Foot pain or tingling was another notable physical symptom, affecting 19 percent of the participants.

Less common physical reactions included facial pain or tingling, which was reported by 6 percent of the group. Sneezing was observed in 4 percent of the respondents. Yawning occurred in 3 percent of the cases. Ear pain or other ear sensations and nosebleeds were each reported by 2 percent of the women.

The data showed that these symptoms often overlap within the same individual. Fifty-two percent of the women experienced more than one type of symptom. Twenty-one percent of the respondents reported having both physical and emotional reactions. Some women reported clusters of symptoms, such as crying and laughing together or headaches accompanied by crying.

Regarding consistency, the study found that these phenomena do not necessarily happen every time a person reaches climax. Sixty-nine percent of the participants stated that they experienced these symptoms only sometimes. In contrast, 17 percent reported that the symptoms occurred consistently with every orgasm. This variability suggests a multifaceted nature to these responses.

The researchers also examined whether the method of sexual stimulation influenced the likelihood of these events. The majority of respondents, 51 percent, experienced these symptoms exclusively during sexual activity with a partner. Only 9 percent reported symptoms specifically during masturbation. The use of a vibrator was associated with these symptoms in 14 percent of the cases.

“The findings from this survey indicate that, although the precise prevalence is still unknown, such phenomena are not as rare as previously believed,” Streicher told PsyPost. “The survey also broadens our understanding of symptom types and prevalence, highlighting both emotional and physical manifestations. Notably, this is the first survey to discover that individuals are more likely to experience these symptoms during partnered sexual activity compared to masturbation. This observation suggests a possible emotional component to the etiology, even though the underlying cause remains unknown.”

The researchers postulate that the presence of a partner may evoke more complex psychological and physiological responses. This might hint at the involvement of an emotional component in triggering these phenomena. A heightened emotional state during sexual activity with a partner may potentially activate different neurophysiological pathways. Solo sexual activity might not trigger these same pathways to the same extent.

The study discusses potential biological mechanisms for some of these physical symptoms. Regarding headaches, the authors note that the hypothalamus is intensely stimulated during orgasm. This brain region is also involved in certain types of cluster headaches. It is possible that the modulation of circuits around the hypothalamus during climax plays a role in generating or relieving head pain.

The reports of foot pain are analyzed through the lens of neuroanatomy. The researchers reference theories suggesting that the regions of the somatosensory cortex representing the foot and the female genitalia lie close together. It is hypothesized that this proximity could lead to “cross-wiring” or referred sensations. Previous case studies have documented women feeling orgasmic sensations in their feet, which supports this neurological theory.

The high prevalence of laughing reported in this sample stands out against the backdrop of existing medical literature. Previous scientific publications have rarely documented laughter as a direct response to orgasm. This survey provides evidence that laughing may be a more common peri-orgasmic phenomenon than clinical case reports have previously suggested. The authors note that the etiologies behind this laughter, as well as the feelings of sadness, remain medically unknown.

But as with all research, there are limitations. The sample size was relatively small, with only 86 women responding out of thousands of viewers. This low response rate makes it difficult to estimate the actual prevalence of these phenomena in the general population. The recruitment method via social media may have introduced selection bias.

The respondents were predominantly older, with a significant portion over the age of 45. This age skew reflects the specific demographic that follows the primary author on social media platforms. The results may not fully represent the experiences of younger women. Additionally, the data relies entirely on self-reporting, which depends on the participants’ memory and interpretation of their symptoms.

Future investigations would benefit from larger and more diverse sample groups to validate these preliminary numbers. Researchers suggest that understanding the underlying physiological mechanisms requires more rigorous clinical study. Detailed physiological monitoring during sexual activity could provide objective data to support these self-reports. Further research could also explore why these symptoms appear more frequently with partners than during solo acts.

The researchers emphasize that recognizing these symptoms is a step toward normalizing the experience for women. “If they experience one of these phenomena, it should not be interpreted as an indication of underlying psychological or physical pathology,” Streicher said.

The study, “Emotional and Physical Symptoms in Women with Peri-Orgasmic Phenomena,” was authored by Lauren F. Streicher and James A. Simon.

Evolutionary motives of fear and coercion shape political views on wealth redistribution

9 February 2026 at 21:00

Recent psychological research suggests that political views on wealth redistribution are driven by deep-seated evolutionary motives rather than just economic logic. New evidence indicates that the fear of conflict and a desire for equal outcomes are powerful predictors of support for government transfer payments. These findings imply that social policies are often supported as a way to appease potential aggressors or to enforce group conformity.

The Role of Egalitarianism and Coercion

Researchers Chien-An Lin and Timothy C. Bates of the University of Edinburgh sought to expand the understanding of why individuals support economic redistribution. Their work builds upon the “three-person two-situation” model. This evolutionary framework previously identified three primary motives: self-interest, compassion for the needy, and malicious envy toward the wealthy.

In a study published in the journal Personality and Individual Differences in 2024, they aimed to determine if a specific preference for equal outcomes could explain support for redistribution better than existing models. They also investigated whether the willingness to use force to achieve these outcomes played a role.

Lin and Bates conducted two separate investigations to test their hypotheses. In Study 1, they recruited 403 participants from the United Kingdom using the Prolific Academic platform. The sample was representative of the UK population regarding ethnicity and gender.

The researchers measured attitudes using several established psychological scales. They assessed support for economic redistribution and the three traditional motives of self-interest, compassion, and envy. They also introduced measures for “Egalitarian Fairness” and “Instrumental Harm.”

Egalitarian Fairness was defined as a motive to divide resources so that no individual wishes to switch their share with another. Instrumental Harm assessed the belief that the ends justify the means, even if achieving them requires harming innocent people. Additionally, the researchers developed a new scale to measure “Support for Coercive Redistribution.”

This new scale included items assessing willingness to punish those who question redistribution and to use force to reveal hidden wealth.

The results of Study 1 provided evidence that Egalitarian Fairness predicts support for redistribution independently of the other motives. This fairness motive accounted for unique variance in political views, operating alongside self-interest, compassion, and envy.

The study also revealed a connection between Instrumental Harm and the willingness to use coercion. Individuals who scored high on Instrumental Harm were more likely to support forcible redistribution. Malicious envy also predicted this support for coercion. The researchers found that compassion did not reduce the support for coercive measures.

To validate these findings, Lin and Bates conducted Study 2 with a fresh sample of 402 UK participants. This replication aimed to confirm the initial results and test for discriminant validity against other forms of fairness. They measured “Procedural Fairness” and “Distributional Fairness” to see if they yielded different results.

The second study confirmed the findings of the first. Egalitarian Fairness reliably increased support for redistribution. The motive for coercion was again predicted by Instrumental Harm, envy, and self-interest.

The study showed that Procedural Fairness had no significant link to redistribution support. This suggests that the desire for redistribution is specifically about outcomes rather than the rules of the game. The final motivational model accounted for over 40% of the variance in support for redistribution.
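
A motivational model of this kind is essentially a multiple regression with several motive scales entered as simultaneous predictors, and the variance-explained figure corresponds to the model's R². As a rough, hypothetical illustration only (the column names below are placeholders, not the authors' variables), such a model might look like this:

```python
# Hypothetical sketch of a motivational model of redistribution support.
# Column names and the input file are placeholders; the published model
# is not reproduced here.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("study2_motives.csv")  # hypothetical file

model = smf.ols(
    "redistribution_support ~ egalitarian_fairness + self_interest"
    " + compassion + envy + instrumental_harm",
    data=df,
).fit()

# R-squared gives the share of variance in support explained by the motives.
print(model.rsquared)
print(model.summary())
```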

Fear of Violent Dispossession

Following this line of inquiry, Bates and Daniel Sznycer of Oklahoma State University investigated a different evolutionary driver: fear. They proposed that support for redistribution might stem from a “Bismarckian” strategy of appeasement. This theory suggests people give up resources to avoid the greater cost of being attacked or robbed.

Otto von Bismarck was the 19th-century German Chancellor credited with establishing the first modern welfare state. He was a conservative leader who implemented social protections such as health insurance and pensions, yet his primary motivation was not compassion. He intended these reforms to undermine the appeal of radical socialist movements.

Their paper, titled “Bismarckian welfare revisited,” was published in the journal Evolution and Human Behavior. The researchers argued that the human mind evolved to navigate asymmetric conflicts. In this view, appeasement is a biological adaptation to avoid injury when facing a desperate or formidable opponent.

They hypothesized that a “Fear of Violent Dispossession” would predict support for progressive taxation. This fear arises when individuals perceive that others value their resources more highly than they do. It leads to a strategy of ceding resources to preempt violence.

Sznycer and Bates conducted three studies to test this hypothesis. Study 1 involved 303 participants from the UK. They developed a “Fear of Violent Dispossession” scale with items such as “I worry that economic hardship could lead to violence directed at people like me.”

The results showed a strong positive association between this specific fear and support for redistribution. The effect remained significant even when controlling for compassion, envy, and self-interest. This suggests that fear acts as a distinct pathway to political support for welfare.

Study 2 sought to replicate these findings in a different cultural context. The researchers recruited a nationally representative sample of 804 participants from the United States. This study included controls for political orientation and party support.

The data from the US sample mirrored the UK findings. Fear of Violent Dispossession was a strong predictor of support for redistribution. This association held true regardless of whether the participant identified as liberal or conservative.

Study 3 was a pre-registered replication using another representative US sample of 804 participants. This study included a measure of “Coercive Egalitarianism” to see if the fear motive remained robust. The results confirmed the previous patterns.

The analysis indicated that fear of dispossession predicts redistribution support over and above coercive egalitarianism. It also outperformed the motive of proportionality. The researchers concluded that appeasement is a key psychological mechanism underlying modern welfare views.

Fear and Broader Progressive Policies

In a related single-author paper published in Personality and Individual Differences, Bates extended this framework. He investigated whether this fear of dispossession explains support for broader progressive policies beyond taxation. These policies included affirmative action, diversity quotas, and support for social justice movements.

Bates theorized that “progressive policy” acts as a broad mechanism for transferring power and control. He hypothesized that the same fear driving economic redistribution would drive support for these social regulations. He also looked at the motive of self-interest in relation to these policies.

Study 1 in this paper involved 502 US participants. The sample was representative regarding age, sex, and political party. Bates developed a “Support for Progressive Policy” scale covering issues like DEI training, decolonization, and boardroom diversity.

The results demonstrated that these diverse policy preferences form a single, coherent psychological construct. As predicted, Fear of Violent Dispossession predicted support for these progressive policies. Individuals who feared losing what they have were more likely to support regulations that transfer influence to others.

The study also found a strong link between self-interest and progressive policy support. Participants who expected their own economic situation to improve under these policies were much more likely to support them. This suggests a dual motivation of fear and personal gain.

Bates also tested a hypothesis regarding appeasement of powerful groups. He asked participants about their willingness to yield to strong adversaries, such as foreign powers or cartels. The data showed that Fear of Violent Dispossession predicted a general tendency to appease strong groups.

Study 2 was a pre-registered replication with 500 US participants. It aimed to confirm the findings while controlling for socioeconomic status. The results were consistent with the first study.

Fear of Violent Dispossession remained a robust predictor of support for progressive policy. The study found that this fear motivates individuals to cede resources to both the needy and the powerful. It challenges the idea that progressive views are solely driven by compassion or moral ideals.

Limitations and Future Directions

These three papers provide a new perspective on political psychology, but they have limitations. The data in all studies were correlational. This means researchers cannot definitively claim that fear causes the policy support, only that they are linked.

The measures also relied on self-reports, and participants might answer in ways they believe are socially acceptable. Future research should use experimental designs that induce fear or compassion to see if policy views change in real time.

Another limitation is the reliance on Western samples from the UK and US. It is unknown if these motives operate identically in non-Western cultures. Cultural norms regarding fear and sharing might influence these biological drives.

Future studies could investigate how these motives interact with dark personality traits. Research could look at whether individuals high in Machiavellianism exploit this fear in others to advance their own interests. Additionally, further work is needed to distinguish this specific fear of dispossession from general anxiety.

The findings suggest that political debates are shaped by ancient mechanisms of survival. Recognizing the roles of fear, envy, and coercion may help explain why political polarization is so persistent. It appears that economic and social policies are often viewed through the lens of potential conflict.

The study, “Support for redistribution is shaped by motives of egalitarian division and coercive redistribution,” was authored by Chien-An Lin and Timothy C. Bates.

The study, “Fear of violent dispossession motivates support for progressive policy,” was authored by Timothy C. Bates.

The study, “Bismarckian welfare revisited: Fear of being violently dispossessed motivates support for redistribution,” was authored by Daniel Sznycer and Timothy C. Bates.
