
Donald Trump is fueling a surprising shift in gun culture, new research suggests

A new study published in Injury Epidemiology provides evidence that the 2024 United States presidential election prompted specific groups of Americans to change their behaviors regarding firearms. The findings suggest that individuals who feel threatened by the policies of the current administration, specifically Black adults and those with liberal political views, are reporting stronger urges to carry weapons and keep them easily accessible. This research highlights a potential shift in gun culture where decision-making is increasingly driven by political anxiety and a desire for protection.

Social scientists have previously observed that firearm purchasing patterns often fluctuate in response to major societal events, such as the onset of the COVID-19 pandemic or periods of civil unrest. However, there has been less research into how specific election results influence not just the buying of guns, but also daily habits like carrying a weapon or how it is stored within the home.

To understand these dynamics better, a team led by Michael Anestis from the New Jersey Gun Violence Research Center at Rutgers University sought to track these changes directly. The researchers aimed to determine if the intense rhetoric surrounding the 2024 election altered firearm safety practices among different demographics.

The researchers surveyed a nationally representative group of adults at two different points in time to capture a “before and after” snapshot. The first survey included 1,530 participants and took place between October 22 and November 3, 2024, immediately preceding the election. The team then followed up with 1,359 of the same individuals between January 7 and January 22, 2025. By maintaining the same group of participants, the scientists could directly compare intentions expressed before the election with reported behaviors and urges felt in the weeks following the results.
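
To make the two-wave panel logic concrete, here is a minimal sketch (not the authors' code) of how a pre- to post-election comparison of this kind can be computed; the column names participant_id, wave, urge_to_carry, and race are hypothetical stand-ins for the survey variables.

```python
# Minimal sketch of a two-wave panel comparison; column names are hypothetical.
import pandas as pd


def within_person_change(df: pd.DataFrame, outcome: str) -> pd.DataFrame:
    """Compute each participant's pre-to-post change on an outcome,
    then summarize that change by demographic group."""
    # Reshape long data (one row per person per wave) to wide form.
    wide = df.pivot(index="participant_id", columns="wave", values=outcome)
    wide["change"] = wide["post"] - wide["pre"]  # post-election minus pre-election
    groups = df.drop_duplicates("participant_id").set_index("participant_id")
    return wide.join(groups["race"]).groupby("race")["change"].agg(["mean", "count"])


# Example usage (survey_long would hold the real panel data):
# summary = within_person_change(survey_long, outcome="urge_to_carry")
```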

The data indicated that identifying as Black was associated with an increase in the urge to carry firearms specifically because of the election results. Black participants were also more likely than White participants to express an intention to purchase a firearm in the coming year or to remain undecided, rather than rejecting the idea of ownership. This aligns with broader trends suggesting that the demographics of gun ownership are diversifying.

Similarly, participants who identified with liberal political beliefs reported a stronger urge to carry firearms outside the home as a direct result of the election outcome. The study found that as political views became more liberal, individuals were over two times more likely to change their storage practices to make guns more quickly accessible. This suggests that for some, the perceived need for immediate defense has overridden standard safety recommendations regarding secure storage.

The researchers also examined how participants viewed the stability of the country. Those who perceived a serious threat to American democracy were more likely to store their guns in a way that allowed for quicker access. Individuals who expressed support for political violence showed a complex pattern. They were more likely to intend to buy guns but reported a decreased urge to carry them. This might imply that those who support such violence feel more secure in the current political environment, reducing their perceived need for constant protection outside the home.

Anestis, the executive director of the New Jersey Gun Violence Research Center and lead researcher, noted that the motivation for these changes is clear but potentially perilous.

“These findings highlight that communities that feel directly threatened by the policies and actions of the second Trump administration are reporting a greater drive to purchase firearms, carry them outside their home, and store them in a way that allows quick access and that these urges are a direct result of the presidential election,” Anestis said. “It may be that individuals feel that the government will not protect them or – worse yet – represents a direct threat to their safety, so they are trying to prepare themselves for self-defense.”

These findings appear to align with recent press reports describing a surge in firearm interest among groups not historically associated with gun culture. An NPR report from late 2025 featured accounts from individuals like “Charles,” a doctor who began training with a handgun due to fears for his family’s safety under the Trump administration.

A story from NBC News published earlier this week highlighted a sharp rise in requests for firearm training from women and people of color. Trainers across the country, including organizations like the Liberal Gun Club and Grassroots Defense, have reported that their classes are fully booked. This heightened interest often correlates with specific fears regarding federal law enforcement.

For example, recent news coverage mentions the high-profile shooting of Alex Pretti, a concealed carry permit holder in Minneapolis, by federal agents. Reports indicate that such incidents have stoked fears about constitutional rights violations. Both the academic study and these journalistic accounts paint a picture of defensive gun ownership rising among those who feel politically marginalized.

While the study provides evidence of shifting behaviors, there are limitations to consider. The number of people who actually purchased a gun during the short window between the two surveys was low, which limits the ability of the researchers to draw broad statistical conclusions about immediate purchasing habits.

Additionally, the study relied on self-reported data. This means the results depend on participants answering honestly about sensitive topics like weapon storage and their willingness to use force. Future research will need to examine whether these shifts in behavior result in long-term changes in injury rates or accidental shootings.

“Ultimately, it seems that groups less typically associated with firearm ownership – Black adults and those with liberal political beliefs, for instance – are feeling unsafe in the current environment and trying to find ways to protect themselves and their loved ones,” Anestis said.

However, he cautioned that the method of protection chosen could lead to unintended consequences.

“Although those beliefs are rooted in a drive for safety, firearm acquisition, carrying, and unsecure storage are all associated with the risk for suicide and unintentional injury, so I fear that the current environment is actually increasing the risk of harm,” he said. “Indeed, recent events in Minneapolis make me nervous that the environment fostered by the federal government is putting the safety of Americans in peril.”

The study, “Changes in firearm intentions and behaviors after the 2024 United States presidential election,” was authored by Michael D. Anestis, Allison E. Bond, Kimberly C. Burke, Sultan Altikriti, and Daniel C. Semenza.

This mental trait predicts individual differences in kissing preferences

A new study published in Sexual and Relationship Therapy provides evidence that a person’s tendency to engage in sexual fantasy influences what they prioritize in a romantic kiss. The findings suggest that the mental act of imagining intimate scenarios is strongly linked to placing a higher value on physical arousal and contact during kissing. This research helps explain the psychological connection between cognitive states and physical intimacy.

From an evolutionary perspective, researchers have proposed three main reasons for romantic kissing. The first is “mate assessment,” which means kissing helps individuals subconsciously judge a potential partner’s health and genetic compatibility. The second is “pair bonding,” where kissing serves to maintain an emotional connection and commitment between partners in a long-term relationship.

The third proposed function is the “arousal hypothesis.” This theory suggests that the primary biological purpose of kissing is to initiate sexual arousal and prepare the body for intercourse. While this seems intuitive, previous scientific attempts to prove this hypothesis have failed to find a strong link. Past data did not show that kissing consistently acts as a catalyst for sexual arousal.

The researchers behind the current study argued that these previous attempts were looking at the problem too narrowly. Earlier work focused almost exclusively on the physical sensation of kissing, such as the sensitivity of the lips or the exchange of saliva. This approach largely ignored the mental and emotional state of the person doing the kissing. The researchers hypothesized that the physical act of kissing might not be arousing on its own without a specific cognitive component. They proposed that sexual fantasy serves as this missing link.

“People have tested three separate hypotheses to explain why we engage in romantic kissing as a species,” said study author Christopher D. Watkins, a senior lecturer in psychology at Abertay University. “At the time there had been no evidence supporting the arousal hypothesis for kissing – that kissing may act as an important catalyst for sex. This may be because these studies focussed on the sensation of kissing as the catalyst, when psychological explanations are also important (e.g., the mental motives for kissing which in turn makes intimacy feel pleasurable/desirable).”

To test this idea, the researchers designed an online study to measure the relationship between fantasy proneness and kissing preferences. They recruited a sample of 412 adults, primarily from the United Kingdom and Italy. After removing participants who did not complete all sections or meet the age requirements, the final analysis focused on 212 individuals. This group was diverse in terms of relationship status, with about half of the participants reporting that they were in a long-term relationship.

Participants completed a series of standardized questionnaires. The first was the “Good Kiss Questionnaire,” which asks individuals to rate the importance of various factors when deciding if someone is a good kisser. These factors included sensory details like the taste of the partner’s lips, the pleasantness of their breath, and the “wetness” of the kiss. The questionnaire also included items related to “contact and arousal,” asking how important physical touching and the feeling of sexual excitement were to the experience.

The scientists also administered the “Sexual Fantasy Questionnaire.” They specifically focused on the “intimacy” subscale, which measures how often a person engages in daytime fantasies about romantic interactions with a partner. This measure was distinct from fantasies that occur during sexual acts or while dreaming. It focused on the mental habit of imagining intimacy during everyday life.

To isolate the role of sexual fantasy, the researchers included control measures. They measured “general creative experiences” to assess whether a person was simply imaginative in general. This allowed the scientists to determine if the results were driven specifically by sexual fantasy rather than just a vivid imagination. They also measured general sexual desire to see if the effects were independent of a person’s overall sex drive.

The results supported the researchers’ primary prediction. The analysis showed a positive correlation between daytime intimate fantasy and the importance placed on arousal and contact in a good kiss. Individuals who reported a higher tendency to fantasize about intimacy were much more likely to define a “good kiss” as one that includes high levels of physical contact and sexual arousal.

“Your tendency to think and fantasise about intimacy during the day is related to the qualities you associate with a good-quality kiss,” Watkins told PsyPost. “Specifically, the importance we attach to contact and arousal while kissing. As such, our mental preoccupations could facilitate arousal when in close contact with an intimate partner – explaining personal differences in how we approach partners during intimate encounters.”

This relationship held true even after the researchers statistically controlled for other variables. The link between fantasy and kissing preferences remained significant regardless of the participant’s general creativity levels. This suggests that the connection is specific to sexual and romantic cognition, not just a byproduct of having a creative mind.

Additionally, the finding was independent of general sexual desire. While people with higher sex drives did generally value arousal more, the specific habit of fantasizing contributed to this preference over and above general desire. This implies that the mental act of simulating intimacy creates a specific psychological context. This context appears to shape what a person expects and desires from the physical act of kissing.
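
The “over and above” logic here is essentially a multiple regression with covariates. The sketch below illustrates that idea on simulated data; the variable names are hypothetical labels for the questionnaire scores, not the study’s actual dataset or analysis code.

```python
# Illustration of "controlling for" covariates with multiple regression,
# using simulated data; variable names are hypothetical stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 212  # matches the analyzed sample size, but the data here are simulated
kiss_data = pd.DataFrame({
    "intimate_fantasy": rng.normal(size=n),
    "creativity": rng.normal(size=n),
    "sexual_desire": rng.normal(size=n),
})
# Simulate an outcome driven partly by fantasy proneness.
kiss_data["arousal_contact"] = 0.4 * kiss_data["intimate_fantasy"] + rng.normal(size=n)

# If the coefficient on intimate_fantasy stays positive with creativity and
# sexual_desire in the model, the fantasy-kissing link is not just a byproduct
# of imagination or overall sex drive.
model = smf.ols(
    "arousal_contact ~ intimate_fantasy + creativity + sexual_desire",
    data=kiss_data,
).fit()
print(model.params)
```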

The study also yielded secondary findings regarding kissing styles. The researchers looked at “reproductive potential,” which they measured by asking participants about their history of sexual partners relative to their peers. This is often used in evolutionary psychology as a proxy for mating strategy. The data showed that individuals with a history of more sexual partners placed greater importance on “technique” in a good kiss. Specifically, they valued synchronization, or whether the partner’s kissing style matched their own.

“One unplanned relationship found in the data was between the importance people placed on technique (e.g., synchronicity) in a good kiss and the extent to which people reported tending to have sex with different people across their relationship history (compared to average peer behavior),” Watkins said. “This may suggest that people who seek sexual variety also seek some form of similarity in partners while intimate (kissing style). This was a small effect though that we would like others to examine/replicate independently in their own studies.”

As with all research, there are some limitations. The research used a cross-sectional design, meaning it captured data from participants at a single point in time. As a result, the researchers cannot prove that fantasizing causes a change in kissing preferences. It is equally possible that the relationship works in the reverse direction, or that a third factor influences both.

The sample was also heavily skewed toward Western cultures, specifically the UK and Italy. Romantic kissing is not a universal human behavior and is observed in less than half of known cultures. Consequently, these findings may not apply to cultures where kissing is not a standard part of romantic or sexual rituals.

Future research could address these issues by using longitudinal designs. Scientists could follow couples over time to see how the relationship between fantasy and physical intimacy evolves. This would help clarify whether increasing intimate fantasy can revitalize a couple’s physical connection.

“We are looking to develop our testing instruments to explore other experiences related to kissing, and expand our studies on this topic – for example, by establishing clear cause and effect between our thoughts/fantasies and later kissing behaviors or other behaviors reported during close contact with romantic partners,” Watkins said.

The study, “Proclivity for sexual fantasy accounts for differences in the perceived components of a ‘good kiss’,” was authored by Milena V. Rota and Christopher D. Watkins.

Who lives a good single life? New data highlights the role of autonomy and attachment

A new study published in the journal Personal Relationships suggests that single people who feel their basic psychological needs are met tend to experience higher life satisfaction and fewer depressive symptoms. The findings indicate that beyond these universal needs, having a secure attachment style and viewing singlehood as a personal choice rather than a result of external barriers are significant predictors of a satisfying single life.

The number of single adults has increased significantly in recent years, prompting psychologists to investigate what factors contribute to a high quality of life for this demographic. Historically, relationship research has focused heavily on the dynamics of couples, often treating singlehood merely as a transitional stage or a deficit. When researchers did study singles, they typically categorized them simply as those who chose to be single versus those who did not. This binary perspective fails to capture the complexity of the single experience.

The researchers behind the new study sought to understand the specific psychological characteristics that explain why some individuals thrive in singlehood while others struggle. By examining factors ranging from broad human needs to specific attitudes about relationships, the team aimed to clarify the internal and external forces that shape single well-being.

“Much of the research on single people has focused on deficits—that singles are less happy or lonely compared to partnered people,” said study author Jeewon Oh, an assistant professor at Syracuse University.

“We wanted to ask instead: When do single people thrive? We wanted to identify what actually predicts a good single life from understanding their individual differences. We know that people need to feel autonomous, competent, and related to others to flourish, but it wasn’t clear whether relationship-specific factors like attachment style or reasons for being single play an important role beyond satisfying these more basic needs.”

To investigate these questions, the scientists conducted two separate analyses. The first sample consisted of 445 adults recruited through Qualtrics Panels. These participants were older, with an average age of approximately 53 years, and were long-term singles who had been without a partner for an average of 20 years. This demographic provided a window into the experiences of those who have navigated singlehood for a significant portion of their adulthood.

The second sample was gathered to see if the findings would hold true for a different age group. This group included 545 undergraduate students from a university in the northeastern United States. These participants were much younger, with an average age of roughly 19 years. By using two distinct samples, the researchers hoped to distinguish between findings that might be unique to a specific life stage and those that apply to singles more generally.

The researchers used a series of surveys to assess several psychological constructs. First, they measured the satisfaction of basic psychological needs based on Self-Determination Theory. This theory posits that three core needs are essential for human well-being: autonomy, competence, and relatedness. Autonomy refers to a sense of volition and control over one’s own life choices. Competence involves feeling capable and effective in one’s activities. Relatedness is the feeling of being connected to and cared for by others.

In addition to basic needs, the study assessed attachment orientation. Attachment theory describes how people relate to close others, often based on early life experiences. The researchers looked at two dimensions: attachment anxiety and attachment avoidance. Attachment anxiety is characterized by a fear of rejection and a strong need for reassurance. Attachment avoidance involves a discomfort with intimacy and a preference for emotional distance.

The team also measured sociosexuality and reasons for being single. Sociosexuality refers to an individual’s openness to uncommitted sexual experiences, including their desires, attitudes, and behaviors regarding casual sex. For the reasons for being single, participants rated their agreement with statements categorized into domains such as valuing freedom, perceiving personal constraints, or feeling a lack of courtship ability.

The most consistent finding across both samples was the importance of basic psychological need satisfaction. Single individuals who felt their needs for autonomy, competence, and relatedness were being met reported significantly higher life satisfaction and satisfaction with their relationship status. They also reported fewer symptoms of depression.

This suggests that the foundation of a good life for singles is largely the same as it is for everyone else. It relies on feeling in control of one’s life, feeling capable, and having meaningful social connections, which for singles are often found in friendships and family rather than romantic partnerships.

Attachment style also emerged as a significant predictor of well-being. The data showed that higher levels of attachment anxiety were associated with more depressive symptoms. In the combined analysis of both samples, attachment anxiety also predicted lower satisfaction with singlehood. People with high attachment anxiety often crave intimacy and fear abandonment. This orientation may make singlehood particularly challenging, as the lack of a romantic partner might act as a constant source of distress.

The study found that the specific reasons a person attributes to their singlehood matter for their mental health. Participants who viewed their singlehood as a means to maintain their freedom and independence reported higher levels of satisfaction. These individuals appeared to be single because they valued the autonomy it provided.

In contrast, those who felt they were single due to constraints experienced worse outcomes. Constraints included factors such as lingering feelings for a past partner, a fear of being hurt, or perceived personal deficits. Viewing singlehood as a forced circumstance rather than a choice was linked to higher levels of depressive symptoms.

The researchers examined whether sociosexuality would predict well-being, hypothesizing that singles who are open to casual sex might enjoy singlehood more. However, the results indicated that sociosexuality did not provide additional explanatory power once basic needs and attachment were taken into account. While the desire for uncommitted sex was correlated with some outcomes in isolation, it was not a primary driver of well-being in the comprehensive models.

These findings suggest that a “sense of choice” is a multi-layered concept. It is not just about a simple decision to be single or not. Instead, it is reflected in how much autonomy a person feels generally, whether their attachment style allows them to feel secure without a partner, and whether they interpret their single status as an alignment with their values.

“The most important takeaway is that single people’s well-being consistently depends on having their basic psychological needs met—feeling autonomous, competent, and connected to others,” Oh told PsyPost. “However, beyond that, it also matters whether someone has an anxious attachment style, and whether they feel like they are single because it fits their values (vs. due to constraints). These individual differences are aligned with having a sense of choice over being single, which may be one key to a satisfying singlehood.”

The study has some limitations. The research relied on self-reported data collected at a single point in time. This cross-sectional design means that scientists cannot determine the direction of cause and effect. For example, it is possible that people who are already depressed are more likely to perceive their singlehood as a result of constraints, rather than the constraints causing the depression.

The demographic composition of the samples also limits generalizability. The participants were predominantly White and, in the older sample, mostly women. The experience of singlehood can vary greatly depending on gender, race, cultural background, and sexual orientation. The researchers noted that future studies should aim to include more diverse groups to see if these psychological patterns hold true across different populations.

Another limitation involved the measurement of reasons for being single. The scale used to assess these reasons had some statistical weaknesses, which suggests that the specific categories of “freedom” and “constraints” might need further refinement in future research. Despite this, the general pattern—that voluntary reasons link to happiness and involuntary reasons link to distress—aligns with previous scientific literature.

Future research could benefit from following single people over time. A longitudinal approach would allow scientists to observe how changes in need satisfaction or attachment security influence feelings about singlehood as people age. It would also be valuable to explore how other personality traits, such as extraversion or neuroticism, interact with these factors to shape the single experience.

The study, “Who Lives a Good Single Life? From Basic Need Satisfaction to Attachment, Sociosexuality, and Reasons for Being Single,” was authored by Jeewon Oh, Arina Stoianova, Tara Marie Bello, and Ashley De La Cruz.

Your attachment style predicts which activities boost romantic satisfaction

New research provides evidence that the best way to spend time with a romantic partner depends on their specific emotional needs. A study published in Social Psychological and Personality Science suggests that people with avoidant attachment styles feel more satisfied when engaging in novel and exciting activities, while those with anxious attachment styles benefit more from familiar and comfortable shared experiences.

Psychological science identifies attachment insecurity as a significant barrier to relationship satisfaction. Individuals high in attachment avoidance often fear intimacy and prioritize independence, while those high in attachment anxiety fear abandonment and frequently seek reassurance.

Previous studies have shown that partners can mitigate these insecurities by adjusting their behavior, such as offering autonomy to avoidant partners or reassurance to anxious ones. However, less is known about how specific types of shared leisure activities function in this dynamic.

“This study was motivated by two main gaps. One was a gap in the attachment literature. Although attachment insecurity reliably predicts lower relationship satisfaction, these effects can be buffered, and most prior work has focused on partner behaviors. We wanted to know whether shared, everyday experiences could play a similar role,” said study author Amy Muise, a professor and York Research Chair in the Department of Psychology and director of the Sexual Health and Relationships (SHaRe) Lab at York University.

“We were also interested in testing the idea that novelty and excitement are universally good for relationships. Instead, we asked whether different types of shared experiences are more or less beneficial depending on people’s attachment-related needs.”

To explore these dynamics, the scientists conducted a meta-analysis across three separate daily diary studies. The total sample consisted of 390 couples from Canada and the United States. Participants were required to be in a committed relationship and living together or seeing each other frequently. The average relationship length varied slightly by study but ranged generally from seven to eight years.

For a period of 21 days, each partner independently completed nightly surveys. They reported their daily relationship satisfaction and the types of activities they shared with their partner that day. The researchers measured two distinct types of shared experiences. “Novel and exciting” experiences were defined as activities that felt new, challenging, or expanding, such as learning a skill or trying a new restaurant.

“Familiar and comfortable” experiences involved routine, calming, and predictable activities. Examples included watching a favorite TV show, cooking a standard meal together, or simply relaxing at home. The participants also rated their levels of attachment avoidance and anxiety at the beginning of the study. This design allowed the researchers to track how fluctuations in daily activities related to fluctuations in relationship satisfaction.
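
For readers curious about the mechanics, the sketch below illustrates the within-person (person-mean-centered) logic of a daily diary analysis on simulated data. It is a simplified stand-in for the dyadic multilevel models the researchers actually reported, and all column names are assumptions for illustration.

```python
# Simplified, simulated illustration of within-person daily diary analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_people, n_days = 50, 21  # 21 nightly reports, as in the study design
diary = pd.DataFrame({
    "person_id": np.repeat(np.arange(n_people), n_days),
    "novelty": rng.normal(size=n_people * n_days),
})
diary["satisfaction"] = 0.2 * diary["novelty"] + rng.normal(size=len(diary))

# Person-mean centering: each day's novelty is scored relative to that person's
# own average, so the model reflects days that are more novel than usual for
# that individual rather than differences between people.
diary["novelty_c"] = diary["novelty"] - diary.groupby("person_id")["novelty"].transform("mean")

# Multilevel model with days nested within people (the published models also
# nest partners within couples, which is omitted here for brevity).
fit = smf.mixedlm("satisfaction ~ novelty_c", data=diary, groups=diary["person_id"]).fit()
print(fit.params)
```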

The data revealed that, in general, both types of shared experiences were linked to higher daily relationship satisfaction. “The effects are modest in size, which is typical for daily experience research because they reflect within-person changes in everyday life,” Muise told PsyPost. “These are not dramatic shifts in relationship quality, but small day-to-day effects that may accumulate over time.”

“Overall, both novel and familiar shared experiences were linked to greater relationship satisfaction, but the effect of familiar, comfortable experiences was larger (roughly two to three times larger) than novel experiences overall.”

Importantly, the benefits differed depending on a person’s attachment style. For individuals high in attachment avoidance, engaging in novel and exciting activities provided a specific benefit.

On days when avoidant individuals reported more novelty and excitement than usual, the typical link between their avoidant style and lower relationship satisfaction was weakened. The researchers found that these exciting activities increased perceptions of “relational reward.” This means the avoidant partners felt a sense of intimacy and connection that did not feel threatening or smothering. Familiar and comfortable activities did not provide this same buffering effect for avoidant individuals.

In contrast, individuals high in attachment anxiety derived the most benefit from familiar and comfortable experiences. On days marked by high levels of familiarity and comfort, the usual association between attachment anxiety and lower relationship satisfaction disappeared entirely. The study suggests that these low-stakes, comforting interactions help reduce negative emotions for anxiously attached people.

Novel and exciting activities did not consistently buffer the relationship satisfaction of anxiously attached individuals. The researchers noted that while novelty is generally positive, it does not address the specific need for security that defines attachment anxiety. The calming nature of routine appears to be the key ingredient for soothing these specific fears.

“One thing that surprised us was how familiar and comfortable activities seemed to help people who are more anxiously attached,” Muise said. “We expected these experiences to work by lowering worries about rejection or judgment, but that wasn’t what we found. Instead, they seemed to help by lowering people’s overall negative mood.”

“This made us think more carefully about what comfort and routine might actually be doing emotionally. It’s possible that for people higher in attachment anxiety, familiar and comfortable time together helps them feel more secure, and that sense of security is what supports relationship satisfaction. We weren’t able to test that directly in this study, but it’s an important direction for future work.”

The researchers also examined how one person’s attachment style affected their partner’s satisfaction. The results showed that when a person had a highly avoidant partner, they reported higher satisfaction on days they shared novel and exciting experiences. Conversely, when a person had a highly anxious partner, they reported higher satisfaction on days filled with familiar and comfortable activities. This indicates that tailoring activities benefits both the insecure individual and their romantic partner.

“The main takeaway is that there is no single ‘right’ way to spend time together that works for all couples,” Muise explained. “What matters is whether shared experiences align with people’s emotional needs. For people who are more avoidantly attached, doing something novel or exciting together (something that feels new and fun rather than overtly intimate) can make the relationship feel more rewarding and satisfying.”

“For people who are more anxiously attached, familiar and comfortable time together seems especially important for maintaining satisfaction. These findings suggest that tailoring shared time, rather than maximizing novelty or excitement per se, may be a more effective way to support relationship well-being.”

While the findings offer practical insights, the study has certain limitations. The research relied on daily diary entries, which are correlational. This means that while the researchers can observe a link between specific activities and higher satisfaction, they cannot definitively prove that the activities caused the satisfaction. It is possible that feeling satisfied makes a couple more likely to engage in fun or comfortable activities.

“Another potential misinterpretation is that novelty is ‘bad’ for anxiously attached people or that comfort is ‘bad’ for avoidantly attached people,” Muise clarified. “That is not what we found. Both types of experiences were generally associated with higher satisfaction; the difference lies in when they are most helpful for buffering insecurity, not whether they are beneficial at all.”

Future research is needed to determine if these daily buffering effects lead to long-term improvements in attachment security. The scientists also hope to investigate who initiates these activities and whether the motivation behind them impacts their effectiveness. For now, the data suggests that checking in on a partner’s emotional needs might be the best guide for planning the next date night.

“One long-term goal is to understand whether these day-to-day buffering effects can lead to longer-term changes in attachment security,” Muise said. “If people repeatedly engage in the ‘right’ kinds of shared experiences, could that have implications for how attachment insecurity evolves over time?”

“Another direction is to examine how these experiences are initiated. Who suggests the activity, and whether it feels voluntary or pressured, might matter for whether certain experiences are associated with satisfaction.”

“One thing I really appreciate about this study is that it allowed us to look at both partners’ experiences,” Muise added. “The partner effects suggest that tailoring shared experiences doesn’t only benefit the person who is more insecure; it is also associated with how their partner feels about the relationship. Overall, engaging in shared experiences that are aligned with one partner’s attachment needs has benefits for both partners.”

The study, “Novel and Exciting or Tried and True? Tailoring Shared Relationship Experiences to Insecurely Attached Partners,” was authored by Kristina M. Schrage, Emily A. Impett, Mustafa Anil Topal, Cheryl Harasymchuk, and Amy Muise.

Bias against AI art is so deep it changes how viewers perceive color and brightness

New research suggests that simply labeling an artwork as created by artificial intelligence can reduce how much people enjoy and value it. This bias appears to affect not just how viewers interpret the meaning of the art, but even how they process basic visual features like color and brightness. The findings were published in the Psychology of Aesthetics, Creativity, and the Arts.

Artificial intelligence has rapidly become a common tool for visual artists. Artists use technologies ranging from text-to-image generators to robotic arms to produce new forms of imagery. Despite this widespread adoption, audiences often react negatively when they learn technology was involved in the creative process.

Alwin de Rooij, an assistant professor at Tilburg University and associate professor at Avans University of Applied Sciences, sought to understand the consistency of this negative reaction. De Rooij aimed to determine if this bias occurs across different psychological systems involved in viewing art. The researcher also wanted to see if this negative reaction is a permanent structural phenomenon or if it varies by context.

“AI-generated images can now be nearly indistinguishable from art made without AI, yet both public debate and scientific studies suggest that people may respond differently once they are told AI was involved,” de Rooij told PsyPost. “These reactions resemble earlier anxieties around new technologies in art, such as the introduction of photography in the nineteenth century, which is now a fully established art form. This raised the question of how consistent bias against AI in visual art is, and whether it might already be changing.”

To examine this, De Rooij conducted a meta-analysis. This statistical technique combines data from multiple independent studies to find overall trends that a single experiment might miss. The researcher performed a systematic search for experiments published between January 2017 and September 2024.

The analysis included studies where participants viewed visual art and were told it was made by AI. These responses were compared to responses for art labeled as human-made or art presented with no label. The researcher extracted 191 distinct effect sizes from the selected studies.
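
The core arithmetic behind combining many effect sizes is inverse-variance weighting: more precise studies count for more. The toy example below illustrates that idea with made-up numbers; the actual meta-analysis used 191 effect sizes and more elaborate models.

```python
# Toy illustration of inverse-variance pooling of effect sizes (made-up numbers).
import numpy as np

effects = np.array([-0.30, -0.12, -0.25, -0.05])  # hypothetical effect sizes (e.g., Hedges' g)
variances = np.array([0.04, 0.02, 0.05, 0.03])    # their sampling variances

weights = 1.0 / variances                          # more precise estimates get more weight
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```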

De Rooij categorized these measurements using a framework known as the Aesthetic Triad model. This model organizes the art experience into three specific systems. The first is the sensory-motor system, which deals with basic visual processing. The second is the knowledge-meaning system, which involves interpretation and context. The third is the emotion-valuation system, which covers subjective feelings and personal preferences.

The investigation revealed that knowing AI was used generally diminishes the aesthetic experience. A small but significant negative effect appeared within the sensory-motor system. This system involves the initial processing of visual features such as color, shape, and spatial relationships. When viewers believed an image was AI-generated, they tended to perceive these basic qualities less favorably.

A moderate negative effect appeared in the knowledge-meaning system. This aspect of the aesthetic experience relates to how people interpret an artwork’s intent. It also includes judgments about the skill required to make the piece. Participants consistently attributed less profundity and creativity to works labeled as artificial intelligence.

The researcher also found a small negative effect in the emotion-valuation system. This system governs subjective feelings of beauty, awe, and liking. Viewers tended to report lower emotional connection when they thought AI was responsible for the work. They also rated these works as less beautiful compared to identical works labeled as human-made.

“The main takeaway is that knowing AI was involved in making an artwork can change how we experience it, even when the artwork itself is identical,” de Rooij explained. “People tend to attribute less meaning and value to art once it is labeled as AI-made, not because it looks worse, but because it is interpreted differently. In some cases, this bias even feeds into basic visual judgments, such as how colorful or vivid an image appears. This shows that bias against AI is not just an abstract opinion about technology. It can deeply shape the aesthetic experience itself.”

But these negative responses were not uniform across all people. The researcher identified age as a significant factor in the severity of the bias. Older participants demonstrated a stronger negative reaction to AI art. Younger audiences showed much weaker negative effects.

This difference suggests a possible generational shift in how people perceive technology in art. Younger viewers may be less troubled by the integration of algorithms in the creative process. The style of the artwork also influenced viewer reactions.

Representational art, which depicts recognizable objects, reduced the negative bias regarding meaning compared to abstract art. However, representational art worsened the bias regarding emotional connection. The setting of the study mattered as well. Experiments conducted online produced stronger evidence of bias than those conducted in laboratories or real-world galleries.

“Another surprising finding was how unstable the bias is,” de Rooij said. “Rather than being a fixed reaction, it varies across audiences and contexts. As mentioned earlier, the bias tends to be stronger among older populations, but the results show it is also influenced by the style of the artworks and by how and where they are presented. In some settings, the bias becomes very weak or nearly disappears. This further supports the observation that, much like earlier reactions to new technologies in art, resistance to AI may be transitional rather than permanent.”

A key limitation involves how previous experiments presented artificial intelligence. Many studies framed the technology as an autonomous agent that created art independently. This description often conflicts with real-world artistic practice.

“The practical significance of these findings needs to be critically examined,” de Rooij noted. “Many of the studies included in the meta-analysis frame AI as if it were an autonomous artist, which does not reflect artistic practice, where AI is typically used as a responsive material. The AI-as-artist framing evokes dystopian imaginaries about AI replacing human artists or threatening the humanity in art. As a result, some studies may elicit stronger negative responses to AI, but in a way that has no clear real-world counterpart.”

Future research should investigate the role of invisible human involvement in AI art. De Rooij plans to conduct follow-up studies.

“The next step is to study bias against AI in art in more realistic settings, such as galleries or museums, and in ways that better reflect how artists actually use AI in their creative practice,” de Rooij said. “This is a reaction to the finding that bias against AI seemed particularly strong in online studies, which merits verification of the bias in real-world settings. This proposed follow-up research has recently received funding from the Dutch Research Council, and the first results are expected in late 2026. We are excited about moving this work forward!”

The study, “Bias against artificial intelligence in visual art: A meta-analysis,” was authored by Alwin de Rooij.

Younger women find men with beards less attractive than older women do

A new study published in Adaptive Human Behavior and Physiology suggests that a woman’s age and reproductive status may influence her preferences for male physical traits. The research indicates that postmenopausal women perceive certain masculine characteristics, such as body shape and facial features, differently than women who are still in their reproductive years. These findings offer evidence that biological shifts associated with menopause might alter the criteria women use to evaluate potential partners.

Scientists have recognized that physical features act as powerful biological signals in human communication. Secondary sexual characteristics are traits that appear during puberty and visually distinguish men from women. These include features such as broad shoulders, facial hair, jawline definition, and muscle mass.

Evolutionary psychology suggests that these traits serve as indicators of health and genetic quality. For instance, a muscular physique or a strong jawline often signals high testosterone levels and physical strength. Women of reproductive age typically prioritize these markers because they imply that a potential partner possesses “good genes” that could be passed to offspring.

However, researchers have historically focused most of their attention on the preferences of young women. Less is known about how these preferences might change as women age and lose their reproductive capability. The biological transition of menopause involves significant hormonal changes, including a decrease in estrogen levels.

This hormonal shift may correspond to a change in mating strategies. The “Grandmother Hypothesis” proposes that older women shift their focus from reproduction to investing in their existing family line. Consequently, they may no longer prioritize high-testosterone traits, which can be associated with aggression or short-term mating.

Instead, older women might prioritize traits that signal cooperation, reliability, and long-term companionship. To test this theory, a team of researchers from Poland designed a study to compare the preferences of women at different stages of life. The research team included Aurelia Starzyńska and Łukasz Pawelec from the Wroclaw University of Environmental and Life Sciences and the University of Warsaw, alongside Maja Pietras from Wroclaw Medical University and the University of Wroclaw.

The researchers recruited 122 Polish women to participate in an online survey. The participants ranged in age from 19 to 70 years old. Based on their survey responses regarding menstrual regularity and history, the researchers categorized the women into three groups.

The first group was premenopausal, consisting of women with regular reproductive functions. The second group was perimenopausal, including women experiencing the onset of menopausal symptoms and irregular cycles. The third group was postmenopausal, defined as women whose menstrual cycles had ceased for at least one year.

To assess preferences, the researchers created a specific set of visual stimuli. They started with photographs of a single 22-year-old male model. Using photo-editing applications, they digitally manipulated the images to create distinct variations in appearance.

The researchers modified the model’s face to appear either more feminized, intermediate, or heavily masculinized. They also altered the model’s facial hair to show a clean-shaven look, light stubble, or a full beard.

Body shape was another variable manipulated in the study. The scientists adjusted the hip-to-shoulder ratio to create three silhouette types: V-shaped, H-shaped, and A-shaped. Finally, they modified the model’s musculature to display non-muscular, moderately muscular, or strongly muscular builds.

Participants viewed these twelve modified images and rated them on a scale from one to ten. They evaluated the man in the photos based on three specific criteria. The first criterion was physical attractiveness.

The second and third criteria involved personality assessments. The women rated how aggressive they perceived the man to be. They also rated the man’s perceived level of social dominance.

The results showed that a woman’s reproductive status does influence her perception of attractiveness. One significant finding related to the shape of the male torso. Postmenopausal women rated the V-shaped body, which is typically characterized by broad shoulders and narrow hips, as less attractive than other shapes.

This contrasts with general evolutionary expectations where the V-shape is a classic indicator of male fitness. The data suggests that as women exit their reproductive years, the appeal of this strong biological signal may diminish.

Age also played a distinct role in how women viewed facial hair. The study found that older women rated men with medium to full beards as more attractive compared to younger women. This preference for beards increased with the age of the participant.

The researchers suggest that beards might signal maturity and social status rather than just raw genetic fitness. Younger women in the study showed a lower preference for beards. This might occur because facial hair can mask other facial features that young women use to assess mate quality.

The study produced complex results regarding facial masculinity. Chronological age showed a slight positive association with finding feminized faces attractive. This aligns with the idea that older women might prefer “softer” features associated with cooperation.

However, when isolating the specific biological factor of menopause, the results shifted. Postmenopausal women rated feminized faces as less attractive than premenopausal women did. This indicates that the relationship between aging and facial preference is not entirely linear.

Perceptions of aggression also varied by group. Postmenopausal women rated men with medium muscularity as more aggressive than men with other body types. This association was not present in the younger groups.

The researchers propose that older women might view visible musculature as a signal of potential threat rather than protection. Younger women, who are more likely to seek a partner for reproduction, may view muscles as a positive sign of health and defense.

Interestingly, the study found no significant connection between the physical traits and perceived social dominance. Neither the age of the women nor their menopausal status affected how they rated a man’s dominance. This suggests that while attractiveness and aggression are linked to physical cues, dominance might be evaluated through other means not captured in static photos.

The study, like all research, has limitations. One issue involved the method used to find participants, known as snowball sampling. In this process, existing participants recruit future subjects from among their own acquaintances. This method may have resulted in a sample that is not fully representative of the general population.

Reliance on online surveys also introduces a technology bias. Older women who are less comfortable with the internet may have been excluded from the study. This could skew the results for the postmenopausal group.

Another limitation involved the stimuli used. The photographs were all based on a single 22-year-old male model. This young age might not be relevant or appealing to women in their 50s, 60s, or 70s. Postmenopausal women might naturally prefer older men, and evaluating a man in his early twenties could introduce an age-appropriateness bias. The researchers acknowledge that future studies should use models of various ages to ensure more accurate ratings.

Despite these limitations, the study provides evidence that biological changes in women influence social perception. The findings support the concept that mating psychology evolves across the lifespan. As the biological need for “good genes” fades, women appear to adjust their criteria for what makes a man attractive.

The study, “The Perception of Women of Different Ages of Men’s Physical attractiveness, Aggression and Social Dominance Based on Male Secondary Sexual Characteristics,” was authored by Aurelia Starzyńska, Maja Pietras, and Łukasz Pawelec.

Genetic risk for depression predicts financial struggles, but the cause isn’t what scientists thought

A new study published in the Journal of Psychopathology and Clinical Science offers a nuanced look at how genetic risk for depression interacts with social and economic life circumstances to influence mental health over time. The findings indicate that while people with a higher genetic liability for depression often experience financial and educational challenges, these challenges may not be directly caused by the genetic risk itself.

Scientists conducted the study to better understand the developmental pathways that lead to depressive symptoms. A major theory in psychology, known as the bioecological model, proposes that genetic predispositions do not operate in a vacuum. Instead, this model suggests that a person’s genetic makeup might shape the environments they select or experience. For example, a genetic tendency toward low mood or low energy might make it harder for an individual to complete higher education or maintain steady employment.

If this theory holds true, those missed opportunities could lead to financial strain or a lack of social resources. These environmental stressors would then feed back into the person’s life, potentially worsening their mental health. The researchers aimed to test whether this specific chain of events is supported by data. They sought to determine if genetic risk for depression predicts changes in depressive symptoms specifically by influencing socioeconomic factors like wealth, debt, and education.

To investigate these questions, the researchers utilized data from two massive, long-term projects in the United States. The first dataset came from the National Longitudinal Study of Adolescent Health, also known as Add Health. This sample included 5,690 participants who provided DNA samples. The researchers tracked these individuals from adolescence, starting around age 16, into early adulthood, ending around age 29.

The second dataset served as a replication effort to see if the findings would hold up in a different group. This sample came from the Wisconsin Longitudinal Study, or WLS, which included 8,964 participants. Unlike the younger cohort in Add Health, the WLS participants were tracked across a decade in mid-to-late life, roughly from age 53 to 64. Using two different age groups allowed the scientists to see if these patterns persisted across the lifespan.

For both groups, the researchers calculated a “polygenic index” for each participant. This is a personalized score that summarizes thousands of tiny genetic variations across the entire genome that are statistically associated with depressive symptoms. A higher score indicates a higher genetic probability of experiencing depression. The researchers then measured four specific socioeconomic resources: educational attainment, total financial assets, total debt, and access to health insurance.
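
In schematic terms, a polygenic index is a weighted sum of a person’s genotyped variants, with weights taken from an earlier genome-wide association study. The toy example below shows that arithmetic with made-up numbers; real scores sum over hundreds of thousands of variants and are then standardized across the sample.

```python
# Schematic of a polygenic index as a weighted sum of variants (toy numbers).
import numpy as np

genotypes = np.array([0, 1, 2, 1, 0])                     # allele counts at five variants for one person
gwas_weights = np.array([0.02, -0.01, 0.03, 0.00, 0.01])  # per-allele effect estimates from a prior GWAS

polygenic_index = np.dot(genotypes, gwas_weights)          # higher values = higher estimated genetic liability
print(polygenic_index)
```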

In the initial phase of the analysis, the researchers looked at the population as a whole. This is called a “between-family” analysis because it compares unrelated individuals against one another. In the Add Health sample, they found that higher genetic risk for depression was indeed associated with increases in depressive symptoms over the 12-year period.

The data showed that this link was partially explained by the socioeconomic variables. Participants with higher genetic risk tended to have lower educational attainment, fewer assets, more debt, and more difficulty maintaining health insurance. These difficult life circumstances, in turn, were associated with rising levels of depression.

The researchers then repeated this between-family analysis in the older Wisconsin cohort. The results were largely consistent. Higher genetic risk predicted increases in depression symptoms over the decade. Once again, this association appeared to be mediated by the same social factors. Specifically, participants with higher genetic risk reported lower net worth and were more likely to have gone deeply into debt or experienced healthcare difficulties.

These results initially seemed to support the idea that depression genes cause real-world problems that then cause more depression. However, the researchers took a significant additional step to test for causality. They performed a “within-family” analysis using siblings included in the Wisconsin study.

Comparing siblings provides a much stricter test of cause and effect. Siblings share roughly 50 percent of their DNA and grow up in the same household, which controls for many environmental factors like parenting style and childhood socioeconomic status. If the genetic risk for depression truly causes a person to acquire more debt or achieve less education, the sibling with the higher polygenic score should have worse economic outcomes than the sibling with the lower score.
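
The sibling test boils down to relating within-pair differences in the polygenic score to within-pair differences in outcomes, which cancels out everything the siblings share. The sketch below shows that logic on simulated data with hypothetical column names; it is not the study’s analysis code.

```python
# Sketch of the sibling-comparison (within-family) logic on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_pairs = 1000
pairs = pd.DataFrame({
    "pgi_sib1": rng.normal(size=n_pairs), "pgi_sib2": rng.normal(size=n_pairs),
    "debt_sib1": rng.normal(size=n_pairs), "debt_sib2": rng.normal(size=n_pairs),
})

# Within-pair differences remove shared household, parenting, and ancestry.
pairs["pgi_diff"] = pairs["pgi_sib1"] - pairs["pgi_sib2"]
pairs["debt_diff"] = pairs["debt_sib1"] - pairs["debt_sib2"]

# If depression genetics directly caused debt, pgi_diff should predict debt_diff.
# In this simulation there is no true effect, so the slope is near zero,
# analogous to the null sibling result the researchers reported.
within_family = smf.ols("debt_diff ~ pgi_diff", data=pairs).fit()
print(within_family.params)
```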

When the researchers applied this sibling-comparison model, the findings changed. Within families, the sibling with higher genetic risk did report more depressive symptoms. This confirms that the genetic score is picking up on a real biological vulnerability. However, the link between the depression genetic score and the socioeconomic factors largely disappeared.

The sibling with higher genetic risk for depression was not significantly more likely to have lower education, less wealth, or more debt than their co-sibling. This lack of association in the sibling model suggests that the genetic risk for depression does not directly cause these negative socioeconomic outcomes. Instead, the correlation seen in the general population is likely due to other shared factors.

One potential explanation for the discrepancy involves a concept called pleiotropy, where the same genes influence multiple traits. The researchers conducted sensitivity analyses that accounted for genetic scores related to educational attainment. They found that once they controlled for the genetics of education, the apparent link between depression genes and socioeconomic status vanished.

This suggests that the same genetic variations that influence how far someone goes in school might also be correlated with depression risk. It implies that low education or financial struggle is not necessarily a downstream consequence of depression risk, but rather that both depression and socioeconomic struggles may share common genetic roots or be influenced by broader family environments.

The study has some limitations. Both datasets consisted almost entirely of individuals of European ancestry. This lack of diversity means the results may not apply to people of other racial or ethnic backgrounds. Additionally, the measures of debt and insurance were limited to the questions available in these pre-existing surveys. They may not have captured the full nuance of financial stress.

Furthermore, while sibling models help rule out family-wide environmental factors, they cannot account for every unique experience a person has. Future research is needed to explore how these genetic risks interact with specific life events, such as trauma or job loss, which were not the primary focus of this investigation. The researchers also note that debt and medical insurance difficulties are understudied in this field and deserve more detailed attention in future work.

The study, “Genotypic and Socioeconomic Risks for Depressive Symptoms in Two U.S. Cohorts Spanning Early to Older Adulthood,” was authored by David A. Sbarra, Sam Trejo, K. Paige Harden, Jeffrey C. Oliver, and Yann C. Klimentidis.

Evening screen use may be more relaxing than stimulating for teenagers

A recent study published in the Journal of Sleep Research suggests that evening screen use might not be as physically stimulating for teenagers as many parents and experts have assumed. The findings provide evidence that most digital activities actually coincide with lower heart rates compared to non-screen activities like moving around the house or playing. This indicates that the common connection between screens and poor sleep is likely driven by the timing of device use rather than a state of high physical arousal.

Adolescence is a time when establishing healthy sleep patterns is essential for mental health and growth, yet many young people fall short of the recommended eight to ten hours of sleep. While screen use has been linked to shorter sleep times, the specific reasons why this happens are not yet fully understood.

Existing research has looked at several possibilities, such as the light from screens affecting hormones or the simple fact that screens take up time that could be spent sleeping. Some experts have also worried that the excitement from social media or gaming could keep the body in an active state that prevents relaxation. The new study was designed to investigate the physical arousal theory by looking at heart rate in real-world settings rather than in a laboratory.

“In our previous research, we found that screen use in bed was linked with shorter sleep, largely because teens were falling asleep later. But that left an open question: were screens simply delaying bedtime, or were they physiologically stimulating adolescents in a way that made it harder to fall asleep?” said study author Kim Meredith-Jones, a research associate professor at the University of Otago.

“In this study, we wanted to test whether evening screen use actually increased heart rate — a marker of physiological arousal — and whether that arousal explained delays in falling asleep. In other words, is it what teens are doing on screens that matters, or just the fact that screens are replacing sleep time?”

By using objective tools to track both what teens do on their screens and how their hearts respond, the team hoped to fill gaps in existing knowledge. They aimed to see if different types of digital content, such as texting versus scrolling, had different effects on the heart. Understanding these connections is important for creating better guidelines for digital health in young people.

The research team recruited a group of 70 adolescents from Dunedin, New Zealand, who were between 11 and nearly 15 years old. This sample was designed to be diverse, featuring 31 girls and 39 boys from various backgrounds. Approximately 33 percent of the participants identified as indigenous Māori, while others came from Pacific, Asian, or European backgrounds.

To capture a detailed look at their evening habits, the researchers used a combination of wearable technology and video recordings over four different nights. Each participant wore a high-resolution camera attached to a chest harness starting three hours before their usual bedtime. This camera recorded exactly what they were doing and what screens they were viewing until they entered their beds.

Once the participants were in bed, a stationary camera continued to record their activities until they fell asleep. This allowed the researchers to see if they used devices while under the covers and exactly when they closed their eyes. The video data was then analyzed by trained coders who categorized screen use into ten specific behaviors, such as watching videos, gaming, or using social media.

The researchers also categorized activities as either passive or interactive. Passive activities included watching, listening, reading, or browsing, while interactive activities included gaming, communication, and multitasking. Social media use was analyzed separately to see its specific impact on heart rate compared to other activities.

At the same time, the participants wore a Fitbit Inspire 2 on their dominant wrist to track their heart rate every few seconds. The researchers used this information to see how the heart reacted to each specific screen activity in real time. This objective measurement provided a more accurate picture than asking the teenagers to remember how they felt or what they did.

To measure sleep quality and duration, each youth also wore a motion-sensing device on their other wrist for seven consecutive days. This tool, known as an accelerometer, provided data on when they actually fell asleep and how many times they woke up. The researchers then used statistical models to see if heart rate patterns during screen time could predict these sleep outcomes.

The data revealed that heart rates were consistently higher during periods when the teenagers were not using screens. The average heart rate during non-screen activities was approximately 93 beats per minute, which likely reflects the physical effort of moving around or doing chores. In contrast, when the participants were using their devices, their average heart rate dropped to about 83 beats per minute.

This suggests that screen use is often a sedentary behavior that allows the body to stay relatively calm. When the participants were in bed, the difference was less extreme, but screen use still tended to accompany lower heart rates than other in-bed activities. These findings indicate that digital engagement may function as a way for teenagers to wind down after a long day.

The researchers also looked at how specific types of digital content affected the heart. Social media use was associated with the lowest heart rates, especially when the teenagers were already in bed. Gaming and multitasking between different apps also showed lower heart rate readings compared to other screen-based tasks.

“We were surprised to find that heart rates were lower during social media use,” Meredith-Jones told PsyPost. “Previous research has suggested that social media can be stressful or emotionally intense for adolescents, so we expected to see higher arousal. Instead, our findings suggest that in this context, teens may have been using social media as a way to unwind or switch off. That said, how we define and measure ‘social media use’ matters, and we’re now working on more refined ways to capture the context and type of engagement.”

On the other hand, activities involving communication, such as texting or messaging, were linked to higher heart rates. This type of interaction seemed to be less conducive to relaxation than scrolling through feeds or watching videos. Even so, the heart rate differences between these various digital activities were relatively small.

When examining sleep patterns, the researchers found that heart rate earlier in the evening had a different relationship with sleep than heart rate closer to bedtime. Higher heart rates occurring more than two hours before bed were linked to falling asleep earlier in the night. This may be because higher activity levels in the early evening help the body build up a need for rest.

However, the heart rate in the two hours before bed and while in bed had the opposite effect on falling asleep. For every increase of 10 beats per minute during this window, the participants took about nine minutes longer to drift off. This provides evidence that physical excitement right before bed can delay the start of sleep.
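Read as a roughly linear relationship, that estimate is easy to translate into concrete numbers; the figures below simply restate the reported association rather than adding new data.

```python
# Back-of-the-envelope use of the reported estimate: about nine extra
# minutes of sleep-onset delay for every 10 bpm increase in pre-bed
# heart rate, treated here as if the relationship were linear.
MINUTES_PER_BPM = 9 / 10

def extra_delay_minutes(bpm_increase):
    return MINUTES_PER_BPM * bpm_increase

print(extra_delay_minutes(10))  # ~9 minutes
print(extra_delay_minutes(30))  # ~27 minutes, in line with the quote below
```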

Notably, while a higher heart rate made it harder to fall asleep, it did not seem to reduce the total amount of sleep the teenagers got. It also did not affect how often they woke up during the night or the general quality of their rest. The researchers noted that a person would likely need a very large increase in heart rate to see a major impact on their sleep schedule.

“The effects were relatively small,” Meredith-Jones explained. “For example, our data suggest heart rate would need to increase by around 30 beats per minute to delay sleep onset by about 30 minutes. The largest differences we observed between screen activities were closer to 10 beats per minute, making it unlikely that typical screen use would meaningfully delay sleep through physiological arousal alone.”

“The key takeaway is that most screen use in the evening did not increase heart rate. In fact, many types of screen activity were associated with lower heart rates compared to non-screen time. Although higher heart rate before bed was linked with taking longer to fall asleep, the changes in heart rate we observed during screen use were generally small. Overall, most evening screen activities appeared more relaxing than arousing.”

One limitation of this study is that the researchers did not have a baseline heart rate for each participant while they were completely at rest. Without this information, it is difficult to say for certain if screens were actively lowering the heart rate or if the teens were just naturally calm. Individual differences in biology could account for some of the variations seen in the data.

“One strength of this study was our use of wearable cameras to objectively classify screen behaviours such as gaming, social media, and communication,” Meredith-Jones noted. “This approach provides much richer and more accurate data than self-report questionnaires or simple screen-time analytics. However, a limitation is that we did not measure each participant’s true resting heart rate, so we can’t definitively say whether higher heart rates reflected arousal above baseline or just individual differences. That’s an important area for refinement in future research.”

It is also important to note that the findings don’t imply that screens are always helpful for sleep. Even if they are not physically arousing, using a device late at night can still lead to sleep displacement. This happens when the time spent on a screen replaces time that would otherwise be spent sleeping, leading to tiredness the next day. On the other hand, one shouldn’t assume that screens always impede sleep, either.

“A common assumption is that all screen use is inherently harmful for sleep,” Meredith-Jones explained. “Our findings don’t support that blanket statement. In earlier work, we found that screen use in bed was associated with shorter sleep duration, but in this study, most screen use was not physiologically stimulating. That suggests timing and context matter, and that some forms of screen use may even serve as a wind-down activity before bed.”

Looking ahead, “we want to better distinguish between different types of screen use, for example, interactive versus passive engagement, or emotionally charged versus neutral communication,” Meredith-Jones said. “We’re also developing improved real-world measurement tools that can capture not just how long teens use screens, but what they’re doing, how they’re engaging, and in what context. That level of detail is likely to give us much clearer answers than simple ‘screen time’ totals.”

The study, “Screens, Teens, and Sleep: Is the Impact of Nighttime Screen Use on Sleep Driven by Physiological Arousal?” was authored by Kim A. Meredith-Jones, Jillian J. Haszard, Barbara C. Galland, Shay-Ruby Wickham, Bradley J. Brosnan, Takiwai Russell-Camp, and Rachael W. Taylor.

Methamphetamine increases motivation through brain processes separate from euphoria

A study published in the journal Psychopharmacology has found that the increase in motivation people experience from methamphetamine is separate from the drug’s ability to produce a euphoric high. The findings suggest that these two common effects of stimulant drugs likely involve different underlying biological processes in the brain. This research indicates that a person might become more willing to work hard without necessarily feeling a greater sense of pleasure or well-being.

The researchers conducted the new study to clarify how stimulants affect human motivation and personal feelings. They wanted to determine whether the pleasurable high people experience while taking these drugs is the primary reason they become more willing to work for rewards. By separating these effects, the team aimed to gain insight into how drugs could potentially be used to treat motivation-related issues without causing addictive euphoria.

Another reason for the study was to investigate how individual differences in personality or brain chemistry change how a person responds to a stimulant. Scientists wanted to see if people who are naturally less motivated benefit more from these drugs than those who are already highly driven. The team also sought to determine if the drug makes tasks feel easier or if it simply makes the final reward seem more attractive to the user.

“Stimulant drugs like amphetamine are thought to produce ‘rewarding’ effects that contribute to abuse or dependence, by increasing levels of the neurotransmitter dopamine. Findings from animal models suggest that stimulant drugs, perhaps because of their effects on dopamine, increase motivation, or the animals’ willingness to exert effort,” explained study author Harriet de Wit, a professor at the University of Chicago.

“Findings from human studies suggest that stimulant drugs lead to repeated use because they produce subjective feelings of wellbeing. In the present study, we tested the effects of amphetamine in healthy volunteers, on both an effort task and self-reported euphoria.”

For their study, the researchers recruited a group of 96 healthy adults from the Chicago area. This group consisted of 48 men and 48 women between the ages of 18 and 35. Each volunteer underwent a rigorous screening process that included a physical exam, a heart health check, and a psychiatric interview to ensure they were healthy.

The study used a double-blind, placebo-controlled design to guard against bias. This means that neither the participants nor the staff knew if a volunteer received the actual drug or an inactive pill on a given day. The participants attended two separate laboratory sessions where they received either 20 milligrams of methamphetamine or a placebo.

During these sessions, the participants completed a specific exercise called the Effort Expenditure for Rewards Task. This task required them to choose between an easy option for a small amount of money or a more difficult option for a larger reward. The researchers used this to measure how much physical effort a person was willing to put in to get a better payoff.

The easy task involved pressing a specific key on a keyboard 30 times with the index finger of the dominant hand within seven seconds. Successfully completing this task always resulted in a small reward of one dollar. This served as a baseline for the minimum amount of effort a person was willing to expend for a guaranteed but small gain.

The hard task required participants to press a different key 100 times using the pinky finger of their non-dominant hand within 21 seconds. The rewards for this more difficult task varied from about one dollar and 24 cents to over four dollars. This task was designed to be physically taxing and required a higher level of commitment to complete.

Before making their choice on each trial, participants were informed of the probability that they would actually receive the money if they finished the task. These probabilities were set at 12 percent, 50 percent, or 88 percent. This added a layer of risk to the decision, as a person might work hard for a reward but still receive nothing if the odds were not in their favor.
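The trade-off on each trial amounts to a simple expected-value comparison. The sketch below uses one illustrative trial, with a reward and probability chosen from the ranges just described and with the stated win probability assumed to apply to both options:

```python
def expected_value(probability, reward):
    """Expected payoff of an option, given the stated chance of being paid."""
    return probability * reward

# Hypothetical trial: 50 percent win probability, hard-task reward of $3.00.
# The easy task pays a fixed $1.00; the same probability is assumed to
# apply to both options here.
p = 0.50
ev_easy = expected_value(p, 1.00)
ev_hard = expected_value(p, 3.00)

# A purely reward-maximizing chooser would take the hard task whenever its
# expected value is higher; the open question is how much the extra effort
# (100 pinky presses versus 30 index-finger presses) discounts that value.
print(ev_easy, ev_hard)
```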

Throughout the four-hour sessions, the researchers measured the participants’ personal feelings and physical reactions at regular intervals. They used standardized questionnaires to track how much the participants liked the effects of the drug and how much euphoria they felt. They also monitored physical signs such as heart rate and blood pressure to ensure the safety of the volunteers.

Before the main sessions, the participants completed the task during an orientation to establish their natural effort levels. The researchers then divided the group in half based on these baseline scores. This allowed the team to compare people who were naturally inclined to work hard against those who were naturally less likely to choose the difficult task.

The results showed that methamphetamine increased the frequency with which people chose the hard task over the easy one across the whole group. This effect was most visible when the chances of winning the reward were in the low to medium range. The drug seemed to give participants a boost in motivation when the outcome was somewhat uncertain.

The data provides evidence that the drug had a much stronger impact on people who were naturally less motivated. Participants in the low baseline group showed a significantly larger increase in their willingness to choose the hard task compared to those in the high baseline group. For people who were already high achievers, the drug did not seem to provide much of an additional motivational boost.

To understand why the drug changed behavior, the researchers used a mathematical model to analyze the decision-making process. This model helped the team separate how much a person cares about the difficulty of a task from how much they value the reward itself. It provided a more detailed look at the internal trade-offs people make when deciding to work.

The model showed that methamphetamine specifically reduced a person’s sensitivity to the physical cost of effort. This suggests that the drug makes hard work feel less unpleasant or demanding than it normally would. Instead of making the reward seem more exciting, the drug appears to make the work itself feel less like a burden.
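A common way to formalize this kind of effort discounting is to subtract a weighted effort cost from the probability-weighted reward and pass the difference through a logistic choice rule. The sketch below is an illustrative stand-in for the paper's model, not its exact equations, and all parameter values are invented.

```python
import math

def prob_choose_hard(reward, probability, effort_cost, effort_sensitivity, temperature=1.0):
    """Chance of picking the hard task under a simple effort-discounting rule.

    Subjective value = probability * reward - effort_sensitivity * effort_cost.
    A logistic rule converts the value difference into a choice probability.
    """
    sv_hard = probability * reward - effort_sensitivity * effort_cost
    sv_easy = probability * 1.00 - effort_sensitivity * 0.3  # easy task: $1, low effort
    return 1.0 / (1.0 + math.exp(-(sv_hard - sv_easy) / temperature))

# Lowering effort sensitivity (as the drug appeared to do) raises the odds
# of choosing the hard option even when rewards and probabilities are fixed.
print(prob_choose_hard(3.00, 0.5, effort_cost=1.0, effort_sensitivity=1.0))  # ~0.57
print(prob_choose_hard(3.00, 0.5, effort_cost=1.0, effort_sensitivity=0.4))  # ~0.67
```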

This change in effort sensitivity was primarily found in the participants who started with low motivation levels. For these individuals, the drug appeared to lower the mental or physical barriers that usually made them avoid the difficult option. In contrast, the drug did not significantly change the effort sensitivity of those who were already highly motivated.

Methamphetamine did not change how sensitive people were to the probability of winning the reward. This indicates that the drug affects the drive to work rather than changing how people calculate risks or perceive the odds of success. The volunteers still understood the chances of winning, but they became more willing to attempt the harder option regardless.

As the researchers expected, the drug increased feelings of happiness and euphoria in the participants. It also caused the usual physical changes associated with stimulants, such as an increase in heart rate and blood pressure. Most participants reported that they liked the effects of the drug while they were performing the tasks.

A major finding of the study is that the boost in mood was not related to the boost in productivity. The participants who felt the highest levels of euphoria were not the same people who showed the greatest increase in hard task choices. “This suggests that different receptor actions of amphetamine mediate willingness to exert effort and feelings of wellbeing,” de Wit explained.

There was no statistical correlation between how much a person liked the drug and how much more effort they were willing to exert. This provides evidence that the brain processes that create pleasure from stimulants are distinct from those that drive motivated behavior. A person can experience the motivational benefits of a stimulant without necessarily feeling the intense pleasure that often leads to drug misuse.

The findings highlight that “drugs have numerous behavioral and cognitive actions, which may be mediated by different neurotransmitter actions,” de Wit told PsyPost. “The purpose of research in this area is to disentangle which effects are relevant to misuse or dependence liability, and which might have clinical benefits, and what brain processes underlie the effects.”

The results also highlight the importance of considering a person’s starting point when predicting how they will respond to a medication. Because the drug helped the least motivated people the most, it suggests that these treatments might be most effective for those with a clear deficit in drive.

The study, like all research, has some limitations. The participants were all healthy young adults, so it is not clear if the results would be the same for older people or those with existing health conditions. A more diverse group of volunteers would be needed to see if these patterns apply to the general population.

The study only tested a single 20-milligram dose of methamphetamine given by mouth. It is possible that different doses or different ways of taking the drug might change the relationship between mood and behavior. Using a range of doses in future studies would help researchers see if there is a point where the mood and effort effects begin to overlap.

Another limitation is that the researchers did not directly look at the chemical changes inside the participants’ brains. While they believe dopamine is involved, they did not use brain imaging technology to confirm this directly. Future research could use specialized scans to see exactly which brain regions are active when these changes in motivation occur.

“The results open the door to further studies to determine what brain mechanisms underlie the two behavioral effects,” de Wit said.

The study, “Effects of methamphetamine on human effort task performance are unrelated to its subjective effects,” was authored by Evan C. Hahn, Hanna Molla, Jessica A. Cooper, Joseph DeBrosse, and Harriet de Wit.

AI boosts worker creativity only if they use specific thinking strategies

A new study published in the Journal of Applied Psychology suggests that generative artificial intelligence can boost creativity among employees in professional settings. But the research indicates that these tools increase innovative output only when workers use specific mental strategies to manage their own thought processes.

Generative artificial intelligence is a type of technology that can produce new content such as text, images, or computer code. Large language models like ChatGPT or Google’s Gemini use massive datasets to predict and generate human-like responses to various prompts. Organizations often implement these tools with the expectation that they will help employees come up with novel and useful ideas. Many leaders believe that providing access to advanced technology will automatically lead to a more innovative workforce.

However, recent surveys indicate that only a small portion of workers feel that these tools actually improve their creative work. The researchers conducted the new study to see if the technology truly helps and to identify which specific factors make it effective. They also wanted to see how these tools function in a real office environment where people manage multiple projects at once. Most previous studies on this topic took place in artificial settings using only one isolated task.

“When ChatGPT was released in November 2022, generative AI quickly became part of daily conversation. Many companies rushed to integrate generative AI tools into their workflows, often expecting that this would make employees more creative and, ultimately, give organizations a competitive advantage,” said study author Shuhua Sun, who holds the Peter W. and Paul A. Callais Professorship in Entrepreneurship at Tulane University’s A. B. Freeman School of Business.

“What struck us, though, was how little direct evidence existed to support those expectations, especially in real workplaces. Early proof-of-concept studies in labs and online settings began to appear, but their results were mixed. Even more surprisingly, there were almost no randomized field experiments examining how generative AI actually affects employee creativity on the job.”

“At the same time, consulting firms started releasing large-scale surveys on generative AI adoption. These reports showed that only a small percentage of employees felt that using generative AI made them more creative. Taken together with the mixed lab/online findings, this raised a simple but important question for us: If generative AI is supposed to enhance creativity, why does it seem to help only some employees and not others? What are those employees doing differently?”

“That question shaped the core of our project. So, instead of asking simply whether generative AI boosts creativity, we wanted to understand how it does so and for whom. Driven by these questions, we developed a theory and tested it using a randomized field experiment in a real organizational setting.”

The researchers worked with a technology consulting firm in China to conduct their field experiment. This company was an ideal setting because consulting work requires employees to find unique solutions for many different clients. The study included a total of 250 nonmanagerial employees from departments such as technology, sales, and administration. These participants had an average age of about 30 years and most held university degrees.

The researchers randomly split the workers into two groups. The first group received access to ChatGPT accounts and was shown how to use the tool for their daily tasks. The second group served as a control and did not receive access to the artificial intelligence software during the study. To make sure the experiment was fair, the company told the first group that the technology was meant to assist them rather than replace them.

The experiment lasted for about one week. During this time, the researchers tracked how often the treated group used their new accounts. At the end of the week, the researchers collected data from several sources to measure the impact of the tool. They used surveys to ask employees about their work experiences and their thinking habits.

They also asked the employees’ direct supervisors to rate their creative performance. These supervisors did not know which employees were using the artificial intelligence tool. Additionally, the researchers used two external evaluators to judge specific ideas produced by the employees. These evaluators looked at how novel and useful the ideas were without knowing who wrote them.

The researchers looked at cognitive job resources, which are the tools and mental space people need to handle complex work. This includes having enough information and the ability to switch between hard and easy tasks. They also measured metacognitive strategies. This term describes how people actively monitor and adjust their own thinking to reach a goal.

A person with high metacognitive strategies might plan out their steps before starting a task. They also tend to check their own progress and change their approach if they are not making enough headway. The study suggests that the artificial intelligence tool increased the cognitive resources available to employees. The tool helped them find information quickly and allowed them to manage their mental energy more effectively.

The results show that the employees who had access to the technology generally received higher creativity ratings from their supervisors. The external evaluators also gave higher scores for novelty to the ideas produced by this group. The evidence suggests that the tool was most effective when workers already used strong metacognitive strategies. These workers were able to use the technology to fill specific gaps in their knowledge.

For employees who did not use these thinking strategies, the tool did not significantly improve their creative output. These individuals appeared to be less effective at using the technology to gain new resources. The study indicates that the tool provides the raw material for creativity, but the worker must know how to direct the process. Specifically, workers who monitored their own mental state knew when to use the tool to take a break or switch tasks.

This ability to switch tasks is important because it prevents a person from getting stuck on a single way of thinking. When the technology handled routine parts of a job, it gave workers more mental space to focus on complex problem solving. The researchers found that the positive effect of the technology became significant once a worker’s use of thinking strategies reached a certain level. Below that threshold, the tool did not provide a clear benefit for creativity.

The cognitive approach to creativity suggests that coming up with new ideas is a mental process of searching through different areas of knowledge. People must find pieces of information and then combine them in ways that have not been tried before. This process can be very demanding because people have a limited amount of time and mental energy. Researchers call this the knowledge burden.

It takes a lot of effort to find, process, and understand new information from different fields. If a person spends all their energy just gathering facts, they might not have enough strength left to actually be creative. Artificial intelligence can help by taking over the task of searching for and summarizing information. This allows the human worker to focus on the higher-level task of combining those facts into something new.

Metacognition is essentially thinking about one’s own thinking. It involves a person being aware of what they know and what they do not know. When a worker uses metacognitive strategies, they act like a coach for their own brain. They ask themselves if their current plan is working or if they need to try a different path.

The study shows that this self-awareness is what allows a person to use artificial intelligence effectively. Instead of just accepting whatever the computer says, a strategic thinker uses the tool to test specific ideas. The statistical analysis revealed that the artificial intelligence tool provided workers with more room to think. This extra mental space came from having better access to knowledge and more chances to take mental breaks.

The researchers used a specific method called multilevel analysis to account for the way employees were organized within departments and teams. This helps ensure that the findings are not skewed by the influence of a single department or manager. The researchers also checked to see if other factors like past job performance or self-confidence played a role. Even when they accounted for these variables, the link between thinking strategies and the effective use of artificial intelligence remained strong.
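For readers curious what such a test looks like in practice, here is a minimal sketch with invented data. A random intercept for each team stands in for the nesting the authors describe, and the interaction term carries the key question of whether access to the tool pays off more for employees who use stronger thinking strategies.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical employee records nested within teams (all values invented).
df = pd.DataFrame({
    "creativity":    [3.8, 4.6, 2.9, 4.9, 3.1, 4.2, 3.5, 4.7, 3.0, 4.4, 3.7, 4.1],
    "ai_access":     [0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1],  # 1 = given ChatGPT access
    "metacognition": [3.0, 4.2, 2.5, 4.8, 3.1, 2.7, 3.6, 4.5, 2.8, 4.0, 3.3, 3.9],
    "team":          ["a", "a", "a", "b", "b", "b", "c", "c", "c", "d", "d", "d"],
})

# A random intercept per team approximates the multilevel structure; the
# ai_access:metacognition interaction tests whether the tool's creative
# benefit depends on an employee's metacognitive strategies.
model = smf.mixedlm(
    "creativity ~ ai_access * metacognition",
    data=df,
    groups=df["team"],
).fit()
print(model.summary())
```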

The data showed that the positive impact of the tool on creativity was quite large for those who managed their thinking well. For those with low scores in that area, the tool had almost no impact on their creative performance. To test creativity specifically, the researchers asked participants to solve a real problem. They had to provide suggestions for protecting employee privacy in a digital office.

This task required a written response of at least 70 Chinese characters. It was designed to see if the participants could think of novel ways to prevent information leaks or excessive monitoring by leadership. The external raters then scored these responses based on how original and useful they were. This provided a more objective look at creativity than just asking a supervisor for their opinion.

“The main takeaway is that generative AI does not automatically make people more creative,” Sun told PsyPost. “Simply providing access to AI tools is not enough, and in many cases it yields little creative benefit. Our findings show that the creative value of AI depends on how people engage with it during the creative process. Individuals who actively monitor their own understanding, recognize what kind of help they need, and deliberately decide when and how to use AI are much more likely to benefit creatively.”

“In contrast, relying on AI in a more automatic or unreflective way tends to produce weaker creative outcomes. For the average person, the message is simple: AI helps creativity when it is used thoughtfully: Pausing to reflect on what you need, deciding when AI can be useful, and actively shaping its output iteratively are what distinguish creative gains from generic results.”

As with all research, there are some limitations to consider. The researchers relied on workers to report their own thinking strategies, which can sometimes be inaccurate. The study also took place in a single company within one specific country. People in different cultures might interact with artificial intelligence in different ways.

Future research could look at how long-term use of these tools affects human skills. There is a possibility that relying too much on technology could make people less independent over time. Researchers might also explore how team dynamics influence the way people use these tools. Some office environments might encourage better thinking habits than others.

It would also be helpful to see if the benefits of these tools continue to grow over several months or if they eventually level off. These questions will be important as technology continues to change the way we work. The findings suggest that simply buying new software is not enough to make a company more innovative. Organizations should also consider training their staff to be more aware of their own thinking processes.

Since the benefits of artificial intelligence depend on a worker’s thinking habits, generic software training might not be enough. Instead, programs might need to focus on how to analyze a task and how to monitor one’s own progress. These metacognitive skills are often overlooked in traditional professional development. The researchers note that these skills can be taught through short exercises. Some of these involve reflecting on past successes or practicing new ways to plan out a workday.

The study, “How and for Whom Using Generative AI Affects Creativity: A Field Experiment,” was authored by Shuhua Sun, Zhuyi Angelina Li, Maw-Der Foo, Jing Zhou, and Jackson G. Lu.

Scientists asked men to smell hundreds of different vulvar odors to test the “leaky-cue hypothesis”

A new study published in Evolution and Human Behavior suggests that modern women may not chemically signal fertility through vulvar body odor, a trait commonly observed in other primates. The findings indicate that men are unable to detect when a woman is in the fertile phase of her menstrual cycle based solely on the scent of the vulvar region. This research challenges the idea that humans have retained these specific evolutionary mating signals.

In the animal kingdom, particularly among non-human primates like lemurs, baboons, and chimpanzees, females often broadcast their reproductive status to males. This is frequently done through olfactory signals, specifically odors from the genital region, which change chemically during the fertile window. These scents serve as information for males, helping them identify when a female is capable of conceiving. Because humans share a deep evolutionary history with these primates, scientists have debated whether modern women retain these chemical signals.

A concept known as the “leaky-cue hypothesis” proposes that women might unintentionally emit subtle physiological signs of fertility. While previous research has investigated potential signals in armpit odor, voice pitch, or facial attractiveness, results have been inconsistent.

The specific scent of the vulvar region has remained largely unexplored using modern, rigorous methods, despite its biological potential as a source of chemical communication. To address this gap, a team led by Madita Zetzsche from the Behavioural Ecology Research Group at Leipzig University and the Max Planck Institute for Evolutionary Anthropology conducted a detailed investigation.

The researchers recruited 28 women to serve as odor donors. These participants were between the ages of 20 and 30, did not use hormonal contraception, and had regular menstrual cycles. To ensure the accuracy of the fertility data, the team did not rely on simple calendar counting. Instead, they used high-sensitivity urinary tests to detect luteinizing hormone and analyzed saliva samples to measure levels of estradiol and progesterone. This allowed the scientists to pinpoint the exact day of ovulation for each participant.

To prevent external factors from altering body odor, the donors adhered to a strict lifestyle protocol. They followed a vegetarian or vegan diet and avoided foods with strong scents, such as garlic, onion, and asparagus, as well as alcohol and tobacco. The women provided samples at ten specific points during their menstrual cycle. These points were clustered around the fertile window to capture any rapid changes in odor that might occur just before or during ovulation.

The study consisted of two distinct parts: a chemical analysis and a perceptual test. For the chemical analysis, the researchers collected 146 vulvar odor samples from a subset of 16 women. They used a specialized portable pump to draw air from the vulvar region into stainless steel tubes containing polymers designed to trap volatile compounds. These are the lightweight chemical molecules that evaporate into the air and create scent.

The team analyzed these samples using gas chromatography–mass spectrometry. This is a laboratory technique that separates a mixture into its individual chemical components and identifies them. The researchers looked for changes in the chemical profile that corresponded to the women’s conception risk and hormone levels. They specifically sought to determine if the abundance of certain chemical compounds rose or fell in a pattern that tracked the menstrual cycle.

The chemical analysis revealed no consistent evidence that the overall scent profile changed in a way that would allow fertility to be tracked across the menstrual cycle. While some specific statistical models suggested a potential link between the risk of conception and levels of certain substances—such as an increase in acetic acid and a decrease in a urea-related compound—these findings were not stable. When the researchers ran robustness checks, such as excluding samples from donors who had slightly violated dietary rules, the associations disappeared. The researchers concluded that there is likely a low retention of chemical fertility cues in the vulvar odor of modern women.

In the second part of the study, 139 men participated as odor raters. To collect the scent for this experiment, the female participants wore cotton pads in their underwear overnight for approximately 12 hours. These pads were then frozen to preserve the scent and later presented to the male participants in glass vials. The men, who were unaware of the women’s fertility status, sniffed the samples and rated them on three dimensions: attractiveness, pleasantness, and intensity.

The perceptual results aligned with the chemical findings. The statistical analysis showed that the men’s ratings were not influenced by the women’s fertility status. The men did not find the odor of women in their fertile window to be more attractive or pleasant than the odor collected during non-fertile days. Neither the risk of conception nor the levels of reproductive hormones predicted how the men perceived the scents.

These null results were consistent even when the researchers looked at the data in different ways, such as examining specific hormone levels or the temporal distance to ovulation. The study implies that if humans ever possessed the ability to signal fertility through vulvar scent, this trait has likely diminished significantly over evolutionary time.

The researchers suggest several reasons for why these cues might have been lost or suppressed in humans. Unlike most primates that walk on four legs, humans walk upright. This bipedalism moves the genital region away from the nose of other individuals, potentially reducing the role of genital odor in social communication. Additionally, human cultural practices, such as wearing clothing and maintaining high levels of hygiene, may have further obscured any remaining chemical signals.

It is also possible that social odors in humans have shifted to other parts of the body, such as the armpits, although evidence for axillary fertility cues remains mixed. The researchers noted that while they found no evidence of fertility signaling in this context, it remains possible that such cues require more intimate contact or sexual arousal to be detected, conditions that were not replicated in the laboratory.

Additionally, the strict dietary and behavioral controls, while necessary for scientific rigor, might not reflect real-world conditions where diet varies. The sample size for the chemical analysis was also relatively small, which can make it difficult to detect very subtle effects.

Future research could investigate whether these cues exist in more naturalistic settings or examine the role of the vaginal microbiome, which differs significantly between humans and non-human primates. The high levels of Lactobacillus bacteria in humans create a more acidic environment, which might alter the chemical volatility of potential fertility signals.

The study, “Understanding olfactory fertility cues in humans: chemical analysis of women’s vulvar odour and perceptual detection of these cues by men,” was authored by Madita Zetzsche, Marlen Kücklich, Brigitte M. Weiß, Julia Stern, Andrea C. Marcillo Lara, Claudia Birkemeyer, Lars Penke, and Anja Widdig.

Childhood trauma scores fail to predict violent misconduct in juvenile detention

New research published in Aggression and Violent Behavior indicates that a history of childhood trauma may not effectively predict which incarcerated youth will engage in the most frequent and violent misconduct. The study suggests that while adverse childhood experiences explain why young people enter the justice system, current factors such as mental health status and gang affiliation are stronger predictors of behavior during incarceration.

Psychologists and criminologists identify childhood adversity as a primary driver of delinquency. Exposure to trauma often hinders emotional regulation and impulse control. This can lead adolescents to interpret social interactions as hostile and resort to aggression. Correctional systems frequently use the Adverse Childhood Experiences score, commonly known as the ACE score, to quantify this history. The traditional ACE score is a cumulative measure of ten specific categories of abuse, neglect, and household dysfunction.

There is a growing consensus that the original ten-item measure may be too narrow for justice-involved youth. It fails to account for systemic issues such as poverty, community violence, and discrimination. Consequently, scholars have proposed expanded measures to capture a broader range of adversities.

Despite the widespread use of these scores, little research has isolated their ability to predict the behavior of the most serious offenders. Most studies examine general misconduct across all inmates. This study aimed to determine if trauma scores could identify the small fraction of youth responsible for the vast majority of violent and disruptive incidents within state facilities.

“While research has extensively documented that adverse childhood experiences (ACEs) increase the risk of juvenile delinquency, we knew much less about whether ACEs predict the most serious forms of institutional misconduct among already-incarcerated youth,” said study author Jessica M. Craig, an associate professor of criminal justice and director of graduate programs at the University of North Texas.

“We were particularly interested in whether an expanded ACEs measure—which includes experiences like witnessing community violence, homelessness, and extreme poverty beyond the traditional 10-item scale—would better predict which youth become chronic and violent misconduct offenders during incarceration. This matters because institutional misconduct can lead to longer confinement, additional legal consequences, and reduced access to rehabilitation programs.​”

For their study, the researchers analyzed data from a cohort of 4,613 serious and violent juvenile offenders. The sample included all youth adjudicated and incarcerated in state juvenile correctional facilities in Texas between 2009 and 2013 who had completed an initial intake assessment. The participants were predominantly male. Approximately 46 percent were Hispanic and 34 percent were Black. The average age at the time of incarceration was 16 years old.

The researchers utilized the Positive Achievement Change Tool to derive two distinct trauma scores for each individual. The first was the traditional ACE score. This metric summed exposure to ten indicators: physical, emotional, and sexual abuse; physical and emotional neglect; household substance abuse; mental illness in the home; parental separation or divorce; domestic violence against a mother; and the incarceration of a household member.

The second measure was an expanded ACE score. This metric included the original ten items plus four additional variables relevant to high-risk populations. These additions included a history of foster care or shelter placements, witnessing violence in the community, experiencing homelessness, and living in a family with income below the poverty level. The average youth in the sample had a traditional ACE score of roughly 3.3 and an expanded score of nearly 4.9.
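Because both measures are simple cumulative counts, their construction is easy to illustrate. The sketch below codes each adversity as present or absent and sums the items; the indicator names follow the categories listed above, while the example record is invented.

```python
# Illustrative scoring of the two cumulative adversity measures.
# Each item is coded 1 if present, 0 if not; the score is a simple sum.
TRADITIONAL_ITEMS = [
    "physical_abuse", "emotional_abuse", "sexual_abuse",
    "physical_neglect", "emotional_neglect",
    "household_substance_abuse", "household_mental_illness",
    "parental_separation", "domestic_violence", "household_incarceration",
]
EXPANDED_EXTRA_ITEMS = [
    "foster_or_shelter_placement", "witnessed_community_violence",
    "homelessness", "family_below_poverty_line",
]

def ace_scores(record):
    traditional = sum(record.get(item, 0) for item in TRADITIONAL_ITEMS)
    expanded = traditional + sum(record.get(item, 0) for item in EXPANDED_EXTRA_ITEMS)
    return traditional, expanded

# Hypothetical youth with four traditional items and one expanded item present.
youth = {"physical_abuse": 1, "emotional_neglect": 1, "parental_separation": 1,
         "household_incarceration": 1, "witnessed_community_violence": 1}
print(ace_scores(youth))  # (4, 5)
```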

The study did not treat misconduct as a simple average. The researchers sought to identify chronic perpetrators. They calculated the rate of total misconduct incidents and violent misconduct incidents for each youth. They then separated the offenders into groups representing the top 10 percent and the top 1 percent of misconduct perpetrators. This allowed the analysis to focus specifically on the individuals who pose the greatest challenge to institutional safety.

The researchers used statistical models to test whether higher trauma scores increased the likelihood of being in these high-rate groups. These models controlled for other potential influences, including prior criminal history, offense type, age, race, and substance abuse history.
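A stripped-down version of that analysis might proceed as follows: compute each youth's misconduct rate, flag the top 10 percent, and model membership in that group with logistic regression, where exponentiated coefficients read as odds ratios. The data below are randomly generated placeholders, and the published models included a fuller set of controls.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical youth-level records.
rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "incidents":    rng.poisson(2, n),          # misconduct incidents
    "months_held":  rng.integers(6, 36, n),     # time incarcerated, in months
    "ace_expanded": rng.integers(0, 15, n),
    "age_at_entry": rng.integers(13, 19, n),
    "gang":         rng.integers(0, 2, n),
    "mh_history":   rng.integers(0, 2, n),
})

# Rate of misconduct, then a flag for the top 10 percent of perpetrators.
df["rate"] = df["incidents"] / df["months_held"]
df["top10"] = (df["rate"] >= df["rate"].quantile(0.90)).astype(int)

# Logistic model of top-group membership; exponentiated coefficients are
# odds ratios (an odds ratio of 2.5 corresponds to 150 percent higher odds).
model = smf.logit("top10 ~ ace_expanded + age_at_entry + gang + mh_history", data=df).fit()
print(np.exp(model.params))
```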

The analysis yielded results that challenged the assumption that past trauma dictates future institutional violence. Neither the traditional ACE score nor the expanded ACE score served as a significant predictor for membership in the top 10 percent or top 1 percent of misconduct perpetrators. This finding held true for both general rule-breaking and specific acts of violence. The addition of variables like poverty and community violence to the trauma score did not improve its predictive power regarding institutional behavior.

“We were surprised that even the expanded ACEs measure—which included witnessing violence, foster care placement, homelessness, and poverty—failed to predict high-rate misconduct,” Craig told PsyPost. “Given that previous research suggested the traditional 10-item ACEs scale might underestimate adversity among justice-involved youth, we expected the expanded measure to show stronger predictive power.”​

While trauma history did not predict chronic misconduct, other personal and situational characteristics proved to be strong indicators. The most consistent predictor of violent behavior was a history of serious mental health problems. Youth with such histories had approximately 150 percent increased odds of falling into the top 1 percent of violent misconduct perpetrators compared to their peers. This effect size suggests that current psychological stability is a primary determinant of safety within the facility.

Age and social connections also played significant roles. The data indicated that older youth were substantially less likely to engage in chronic misconduct. Specifically, those who were older at the time of incarceration were about 50 to 60 percent less likely to be in the high-rate misconduct groups. Gang affiliation was another robust predictor. Youth with gang ties were significantly more likely to be among the most frequent violators of institutional rules. This points to the influence of peer dynamics and the prison social structure on individual behavior.

“These are substantively meaningful effects that have real implications for correctional programming and supervision strategies,” Craig said.

The study provides evidence that the factors driving entry into the justice system may differ from the factors driving behavior once inside. While childhood adversity sets a trajectory toward delinquency, the structured environment of a correctional facility introduces new variables. The researchers suggest that the “survival coping” mechanisms youth develop in response to trauma might manifest differently depending on their immediate environment and mental state.

“Contrary to expectations, we found that neither traditional nor expanded ACEs measures significantly predicted which youth became the most frequent perpetrators of institutional misconduct,” Craig explained. “Instead, factors like age at incarceration, gang affiliation, and mental health history were much stronger predictors.”

“This suggests that while childhood trauma remains critically important for understanding how youth enter the justice system, managing their behavior during incarceration may require greater focus on their current mental health needs, developmental stage, and institutional factors rather than trauma history alone.​”

These findings imply that correctional administrators should look beyond a cumulative trauma score when assessing risk. Screening processes that emphasize current mental health conditions and gang involvement may offer more utility for preventing violence than those focusing solely on historical adversity. Effective management of high-risk populations appears to require targeted mental health interventions and strategies to disrupt gang activity.

There are some limitations to consider. The data came from a single state, which may limit the ability to generalize the findings to other jurisdictions with different correctional cultures or demographics.

The study also relied on cumulative scores that count the presence of adverse events but do not measure their severity, frequency, or timing. It is possible that specific types of trauma, such as physical abuse, have different impacts than others, such as parental divorce. A simple sum of these events might obscure specific patterns that do predict violence.

“It’s important to emphasize that our findings don’t diminish the significance of childhood trauma in understanding juvenile justice involvement overall,” Craig said. “ACEs remain crucial for understanding pathways into the system and should absolutely be addressed through trauma-informed programming. However, when it comes to predicting institutional violence specifically among already deeply-entrenched offenders, personal characteristics and current mental health status appear more salient than historical trauma exposure.​”

“Future research should examine whether specific patterns or combinations of traumatic experiences—rather than cumulative scores—might better predict institutional violence. We’d also like to investigate whether trauma-informed treatment programs, when youth actually receive them during incarceration, can reduce misconduct even when trauma history alone doesn’t predict it. Additionally, examining the timing and severity of ACEs, rather than just their presence or absence, could clarify the trauma-violence relationship.”

The study, “Looking back: The impact of childhood adversity on institutional misconduct among a cohort of serious and violent institutionalized delinquents,” was authored by Jessica M. Craig, Haley Zettler, and Chad R. Trulson.

High rates of screen time linked to specific differences in toddler vocabulary

New research published in the journal Developmental Science provides evidence that the amount of time toddlers spend watching videos is associated with the specific types of words they learn, distinct from the total number of words they know. The findings indicate that higher levels of digital media consumption are linked to a vocabulary containing a smaller proportion of body part words and a larger proportion of words related to people and furniture.

The widespread integration of digital media into family life has prompted questions about its influence on early child development. Current estimates suggest that many children under the age of two spend roughly two hours per day interacting with screens, primarily watching videos or television.

Previous research has often focused on the relationship between screen time and the overall size of a child’s vocabulary. These earlier studies generally established that high exposure to low-quality programming correlates with a lower total number of words spoken by the child.

However, language acquisition is a multifaceted process. Children do not learn all words in the same manner. The acquisition of certain types of words relies heavily on specific environmental inputs.

“There is no doubt that use of digital media by young children has been on the rise in the past few years, and growing evidence suggest that this has impacts on their language learning, especially during the first few years of life,” said study author Sarah C. Kucker, an assistant professor of psychology at Southern Methodist University.

“For instance, we know that children who watch high rates of low-quality television/videos tend to have smaller vocabularies and less advanced language skills (this is work by my own lab, but also many others such as Brushe et al., 2025; Madigan et al., 2024). However, we also know that some forms of media do not have negative effects and can, in fact, be useful for language when the media is high-quality, socially-interactive, and educational in nature (work by Sundqvist as well as Jing et al., 2024).”

“On top of this, we know that children’s language development and specifically their vocabulary learning is not an all-or-nothing, but rather that children learn different types of words at different times and in different ways – e.g. learning words for body parts is easier when you can touch the body part when named, and names for people (mama, dada) are learned earlier than most other nouns,” Kucker continued.

“When we put this together it means that we shouldn’t be looking at digital media’s influence on language as just an all-or-nothing, or blanket good-or-bad, but rather take a more nuanced look. So we did just that by looking at the types of words children are learning and the association with the time they spend with digital media.”

For their study, the researchers recruited 388 caregivers of children aged 17 to 30 months. This age range represents a period of rapid language expansion often referred to as the vocabulary spurt. Participants were recruited through online research platforms and in-person visits to a university laboratory. The researchers combined these groups into a single dataset for analysis.

Caregivers completed a comprehensive survey known as the Media Assessment Questionnaire. This instrument asked parents to report the number of minutes their child spent using various forms of technology, such as television, tablets, and video chat.

The researchers collected data for both typical weekdays and weekends. They used these reports to calculate a weighted daily average of screen time for each child. The data revealed that video and television viewing was the most common media activity. On average, the children in the sample watched videos for approximately 110 minutes per day.
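A typical weighting, assumed here, gives weekdays five-sevenths of the weight and weekend days two-sevenths; the sketch below applies it to a hypothetical child.

```python
def weighted_daily_minutes(weekday_minutes, weekend_minutes):
    """Daily screen-time average weighting weekdays 5/7 and weekend days 2/7.

    The 5:2 weighting is the conventional approach and is assumed here; it is
    not necessarily the questionnaire's exact formula.
    """
    return (5 * weekday_minutes + 2 * weekend_minutes) / 7

# Hypothetical child: 90 minutes of video on weekdays, 160 on weekend days.
print(round(weighted_daily_minutes(90, 160)))  # 110 minutes per day
```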

To measure language development, caregivers completed the MacArthur-Bates Communicative Development Inventory. This is a standardized checklist containing hundreds of words commonly learned by young children. Parents marked the words their child could say.

This tool allowed the researchers to calculate the total size of each child’s noun vocabulary. It also enabled them to break down the vocabulary into specific semantic categories. These categories included animals, vehicles, toys, food and drink, clothing, body parts, small household items, furniture and rooms, outside things, places to go, and people.
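As a rough illustration of how a checklist of this kind can be turned into category proportions, the sketch below tallies a child's produced nouns by semantic category. The word-to-category mapping is a hypothetical stand-in, not the actual MacArthur-Bates item list.

```python
# Illustrative sketch of converting checklist responses into category proportions.
# The word lists below are hypothetical stand-ins, not the actual inventory items.

from collections import Counter

CATEGORY = {
    "nose": "body parts", "feet": "body parts", "ears": "body parts",
    "mama": "people", "grandma": "people", "baby": "people",
    "couch": "furniture and rooms", "kitchen": "furniture and rooms",
    "ball": "toys", "dog": "animals",
}

def category_proportions(words_child_says):
    """Proportion of the child's produced nouns falling in each semantic category."""
    known = [w for w in words_child_says if w in CATEGORY]
    counts = Counter(CATEGORY[w] for w in known)
    total = len(known)
    return {cat: n / total for cat, n in counts.items()}

print(category_proportions(["mama", "baby", "couch", "ball", "nose"]))
# -> {'people': 0.4, 'furniture and rooms': 0.2, 'toys': 0.2, 'body parts': 0.2}
```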

The researchers also analyzed the vocabulary data through a different lens. They classified nouns based on the features that define their categories. Specifically, they looked at shape-based nouns and material-based nouns.

Shape-based nouns usually refer to solid objects defined by their physical form, such as “ball” or “cup.” Material-based nouns often refer to nonsolid substances or items defined by what they are made of, such as “applesauce” or “chalk.” This distinction is significant in developmental psychology because physical handling of objects is thought to help children learn these concepts.

The researchers found that children with higher rates of video viewing produced a smaller proportion of body part words. In a typical toddler’s vocabulary, words like “nose,” “feet,” or “ears” are often among the first learned. However, as screen time increased, the density of these words in the child’s repertoire decreased relative to other word types.

In contrast, the researchers found a positive association between video time and words related to people. This category includes proper names, titles like “teacher” or “grandma,” and general terms like “baby.” Children who watched more videos tended to have a vocabulary composition that was more heavily weighted toward these social labels.

A similar positive association was found for the category of furniture and rooms. Heavy media users were more likely to produce words such as “couch,” “TV,” or “kitchen” relative to their peers with lower media use.

“While we expected that children with high media use would have fewer body part words in their vocabulary, we were surprised to find that children with high media knew relatively more people words and furniture words,” Kucker told PsyPost. “We suspect this may have to do with the content of the media highlighting those terms, or perhaps the physical context in which children are using media (e.g. while sitting on a couch or when working with mom), but the tools to capture this information are currently limited.”

The researchers found no significant relationship between video watching and the other semantic categories measured, such as animals, toys, or food. Additionally, the researchers found no evidence that video exposure altered the balance between shape-based and material-based nouns. The proportion of words related to solid objects versus nonsolid substances remained stable regardless of screen time habits.

The research highlights that the impact of digital media is not uniformly negative or positive. The findings suggest that screen time changes the landscape of early learning in specific ways.

“Most caregivers have heard the advice to avoid screen time with their young children,” Kucker said. “However, the reality is that that is very difficult to do 100% of the time in today’s tech-based world. What this study shows is that a high amount of low-quality videos/TV is associated with lower overall vocabulary sizes in 2-year-old children, but that videos/TV may not impact all types of words equally.”

“For instance, children with more video/TV time have fewer names for body parts, but seem to learn most other nouns at relatively equal levels, potentially because some videos/TV do a good job teaching children some basics.”

“So do try to limit children’s screen time, but don’t fret about avoiding it completely,” Kucker explained. “Instead, consider the content and context for when the media is being used and why – high-quality, educational use, or those that are social (e.g. FaceTime, Zoom), may not be detrimental as long as children are still getting rich interactive play outside of the screen.”

As with all research, there are some limitations to consider. The data relied on caregiver reports, which can introduce memory errors or bias.

The study was also cross-sectional, meaning it captured a snapshot of the children’s lives rather than following them over time. It is not possible to determine causality from this data alone. For example, it is unknown if watching videos causes the change in vocabulary or if families with different communication styles rely more on media.

“We are currently looking at more longitudinal impacts of digital media on children’s language over time as well as individual differences across children, such as considering personality and temperament,” Kucker noted.

Additionally, the study focused primarily on the duration of screen time. It did not fully capture the specific content of the videos the children watched or the nature of the interactions parents had with their children during viewing. The researchers noted that educational content and co-viewing with a parent can mitigate potential negative effects.

“Not all media is bad!” Kucker said. “Media’s effect on children is nuanced and interacts with the rest of their experiences. I always like to tell parents that if your child watches an educational show for a few minutes so you can have a few minutes of quiet, that may be helping you to then be a better parent later which will more than offset that few minutes of media time.”

“Children who get rich, social experiences are often still developing in very strong ways even if they have a bit of high-quality screen time here and there. Just considering the content and context of the media is key!”

“We have a lot of work left still to do and understand in this area, and much of the support for this work has come from various grants and foundations, such as NIH and NSF,” Kucker added. “Without those funding avenues, this work couldn’t be done.”

The study, “Videos and Vocabulary – How Digital Media Use Impacts the Types of Words Children Know,” was authored by Sarah C. Kucker, Rachel F. Barr, and Lynn K. Perry.

Psychology study sheds light on the phenomenon of waifus and husbandos

A new study published in Psychology of Popular Media suggests that human romantic attraction to fictional characters may operate through the same psychological mechanisms that drive relationships between real people. The research offers insight into how individuals form deep attachments to non-existent partners in an increasingly digital world.

The concept of falling in love with an artificial being is not a modern invention, the researchers behind the new study noted. The ancient Greek narrative of Pygmalion describes a sculptor who creates a statue so beautiful that he falls in love with it. This theme of attributing human qualities and agency to inanimate creations has persisted throughout history.

In the contemporary landscape, this phenomenon is often observed within the anime fan community. Fans of Japanese animation sometimes utilize specific terminology to describe characters they hold in special regard. The terms “waifu” and “husbando” are derived from the English words for wife and husband. These labels imply a desire for a significant, often romantic, relationship with the character if they were to exist in reality.

The researchers conducted the new study to better understand the nature of relationships with “virtual agents.” A virtual agent is any character that exists solely on a screen but projects a sense of agency or independence to the audience. As technology advances, these characters are becoming more interactive and realistic. The authors sought to determine if the reasons people connect with these characters align with evolutionary theories regarding human mating strategies.

“Given the popularity of AI agents and chatbots, we were interested in people who have attraction to fictional characters,” said study author Connor Leshner, a PhD candidate in the Department of Psychology at Trent University.

“Through years of research, we have access to a large and charitable sample of anime fans, and it is a norm within this community to have relationships (sometimes real, sometimes not) with fictional characters. We mainly wanted to understand whether a large group of people have the capacity for relationships with fictional characters, because, if they do, then a logical future study would be studying relationships with something like AI.”

To investigate this, the research team recruited a large sample of self-identified anime fans. Participants were gathered from various online platforms, including specific communities on the website Reddit. The final sample consisted of 977 individuals who indicated that they currently had a waifu or husbando.

The demographic makeup of the sample was predominantly male. Approximately 78 percent of the respondents identified as men, while the remainder identified as women. The average age of the participants was roughly 26 years old, and more than half were from the United States. This provided a snapshot of a specific, highly engaged subculture.

The researchers employed a quantitative survey to assess the participants’ feelings and motivations. They asked participants to rate their agreement with various statements on a seven-point scale. The survey measured four potential reasons for choosing a specific character. These reasons were physical appearance, personality, the character’s role in the story, and the character’s similarity to the participant.

The researchers also sought to categorize the type of connection the fan felt toward the character. The three categories measured were emotional connection, sexual attraction, and feelings of genuine love.

The results provided evidence supporting the idea that fictional attraction mirrors real-world attraction. The data showed a positive association between a character’s physical appearance and the participant’s sexual attraction to them. This suggests that visual appeal is a primary driver for sexual interest in virtual agents, much as it is in human interaction.

However, physical appearance was not the only factor at play. The researchers found that a character’s personality was a strong predictor of emotional connection. Additionally, participants who felt that a character was similar to themselves were more likely to report a deep emotional bond. This indicates that shared traits and relatable behaviors foster feelings of closeness even when the partner is not real.

A central focus of the study was the influence of gender on these connections. The analysis revealed distinct differences between how men and women engaged with their chosen characters. Men were significantly more likely to report feelings of sexual attraction toward their waifus or husbandos. This aligns with prior research on male mating strategies that emphasizes visual and sexual stimuli.

Women, in contrast, reported higher levels of emotional connection with their fictional partners. While they also valued personality, their bonds were characterized more by affection and emotional intimacy than by sexual desire. This finding supports the hypothesis that women apply criteria focused on emotional compatibility even when the relationship is entirely imagined.

The researchers also explored the concept of “genuine love” for these characters. They found that feelings of love were predicted by a combination of factors. Physical appearance, personality, and similarity to the self all contributed to the sensation of being in love. This suggests that for a fan to feel love, the character must appeal to them on multiple levels simultaneously.

“People do have the capacity for these relationships,” Leshner told PsyPost. “Sometimes they are based in physical attraction, especially for men, while others are based on platonic, personality-based attraction, especially for women. Overall, people can feel a deep, intimate connection with people who don’t exist on our plane of reality, and I think that’s neat.”

The findings were not particularly surprising. “Everything matches what you’d expect from related theories, like evolutionary mating strategy where men want physical or sexual relationships, while women find more appeal in the platonic, long-term relationship,” Leshner said. “We have ongoing research that helps contextualize these findings more, but until that’s published, we cannot say much more.”

One potential predictor that did not yield significant results was the character’s role in the media. The “mere exposure effect” suggests that people tend to like things simply because they are familiar with them. The researchers tested if characters with larger roles, such as protagonists who appear on screen frequently, were more likely to be chosen. The data did not support this link.

The specific narrative function of the character did not predict sexual attraction, emotional connection, or love. A supporting character with limited screen time appeared just as capable of inspiring deep affection as a main hero. This implies that the specific attributes of the character matter more than their prominence in the story.

These findings carry implications that extend beyond the anime community. As artificial intelligence and robotics continue to develop, human interactions with non-human entities will likely become more common. The study suggests that people are capable of forming complex, multifaceted relationships with entities that do not physically exist.

“Anime characters don’t have agency, nor do they have consciousness, so the extent to which the average person might have a serious relationship with an anime character is probably limited,” Leshner told PsyPost. “With that said, the same is true of AI, and the New York Times published a huge article on human-AI romantic relationships. So maybe these relationships are more appealing than we really capture here.”

There are limitations to the study. The research relied on cross-sectional data, which means it captured a single moment in time. This design prevents researchers from proving that specific character traits caused the attraction. It is possible that attraction causes a participant to perceive traits differently.

Additionally, the sample was heavily skewed toward Western, male participants. Cultural differences in how relationships are viewed could influence these results. The anime fandom in Japan, for instance, might exhibit different patterns of attachment than those observed in the United States. Future research would benefit from a more diverse, global pool of participants.

Despite these limitations, the study provides a foundation for understanding the future of human connection. It challenges the notion that relationships with fictional characters are fundamentally different from real relationships. The psychological needs and drives that lead someone to download a soulmate appear to be remarkably human.

“People might either find these relationships weird, or might say that AI is significantly different from what we show here,” Leshner added. “My first response is that these relationships aren’t weird, and we’ve been discussing similar relationships for centuries. The article opens with a reference to Pygmalion, which is a Greek story about a guy falling in love with a statue. At minimum, it’s a repeated idea in our culture.”

“To my second point about the similarities between AI and anime characters, I think about it like this: AI might seem more human, but it’s just Bayesian statistics with extra steps. If you watch an anime all the way through, you can spend up to hundreds of hours with characters who have their own human struggles, triumphs, loves and losses. To be drawn toward that story and character is, to me, functionally similar to talking to an AI chatbot. The only difference is that an AI chatbot can feel more responsive, and might have more options for customization.”

“I think this research is foundational to the future of relationships, but I don’t think people know enough about anime characters, or really media or parasocial relationships broadly, to see things the same way,” Leshner continued. “I’m going to keep going down this road to understand the parallels with AI and modern technologies, but I fully believe that this is an uphill battle for recognition.”

“I hope this work inspires people to look into why people might be attracted to anime characters more broadly. It feels like the average anime character is made to be conventionally attractive in a way that is not true of most animation. It might still be weird to someone with no knowledge of the field if they engage in this quick exercise, but I have the utmost confidence that the average person might say, ‘Well, although it is not for me, I can understand it better now.'”

The study, “You would not download a soulmate: Attributes of fictional characters that inspire intimate connection,” was authored by Connor Leshner, Stephen Reysen, Courtney N. Plante, Sharon E. Roberts, and Kathleen C. Gerbasi.

Scientists: A common vaccine appears to have a surprising impact on brain health

A new scientific commentary suggests that annual influenza vaccination could serve as a practical and accessible strategy to help delay or prevent the onset of dementia in older adults. By mitigating the risk of severe cardiovascular events and reducing systemic inflammation, the seasonal flu shot may offer neurological protection that extends well beyond respiratory health. This perspective article was published in the journal Aging Clinical and Experimental Research.

Dementia poses a significant and growing challenge to aging societies worldwide, creating an urgent need for scalable prevention strategies. While controlling midlife risk factors like high blood pressure remains a primary focus, medical experts are looking for additional tools that can be easily integrated into existing healthcare routines.

Lorenzo Blandi from the Vita-Salute San Raffaele University and Marco Del Riccio from the University of Florence authored this analysis to highlight the potential of influenza vaccination as a cognitive preservation tool. They argue that the current medical understanding of the flu shot is often too limited. The researchers propose that by preventing the cascade of physical damage caused by influenza, vaccination can help maintain the brain’s vascular and cellular health.

The rationale for this perspective stems from the observation that influenza is not merely a respiratory illness. It is a systemic infection that can cause severe complications throughout the body. The authors note that influenza infection is associated with a marked increase in the risk of heart attacks and strokes in the days following illness.

These vascular events are known to contribute to cumulative brain injury. Consequently, Blandi and Del Riccio sought to synthesize existing evidence linking vaccination to improved cognitive outcomes. They posit that preventing these viral insults could modify the trajectory of dementia risk in the elderly population.

To support their argument, the authors detail evidence from four major epidemiological studies that demonstrate a link between receiving the flu shot and a lower incidence of dementia. The first piece of evidence cited is a 2023 meta-analysis. This massive review aggregated data from observational cohort studies involving approximately 2.09 million adults.

The participants in these studies were followed for periods ranging from four to thirteen years. The analysis found that individuals who received influenza vaccinations had a 31 percent lower risk of developing incident dementia compared to those who did not.

The second key study referenced was a claims-based cohort study. This research utilized propensity-score matching, a statistical technique designed to create comparable groups by accounting for various baseline characteristics. The researchers analyzed data from 935,887 matched pairs of older adults who were at least 65 years old.
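For readers unfamiliar with the technique, the sketch below illustrates the general logic of propensity-score matching on simulated data: estimate each person's probability of being vaccinated from their baseline characteristics, then pair each vaccinated individual with the unvaccinated individual whose score is closest. It is a generic illustration under simplified assumptions, not the study's actual code, covariates, or matching procedure.

```python
# Minimal sketch of propensity-score matching (illustrative, not the study's code).
# Vaccinated individuals are matched, with replacement, to the unvaccinated
# individual with the nearest estimated probability of vaccination.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))            # hypothetical baseline covariates
treated = rng.integers(0, 2, size=1000)   # 1 = vaccinated, 0 = not vaccinated

# Step 1: estimate propensity scores from baseline characteristics.
scores = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]

# Step 2: nearest-neighbor matching on the estimated score.
treated_idx = np.where(treated == 1)[0]
control_idx = np.where(treated == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(scores[control_idx].reshape(-1, 1))
_, match = nn.kneighbors(scores[treated_idx].reshape(-1, 1))
pairs = list(zip(treated_idx, control_idx[match.ravel()]))
print(f"Formed {len(pairs)} matched pairs")
```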

The results showed that those who had received an influenza vaccination had a 40 percent lower relative risk of developing Alzheimer’s disease over a follow-up period of roughly four years. The study calculated an absolute risk reduction of 3.4 percent, suggesting that for every 29 people vaccinated, one case of Alzheimer’s might be prevented during that timeframe.
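That “one in 29” figure follows from the standard number-needed-to-treat arithmetic, assuming the 3.4 percent absolute risk reduction is expressed as a proportion:

\[
\text{NNT} = \frac{1}{\text{ARR}} = \frac{1}{0.034} \approx 29
\]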

The third study highlighted in the perspective used data from the Veterans Health Administration. This study was significant because it used time-to-event models to address potential biases related to when vaccinations occurred.

The researchers found that vaccinated older adults had a hazard ratio for dementia of 0.86. This statistic indicates a risk reduction of roughly 14 percent. The data also revealed a dose-response relationship. This means that the protective signal was strongest among participants who received multiple vaccine doses across different years and seasons, rather than just a single shot.
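The roughly 14 percent figure is the usual reading of a hazard ratio below one, treating one minus the ratio as an approximate relative risk reduction:

\[
1 - \text{HR} = 1 - 0.86 = 0.14 \approx 14\%
\]

Read the same way, the hazard ratios reported in the study described next correspond to reductions of roughly 17 percent and 42 percent.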

The fourth and final study cited was a prospective analysis of the UK Biobank. This study modeled vaccination as an exposure that varies over time, allowing for a nuanced view of cumulative effects.

The researchers observed a reduced risk for all-cause dementia, with a hazard ratio of 0.83. The reduction in risk was even more pronounced for vascular dementia, showing a hazard ratio of 0.58. Similar to the veterans’ study, this analysis supported the idea of a dose-response relationship. The accumulation of vaccinations over time appeared to correlate with better cognitive outcomes.

Blandi and Del Riccio explain several biological mechanisms that could account for these protective effects. The primary pathway involves the prevention of vascular damage. Influenza infection is a potent trigger for inflammation and blood clotting.

Research shows that the risk of acute myocardial infarction can be six times greater in the first week after a flu infection. By preventing the flu, the vaccine likely prevents these specific vascular assaults. Since vascular health is closely tied to brain health, avoiding these events helps preserve cognitive reserve. The cumulative burden of small strokes or reduced blood flow to the brain is a major predictor of cognitive decline.

In addition to vascular protection, the authors discuss the role of neuroinflammation. Studies in animal models have shown that influenza viruses can trigger activation of microglia, which are the immune cells of the brain. This activation can lead to the loss of synapses and memory decline, even if the virus itself does not enter the brain.

Systemic inflammation caused by the flu can cross into the nervous system. The authors suggest that vaccination may dampen these inflammatory surges. There is also a hypothesis known as “trained immunity,” where vaccines might program the immune system to respond more efficiently to threats, reducing off-target damage to the brain.

Based on this evidence, the authors propose several policy changes and organizational strategies. They argue that public health messaging needs to be reconceptualized. Instead of framing the flu shot solely as a way to avoid a winter cold, health officials should present it as a measure to reduce heart attacks, strokes, and potential cognitive decline. This approach addresses the priorities of older adults, who often fear dementia and loss of independence more than respiratory illness.

The authors also recommend specific clinical practices. They suggest that health systems should prioritize the use of high-dose or adjuvanted vaccines for adults over the age of 65. These formulations are designed to overcome the weaker immune response often seen in aging bodies.

Additionally, the authors advocate for making vaccination a default part of hospital discharge procedures. When an older adult is leaving the hospital after a cardiac or pulmonary event, vaccination should be a standard component of their care plan. This would help close the gap between the known benefits of the vaccine and the currently low rates of uptake in many regions.

Despite the promising data, Blandi and Del Riccio acknowledge certain limitations in the current body of evidence. The majority of the data comes from observational studies. This type of research can identify associations but cannot definitively prove causality.

There is always a possibility of “healthy user bias,” where people who choose to get vaccinated are already more health-conscious and have better lifestyle habits than those who do not. While the studies cited used advanced statistical methods to control for these factors, residual confounding can still exist.

The authors also note that studies based on medical claims data can suffer from inaccuracies in how dementia is diagnosed and recorded. Furthermore, the precise biological mechanisms remain a hypothesis that requires further validation. The authors call for future research to include pragmatic randomized trials that specifically measure cognitive endpoints. They suggest that future studies should track biological markers of neuroinflammation in vaccinated versus unvaccinated groups to confirm the proposed mechanisms.

The study, “From breath to brain: influenza vaccination as a pragmatic strategy for dementia prevention,” was authored by Lorenzo Blandi and Marco Del Riccio.

Does sexual activity before exercise harm athletic performance?

New research published in the journal Physiology & Behavior provides evidence that sexual activity shortly before high-intensity exercise does not harm athletic performance. The study suggests that masturbation-induced orgasm 30 minutes prior to exertion may actually enhance exercise duration and reaction time. These findings challenge long-standing beliefs regarding the necessity of sexual abstinence before athletic competition.

The motivation for the new study stems from a persistent debate in the sports world. Coaches and athletes have frequently adhered to the idea that sexual activity drains energy and reduces aggression. This belief has led to common recommendations for abstinence in the days leading up to major events. Diego Fernández-Lázaro from the University of Valladolid led a research team to investigate whether these restrictions are scientifically justified.

Previous scientific literature on this topic has been inconsistent or limited in scope. Many prior studies focused on sexual activity occurring the night before competition, leaving a gap in knowledge regarding immediate effects. Fernández-Lázaro and his colleagues aimed to examine the physiological and performance outcomes of sexual activity that occurs less than an hour before maximal effort.

To conduct the investigation, the researchers recruited 21 healthy, well-trained male athletes. The participants included basketball players, long-distance runners, and boxers. The average age of the volunteers was 22 years. The study utilized a randomized crossover design to ensure robust comparisons. This means that every participant completed both the experimental condition and the control condition.

In the control condition, participants abstained from any sexual activity for at least seven days. On the day of testing, they watched a neutral documentary film for 15 minutes before beginning the exercise assessments. In the experimental condition, the participants engaged in masturbation to orgasm in a private setting 30 minutes before the tests. They viewed a standardized erotic film to facilitate this process. Afterward, they watched the same neutral documentary to standardize the rest period.

The researchers employed two primary physical tests to measure performance. The first was an isometric handgrip strength test using a dynamometer. The second was an incremental cycling test performed on a stationary bike. The cycling test began at a set resistance and increased in difficulty every minute until the participant could no longer continue. This type of test is designed to measure aerobic capacity and time to exhaustion.

In addition to physical performance, the team collected blood samples to analyze various biomarkers. They looked for changes in hormones such as testosterone, cortisol, and luteinizing hormone. They also measured markers of muscle damage, including creatine kinase and lactate dehydrogenase. Inflammatory markers like C-reactive protein were also assessed to see if sexual activity placed additional stress on the body.

The results indicated that sexual activity did not have a negative impact on physical capabilities. The participants demonstrated a small but statistically significant increase in the total duration of the cycling test following sexual activity compared to the abstinence condition. This improvement represented a 3.2 percent increase in performance time.

The researchers also observed changes in handgrip strength. The mean strength values were slightly higher in the group that had engaged in sexual activity. This suggests that the neuromuscular system remained fully functional and perhaps slightly primed for action.

Physiological monitoring revealed that heart rates were higher during the exercise sessions that followed sexual activity. This elevation in heart rate aligns with the activation of the sympathetic nervous system. This system is responsible for the “fight or flight” response that prepares the body for physical exertion.

Hormonal analysis provided further insight into the body’s response. The study found that concentrations of both testosterone and cortisol were higher after sexual activity. Testosterone is an anabolic hormone associated with strength and aggression. Cortisol is a stress hormone that helps mobilize energy stores. The simultaneous rise in both hormones indicates a state of physiological activation rather than a state of fatigue.

The study also examined markers of muscle damage to see if the combination of sex and exercise caused more tissue stress. The findings showed that levels of lactate dehydrogenase were actually lower in the sexual activity condition. This specific enzyme leaks into the blood when muscle cells are damaged or stressed. The reduction suggests that the pre-exercise sexual activity did not exacerbate muscle stress and may have had a protective or neutral effect.

Other markers of muscle damage, such as creatine kinase and myoglobin, showed no significant differences between the two conditions. Similarly, inflammatory markers like interleukin-6 remained stable. This implies that the short-term physiological stress of sexual activity does not compound the stress caused by the exercise itself.

These findings diverge from some historical perspectives and specific past studies. For example, a study by Kirecci and colleagues reported that sexual intercourse within 24 hours of exercise reduced lower limb strength. The current study contradicts that conclusion by showing maintained or improved strength. The difference may lie in the specific timing or the nature of the sexual activity, as the current study focused on masturbation rather than partnered intercourse.

The results align more closely with a body of research summarized by Zavorsky and others. Those reviews generally concluded that sexual activity the night before competition has little to no impact on performance. The current study builds on that foundation by narrowing the window to just 30 minutes. It provides evidence that even immediate pre-competition sexual activity is not detrimental.

The researchers propose that the observed effects are likely due to a “priming” mechanism. Sexual arousal activates the sympathetic nervous system and triggers the release of catecholamines. This physiological cascade resembles a warm-up. It increases heart rate and alertness, which may translate into better readiness for immediate physical exertion.

The psychological aspect of the findings is also worth noting. The participants did not report any difference in their ratings of perceived exertion between the two conditions. This means the exercise did not feel harder after sexual activity, even though their heart rates were higher. This consistency suggests that motivation and psychological fatigue were not negatively affected.

There are limitations to this study that affect how the results should be interpreted. The sample consisted entirely of young, well-trained men. Consequently, the findings may not apply to female athletes, older adults, or those with lower fitness levels. The physiological responses to sexual activity can vary across these different demographics.

The study restricted sexual activity to masturbation to maintain experimental control. Partnered sexual intercourse involves different physical demands and psychological dynamics. Intercourse often requires more energy expenditure and involves oxytocin release related to bonding, which might influence sedation or relaxation differently than masturbation.

The sample size of 21 participants is relatively small, although adequate for a crossover design of this nature. Larger studies would be needed to confirm these results and explore potential nuances. The study also relied on a one-week washout period between trials. While this is standard, residual psychological effects from the first session cannot be entirely ruled out.

Future research should aim to include female participants to determine if similar hormonal and performance patterns exist. It would also be beneficial to investigate different time intervals between sexual activity and exercise. Understanding the effects of partnered sex versus masturbation remains a key area for further exploration.

The study provides evidence that the “abstinence myth” may be unfounded for many athletes. The data indicates that sexual activity 30 minutes before exercise does not induce fatigue or muscle damage. Instead, it appears to trigger a neuroendocrine response that supports physical performance. Athletes and coaches may need to reconsider strict abstinence policies based on these physiological observations.

The study, “Sexual activity before exercise influences physiological response and sports performance in high-level trained men athletes,” was authored by Diego Fernández-Lázaro, Manuel Garrosa, Gema Santamaría, Enrique Roche, José María Izquierdo, Jesús Seco-Calvo, and Juan Mielgo-Ayuso.
